2309.16504
A remark on randomization of a general function of negative regularity
In the study of partial differential equations (PDEs) with random initial data and singular stochastic PDEs with random forcing, we typically decompose a classically ill-defined solution map into two steps, where, in the first step, we use stochastic analysis to construct various stochastic objects. The simplest kind of such stochastic objects is the Wick powers of a basic stochastic term (namely a random linear solution, a stochastic convolution, or their sum). In the case of randomized initial data of a general function of negative regularity for studying nonlinear wave equations (NLW), we show necessity of imposing additional Fourier-Lebesgue regularity for constructing Wick powers by exhibiting examples of functions slightly outside $L^2(\mathbb T^d)$ such that the associated Wick powers do not exist. This shows that probabilistic well-posedness theory for NLW with general randomized initial data fails in negative Sobolev spaces (even with renormalization). Similar examples also apply to stochastic NLW and stochastic nonlinear heat equations with general white-in-time stochastic forcing, showing necessity of appropriate Fourier-Lebesgue $\gamma$-radonifying regularity in the construction of the Wick powers of the associated stochastic convolution.
Tadahiro Oh, Mamoru Okamoto, Oana Pocovnicu, Nikolay Tzvetkov
2023-09-28T15:11:41Z
http://arxiv.org/abs/2309.16504v2
# A remark on randomization of a general function of negative regularity ###### Abstract. In the study of partial differential equations (PDEs) with random initial data and singular stochastic PDEs with random forcing, we typically decompose a classically ill-defined solution map into two steps, where, in the first step, we use stochastic analysis to construct various stochastic objects. The simplest kind of such stochastic objects is the Wick powers of a basic stochastic term (namely a random linear solution, a stochastic convolution, or their sum). In the case of randomized initial data of a general function of negative regularity for studying nonlinear wave equations (NLW), we show necessity of imposing additional Fourier-Lebesgue regularity for constructing Wick powers by exhibiting examples of functions slightly outside \(L^{2}(\mathbb{T}^{d})\) such that the associated Wick powers do not exist. This shows that probabilistic well-posedness theory for NLW with general randomized initial data fails in negative Sobolev spaces (even with renormalization). Similar examples also apply to stochastic NLW and stochastic nonlinear heat equations with general white-in-time stochastic forcing, showing necessity of appropriate Fourier-Lebesgue \(\gamma\)-radonifying regularity in the construction of the Wick powers of the associated stochastic convolution. Key words and phrases:probabilistic well-posedness; Wick power; random initial data; stochastic forcing; Fourier-Lebesgue space 2020 Mathematics Subject Classification: 35R60, 35L05, 60H15 ###### Contents * 1 Introduction * 1.1 Randomization of a general function * 1.2 On general stochastic forcing * 2 Proof of Theorem 1.1 * 2.1 Construction of Wick powers * 2.2 Counterexample ## 1. Introduction Over the last decade, there has been a significant progress in the study of random dispersive PDEs, broadly interpreted with random initial data and/or stochastic forcing. This study was initiated by Bourgain [5, 6] in the construction of invariant Gibbs measures for the nonlinear Schrodinger equations (NLS) and was further developed by Burq and the fourth author [10, 11] in the context of the nonlinear wave equations (NLW); see also [15]. See [4] for a survey on the subject. In recent years, we have witnessed a rapid progress [45, 23, 46, 40, 25, 34, 32, 41, 42, 43, 48, 51, 28] in probabilistic well-posedness theory for nonlinear dispersive PDEs in the _singular_ setting,1 culminated in the paracontrolled approach to NLW [24, 35, 7, 36, 8] and the introduction of random averaging operators and random tensors for NLS [17, 18]. We point out that, in the singular setting, with the exception of the stochastic KdV equation studied in [16, 31] (see also [21]), all the known probabilistic well-posedness results on nonlinear dispersive PDEs (including those with stochastic forcing) are essentially limited to * Gaussian free field initial data, white noise initial data, or their smoothed (or differentiated) versions in the case of random initial data; see, for example, (1.5) below. * space-time white noise or its smoothed (or differentiated) version in the case of stochastic forcing;2 see, for example, [32, 48]. Footnote 2: We restrict our discussion to the white-in-time case. In this paper, we investigate issues related to general randomized initial data and general stochastic forcing in the singular setting. 
In particular, we consider the deterministic NLW on the periodic torus \(\mathbb{T}^{d}=(\mathbb{R}/2\pi\mathbb{Z})^{d}\):3 Footnote 3: The equation (1.1) is also referred to as the nonlinear Klein-Gordon equation. We, however, simply refer to (1.1) as NLW in the following. Moreover, we only consider real-valued functions in the following. For a renormalization in the complex-valued case, see [44]. \[\partial_{t}^{2}u+(1-\Delta)u+u^{k}=0 \tag{1.1}\] with random initial data, and the stochastic NLW (SNLW) on \(\mathbb{T}^{d}\): \[\partial_{t}^{2}u+(1-\Delta)u+u^{k}=\Phi\xi, \tag{1.2}\] where \(k\geq 2\) is an integer and the unknown function \(u\) is real-valued. In (1.2), \(\xi\) denotes the (Gaussian) space-time white noise whose space-time covariance is formally given by \[\mathbb{E}\big{[}\xi(t_{1},x_{1})\xi(t_{2},x_{2})\big{]}=\delta(t_{1}-t_{2}) \delta(x_{1}-x_{2}),\] and \(\Phi\) is a linear operator on \(L^{2}(\mathbb{T}^{d})\). For conciseness of the presentation, we will only discuss details in the case of random initial data in the following. Analogous results hold in the case of general stochastic forcing; see Subsection 1.2. ### Randomization of a general function In [10, 11], Burq and the fourth author studied well-posedness of NLW (1.1) with randomization of general functions as initial data. In the current setting of \(\mathbb{T}^{d}\), given a pair of deterministic functions4 Footnote 4: By convention, we endow \(\mathbb{T}^{d}\) with the normalized Lebesgue measure \(dx_{\mathbb{T}^{d}}=(2\pi)^{-d}dx\). \[u_{0}=\sum_{n\in\mathbb{Z}^{d}}a_{n}e^{in\cdot x}\qquad\text{and}\qquad u_{1} =\sum_{n\in\mathbb{Z}^{d}}b_{n}e^{in\cdot x} \tag{1.3}\] with the constraint \(a_{-n}=\overline{a_{n}}\) and \(b_{-n}=\overline{b_{n}}\), \(n\in\mathbb{Z}^{d}\), we consider the randomized initial data \((u_{0}^{\omega},u_{1}^{\omega})\) given by \[u_{0}^{\omega}=\sum_{n\in\mathbb{Z}^{d}}g_{n}(\omega)a_{n}e^{in\cdot x}\qquad \text{and}\qquad u_{1}^{\omega}=\sum_{n\in\mathbb{Z}^{d}}h_{n}(\omega)b_{n}e^ {in\cdot x}, \tag{1.4}\] where the series \(\{g_{n}\}_{n\in\mathbb{Z}^{d}}\) and \(\{h_{n}\}_{n\in\mathbb{Z}^{d}}\) are two families of independent standard complex-valued Gaussian random variables conditioned that \(g_{-n}=\overline{g_{n}}\) and \(h_{-n}=\overline{h_{n}}\), \(n\in\mathbb{Z}^{d}\). In [10, 11], the authors considered a more general class of random variables \(\{g_{n},h_{n}\}_{n\in\mathbb{Z}^{d}}\), satisfying some (exponential) moment bound. In the following, however, we restrict our attention to the Gaussian case. Given \(s\in\mathbb{R}\), let \(\mathcal{H}^{s}(\mathbb{T}^{d})=H^{s}(\mathbb{T}^{d})\times H^{s-1}(\mathbb{T }^{d})\), where \(H^{s}(\mathbb{T}^{d})\) denotes the standard \(L^{2}\)-based Sobolev space on \(\mathbb{T}^{d}\), endowed with the norm: \[\|f\|_{H^{s}(\mathbb{T}^{d})}=\|\langle n\rangle^{s}\widehat{f}(n)\|_{\ell^{2 }(\mathbb{Z}^{d})},\qquad\langle\,\cdot\,\rangle=(1+|\cdot|^{2})^{\frac{1}{2}}.\] It is well known that if \((u_{0},u_{1})\in\mathcal{H}^{s}(\mathbb{T}^{d})\), then the randomized pair \((u_{0}^{\omega},u_{1}^{\omega})\) is almost surely in \(\mathcal{H}^{s}(\mathbb{T}^{d})\). Moreover, if \((u_{0},u_{1})\notin\mathcal{H}^{s+\varepsilon}(\mathbb{T}^{d})\) for some \(\varepsilon>0\), then \((u_{0}^{\omega},u_{1}^{\omega})\notin\mathcal{H}^{s+\varepsilon}(\mathbb{T}^{ d})\) almost surely; see [10, Lemma B.1]. 
While there is no smoothing upon randomization in terms of differentiability in general, this randomization provides better integrability; if \(u_{0}\in H^{s}(\mathbb{T}^{d})\), then the randomized function \(u_{0}^{\omega}\) almost surely belongs to \(W^{s,p}(\mathbb{T}^{d})\) for any finite \(p\geq 1\). This gain of integrability plays a crucial role in proving probabilistic well-posedness of (1.1) for randomized initial data of supercritical but _non-negative_5 regularity, where the Cauchy problem is known to be (deterministically) ill-posed [14, 37, 22]. See [10, 11, 9, 39] for probabilistic well-posedness results on \(\mathbb{T}^{d}\). See also [29, 50, 38] for analogous results on \(\mathbb{R}^{d}\). Footnote 5: Here, we consider the regularity for \(u_{0}\). Next, we consider the case of negative regularity. In this case, the known probabilistic well-posedness results [45, 40, 37, 49, 20] are limited to the random initial data of the following form: \[\varphi_{0}^{\omega}=\sum_{n\in\mathbb{Z}^{d}}\frac{g_{n}(\omega)}{\langle n \rangle^{1+\alpha}}e^{in\cdot x}\qquad\text{and}\qquad\varphi_{1}^{\omega}= \sum_{n\in\mathbb{Z}^{d}}\frac{h_{n}(\omega)}{\langle n\rangle^{\alpha}}e^{ in\cdot x}. \tag{1.5}\] When \(\alpha=0\), \(\varphi_{0}^{\omega}\) corresponds to the massive Gaussian free field on \(\mathbb{T}^{d}\), while \(\varphi_{1}^{\omega}\) corresponds to the (Gaussian) white noise on \(\mathbb{T}^{d}\). See [6, 15, 46, 17, 18] in the case of NLS. It is easy to see that \((\varphi_{0}^{\omega},\varphi_{1}^{\omega})\in\mathcal{H}^{\sigma}(\mathbb{T} ^{d})\setminus\mathcal{H}^{\alpha+1-\frac{d}{2}}(\mathbb{T}^{d})\) for any \[\sigma<s(d,\alpha)\stackrel{{\rm def}}{{=}}\alpha+1-\frac{d}{2} \tag{1.6}\] and thus we restrict our attention to \(\alpha\leq\frac{d}{2}-1\) (such that \(\sigma<0\)). In this case, the random linear solution \(Z(t)\) defined by \[Z(t)=\cos(t\langle\nabla\rangle)\varphi_{0}^{\omega}+\frac{\sin(t\langle \nabla\rangle)}{\langle\nabla\rangle}\varphi_{1}^{\omega} \tag{1.7}\] is merely a distribution (for each \(t\in\mathbb{R}\)). Indeed, by letting \(Z_{N}=\mathbf{P}_{N}Z\), where \(\mathbf{P}_{N}\) denotes the frequency projection onto frequencies \(\{|n|\leq N\}\), it follows from \(\alpha\leq\frac{d}{2}-1\) that6 Footnote 6: Due to the translation-invariance of the law of \(Z(t,x)\), \(\sigma_{N}\) is independent of \(t\in\mathbb{R}\) and \(x\in\mathbb{T}^{d}\). \[\sigma_{N}\stackrel{{\rm def}}{{=}}\mathbb{E}\big{[}Z_{N}^{2}(t, x)\big{]}=\sum_{\begin{subarray}{c}n\in\mathbb{Z}^{d}\\ |n|\leq N\end{subarray}}\frac{\mathbf{1}_{|n|\leq N}}{\langle n\rangle^{2(1+ \alpha)}}\longrightarrow\infty, \tag{1.8}\] as \(N\to\infty\). As a result, we expect that a solution \(u(t)\) is a distribution (for fixed \(t\)) and thus the nonlinearity \(u^{k}(t)\) in (1.1) does not make sense, which necessitates us to renormalize the nonlinearity. See the introductions in [45, 23, 37]. In the following, for simplicity, we assume that \(\alpha\) is sufficiently close to \(\frac{d}{2}-1\). Given \(j\in\mathbb{N}\), define the truncated Wick power: \[:\!Z_{N}^{j}(t,x)\!:\,=H_{j}(Z_{N}(t,x);\sigma_{N}), \tag{1.9}\] where \(H_{j}\) is the Hermite polynomial of degree \(j\) and \(\sigma_{N}\) is as in (1.8). 
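For the reader's convenience, with the Hermite normalization of [30] used here (consistent with Lemma 2.1 below), the first few cases read \(H_{1}(x;\sigma)=x\), \(H_{2}(x;\sigma)=x^{2}-\sigma\), and \(H_{3}(x;\sigma)=x^{3}-3\sigma x\), so that \[:\!Z_{N}\!:\,=Z_{N},\qquad:\!Z_{N}^{2}\!:\,=Z_{N}^{2}-\sigma_{N},\qquad:\!Z_{N}^{3}\!:\,=Z_{N}^{3}-3\sigma_{N}Z_{N}.\]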
Then, arguing as in [24, 40, 37], the truncated Wick power \(:\!Z_{N}^{j}\!:\) converges to a limit, denoted by \(:\!Z^{j}\!:\), in \(C([0,T];W^{js(d,\alpha)-\varepsilon,\infty}(\mathbb{T}^{d}))\) for any \(\varepsilon>0\), almost surely, as \(N\to\infty\), where \(s(d,\alpha)\) is as in (1.6). Then, the basic strategy to study probabilistic local well-posedness of (the renormalized version of) (1.1), at least when \(\alpha\) is sufficiently close to \(\frac{d}{2}-1\), is to write a solution \(u\) in the first order expansion \(u=Z+v\) and study the equation satisfied by \(v\): \[\partial_{t}^{2}v+(1-\Delta)v+\mathcal{N}_{k}(v+Z)=0 \tag{1.10}\] with the zero initial data. Here, \(\mathcal{N}_{k}\) denotes the Wick renormalized nonlinearity given by \[\mathcal{N}_{k}(v+Z)=\sum_{j=0}^{k}\binom{k}{j}v^{k-j}:\!Z^{j}\!:. \tag{1.11}\] We point out that the main task in this argument is the construction of the Wick powers \(:\!Z^{j}\!:\). Once this is achieved, local well-posedness of (1.10) for \(v\) follows from a standard contraction argument via Sobolev's inequality and/or the Strichartz estimates. Let us now consider randomization of general functions of negative regularity. Given \(s<0\), fix \((u_{0},u_{1})\in\mathcal{H}^{s}(\mathbb{T}^{d})\setminus\mathcal{H}^{0}(\mathbb{T}^{d})\) and let \((u_{0}^{\omega},u_{1}^{\omega})\) be its randomization defined in (1.4). We then define the random linear solution \(z\) by \[\begin{split} z(t)&=\cos(t\langle\nabla\rangle)u_{0}^{\omega}+\frac{\sin(t\langle\nabla\rangle)}{\langle\nabla\rangle}u_{1}^{\omega}\\ &=\sum_{n\in\mathbb{Z}^{d}}\bigg{(}\cos(t\langle n\rangle)g_{n}(\omega)a_{n}+\frac{\sin(t\langle n\rangle)}{\langle n\rangle}h_{n}(\omega)b_{n}\bigg{)}e^{in\cdot x}.\end{split} \tag{1.12}\] Given \(N\in\mathbb{N}\), we set \(z_{N}=\mathbf{P}_{N}z\) and \[\alpha_{N}(t)\stackrel{{\mathrm{def}}}{{=}}\mathbb{E}\big{[}z_{N}^{2}(t,x)\big{]}=\sum_{|n|\leq N}\bigg{(}\cos^{2}(t\langle n\rangle)|a_{n}|^{2}+\frac{\sin^{2}(t\langle n\rangle)}{\langle n\rangle^{2}}|b_{n}|^{2}\bigg{)}\] which is divergent in general as \(N\to\infty\) since \((u_{0},u_{1})\notin\mathcal{H}^{0}(\mathbb{T}^{d})\). Note that \(\alpha_{N}\) depends on time in general. For example, if we take \(u_{1}=\langle\nabla\rangle u_{0}\), then \[\alpha_{N}=\sum_{|n|\leq N}|a_{n}|^{2}\longrightarrow\infty,\] as \(N\to\infty\). As in (1.9), given \(j\in\mathbb{N}\), we define the truncated Wick power: \[:\!z_{N}^{j}(t,x)\!:\,=H_{j}(z_{N}(t,x);\alpha_{N}(t)). \tag{1.13}\] The following result shows that for general \((u_{0},u_{1})\in\mathcal{H}^{s}(\mathbb{T}^{d})\setminus\mathcal{H}^{0}(\mathbb{T}^{d})\), the truncated Wick powers \(:\!z_{N}^{j}\!:\) do not converge even as distributions and that we need to impose additional Fourier-Lebesgue regularity. Given \(s\in\mathbb{R}\) and \(1\leq p\leq\infty\), we define the Fourier-Lebesgue space \(\mathcal{F}L^{s,p}(\mathbb{T}^{d})\) via the norm: \[\|f\|_{\mathcal{F}L^{s,p}}=\|\langle n\rangle^{s}\widehat{f}(n)\|_{\ell^{p}(\mathbb{Z}^{d})}\] and we set \(\vec{\mathcal{F}L}^{s,p}(\mathbb{T}^{d})=\mathcal{F}L^{s,p}(\mathbb{T}^{d})\times\mathcal{F}L^{s-1,p}(\mathbb{T}^{d})\). We state our main result. **Theorem 1.1**.: (i) _Given \(s<0\), fix \((u_{0},u_{1})\in\mathcal{H}^{s}(\mathbb{T}^{d})\setminus\mathcal{H}^{0}(\mathbb{T}^{d})\) and let \((u_{0}^{\omega},u_{1}^{\omega})\) be its randomization defined in (1.4). Given an integer \(j\geq 2\), let \(:\!z_{N}^{j}\!:\) be the truncated Wick power defined in (1.13).
Let \(\sigma\leq js\) and \(p>2\). Suppose that one of the following conditions holds:_ * \(\sigma\geq-\frac{d}{2}\) _and_ \(2<p<p_{d,j,\sigma}\stackrel{{\mathrm{def}}}{{=}}\frac{2dj}{dj+2\sigma}\)_, or_ * \(\sigma\leq-\frac{d}{2}\) _and_ \(2<p\leq\frac{2j}{j-1}\)__\((=p_{d,j,-\frac{d}{2}})\) _If, in addition, we have \((u_{0},u_{1})\in\vec{\mathcal{F}L}^{0,p}(\mathbb{T}^{d})\), then, given any finite \(r\geq 1\) and \(T>0\), the sequence \(\{\colon z_{N}^{j}\colon\}_{N\in\mathbb{N}}\) converges to a limit, denoted by \(\colon z^{j}\colon\), in \(C([0,T];W^{\sigma,r}(\mathbb{T}^{d}))\) almost surely, as \(N\to\infty\)._ (ii) _Given an integer \(j\geq 2\), there exists \((u_{0},u_{1})\in\big{(}\bigcap_{s<0}\mathcal{H}^{s}(\mathbb{T}^{d})\big{)} \setminus\vec{\mathcal{F}L}^{0,\frac{2j}{j-1}}(\mathbb{T}^{d})\) such that the following statements hold for any \(\sigma\in\mathbb{R}\), almost surely:_ * _Given any_ \(t\in\mathbb{R}\) _and_ \(T>0\)_, the truncated Wick power_ \(\colon z_{N}^{j}(t)\colon\) _defined in (_1.13_) does not converge to any limit in_ \(C([0,T];H^{\sigma}(\mathbb{T}^{d}))\) _or_ \(C([0,T];\mathcal{D}^{\prime}(\mathbb{T}^{d}))\)_._ * _The sequence_ \(\mathcal{I}(\colon z_{N}^{j}\colon)\) _does not converge to any limit in_ \(C([0,T];H^{\sigma}(\mathbb{T}^{d}))\) _or_ \(C([0,T];\mathcal{D}^{\prime}(\mathbb{T}^{d}))\)_, where_ \(\mathcal{I}\) _denotes the wave Duhamel integral operator defined by_ \[\mathcal{I}(F)(t)=\int_{0}^{t}\frac{\sin((t-t^{\prime})\langle\nabla\rangle)} {\langle\nabla\rangle}F(t^{\prime})dt^{\prime}.\] (1.14) _In particular, the Wick renormalized NLW_ \[\partial_{t}^{2}v+(1-\Delta)v+\mathcal{N}_{k}(v+z)=0, \tag{1.15}\] _where \(\mathcal{N}_{k}\) is as in (1.11), is probabilistically ill-posed with respect to randomization of general functions in the sense that the standard solution theory such as the first order expansion or its variant based on a higher order expansion fails._ When \(j=2\), Theorem 1.1 (ii.a) and (ii.b) hold for the pair \((u_{0},u_{1})=(u_{0},\langle\nabla\rangle u_{0})\) with _any_\(u_{0}\in\mathcal{D}^{\prime}(\mathbb{T}^{d})\setminus\mathcal{F}L^{0,4}( \mathbb{T}^{d})\). See Remark 2.4. Since \(z\in H^{s}(\mathbb{T}^{d})\), we expect that \(\colon z^{j}(t)\colon\) has at best regularity \(js\), and thus the condition \(\sigma\leq js\,(<0)\) in (i.a) is a natural one to impose. Note that \(\mathcal{F}L^{0,p_{d,j,\sigma}}(\mathbb{T}^{d})\) scales like \(H^{\frac{\sigma}{j}}(\mathbb{T}^{d})\) where \(p_{d,j,\sigma}\) is as in Theorem 1.1 (i), and by Holder's inequality, we have \[\|u_{0}\|_{H^{\frac{\sigma}{j}}}\lesssim\|u_{0}\|_{\mathcal{F}L^{0,p}}\] for any \(1\leq p<p_{d,j,\sigma}\) for \(-\frac{d}{2}\leq\sigma\leq s\) (here, \(\sigma,s\) are negative). Theorem 1.1 (ii) is of particular interest since it shows existence of initial data \((u_{0},u_{1})\) which barely misses being in \(\mathcal{H}^{0}(\mathbb{T}^{d})\) but for which the standard probabilistic well-posedness theory fails. In particular, Theorem 1.1 (ii) shows that the claim in [40, Remark 1.2] is not correct. In the context of the cubic NLW on \(\mathbb{T}^{3}\) studied in [40], Theorem 1.1 (i) provides the following probabilistic local well-posedness result. **Corollary 1.2**.: _Let \(d=3\) and \(k=3\). 
Given \(-\frac{1}{6}<s<0\), let \((u_{0},u_{1})\in\big{(}\mathcal{H}^{s}(\mathbb{T}^{d})\cap\vec{\mathcal{F}L}^ {0,p}(\mathbb{T}^{d})\big{)}\setminus\mathcal{H}^{0}(\mathbb{T}^{d})\) for some \(2<p<p_{3,3,3s}\), where \(p_{d,j,\sigma}\) is as in (i.a) in Theorem 1.1 (i), and let \((u_{0}^{\omega},u_{1}^{\omega})\) be its randomization defined in (1.4). Then, almost surely, there exist \(T_{\omega}>0\) and a unique solution \(v\) to the Wick renormalized NLW (1.15) on the time interval \([0,T_{\omega}]\)._ Theorem 1.1 (i) yields that \(\colon z^{j}\colon\) almost surely belongs to \(C([0,T];W^{js,r}(\mathbb{T}^{d}))\) for any finite \(r\geq 1\) and \(T>0\), \(j=1,2,3\). In particular, we have \(\mathcal{I}(\colon z^{3}\colon)\in C([0,T];W^{3s+1,r}(\mathbb{T}^{3}))\) almost surely, where \(3s+1>\frac{1}{2}\) (i.e. subcritical regularity for the \(3\)-\(d\) cubic NLW). Then, Corollary 1.2 follows from a standard contraction argument with the Strichartz estimates and the product estimates in [23, Lemma 3.4]. We omit details. **Remark 1.3**.: (i) There is a gap between the sufficient conditions given in Theorem 1.1 (i) and the necessary condition given in Theorem 1.1 (ii) for convergence of the truncated Wick powers \(\colon z_{N}^{j}\colon\). Moreover, Theorem 1.1 (i) only discusses the construction of the Wick powers \(\colon z^{j}\colon\). In order to better understand probabilistic well-posedness with general randomized initial data of negative regularity, it is important to study multilinear smoothing under the Duhamel integral operator \(\mathcal{I}\) (as in [7, 48, 8]; see also [6]) and more complex stochastic objects which appear in higher order expansions (see [24, 40, 7]) in the case of general random initial data. For conciseness of the presentation, we do not pursue these issues in this paper. (ii) In [15], Colliander and the first author studied probabilistic well-posedness of the cubic NLS on \(\mathbb{T}\) with the random initial data \(\varphi_{0}^{\omega}\) in (1.5). A quick investigation suggests that, in order to consider randomization of a general function of negative regularity as initial data, additional Fourier-Lebesgue is needed. Hence, it is worthwhile to investigate if an analogue of Theorem 1.1 holds for NLS on \(\mathbb{T}^{d}\). (iii) Over the last decade, there have also been intense research activities (see, for example, [29, 1, 2, 50, 33, 3, 19]) on probabilistic well-posedness of nonlinear dispersive PDEs on the Euclidean space \(\mathbb{R}^{d}\), where random initial data is often given by the Wiener randomization of a given function on \(\mathbb{R}^{d}\), analogous to (1.4); see [1, 2] for the Wiener randomization procedure. So far, there is no probabilistic well-posedness result with respect to the Wiener randomization of a general function of negative Sobolev regularity (without an extra assumption such as radiality),7 and thus it is of interest to study if additional Fourier-Lebesgue regularity is needed for probabilistic well-posedness for NLW or NLS on \(\mathbb{R}^{d}\) with respect to the Wiener randomization of a general function of negative Sobolev regularity. (iv) A triviality result in the study of singular random PDEs says that if one starts with regularized random initial data (or regularized stochastic forcing) but without renormalization on a nonlinearity, then as one removes regularization, the corresponding solutions converge to a trivial function or a linear solution. See [40, 34] for such triviality results on random NLW. 
See also [26, 47, 12, 13] for triviality results for other dispersive PDEs (even in the deterministic setting). It is an intriguing question to see if the triviality results in [40, 34] extend to the case of general random initial data in (1.4). Footnote 7: There is a recent paper [27], where the authors claim almost sure local well-posedness of the quintic NLS on \(\mathbb{R}\) with respect to the Wiener randomization of a function below \(L^{2}(\mathbb{R})\), but unfortunately, their proof of this result is not correct. Their argument is based on the probabilistic bilinear Strichartz estimate ([27, Proposition 2.8]), where one of the functions is assumed to be _deterministic_, and it is obviously false to apply such an estimate in a Picard iteration argument, starting with a random linear solution. ### On general stochastic forcing Let us consider SNLW (1.2). For simplicity, we consider the zero initial data and assume that \(\Phi\) is a Fourier multiplier operator; namely, \(\Phi(f)=\phi\ast f\) for some distribution \(\phi\) on \(\mathbb{T}^{d}\). The basic stochastic object in the study of (1.2) is the stochastic convolution \(\Psi\) defined by \[\Psi(t)=\int_{0}^{t}\frac{\sin((t-t^{\prime})\langle\nabla\rangle)}{\langle\nabla\rangle}\Phi\xi(dt^{\prime})=\sum_{n\in\mathbb{Z}^{d}}\frac{\widehat{\phi}_{n}I_{n}(t)}{\langle n\rangle}e^{in\cdot x}, \tag{1.16}\] where \(I_{n}(t)\) is the Wiener integral given by \[I_{n}(t)=\int_{0}^{t}\sin((t-t^{\prime})\langle n\rangle)d\beta_{n}(t^{\prime}). \tag{1.17}\] Here, \(\left\{\beta_{n}\right\}_{n\in\mathbb{Z}^{d}}\) is defined by \(\beta_{n}(t)=\left\langle\xi,\mathbf{1}_{[0,t]}\cdot e^{in\cdot x}\right\rangle_{t,x}\), where \(\xi\) is the space-time white noise and \(\langle\cdot,\cdot\rangle_{t,x}\) denotes the duality pairing on \(\mathbb{R}_{+}\times\mathbb{T}^{d}\). Namely, \(\left\{\beta_{n}\right\}_{n\in\mathbb{Z}^{d}}\) is a family of mutually independent complex-valued Brownian motions conditioned that \(\beta_{-n}=\overline{\beta_{n}}\), \(n\in\mathbb{Z}^{d}\). As a consequence, we see that \(\{I_{n}(t)\}_{n\in\mathbb{Z}^{d}}\) is a family of mutually independent mean-zero complex-valued Gaussian random variables with variance \(\sim t\), conditioned that \(I_{-n}(t)=\overline{I_{n}(t)}\), \(n\in\mathbb{Z}^{d}\). When \(\widehat{\phi}_{n}=\langle n\rangle^{-\alpha}\), the regularity properties of \(\Psi\) in (1.16) are essentially the same as those of the random linear solution \(Z\) defined in (1.7) with the random initial data \((\varphi_{0}^{\omega},\varphi_{1}^{\omega})\) in (1.5), and thus the Wick power \(:\!\Psi^{k}\!:\) can be defined via a limiting procedure, just as in \(:\!Z^{k}\!:\), provided that \(\alpha\) is sufficiently close to \(\frac{d}{2}-1\). Before we move on to the general case, we recall the notion of \(\gamma\)-radonifying operators. We say that a Fourier multiplier operator \(\Phi\) is a \(\gamma\)-radonifying operator from \(L^{2}(\mathbb{T}^{d})\) to \(\mathcal{F}L^{s,p}(\mathbb{T}^{d})\) if \(\phi\in\mathcal{F}L^{s,p}(\mathbb{T}^{d})\), where \(\phi\) is the convolution kernel of \(\Phi\). See [21, (1.11) and Appendix A] for a further discussion and references. Then, a slight modification of the proof of Theorem 1.1 yields the following result. Recall that we assume that \(\Phi\) is a Fourier multiplier operator. **Theorem 1.4**.: (i) _Given \(s<0\), let \(\Phi\) be a Hilbert-Schmidt operator from \(L^{2}(\mathbb{T}^{d})\) into \(H^{s-1}(\mathbb{T}^{d})\) and \(\Psi\) be as in (1.16).
Given an integer \(j\geq 2\), let \(:\!\Psi_{N}^{j}\!:\) be the truncated Wick power defined as in (1.13)\((\)with \(\Psi_{N}=\mathbf{P}_{N}\Psi\) in place of \(z_{N})\). Let \(\sigma\leq js\) and \(p>2\). Suppose that one of the following conditions holds\(:\)_ * \(\sigma\geq-\frac{d}{2}\) _and_ \(2<p<p_{d,j,\sigma}\stackrel{{\rm def}}{{=}}\frac{2dj}{dj+2\sigma}\)_, or_ * \(\sigma\leq-\frac{d}{2}\) _and_ \(2<p\leq\frac{2j}{j-1}\)__\((=p_{d,j,-\frac{d}{2}})\)_._ _If, in addition, \(\Phi\) is a \(\gamma\)-radonifying operator from \(L^{2}(\mathbb{T}^{d})\) to \(\mathcal{F}L^{-1,p}(\mathbb{T}^{d})\), then, given any finite \(r\geq 1\) and \(T>0\), the sequence \(\{:\!\Psi_{N}^{j}\!:\}_{N\in\mathbb{N}}\) converges to a limit, denoted by \(:\!\Psi^{j}\!:\), in \(C([0,T];W^{\sigma,r}(\mathbb{T}^{d}))\) almost surely, as \(N\to\infty\)._ (ii) _Given an integer \(j\geq 2\), there exists a Hilbert-Schmidt operator \(\Phi\) from \(L^{2}(\mathbb{T}^{d})\) into \(H^{s-1}(\mathbb{T}^{d})\) for any \(s<0\), which is not a \(\gamma\)-radonifying operator from \(L^{2}(\mathbb{T}^{d})\) into \(\mathcal{F}L^{-1,\frac{2j}{j-1}}(\mathbb{T}^{d})\) such that the following statements hold for any \(\sigma\in\mathbb{R}\), almost surely\(:\)_ * _Given any_ \(t\in\mathbb{R}\) _and_ \(T>0\)_, the truncated Wick power_ \(:\!\Psi_{N}^{j}(t)\!:\) _does not converge to any limit in_ \(C([0,T];H^{\sigma}(\mathbb{T}^{d}))\) _or_ \(C([0,T];\mathcal{D}^{\prime}(\mathbb{T}^{d}))\)_._ * _The sequence_ \(\mathcal{I}(:\!\Psi_{N}^{j}:)\) _does not converge to any limit in_ \(C([0,T];H^{\sigma}(\mathbb{T}^{d}))\) _or_ \(C([0,T];\mathcal{D}^{\prime}(\mathbb{T}^{d}))\)_, where_ \(\mathcal{I}\) _is as in (_1.14_)._ _In particular, the Wick renormalized SNLW_ \[\partial_{t}^{2}v+(1-\Delta)v+\mathcal{N}_{k}(v+\Psi)=0,\] _where \(\mathcal{N}_{k}\) is as in (1.11), is ill-posed in the sense that the standard solution theory such as the first order expansion or its variant based on a higher order expansion fails._ By noting that \(\widehat{\phi}_{n}\) in (1.16) essentially plays a role of \(b_{n}\) in (1.4), Theorem 1.4 follows from a straightforward modification of the proof of Theorem 1.1 and thus we omit details. See [21] for an example, where local well-posedness of the stochastic cubic NLS on \(\mathbb{T}\) was established for singular noises by imposing an appropriate Fourier-Lebesgue \(\gamma\)-radonifying regularity. **Remark 1.5**.: (i) When \(j=2\), Theorem 1.4 (ii.a) and (ii.b) hold for _any_ Fourier multiplier operator \(\Phi\) on \(L^{2}(\mathbb{T}^{d})\) which is not \(\gamma\)-radonifying from \(L^{2}(\mathbb{T}^{d})\) into \(\mathcal{F}L^{-1,4}(\mathbb{T}^{d})\). (ii) Consider the following stochastic nonlinear heat equation (SNLH): \[\partial_{t}u+(1-\Delta)u+u^{k}=\Phi\xi. \tag{1.18}\] Let \(\Psi_{\rm heat}\) be the associated stochastic convolution: \[\Psi_{\rm heat}(t)=\int_{0}^{t}e^{(t-t^{\prime})(\Delta-1)}\Phi\xi(dt^{\prime} )=\sum_{n\in\mathbb{Z}^{d}}\widehat{\phi}_{n}J_{n}(t)e^{in\cdot x}. \tag{1.19}\] Here, \(J_{n}(t)\) is the Wiener integral given by \[J_{n}(t)=\int_{0}^{t}e^{-(t-t^{\prime})\langle n\rangle^{2}}d\beta_{n}(t^{\prime }),\] where \(\beta_{n}\) is as in (1.17). 
It is easy to see that \(\{J_{n}(t)\}_{n\in\mathbb{Z}^{d}}\) is a family of mutually independent mean-zero complex-valued Gaussian random variables with variance \(\sim\langle n\rangle^{-1}\), conditioned that \(J_{-n}(t)=\overline{J_{n}(t)}\), \(n\in\mathbb{Z}^{d}\), and hence that an analogue of Theorem 1.4 also holds for \(\Psi_{\text{heat}}\) in (1.19) and SNLH (1.18). When \(j=2\), Part (i) of this remark also holds in this case. ## 2. Proof of Theorem 1.1 ### Construction of Wick powers In this subsection, we present the proof of Theorem 1.1 (i). We first recall the following orthogonality relation for the Hermite polynomials ([30, Lemma 1.1.1]). **Lemma 2.1**.: _Let \(f\) and \(g\) be mean-zero jointly Gaussian random variables with variances \(\sigma_{f}\) and \(\sigma_{g}\). Then, we have_ \[\mathbb{E}\big{[}H_{k}(f;\sigma_{f})H_{m}(g;\sigma_{g})\big{]}=\delta_{km}k! \big{\{}\mathbb{E}[fg]\big{\}}^{k}.\] Let \((u_{0},u_{1})\notin\mathcal{H}^{0}(\mathbb{T}^{d})\) be as in (1.3). Given \(n\in\mathbb{Z}^{d}\) and \(t\in\mathbb{R}\), define \(\gamma_{n}(t)\) by \[\gamma_{n}(t)=\cos^{2}(t\langle n\rangle)|a_{n}|^{2}+\frac{\sin^{2}(t\langle n \rangle)}{\langle n\rangle^{2}}|b_{n}|^{2}. \tag{2.1}\] Then, from Lemma 2.1 with (1.12), we have \[\begin{split}\mathbb{E}\Big{[}|\mathcal{F}_{x}(\colon z_{N}^{j}( t)\colon)(n)|^{2}\Big{]}&=\int_{\mathbb{T}_{x}^{d}\times\mathbb{T}_{y}^{d}} \mathbb{E}\Big{[}:z_{N}^{j}(t,x)\colon\overline{z_{N}^{j}(t,y)\colon}\Big{]}e ^{-in\cdot(x-y)}dxdy\\ &=j!\int_{\mathbb{T}_{x}^{d}\times\mathbb{T}_{y}^{d}}\bigg{(}\prod _{\ell=1}^{j}\sum_{\begin{subarray}{c}n_{\ell}\in\mathbb{Z}^{d}\\ |n_{\ell}|\leq N\end{subarray}}\gamma_{n_{\ell}}(t)e^{in_{\ell}\cdot(x-y)} \bigg{)}e^{-in\cdot(x-y)}dxdy\\ &=j!\sum_{\begin{subarray}{c}n_{\ell}\in\mathbb{Z}^{d}\\ n=n_{1}+\cdots+n_{j}\\ |n_{\ell}|\leq N\end{subarray}}\prod_{\ell=1}^{j}\gamma_{n_{\ell}}(t).\end{split} \tag{2.2}\] Thus, from (2.2) and (2.1), we have, for any \(\sigma\in\mathbb{R}\), \[\begin{split}\mathbb{E}\Big{[}\|:& z_{N}^{j}(t) \colon\|_{H^{\sigma}}^{2}\Big{]}=\sum_{n\in\mathbb{Z}^{d}}\langle n\rangle^{2 \sigma}\mathbb{E}\Big{[}|\mathcal{F}_{x}(\colon z_{N}^{j}(t)\colon)(n)|^{2} \Big{]}\\ &=j!\sum_{\begin{subarray}{c}|n_{\ell}|\leq N\\ \ell=1,\ldots,j\end{subarray}}\langle n_{1}+\cdots+n_{j}\rangle^{2\sigma}\prod _{\ell=1}^{j}\bigg{(}\cos^{2}(t\langle n_{\ell}\rangle)|a_{n_{\ell}}|^{2}+ \frac{\sin^{2}(t\langle n_{\ell}\rangle)}{\langle n_{\ell}\rangle^{2}}|b_{n_{ \ell}}|^{2}\bigg{)}.\end{split} \tag{2.3}\] When \(\sigma\geq 0\), we have \(\langle n_{1}+\cdots+n_{j}\rangle^{2\sigma}\lesssim\prod_{\ell=1}^{j}\langle n _{\ell}\rangle^{2\sigma}\). However, when \(\sigma<0\), such an inequality is false, which allows us to show that the right-hand side is divergent for a suitable choice of \((u_{0},u_{1})\). Before presenting the proof of Theorem 1.1 (i), let us first consider the random initial data \((\varphi_{0}^{\omega},\varphi_{1}^{\omega})\) in (1.5). In the construction of the Wick powers of the truncated random linear solution \(Z_{N}=\mathbf{P}_{N}Z\), where \(Z\) is as in (1.7), the right-hand side of (2.3) (dropping \(j!\)) is given by the following iterated discrete convolutions: \[\sum_{\begin{subarray}{c}|n_{\ell}|\leq N\\ \ell=1,\ldots,j\end{subarray}}\langle n_{1}+\cdots+n_{j}\rangle^{2\sigma}\prod_ {\ell=1}^{j}\frac{1}{\langle n_{\ell}\rangle^{2(1+\alpha)}}. 
\tag{2.4}\] By iteratively carrying out summations (via Lemma 3.4 in [35]), we see that (2.4) is uniformly bounded in \(N\in\mathbb{N}\) for \(\sigma<js(d,\alpha)\leq 0\), where \(s(d,\alpha)\) is as in (1.6), provided that \(s(d,\alpha)\) is sufficiently close to \(0\). By viewing \((\varphi_{0}^{\omega},\varphi_{1}^{\omega})\) in (1.5) as the randomization of a pair \((\varphi_{0},\varphi_{1})\) whose Fourier coefficients are given by \(\langle n\rangle^{-1-\alpha}\) and \(\langle n\rangle^{-\alpha}\), respectively, we indeed used the Fourier-Lebesgue regularity of \((\varphi_{0},\varphi_{1})\) in bounding (2.4). **Remark 2.2**.: When \(2\sigma<-d\), we can bound (2.4) by \[\sup_{n\in\mathbb{Z}^{d}}\sum_{n=n_{1}+\cdots+n_{j}}\prod_{\ell=1}^{j}\frac{1 }{\langle n_{\ell}\rangle^{2(1+\alpha)}}, \tag{2.5}\] which yields a necessary condition \(2j(1+\alpha)>(j-1)d\) for summability of (2.5). For example, when \(d=3\), \(j=3\), and \(\alpha=0\), this condition is violated which is consistent with the non-existence of the cubic Wick power of the Gaussian free field. Let us go back to the case of general randomized initial data (1.4) and present the proof of Theorem 1.1 (i). We first consider the case \(-\frac{d}{2}\leq\sigma\leq js\). Given small \(\varepsilon_{0}>0\), set finite \(q\geq 1\) by \[\frac{1}{q}=-\frac{2\sigma}{d}-\varepsilon_{0}\quad\text{such that}\quad 2 \sigma q<-d. \tag{2.6}\] Note that we used the condition \(-\frac{d}{2}\leq\sigma<0\) to guarantee that \(q\) in (2.6) satisfies \(q\geq 1\). Then, from (2.3) and Holder's inequality with (2.6), we have \[\mathbb{E}\Big{[}\|:\!z_{N}^{j}(t)\!:\|_{H^{\sigma}}^{2}\Big{]}\lesssim\bigg{\|} \sum_{n=n_{1}+\cdots+n_{j}}\prod_{\ell=1}^{j}\left(|a_{n_{\ell}}|^{2}+\frac{| b_{n_{\ell}}|^{2}}{\langle n_{\ell}\rangle^{2}}|\right)\bigg{\|}_{\ell_{n}^{q}}. \tag{2.7}\] In the following, we iteratively apply Young's inequality. Let \(p_{0}=q^{\prime}\) and \[\frac{1}{p_{\ell}}+1=\frac{1}{p/2}+\frac{1}{p_{\ell+1}},\quad\ell=0,1,\ldots,j-2 \tag{2.8}\] with \(p_{j-1}=\frac{p}{2}>1\). Then, from (2.8) and (2.6), we have \[\frac{1}{p}=\frac{1}{2}-\frac{1}{2jq}=\frac{dj+2\sigma}{2dj}+\varepsilon_{1}, \tag{2.9}\] where \(\varepsilon_{1}=\frac{1}{2j}\varepsilon_{0}\). Let \(c_{n}=|a_{n}|+\frac{|b_{n}|}{\langle n\rangle}\) such that \(\|c_{n}\|_{\ell^{p}_{n}}\sim\|(u_{0},u_{1})\|_{\mathcal{F}\!\!\mathcal{L}^{0,p}}\). Then, by iteratively applying Young's inequality to (2.7), we obtain \[\begin{split}\mathbb{E}\Big{[}\|:z_{N}^{j}(t)\colon\|_{H^{\sigma} }^{2}\Big{]}&\lesssim\|(u_{0},u_{1})\|_{\mathcal{F}\!\! \mathcal{L}^{0,p}}^{2}\bigg{\|}\sum_{m_{j-1}=n_{1}+\cdots+n_{j-1}}\prod_{\ell= 1}^{j-1}\Big{(}|a_{n_{\ell}}|^{2}+\frac{|b_{n_{\ell}}|^{2}}{\langle n_{\ell} \rangle^{2}}|\Big{)}\bigg{\|}_{\ell^{p_{1}}_{m_{j-1}}}\\ &\lesssim\|(u_{0},u_{1})\|_{\mathcal{F}\!\!\mathcal{L}^{0,p}}^{4} \bigg{\|}\sum_{m_{j-2}=n_{1}+\cdots+n_{j-2}}\prod_{\ell=1}^{j-2}\Big{(}|a_{n_{ \ell}}|^{2}+\frac{|b_{n_{\ell}}|^{2}}{\langle n_{\ell}\rangle^{2}}|\Big{)} \bigg{\|}_{\ell^{p_{2}}_{m_{j-2}}}\\ &\lesssim\cdots\lesssim\|(u_{0},u_{1})\|_{\mathcal{F}\!\! \mathcal{L}^{0,p}}^{2(j-1)}\bigg{\|}|a_{n_{1}}|^{2}+\frac{|b_{n_{1}}|^{2}}{ \langle n_{1}\rangle^{2}}\bigg{\|}_{\ell^{p_{1}}_{n_{1}}}\\ &\lesssim\|(u_{0},u_{1})\|_{\mathcal{F}\!\!\mathcal{L}^{0,p}}^{2 j}<\infty,\end{split} \tag{2.10}\] uniformly in \(N\in\mathbb{N}\), provided that \((u_{0},u_{1})\in\mathcal{F}\!\!\mathcal{L}^{0,p}(\mathbb{T}^{d})\) for some \(2<p<\frac{2dj}{dj+2\sigma}\). 
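As a quick sanity check on the exponents: when \(j=2\) and \(\sigma=-\frac{d}{2}\), (2.6) gives \(\frac{1}{q}=1-\varepsilon_{0}\) and (2.9) gives \(\frac{1}{p}=\frac{1}{4}+\frac{\varepsilon_{0}}{4}\), so that \(p<4=p_{d,2,-\frac{d}{2}}=\frac{2j}{j-1}\), consistent with conditions (i.a) and (i.b) in Theorem 1.1.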
Next, we consider the case \(\sigma<-\frac{d}{2}\). In this case, from (2.7), we have \[\mathbb{E}\Big{[}\|:z_{N}^{j}(t)\colon\|_{H^{\sigma}}^{2}\Big{]}\lesssim\bigg{\|} \sum_{n=n_{1}+\cdots+n_{j}}\prod_{\ell=1}^{j}\bigg{(}|a_{n_{\ell}}|^{2}+\frac{ |b_{n_{\ell}}|^{2}}{\langle n_{\ell}\rangle^{2}}|\bigg{)}\bigg{\|}_{\ell^{ \infty}_{n}}.\] With \(p_{0}=\infty\), we recursively define \(p_{\ell}\) as in (2.8). Then, from (2.9) with \(q=1\), we have \(p=\frac{2j}{j-1}\). Then, by iteratively applying Young's inequality as in (2.10), we obtain \[\mathbb{E}\Big{[}\|:z_{N}^{j}(t)\colon\|_{H^{\sigma}}^{2}\Big{]}\lesssim\|(u_ {0},u_{1})\|_{\mathcal{F}\!\!\mathcal{L}^{0,\frac{2j}{j-1}}}^{2j}<\infty, \tag{2.11}\] uniformly in \(N\in\mathbb{N}\). Once we have the uniform bound (2.10) or (2.11), almost sure convergence of \(:z_{N}^{j}:\) in \(C([0,T];W^{\sigma,r}(\mathbb{T}^{d}))\) for any finite \(r\geq 1\) claimed in Theorem 1.1 (i) follows from a standard argument, involving the Wiener chaos estimate (see [52, Proposition 2.4]) and Kolmogorov's continuity criterion-type argument, and hence we omit details. See, for example, [23, 24, 40, 37]. ### Counterexample In this subsection, we present the proof of Theorem 1.1 (ii). We define \(u_{0}\) on \(\mathbb{T}^{d}\) whose Fourier coefficient at the frequency \(n=(n^{(1)},\ldots,n^{(d)})\in\mathbb{Z}^{d}\) is given by \[a_{n}=\widetilde{a}_{n^{(1)}}\ldots\widetilde{a}_{n^{(d)}}, \tag{2.12}\] where \(\widetilde{a}_{n^{(i)}}\), \(i=1,\ldots,d\), is defined by \[\widetilde{a}_{n^{(i)}}=\begin{cases}m^{-\frac{j-1}{2j}},&\text{if there is $m\in\mathbb{N}$ such that $|n^{(i)}|=2^{m}$,}\\ 0,&\text{otherwise.}\end{cases} \tag{2.13}\] We set \(u_{1}=\langle\nabla\rangle u_{0}\). Then, we have \[\begin{split}\|(u_{0},u_{1})\|_{\mathcal{H}^{s}}^{2}& \sim\sum_{n=(n^{(1)},\ldots,n^{(d)})}\langle n\rangle^{2s}| \widetilde{a}_{n^{(1)}}\ldots\widetilde{a}_{n^{(d)}}|^{2}\lesssim\prod_{i=1}^{ d}\sum_{n^{(i)}=1}^{\infty}\langle n^{(i)}\rangle^{\frac{2s}{d}}| \widetilde{a}_{n^{(i)}}|^{2}\\ &\lesssim\Big{(}\sum_{m=1}^{\infty}2^{\frac{2s}{d}m}m^{-\frac{j-1} {j}}\Big{)}^{d}<\infty\end{split}\] for any \(s<0\). Moreover, we have \[\|(u_{0},u_{1})\|_{\mathcal{F}\!\!L^{0},\frac{2j}{j-1}}\gtrsim\prod_{i=1}^{d}\sum _{n^{(i)}=1}^{\infty}|\widetilde{a}_{n^{(i)}}|^{\frac{2j}{j-1}}=\Big{(}\sum_{m=1 }^{\infty}m^{-1}\Big{)}^{d}=\infty.\] Hence, we conclude that \((u_{0},u_{1})\in\big{(}\bigcap_{s<0}\mathcal{H}^{s}(\mathbb{T}^{d})\big{)} \setminus\mathcal{F}\!\!L^{0,\frac{2j}{j-1}}(\mathbb{T}^{d})\). Let \(t\in\mathbb{R}\). From (2.3), we have \[\begin{split}\mathbb{E}\Big{[}\|:& z_{N}^{j}(t) \colon\|_{H^{\sigma}}^{2}\Big{]}\sim\sum_{\begin{subarray}{c}|n_{\ell}|\leq N \\ \ell=1,\ldots,j\end{subarray}}\langle n_{1}+\cdots+n_{j}\rangle^{2\sigma}\prod_ {\ell=1}^{j}|a_{n_{\ell}}|^{2}\\ &\geq\sum_{|n_{1}|,\ldots,|n_{j-1}|\leq N}\mathbf{1}_{|n_{1}+ \cdots+n_{j-1}|\leq N}\Big{(}\prod_{\ell=1}^{j-1}|a_{n_{\ell}}|^{2}\Big{)}|a_ {n_{1}+\cdots+n_{j-1}}|^{2},\end{split} \tag{2.14}\] where the second step follows from considering the contribution only for \(n_{1}+\cdots+n_{j}=0\). For \(i=1,\ldots,d\) and \(\ell=1,\ldots,j-1\), set \[\mathfrak{N}_{\ell}^{(i)}:=n_{1}^{(i)}+\cdots+n_{\ell}^{(i)}. 
\tag{2.15}\] Noting that \[\bigg{\{}n=(n^{(1)},\ldots,n^{(d)})\in\mathbb{Z}^{d}:\max_{i=1,\ldots,d}|n^{(i)}|\leq\frac{N}{\sqrt{d}}\bigg{\}}\subset\{n\in\mathbb{Z}^{d}:|n|\leq N\},\] it follows from (2.12) and (2.15) that \[\text{RHS of (2.14)}\geq\prod_{i=1}^{d}\bigg{(}\sum_{0\leq n_{1}^{(i)}\leq\cdots\leq n_{j-1}^{(i)}\leq\frac{N}{\sqrt{d}}}\Big{(}\prod_{\ell=1}^{j-1}|\widetilde{a}_{n_{\ell}^{(i)}}|^{2}\Big{)}|\widetilde{a}_{\mathfrak{N}_{j-1}^{(i)}}|^{2}\bigg{)}. \tag{2.16}\] When \(j=2\) (i.e. \(\frac{2j}{j-1}=4\)), it follows from (2.13) and (2.16) that \[\text{RHS of (2.14)}\gtrsim\prod_{i=1}^{d}\Big{(}\sum_{n_{1}^{(i)}=0}^{\frac{N}{\sqrt{d}}}|\widetilde{a}_{n_{1}^{(i)}}|^{4}\Big{)}\gtrsim\Big{(}\sum_{m=2}^{[\log_{2}\frac{N}{\sqrt{d}}]}m^{-1}\Big{)}^{d}\sim(\log\log N)^{d}, \tag{2.17}\] where \([x]\) denotes the integer part of \(x\in\mathbb{R}\). Now, we consider the case \(j\geq 3\). We first state a lemma whose proof is presented at the end of this section. **Lemma 2.3**.: _Let \(j\geq 3\). Then, there exist \(N(j)\in\mathbb{N}\) and small \(c_{j}>0\) such that_ \[\sum_{\begin{subarray}{c}n_{2}^{(i)},\ldots,n_{j-1}^{(i)}\in\mathbb{N}\\ 4\leq n_{2}^{(i)}\leq\cdots\leq n_{j-1}^{(i)}\leq\frac{N}{\sqrt{d}}\end{subarray}}\mathbf{1}_{4\leq n_{1}^{(i)}\leq c_{j}\frac{N}{\sqrt{d}}}\cdot\Big{(}\prod_{\ell=2}^{j-1}|\widetilde{a}_{n_{\ell}^{(i)}}|^{2}\Big{)}|\widetilde{a}_{\mathfrak{N}_{j-1}^{(i)}}|^{2}\gtrsim(\log_{2}n_{1}^{(i)})^{-1+\frac{j-1}{j}}, \tag{2.18}\] _uniformly in \(N\geq N(j)\) and \(i=1,\ldots,d\)._ By Lemma 2.3, we obtain \[\begin{split}\text{RHS of (2.16)}&\gtrsim\prod_{i=1}^{d}\bigg{(}\sum_{n_{1}^{(i)}=4}^{c_{j}\frac{N}{\sqrt{d}}}|\widetilde{a}_{n_{1}^{(i)}}|^{2}(\log_{2}n_{1}^{(i)})^{-1+\frac{j-1}{j}}\bigg{)}\\ &\sim\bigg{(}\sum_{m=2}^{\lceil\log_{2}c_{j}\frac{N}{\sqrt{d}}\rceil}m^{-1}\bigg{)}^{d}\sim(\log\log N)^{d},\end{split} \tag{2.19}\] where the last step holds for any sufficiently large \(N\gg 1\) (depending on \(j\)). Therefore, from (2.14), (2.16), (2.17), and (2.19), we conclude that \[\mathbb{E}\Big{[}\|:\!z_{N}^{j}(t)\!:\|_{H^{\sigma}}^{2}\Big{]}\geq\mathbb{E}\Big{[}|\mathcal{F}_{x}(:\!z_{N}^{j}(t)\!:\!)(0)|^{2}\Big{]}\gtrsim(\log\log N)^{d}\longrightarrow\infty,\] as \(N\to\infty\). At this point, we can repeat the argument in [32, Subsection 4.4] with Kolmogorov's three series theorem and the zero-one law and conclude Theorem 1.1 (ii.a). We omit details. Since we only estimated the contribution from the zeroth frequency, we also conclude non-convergence in \(C([0,T];\mathcal{D}^{\prime}(\mathbb{T}^{d}))\). Noting that the wave Duhamel integral operator does not give any smoothing at the zeroth frequency, we also obtain Theorem 1.1 (ii.b). We conclude this paper by presenting the proof of Lemma 2.3. Proof of Lemma 2.3.: We restrict the sum on the left-hand side of (2.18) to \[4\leq n_{\ell}^{(i)}\leq 2^{-2^{\frac{j}{j-2}}}\frac{N}{j\sqrt{d}}=:M_{j-2} \tag{2.20}\] for \(\ell=1,\ldots,j-2\) (but not for \(\ell=j-1\)). With (2.15), this in particular implies \[2^{2^{\frac{j}{j-2}}}(\mathfrak{N}_{j-2}^{(i)}+n_{j-2}^{(i)})\leq\frac{N}{\sqrt{d}}\leq\mathfrak{N}_{j-2}^{(i)}+\frac{N}{\sqrt{d}}. \tag{2.21}\] Noting that \(ab\geq a+b\) for \(a,b\geq 2\), it follows from (2.21) that \[\big{(}\log_{2}(\mathfrak{N}_{j-2}^{(i)}+n_{j-2}^{(i)})\big{)}^{-1+\frac{2}{j}}\geq 2\bigg{(}\log_{2}\Big{(}\mathfrak{N}_{j-2}^{(i)}+\frac{N}{\sqrt{d}}\Big{)}\bigg{)}^{-1+\frac{2}{j}}.
\tag{2.22}\] Hence, from (2.13) (in particular, \(\widetilde{a}_{n}\) restricted to \(n=2^{m}\), \(m\in\mathbb{N}\), is decreasing) and (2.22), we have \[\sum_{n_{j-1}^{(i)}=n_{j-2}^{(i)}}^{\frac{N}{\sqrt{d}}}|\widetilde{a}_{n_{j-1}^{(i)}}|^{2}|\widetilde{a}_{\mathfrak{N}_{j-2}^{(i)}+n_{j-1}^{(i)}}|^{2}\geq\sum_{n_{j-1}^{(i)}=n_{j-2}^{(i)}}^{\frac{N}{\sqrt{d}}}|\widetilde{a}_{\mathfrak{N}_{j-2}^{(i)}+n_{j-1}^{(i)}}|^{4}\] \[\geq\sum_{m=\lceil\log_{2}(\mathfrak{N}_{j-2}^{(i)}+n_{j-2}^{(i)})\rceil+1}^{[\log_{2}(\mathfrak{N}_{j-2}^{(i)}+\frac{N}{\sqrt{d}})]}m^{-2+\frac{2}{j}}\gtrsim(\log_{2}n_{j-2}^{(i)})^{-1+\frac{2}{j}}.\] When \(j=3\), we stop the calculation here. When \(j\geq 4\), we further impose \[4\leq n_{\ell}^{(i)}\leq 2^{-2^{\frac{j}{j-3}}}M_{j-2}=:M_{j-3} \tag{2.23}\] for \(\ell=1,\ldots,j-3\) (but not for \(\ell=j-2\)), where \(M_{j-2}\) is as in (2.20). Then, we have \[\sum_{n_{j-2}^{(i)}=n_{j-3}^{(i)}}^{M_{j-2}}|\widetilde{a}_{n_{j-2}^{(i)}}|^{2}(\log_{2}n_{j-2}^{(i)})^{-1+\frac{2}{j}}\gtrsim\sum_{m=[\log_{2}n_{j-3}^{(i)}]+1}^{\left[\log_{2}M_{j-2}\right]}m^{-\frac{j-1}{j}}m^{-1+\frac{2}{j}}\gtrsim(\log_{2}n_{j-3}^{(i)})^{-1+\frac{3}{j}}, \tag{2.24}\] where the last step follows from (2.23) (which implies \(\big{(}\log_{2}n_{j-3}^{(i)}\big{)}^{-1+\frac{3}{j}}\geq 2\big{(}\log_{2}M_{j-2}\big{)}^{-1+\frac{3}{j}}\)). In general, suppose \(j\geq k\geq 5\) and assume that we have repeated the procedure above \(k-3\) times. In this case, we further impose a condition \[4\leq n_{\ell}^{(i)}\leq 2^{-2^{\frac{j}{j-(k-1)}}}M_{j-(k-2)}=:M_{j-(k-1)} \tag{2.25}\] for \(\ell=1,\ldots,j-(k-1)\) (but not for \(\ell=j-(k-2)\)). The condition (2.25) guarantees \(\big{(}\log_{2}n_{j-(k-1)}^{(i)}\big{)}^{-1+\frac{k-1}{j}}\geq 2\big{(}\log_{2}M_{j-(k-2)}\big{)}^{-1+\frac{k-1}{j}}\), which allows us to repeat the computation as in (2.24) for the \((k-2)\)nd step. By iterating this procedure, we obtain (2.18) with \(c_{j}=\prod_{k=3}^{j}2^{-2^{\frac{j}{j-(k-1)}}}\), which follows from (2.20), (2.23), and (2.25). **Remark 2.4**.: Let \(j=2\) such that \(\frac{2j}{j-1}=4\). Given any \(u_{0}\in\mathcal{D}^{\prime}(\mathbb{T}^{d})\setminus L^{2}(\mathbb{T}^{d})\), by setting \(u_{1}=\langle\nabla\rangle u_{0}\), it follows from (2.3) and considering the contribution only from \(n_{1}+n_{2}=0\) that \[\mathbb{E}\Big{[}\|:\!z_{N}^{2}(t)\!:\!\|_{H^{\sigma}}^{2}\Big{]}\gtrsim\sum_{|n_{1}|\leq N}|a_{n_{1}}|^{4}\longrightarrow\infty,\] as \(N\to\infty\), for any \(\sigma\in\mathbb{R}\) unless \(u_{0}\in\mathcal{F}L^{0,4}(\mathbb{T}^{d})\). ### Acknowledgements T.O. was supported by the European Research Council (grant no. 864138 "SingStochDispDyn"). M.O. was supported by JSPS KAKENHI Grant number JP23K03182. O.P. was supported by the EPSRC New Investigator Award (grant no. EP/S033157/1). N.T. was partially supported by the ANR project Smooth ANR-22-CE40-0017.
2309.10175
One ACT Play: Single Demonstration Behavior Cloning with Action Chunking Transformers
Learning from human demonstrations (behavior cloning) is a cornerstone of robot learning. However, most behavior cloning algorithms require a large number of demonstrations to learn a task, especially for general tasks that have a large variety of initial conditions. Humans, however, can learn to complete tasks, even complex ones, after only seeing one or two demonstrations. Our work seeks to emulate this ability, using behavior cloning to learn a task given only a single human demonstration. We achieve this goal by using linear transforms to augment the single demonstration, generating a set of trajectories for a wide range of initial conditions. With these demonstrations, we are able to train a behavior cloning agent to successfully complete three block manipulation tasks. Additionally, we developed a novel addition to the temporal ensembling method used by action chunking agents during inference. By incorporating the standard deviation of the action predictions into the ensembling method, our approach is more robust to unforeseen changes in the environment, resulting in significant performance improvements.
Abraham George, Amir Barati Farimani
2023-09-18T21:50:26Z
http://arxiv.org/abs/2309.10175v1
# One ACT Play: Single Demonstration Behavior Cloning with Action Chunking Transformers ###### Abstract Learning from human demonstrations (behavior cloning) is a cornerstone of robot learning. However, most behavior cloning algorithms require a large number of demonstrations to learn a task, especially for general tasks that have a large variety of initial conditions. Humans, however, can learn to complete tasks, even complex ones, after only seeing one or two demonstrations. Our work seeks to emulate this ability, using behavior cloning to learn a task given only a single human demonstration. We achieve this goal by using linear transforms to augment the single demonstration, generating a set of trajectories for a wide range of initial conditions. With these demonstrations, we are able to train a behavior cloning agent to successfully complete three block manipulation tasks. Additionally, we developed a novel addition to the temporal ensembling method used by action chunking agents during inference. By incorporating the standard deviation of the action predictions into the ensembling method, our approach is more robust to unforesen changes in the environment, resulting in significant performance improvements. ## I Introduction Behavior cloning, or the process of teaching an agent to mimic the actions of a human in order to complete a task, is a key aspect of robotic learning. It has been used to teach agents to complete tasks ranging from driving cars [1, 2, 3] to playing video games [4], to robotic locomotion [5] and complex manipulation [6, 7, 8]. However, behavior cloning has many challenges, including compounding errors that lead to unpredictable out-of-distribution performance [9, 10], and sample inefficiency [11]. Although much progress has been made recently in addressing these issues, mitigating the problems of compounding errors and unpredictable performance using strategies such as action chunking [12] and increasing sample efficiency through both data set augmentation and improved network architecture [13, 14], the limitations, especially for sample efficiency, persist. In particular, the issue of poor sample efficiency means that behavior cloning agents require many demonstrations, often in the hundreds, to learn tasks that a human could master with only a single demonstration [15]. Recent work in the related field of reinforcement learning (RL) with human demonstrations has addressed the issue of sample efficiency by augmenting a single demonstration using simple linear transforms, then autonomously replaying the augmented demonstrations and observing the resulting states [16]. In this work, we explore applying a similar augmentation method in a behavior cloning setting to develop a method to learn a task given only a single human demonstration. An outline of our method can be found in Figure 1. Because the agent's training data originates from a single demonstration, the agent is only able to learn on a small portion of the task's state space. As such, the ability to generalize to unseen states and recover from poor predictions is vital for our behavior cloning algorithm. Therefore, we chose to base our method on Action Chunking with Transformers (ACT), whose use of a convolutional autoencoder (CVAE) increases generalizability, and whose action chunking and ensembling methods make the agent resistant to occasional poor actions [17]. 
However, we found that the original action ensembling method (a weighted average of actions predicted for the current time step, each predicted at a different prior time step) was not suited for the block manipulation tasks we evaluated our method on. If the task does not go as the agent expected, some of the previous action predictions may become erroneous, corrupting the cumulative action. To address this issue, we introduce a heuristic, based on the standard deviation of the predicted actions, to estimate if the action predictions are in agreement. If they are not, we alter the ensembling method to ignore older action predictions, and if the disagreement is very large, we temporarily suspend Fig. 1: Diagram outlining our method. A single demonstration trajectory is augmented then replayed, and the resulting state-action observations are used to train our behavior cloning agent (ACT). At inference time, the environment state is passed to the agent, which returns an action chunk, \(a_{t}\). The action chunk is combined with prior action chunks via temporal ensembling, and the resulting action is executed. action ensembling, instead replaying a single action chunk. Finally, we evaluated both our single demonstration behavior cloning method and novel ensembling method on three block manipulation tasks: moving a block across a table to a goal location (push), moving a block to a goal location above the table (pick-and-place), and stacking two blocks, in the correct order, at a specified location on a table (stack). ## II Related Works ### _Behavior Cloning_ Behavior cloning uses demonstrations to determine an agent's actions by having the agent replicate expert examples [1]. This task can be accomplished through machine learning using a supervised learning approach such as classification [18]. These forms of behavior cloning have proven effective on complex tasks such as biped locomotion [19], but they require large data sets and do not function well in environments outside of the range of examples they trained on [20]. Data set Aggregation (DAgger) addresses some of theses issues by augmenting the learned policy with expert demonstrations collected throughout the training process, interconnecting the learned and expert policies [9]. However, DAgger can be difficult to implement because it requires the use of expert examples collected throughout the duration of training. ### _Single Demonstration Reinforcement Learning_ Multiple methods have been developed for learning from a single demonstration in the field of reinforcement learning, where the addition of a single demonstration is primarily used to overcome the exploration problem [21, 22, 23]. One strategy is to use a single demonstration through curriculum learning, which trains the agent on progressively larger subsections of the desired task [24]. [25] used this method to beat Montezuma's Revenge, an Atari game with a long-horizon for reward, with a single demonstration. By resetting the game to states along the demonstration path, starting near the end of the demonstration and progressively moving further and further back, a curriculum of steadily increasing difficulty was created that enabled PPO [26] to achieve a SOTA score on the game. Similar curriculum methods have been used by [27] to help train robotic tasks. Another approach to training reinforcement learning agents using demonstrations is to directly add the demonstrations to the replay buffer [28, 29]. 
This approach was used by [16], in combination with curriculum learning, to train an RL agent. This work only used a single demonstration, augmented to form 5000 'human-like' demonstrations by linearly scaling the demonstration trajectory, showing that a simple augmentation method could result in significant improvements in performance. Although learning from a human demonstration, the agent often learned a policy significantly different from that shown, illustrating the creativity of reinforcement learning [30]. ### _Action Chunking with Transformers_ Action Chunking with Transformers (ACT) is a behavior cloning algorithm that uses a Conditional Variational Autoencoder (CVAE) to model diverse scenes, combined with a transformer to be able to predict action sequences (chunks) given a multimodal input [17]. By casting these actions as goal states for the manipulator to reach, a temporal aggregation method can be used to combine predicted actions from multiple previous time steps through a weighted average. By combining multiple predictions, this approach mitigates the problems of compounding errors and unpredictable responses to out-of-distribution states. Although erroneous actions may still be chosen by the model, correct actions, predicted during previous time steps, will provide a moderating influence on the final action. Additionally, the transformer network structure [31] allows for a wide range of multi-model inputs, such as text prompts, further improving the robustness of the ACT framework [32]. ## III Methods ### _Human Demonstration Collection_ The single human demonstration used by our behavior cloning method was collected using an Oculus Quest 2 virtual reality headset, running the unity-based teleoperation environment developed by [33]. This is the same approach as was taken by [16]. The VR environment shows the user a first-person view of the task they need to complete along with a virtual Franka-Emika Panda end-effector, which the user controls using the Oculus's hand-held controller. For example, to demonstrate a pick-and-place task, the user is shown the block to move, along with a transparent goal block, highlighting the region where the block is to be placed. The demonstration is done entirely in simulation, using Unity's physics engine, which has slightly different dynamics than the pybullet simulator [34] used for testing, or the hardware system used for validation. The user's actions are recorded while completing the desired task, creating a trajectory file that will be used by the augmentation method to generate additional demonstrations. ### _Demonstration Augmentation_ Behavior cloning algorithms treat the control problem as a supervised learning task, imitating a human's action (or series of actions) for a given state. Therefore, to learn a task the agent must be trained on a wide range of demonstrations from across the task's state space so that the agent can extrapolate the correct action for any given state. However, our approach only involves a single demonstration. As such, we require an augmentation method to turn that single demonstration into a collection of demonstrations that covers a sizeable portion of the task's state space. To accomplish this, we turn to the linear scaling method developed by [16]. However, since our task is behavior cloning, not reinforcement learning, we have to take more care about the quality of the demonstration we collect. 
As such, rather than scaling and shifting each axis, we apply a linear transform to the trajectory consisting of rotation, translation, and uniform scaling, which results in less distortion in the generated trajectory. To generate a new trajectory using our recorded demonstration trajectory, we first generate a random start and goal location. A rotation matrix is then calculated such that the vector from the recorded start location to the recorded goal location, once rotated, will align with the vector from the generated start to the generated goal locations. This constraint leaves a degree of freedom, as the transformed vector is free to rotate about its axis. We use this degree of freedom to stipulate that the z-axis of the transformed vector should be aligned with the world frame's z-axis. This constraint ensures that "up" is preserved in the augmented demos, which is important due to gravity. The two constraints for the rotation matrix are shown below: \[r_{\Delta}=r_{g}-r_{s},\quad g_{\Delta}=g_{g}-g_{s} \tag{1}\] \[\frac{R\,r_{\Delta}}{\left\|R\,r_{\Delta}\right\|}=\frac{g_{\Delta}}{\left\|g _{\Delta}\right\|},\quad\frac{\text{Proj}_{(R\,\hat{z})}(P)}{\left\|\text{Proj} _{(R\,\hat{z})}(P)\right\|}=\frac{\text{Proj}_{(\hat{z})}(P)}{\left\|\text{Proj} _{(\hat{z})}(P)\right\|} \tag{2}\] Where \(r_{g}\) is the recorded goal, \(r_{s}\) is the recorded start, \(g_{g}\) is the generated goal, \(g_{s}\) is the generated start, \(R\) is the rotation matrix, \(P\) is a plane with normal vector \(g_{\Delta}\), \(\hat{z}\) is a vertical unit vector, and \(\text{Proj}_{(a)}(B)\) means the projection of vector \(a\) onto plane \(B\). Next, a scaling factor is calculated so that the distance between the recorded start and recorded end location matches the distance between the generated start and generated end locations. Finally, a translation matrix is calculated to make the rotated, scaled, and translated recorded start location match the generated start location. These transforms, along with the rotation matrix from above, are combined to give a final transform, as shown below: \[s=\frac{||g_{\Delta}||}{||r_{\Delta}||},\quad t=(g_{s}-R\,r_{s}) \tag{3}\] \[T=\begin{bmatrix}s&0&0&t_{x}\\ 0&s&0&t_{y}\\ 0&0&s&t_{z}\\ 0&0&0&1\end{bmatrix}\begin{bmatrix}R&0\\ 0&1\end{bmatrix} \tag{4}\] Where \(T\) is the final linear transform. Once the linear transform for a given environment is calculated, the recorded points from the single demo are transformed, and the resulting trajectory is replayed using a proportional controller. The augmentation method assumes the task can be linearly transformed given a start and goal location. If the task is more complex, it can instead be decomposed into multiple sub-tasks, each of which can be linearly transformed. For example, for block stacking, the trajectory can be split into multiple sub-trajectories (move block one, then move block two, etc.), and each sub-trajectory can then be warped independently. When the agent replays the generated trajectory, state-action information is collected as if the agent were human-controlled. If the augmented demonstration is unsuccessful at completing the task, it is discarded. Because this method disposes of bad demonstrations, the requirement for effectiveness of the augmented trajectories is mainly dependent on the expense of playing back trajectories; if replaying trajectories is relatively inexpensive (such as in simulation) then low accuracy can be mitigated by increasing the volume of simulations. 
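To make Eqs. (1)-(4) concrete, the following is a minimal NumPy sketch (not the authors' code) of the augmentation transform. The helper names, the orthonormal-frame construction used to satisfy the "up"-preserving constraint, and solving the translation so that the transformed recorded start lands exactly on the generated start are our assumptions; only the rotation/uniform-scale/translation decomposition itself is taken from the text.

```python
import numpy as np

def _frame(direction, up=np.array([0.0, 0.0, 1.0])):
    """Right-handed orthonormal frame whose first axis is `direction` and whose
    second axis is the projection of `up` onto the plane perpendicular to it
    (this is what keeps "up" aligned; assumes `direction` is not vertical)."""
    e1 = direction / np.linalg.norm(direction)
    e2 = up - np.dot(up, e1) * e1
    e2 = e2 / np.linalg.norm(e2)
    e3 = np.cross(e1, e2)
    return np.stack([e1, e2, e3], axis=1)  # columns are the frame axes

def augmentation_transform(r_s, r_g, g_s, g_g):
    """4x4 homogeneous transform (rotation + uniform scale + translation) that
    warps the recorded trajectory onto a new start/goal pair."""
    r_delta, g_delta = r_g - r_s, g_g - g_s
    # Rotation: align the recorded start->goal direction with the generated one
    # while preserving the world z-axis as "up" (Eq. 2).
    R = _frame(g_delta) @ _frame(r_delta).T
    # Uniform scale matching the start->goal distances (Eq. 3).
    s = np.linalg.norm(g_delta) / np.linalg.norm(r_delta)
    # Translation chosen so the transformed recorded start coincides with the
    # generated start (our way of pinning down the translation in Eq. 3).
    t = g_s - s * (R @ r_s)
    T = np.eye(4)
    T[:3, :3] = s * R
    T[:3, 3] = t
    return T

# Hypothetical usage: warp every recorded waypoint of one demonstration.
# T = augmentation_transform(rec_start, rec_goal, new_start, new_goal)
# warped = (T @ np.c_[rec_points, np.ones(len(rec_points))].T).T[:, :3]
```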
### _Learning Architecture_ To develop efficient policy architectures for robot manipulation, especially with the limited variety of data produced through single demo augmentation, we must address several key objectives. First, we need a policy that can generalize well to unseen states outside of the training distribution. Additionally, the policy should be robust to a small number of poor predictions, so that a few incorrect choices (potentially caused by out-of-distribution states) do not derail the agent's overall policy. For these reasons, we based our method on Action Chunking with Transformers [17]. Our network structure is very similar to the original ACT network, except for minor alterations due to differences in state representations. We chose to control the location and gripper width of a single arm with a parallel plate gripper, whereas the original work controlled joint angles of a two-arm setup. Similarly to ACT, we chose to use a pixel-level observation space, removing the need for an additional vision system to extract pertinent state information [35]. A diagram of our network structure can be seen in Figure 2. Fig. 2: Diagram of the network structure used for the behavior cloning model, based on the ACT framework [17]. The network is trained as a conditional variational auto-encoder (CVAE) on the action sequence. During training, a style variable \(Z\) is calculated by a transformer encoder from the action sequence and the agent's position. During inference, this encoder is removed and \(Z\) is set to 0. Additionally, we have altered the temporal ensembling method to better account for dynamic and multimodal environmental states. The original ACT algorithm determined its action at inference time by calculating a weighted average of actions (goal end-effector positions) it previously predicted for that time step, with weights of the form \(e^{-kt}\), where \(t\) is the time since the prediction was made. Although this strategy works well, decreasing action noise and limiting the effect of erroneous actions due to out-of-distribution states, the use of a weighted average makes the implicit assumption that the predicted actions consist of a correct action, with symmetric noise added. Under this assumption, using an average action will lead the model to choose the underlying correct action. However, if there are multiple possible approaches, the predicted actions can form a multi-modal distribution, clustered around different "correct" actions. In this situation, a weighted average will produce an in-between action, part of neither approach, and likely a poor option. This issue is exacerbated in non-stationary environments and environments with distinct state changes, where earlier predictions can be quite bad. For example, in block manipulation tasks, the choice of action at a future time step is heavily dependent on whether the agent thinks it will have successfully grasped the block at that time. If this prediction changes during execution (such as if the gripper motion is slower than expected) then the predicted action distribution will be bi-modal (some actions assuming the block is gripped, some actions assuming it is not), causing the average action to be a poor choice for either situation. Our implementation of temporal aggregation addresses the issue of multi-modal action distributions by dynamically adjusting the temperature value, \(k\), for the exponentially decaying weights used in the weighted average; a short sketch of this procedure is given below. 
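As a rough illustration of the modified temporal ensembling (the exponentially decaying weights \(e^{-kt}\) described above, combined with the dynamic \(k\) and the chunk-replay fallback specified in the next paragraphs), here is a hedged NumPy sketch. The function name, the prediction-buffer layout, the weight normalization, and the placeholder cutoff value are assumptions; \(\beta\) and the cutoff are hyperparameters whose values we do not assert.

```python
import numpy as np

def ensemble_action(pred_actions, beta=1.0, k_cutoff=5.0):
    """Combine the predictions made for the current time step by different past
    action chunks. `pred_actions` is an (m, 4) array ordered oldest -> newest,
    columns (x, y, z, gripper_width). Returns (action, replay_chunk_flag)."""
    ages = np.arange(len(pred_actions))[::-1]   # time since each prediction was made
    pos, grip = pred_actions[:, :3], pred_actions[:, 3]

    # Dynamic temperatures: the spread of the predictions decides how strongly
    # older predictions are discounted (L-infinity norm for the position).
    k_p = beta * np.max(np.std(pos, axis=0))
    k_g = beta * np.std(grip)

    if max(k_p, k_g) > k_cutoff:
        # Predictions disagree too much: suspend ensembling and tell the caller
        # to replay the next n steps of the most recent action chunk instead.
        return pred_actions[-1], True

    w_p = np.exp(-k_p * ages); w_p /= w_p.sum()
    w_g = np.exp(-k_g * ages); w_g /= w_g.sum()
    return np.concatenate([w_p @ pos, [w_g @ grip]]), False
```

As in the text, the first few control steps of a run would fall back to the original fixed-\(k\) average until enough predictions have accumulated for the standard deviation to be meaningful.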
If the action distribution is clustered around a value, then we assume that the predicted state has been consistent, and therefore using the average of all of the predicted actions would be effective. However, if the distribution of actions is highly variable, then we assume that the predicted state has not been static and that earlier action predictions may be erroneous and should be ignored. Given this relationship, we choose \(k\) to be proportional to the standard deviation of the distribution. If the actions are widely distributed, then \(k\) will be large, causing the agent to ignore previous action predictions, and if the predicted actions are tightly clustered, then \(k\) will be small, causing the agent to incorporate past actions more equally. Because our action is composed of two modalities, end-effector position and gripper width, two \(k\) values are calculated. Because the position is a vector, we use the \(L_{\infty}\) norm of its standard deviation for the positional \(k\). \[k_{g}=\beta\sigma(a_{g}),\quad k_{p}=\beta||\sigma(a_{p})||_{\infty} \tag{5}\] Where \(k_{g}\), \(a_{g}\) and \(k_{p}\), \(a_{p}\) are the temperature constants and predicted actions for the gripper width and end-effector positions, respectively, and \(\beta\) is a proportionality constant. This approach addresses drift in the action space, where the difference between the expected and resulting state leads to differing action predictions. However, if this drift is severe, or if the action predictions are oscillating between multiple different action policies, \(k\) becomes very large, rendering action chunking moot and reinstating the issues of compounding errors and out-of-distribution states that action chunking was designed to address. In this case, we prefer to keep action chunking and instead eliminate temporal ensembling. If \(k\) goes above a specified cutoff, we sample the next \(n\) time steps directly from the current action chunk prediction, where \(n\) is a hyperparameter (we chose \(n\) to be one-half of the action chunk length). This allows the agent to execute a coherent series of actions, which ideally removes the agent from the state that was causing trouble for temporal ensembling. Because our method is based on standard deviation, we use the original ensembling method until the fifth time step of a run to ensure a sufficient sample size of action predictions. ## IV Experimental Evaluation ### _Simulation Environment_ We assessed the efficacy of our single demonstration behavior cloning approach on three Panda-Gym tasks developed by Gallouedec et al.: block pushing, block pick-and-place, and block stacking, as illustrated in Figure 3 [36]. These Panda-Gym tasks use a seven-degree-of-freedom Franka Emika Panda robot arm to manipulate one or more 4 cm blocks in a PyBullet simulation. Throughout the experiments, the gripper's orientation remains fixed, rendering the end effector a four-degree-of-freedom system, consisting of position and gripper width. Task success is based on the distance between the current cube position (depicted as the opaque cube in Figure 3) and the desired cube location (represented by the transparent cube in Figure 3), with a success cutoff of 5 cm for the push and pick-and-place tasks, and 4 cm for both cubes in the stacking task. To make the push task more intuitive for human teleoperation, we modify the existing implementation to permit fingertip mobility, allowing the user to use the gripper to complete the task. 
This change effectively transforms the push task into a two-dimensional pick-and-place task. The behavior cloning agent observes the scene using two virtual cameras, one mounted on the end-effector and another located in the top corner of the environment, providing an isometric view of the scene (see Figure 4). In addition to the two cameras, the agent observes the state of the robot's end-effector, consisting of the x, y, and z position of the center of the gripper, and the gripper's width (x,y,z,g). Fig. 4: Example views from the two cameras used in simulation experiments. On the left is a view from the end-effector camera, and on the right a view from the isometric camera. Fig. 3: Panda-Gym environments used for evaluation. The block(s) to be stacked are shown as opaque cubes, while the goal locations are shown as transparent cubes. ### _Results_ #### IV-B1 Single Demonstration Behavior Cloning To examine the effectiveness of learning a policy from a single demonstration using our augmentation method and our variation of Action Chunking with Transformers, we trained our behavior cloning agent on the three evaluation tasks (push, pick and place, and stack) with a varying number of augmented demonstrations (25, 50, 100, 200, and 400). The agent's success rates, after training, are shown in Figure 5. Our results show that using a single demonstration, augmented with linear transforms, a behavior cloning agent is able to learn all three block manipulation tasks, with a nearly perfect success rate for push and pick and place, and an impressive 78.4% success rate for the more complicated stack task. Additionally, these results show a significant increase in performance as the number of augmented demonstrations increases. This relationship was expected since more augmentations mean more demonstrations, increasing the variety of experience the BC agent is exposed to, leading to a more complete coverage of the state space, and a corresponding decrease in out-of-distribution states. Additionally, our results show the number of augmented demonstrations needed to learn a task is proportional to the complexity of the task, which is in line with observations made for similar BC tasks that directly used demonstrations [6]. #### IV-B2 Temporal Ensembling To examine the effectiveness of our temporal ensembling method, we re-ran our behavior cloning experiments with the original temporal ensembling method proposed by [17], using an exponentially decaying weighted average with a constant \(k\). The results from this experiment can be seen in Table 1. Compared with our ensembling method, we observe that the baseline performance is slightly worse for the push and pick and place tasks, and significantly worse for the stack task. Because the stack task is the most complex task with the longest action horizon, it is more likely to suffer from drift or multi-modality in its action predictions, making our changes to address these issues more relevant. Our ensembling method has two main components: a dynamic heat constant, \(k\), based on standard deviation, and the temporary suspension of temporal ensembling, instead using pure action chunking, when the ensembled actions become too varied. To examine the impact of each of these aspects individually, we re-ran our experiments with only the dynamic heat constant and only the suspension of temporal ensembling (fixed heat constant). 
Because the effect of the dynamic heat constant is largely dependent on the proportionality constant, \(\beta\), used to calculate \(k\) from the standard deviation, we ran experiments with \(\beta\) equal to 1, 0.5, and 0.25 for both only a dynamic \(k\) and the combined method. The results can be found in Table 1. Our results show that using only a dynamic \(k\), with a good \(\beta\) value, performs slightly better than baseline, and using only resetting performs slightly worse than baseline. However, combining the two approaches has the greatest success rate. Additionally, we found that our ensembling method is quite susceptible to the choice of \(\beta\), with a \(\beta\) of 0.5 performing the best when only using a dynamic \(k\), and a \(\beta\) of 1 performing the best when using the combined approach. Fig. 5: Success rates of our behavior cloning method for different numbers of augmented demonstrations used in training. The method was evaluated on the push, pick and place, and stack tasks; the experiments were run in simulation. The shaded region on the graph shows one standard deviation from the mean. ### _Hardware Validation_ In order to test the viability of our single demonstration behavior cloning methodology for use on hardware, we implemented the push task (planar pick-and-place) using a Franka-Emika Panda robot as our manipulator [37], and Intel RealSense D415 RGB-D cameras (we only used the RGB data) to record observations. The goal of this task was to pick up a block placed on the table and move it to a goal location. To indicate the goal location, we used an 8 cm orange square (in simulation, this had been done with a transparent cube). Additionally, due to the size of our table, we used a slightly smaller action space (60 cm square instead of 70 cm square). An image of our hardware setup can be seen in Figure 6. To run the experiment, we first collected a series of augmented demos using the Franka robot. Given a single demonstration trajectory (the same demonstration trajectory, collected in VR, that we used for the simulation experiments) and a randomized goal and start location, we used our augmentation method to generate new trajectories. The generated trajectories were then replayed by the Franka Robot using a proportional controller, and the states (images from the RealSense cameras, the current end-effector position, and the current gripper width) and actions were recorded. At the end of the demonstration replay, an operator indicated whether the task was successful, and unsuccessful demonstrations were discarded. Once the augmented demos were collected, we trained a behavior cloning agent using the same method we used for the simulation experiments, except we used three cameras (an end-effector camera, a side camera, and a front camera), rather than two, to compensate for our cameras' more limited view due to mounting constraints. We trained with 50, 100, and 200 augmented demonstrations. The trained policies were evaluated on the same hardware, using our temporal ensembling strategy. The agent had an accuracy of 35%, 70%, and 90% for 50, 100, and 200 augmented demos, respectively. These values are quite similar to the results we got for the same conditions in simulation, following the trend of increasing accuracy with an increased number of demonstrations, but with slightly worse performance (see Table 1). 
Although not conclusive, the similarity between the hardware and simulation results suggests that the conclusions we draw from experiments in simulation can be extended to hardware applications. ## V Conclusions Our results show that a single human demonstration can be used to learn a successful policy through behavior cloning, as long as a sufficient augmentation method is applied. We showed that even a naive augmentation method - applying a piece-wise linear transform to the recorded trajectory - can allow behavior cloning to succeed with only a single human demonstration. By collecting many generated demonstrations, disposing of the failures, and training on the successes, even a brittle augmentation method, such as the one we use, can be used to train a robust policy. Although the diversity of the demonstrations the agent is training on may be limited, a combination of a CVAE to improve generalization and action-chunking to mitigate the negative effects of out-of-distribution states can overcome this limitation, enabling the agent to train a successful policy in multiple tasks. Additionally, our work introduced a novel temporal ensembling method for combining action chunks at inference time. This method, which uses the standard deviation of the predicted actions as a proxy for changes in dynamics that may render prior action choices incorrect, mitigates the issues encountered by weighted average ensembling when the action distribution is multi-modal. By incorporating this simple statistical heuristic into the ensembling method's weighted average, we were able to improve our accuracy on the stack task from 60.8% to 78.4%, almost halving our error rate. Although this method is vital to the performance of our single demonstration behavior cloning algorithm, it can be applied to any temporal aggregation agent, making it a valuable tool for the behavior cloning community. The main limitation of our work is that the benefits of using a single human demonstration with augmentation, as opposed to simply collecting more human demonstrations, are directly tied to the relative cost of collecting demonstrations vs. executing a demonstration. If the cost of the human demonstration dominates (such as when training in simulation), then this method essentially allows a user to collect a nearly unlimited number of demonstrations for the cost of only one. However, if the cost of an agent autonomously completing a task is similar to that of a human demonstrating the task (such as if a human has to play an active role in robot operation during autonomous demonstration collection), then our method is less impactful. Moving forward, we hope to address this discrepancy by combining hardware and simulation training, using our single demonstration method to train a policy in simulation, and then (using the same single demonstration) fine-tuning the policy on hardware demonstrations. Fig. 6: Hardware validation setup. A Franka Emika Panda arm was used to train a behavior cloning agent to move a 4 cm block to a goal location, shown with an 8 cm square orange piece of paper. A wrist-mounted RealSense camera, along with two stationary cameras mounted around the arm (one is labeled, one is off-screen), was used to collect visual observations for the agent.
2309.07409
Masked Diffusion with Task-awareness for Procedure Planning in Instructional Videos
A key challenge with procedure planning in instructional videos lies in how to handle a large decision space consisting of a multitude of action types that belong to various tasks. To understand real-world video content, an AI agent must proficiently discern these action types (e.g., pour milk, pour water, open lid, close lid, etc.) based on brief visual observation. Moreover, it must adeptly capture the intricate semantic relation of the action types and task goals, along with the variable action sequences. Recently, notable progress has been made via the integration of diffusion models and visual representation learning to address the challenge. However, existing models employ rudimentary mechanisms to utilize task information to manage the decision space. To overcome this limitation, we introduce a simple yet effective enhancement - a masked diffusion model. The introduced mask acts akin to a task-oriented attention filter, enabling the diffusion/denoising process to concentrate on a subset of action types. Furthermore, to bolster the accuracy of task classification, we harness more potent visual representation learning techniques. In particular, we learn a joint visual-text embedding, where a text embedding is generated by prompting a pre-trained vision-language model to focus on human actions. We evaluate the method on three public datasets and achieve state-of-the-art performance on multiple metrics. Code is available at https://github.com/ffzzy840304/Masked-PDPP.
Fen Fang, Yun Liu, Ali Koksal, Qianli Xu, Joo-Hwee Lim
2023-09-14T03:25:37Z
http://arxiv.org/abs/2309.07409v1
# Masked Diffusion with Task-awareness for Procedure Planning in Instructional Videos ###### Abstract A key challenge with procedure planning in instructional videos lies in how to handle a large decision space consisting of a multitude of action types that belong to various tasks. To understand real-world video content, an AI agent must proficiently discern these action types (_e.g._, _pour milk_, _pour water_, _open lid_, _close lid_, etc.) based on brief visual observation. Moreover, it must adeptly capture the intricate semantic relation of the action types and task goals, along with the variable action sequences. Recently, notable progress has been made via the integration of diffusion models and visual representation learning to address the challenge. However, existing models employ rudimentary mechanisms to utilize task information to manage the decision space. To overcome this limitation, we introduce a simple yet effective enhancement - a masked diffusion model. The introduced mask acts akin to a task-oriented attention filter, enabling the diffusion/denoising process to concentrate on a subset of action types. Furthermore, to bolster the accuracy of task classification, we harness more potent visual representation learning techniques. In particular, we learn a joint visual-text embedding, where a text embedding is generated by prompting a pre-trained vision-language model to focus on human actions. We evaluate the method on three public datasets and achieve state-of-the-art performance on multiple metrics. Code is available at [https://github.com/ffzzy840304/Masked-PDPP](https://github.com/ffzzy840304/Masked-PDPP). 1Institute for Infocomm Research, Agency for Science, Technology, and Research (A*STAR), Singapore 2Nanyang Technological University, Singapore 1 Fusionopolis Way, #21-01, Connexis (South), Singapore, 138632 ## Introduction Learning procedural knowledge from instructional videos - a natural ability of humans - presents a tough challenge to artificial intelligence (AI). It requires multiple aspects of cognitive and reasoning abilities such as scene understanding, event segmentation and discovery, action recognition and prediction, and causal reasoning [22, 23]. Building an AI agent with these capabilities is a pressing task for the AI community and has broad implications for real-world applications, _e.g._, to monitor human behaviors or to assist humans in collaborative tasks. In this paper, we focus on a sub-field of instructional video understanding, namely learning goal-directed actions from real-world videos, and subsequently generating feasible plans. In particular, we follow the work of [1] and cast the problem as procedure planning in instructional videos, which requires a model to generate action plans given the visual observations of a start state and a goal state (an example of _making jello_ is illustrated in Figure 1 - bottom). Moreover, we adopt the challenging problem setting of learning with weak supervision, _i.e._, to learn procedure knowledge without requiring intermediate visual observations [24]. Instead, only action labels are provided, which alleviates the costly annotation of the start and end times of each intermediate step. A key challenge with procedure planning in instructional videos lies in how to handle a large decision space consisting of a multitude of action types that belong to many tasks. For example, there are 778 action types from 180 task classes in the COIN dataset [13]. 
Since the datasets are collected from real-world videos at scale, the distribution of actions is largely unknown. In the current problem setting, the visual observations are essentially a pair of images (start and goal states) that are stochastically drawn from a video, and hence it is extremely difficult to recognize them from visual observations. Moreover, planning a sequence of actions from a large pool of actions is even more challenging considering the complicated semantic relationships between action types and task goals. This is exacerbated by the existence of multiple viable action plans to accomplish a specific task goal [10]. Figure 1: Searching and sorting action sequences from a large set of action types is challenging. Projected diffusion (top left) uses task class as a condition that does not restrict the decision space effectively. We propose masked diffusion (top right) to explicitly manage the decision space. Additionally, text embedding is used to enhance task classification and subsequent action sequence generation. Early works on procedure planning have employed a two-branch autoregressive approach while adopting different network architectures to model the probabilistic process. These include the dual dynamics networks (DDN) [14], Bayesian Inference using exterior generative adversarial imitation learning (Ext-GAIL) [15], and Transformers [23]. One limitation of these methods is related to the autoregressive process, which is slow and subject to error propagation. Moreover, they require the costly observation of intermediate states as supervisory signals. In contrast, a single-branch non-autoregressive model is proposed that not only simultaneously predicts all intermediate steps, but more importantly alleviates the need for intermediate visual observations [14]. However, this method involves a complicated training process on multiple loss functions to manage a large design space. Recently, a diffusion-based probabilistic model is proposed to generate procedure plans non-autoregressively [22]. It adopts a two-stage process, namely task classification and action sequence generation. The former aims to capture contextual information and use it as a conditional constraint in the latter step. However, as illustrated in Figure 1 and shown by our results, using the task class as the condition has a limited effect on reducing the design space, _i.e._, decisions are still made with respect to a large pool of action types. In this study, we propose a masked diffusion model to use task knowledge as context constraints. Instead of using task labels as a soft condition as in [22], we propose to generate a task-oriented mask to directly restrict the search space of action prediction. As shown in Figure 1, action plans are generated on a greatly reduced subset of action types, owing to the task-guided mask. It helps to reduce the dimensionality of the decision space and enforces stronger hierarchical reasoning. Considering the possible adverse effect of inaccurate task classification, we further enhance visual representation learning via action-aware visual captioning based on pre-trained vision-language models (VLMs). In particular, a text embedding is obtained by prompting a frozen VLM (_e.g._, LLaVA) [13] to focus on the human actions in the current visual scene. We use text-enhanced multimodal embedding to both improve task classification and enhance action planning on the masked diffusion model. 
**Contributions**: (1) We propose a novel masked diffusion model to harness task information and enforce hierarchical procedure planning. Multiple strategies of masking operation are designed and evaluated to show the effectiveness of masking. (2) We enhance visual representation learning with an action-aware text embedding generated from a VLM in a zero-shot manner. We achieve state-of-the-art performance on multiple datasets under different testing conditions. These show the effectiveness of masked diffusion in planning under uncertainty and the potential of text-enhanced representation learning in procedure planning. ## Related Work Action sequence modelingTo handle complexities related to a large decision space, early works in procedure planning resort to solutions in probabilistic reasoning for goal-directed planning, such as universal planning networks [27], uncertainty-aware action anticipation [2], and causal InfoGAN [17]. However, these models have limited capacity in handling complexities in the scenes of instructional videos. The DDN model [14] learns the latent space via the interplay of a transition model and conjugate model, but suffers from compounding error. An Ext-GAIL model is proposed to separately handle time-invariant context knowledge and the casual relationship among actions, where a stochastic model is used to explicitly capture the distribution of plan variants during training [15]. The PlaTe model adopts transformer-based visual grounding and planning [23], but has limited capacity to handle uncertainty. The above approaches suffer from slow convergence and error propagation owing to the auto-regressive reasoning process. Recently, a memory-enhanced probabilistic procedure planning framework is proposed, which adopts weak and adversarial supervision [14]. The method handles uncertainty by combining a pre-trained global memory unit, an adversarial generative model, and a Viterbi post-processing method. However, it involves a complicated training scheme and tedious inference process owing to the computation of multiple loss functions and the brittleness of training GANs. It is also restricted by the limited capability of a small-sized global memory with a fixed structure. The closest work to ours is the projected diffusion procedure planning (PDPP) model [22], which leverages the power of diffusion models to tackle complexity. However, task information is used as a "soft" condition in the representation, resulting in weak guidance to action planning. Moreover, task classification is performed using a simple multilayer perceptron (MLP) on standard visual embedding, which may not fully capture the value of the task context. We anticipate that context/task knowledge is crucial in effective and efficient procedure planning as is shown by numerous empirical evidences in hierarchical procedure planning [1, 13, 14, 15]. Visual representation learningVisual reasoning can be enhanced by stronger visual representation learning. In the current problem formulation, the AI agent needs to infer the task type and generate action sequences based solely on two "peeks" into the start and goal states. Recently, notable progress has been made to train and fine-tune large VLMs [14, 15, 16, 17], which is partially driven by the availability of large-scale instructional video datasets [14, 15, 16, 17, 18]. The latest models usually use knowledge from the language domain (_e.g._, wikiHow) as distant supervision signals [14, 15, 16]. 
However, the computational cost of training/fine-tuning large VLMs is usually prohibitively high. Alternatively, efforts have also been made to use pre-trained large language models (LLMs) as a visual planner [13, 14], leveraging the zero-shot reasoning ability of powerful foundation models [12, 13, 14, 15]. However, there is still a notable performance gap due to the lack of domain knowledge. Another stream of research resorts to graph-based representation to capture visual semantics of procedures, ranging from conventional neural task graph [10] to sophisticated transformer-based models [11, 12, 13]. One drawback of these models is that they are usually complex with an additional medium of graph representation. ## Method ### Problem Formulation Given the visual observations of a start state (\(o_{s}\)) and a goal state (\(o_{g}\)), the procedure planning task is to produce a plan in the form of a sequence of actions \(a_{1:T}\) that, when executed, will enable the transition from \(o_{s}\) to \(o_{g}\) in \(T\) steps, where \(T\) is called the horizon of planning. Similar to [10], we decompose the task into two steps: (1) predicting the task category (_e.g._, _make sandwich_, _assemble bed_), and (2) generating action sequences conditioned on the predicted task category. The decision process can be formulated as \[p(a_{1:T}|o_{s},o_{g})=\int p(a_{1:T}|o_{s},o_{g},c)p(c|o_{s},o_{g})dc. \tag{1}\] The system architecture is shown in Figure 2. As mentioned earlier, using task information as a condition does not exert sufficient modulation power on search space reduction. To address this issue, we propose a new strategy to make use of the task information, namely to generate a mask to restrict the decision space to a subset of "promising" actions. Notably, such a masking approach is different from masked diffusion transformers [15, 12]. The latter aim to strengthen the model's ability to learn context information for image generation, whereas we use masks to restrict the decision space. ### Action-aware Visual Representation Learning In [10], the task classifier is a simple MLP that takes the concatenated visual embedding of the start and goal state as input. In our model, task class plays an important role in the diffusion process by generating a mask to constrain the design space. Therefore, it is important to improve the accuracy of task classification. We propose two techniques to address this issue. First, we employ a Transformer model (_e.g._, ViT) (to replace the original MLP) that takes \(o_{s}\) and \(o_{g}\) (based on joint vision-text embedding) to predict the task class (\(c\)). Second, we enhance the visual representation by affixing an action-aware text embedding to the visual embedding. It is observed that prevalent image encoders are pre-trained on generic instructional videos, such as HowTo100M [13], which does not possess sufficient ability to discriminate refined human actions. Fine-tuning such models is both costly and may jeopardize their generalisability. Meanwhile, numerous VLMs have been developed that show impressive descriptive power and flexibility. We adopt LLaVA [15] with frozen network weights1 and prompt it to concentrate on the human actions in the visual input, _e.g._, "_Please briefly describe_ [Image] _focusing on the human actions_". A list of candidate prompts is included in the supplementary material. Despite the explicit request for brevity, the generated description may still be verbose and not suitable for subsequent reasoning. 
Therefore, we design a simple routine to extract the key words in the form of \(verb+noun(s)+[optional]adverb(s)\). For example, in Figure 2, the raw description of the start state can be "_In the image, a person is pouring water into an electric kettle from a faucet._". The routine can extract the key information, such as "_pour water into an electric kettle_". Textual descriptions of \(o_{s}\) and \(o_{g}\) are fed into a pre-trained text encoder to generate the text embedding of the two states. Finally, the text embedding is concatenated with the visual embedding, resulting in the text-enhanced representation \((o_{s}^{VT},o_{g}^{VT})\). Footnote 1: Other VLM models can be used to achieve similar outcome. ### Masked Diffusion Model In a standard diffusion model [11], a forward diffusion process involves incremental addition of Gaussian noise \(\epsilon\sim\mathcal{N}(0,\,\mathbf{I})\) to the input data (\(x_{0}\), _i.e._, the true signal) until it degrades to a random Gaussian distribution \(x_{t}\). The process is parameterized via a Stochastic Differential Equation (SDE): \[\begin{split} q(x_{n},x_{0})=\sqrt{\overline{\alpha}_{n}}x_{0}+ \epsilon\sqrt{1-\overline{\alpha}_{n}},\\ q(x_{n}|x_{n-1})=\mathcal{N}(x_{n};\sqrt{1-\beta_{n}}x_{n-1}, \beta_{n}\mathbf{I}),\end{split} \tag{2}\] where \(\overline{\alpha}_{n}=\prod_{s=1}^{n}(1-\beta_{s})\) denotes the noise magnitude, and \(\beta_{s}\in(0,1)_{s=1}^{t}\) specifies the ratio of Gaussian noise added to the signal in each step. Similarly, the reverse denoising process gradually maps a Gaussian noise into the sample via a discrete SDE: \[p_{\theta}(x_{n-1}|x_{n})=\mathcal{N}(x_{n-1};\mu_{\theta}(x_{n},n),\sum_{ \theta}(x_{n},n)). \tag{3}\] The network is trained by optimizing the variational lower-bound of \(p_{\theta}(x_{0})\), based on which \(\sum_{\theta}(x_{n},n)\) can be obtained. Meanwhile, \(\mu_{\theta}(x_{n},n)\) is reparameterized as a noise prediction network \(\epsilon_{\theta}(x_{n},n)\), which is trained with a simple mean-squared error loss \(L=||\epsilon-\epsilon_{\theta}(x_{n},n)||^{2}\). After training, the model can recover the signal \(x_{0}\) from random Gaussian noise. We construct the input signal by concatenating three elements, namely (1) the text-enhanced visual observations of the start and goal states \((o_{s}^{VT},o_{g}^{VT})\), (2) the predicted task class (\(c\)), and (3) a sequence of candidate actions \((a_{1:T})\), i.e. \(x=[(o_{s}^{VT},o_{g}^{VT}),c,a_{1:T}]\). Different from [10], the candidate action sequence is affixed with a binary mask (_e.g._, '1' for active actions in the predicted task class, '0' for other actions) that is specified by the task class. In practice, the loss function is computed on \(x\) with respect to the unmasked actions, \(x^{m}\). The task-specific mask is derived from the mapping relationship between a task class and the action types, which can be obtained from ground truth during training. In essence, despite the fact that an individual action planning instance does not have the complete list of action types with respect to the task, one can simply include all action types for a specific task from many instances and remove the duplicates. We adopt a similar condition project scheme on the task class and observations as in [23]. Consistent with the premise that the initial and terminal actions are more important due to their primacy and recency effects, additional weights are assigned to these specific actions. 
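To illustrate the binary mask that is affixed to the action dimensions, the small sketch below (our own; the dictionary format and names are assumed) collects the action types observed for each task in the training plans and turns them into a 0/1 mask over the action vocabulary. The derivation of this task-to-action mapping from ground-truth plans is described in the next paragraph.

```python
import numpy as np

def build_task_to_actions(train_plans):
    """Collect, for every task id, the set of action ids that ever occur in its
    training plans (duplicates removed). `train_plans` is an iterable of
    (task_id, action_id_sequence) pairs."""
    mapping = {}
    for task_id, action_seq in train_plans:
        mapping.setdefault(task_id, set()).update(action_seq)
    return mapping

def task_action_mask(task_id, task_to_actions, num_actions):
    """Binary mask over the action vocabulary: 1 for action types belonging to
    the (predicted) task, 0 for every other action type."""
    mask = np.zeros(num_actions)
    mask[sorted(task_to_actions[task_id])] = 1.0
    return mask
```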
The projection operation \(Proj()\) in our model is defined as \[\begin{bmatrix}\hat{c}_{1}&\hat{c}_{2}&&\hat{c}_{T}\\ w\hat{a}_{1}^{m}&\hat{a}_{2}^{m}&...&w\hat{a}_{T}^{m}\\ \hat{o}_{1}&\hat{o}_{2}&&\hat{o}_{T}\end{bmatrix}\quad\rightarrow\quad\begin{bmatrix} c&c&c\\ w\hat{a}_{1}^{m}&\hat{a}_{2}^{m}&...&w\hat{a}_{T}^{m}\\ o_{s}^{VT}&0&o_{g}^{VT}\end{bmatrix}, \tag{4}\] where \(\hat{c}_{i}\), \(\hat{o}_{i}\) and \(\hat{a}_{i}^{m}\) refer to the \(i^{th}\) horizon task class, observation dimensions and predicted masked action logits in masked representation \(x^{m}\), respectively. \(c\), \(o_{s}^{VT}\), \(o_{g}^{VT}\) represent the specified conditions. The projection operation in Eq. 4 indicates that the guidance is not changed during training. More importantly, after projecting task classification and observations to their original values, the loss on \(x^{m}\) is exclusively attributed to \(a^{m}\). Thus, the training loss can be computed as follows: \[\mathcal{L}_{diff}^{m}=\sum_{n=1}^{N}(\epsilon_{\theta}(a_{n}^{m},n)-a_{0}^{m} )^{2}. \tag{5}\] By employing a binary mask on the action dimensions, Gaussian noise is exclusively introduced to unmasked active actions. As a result, the search space for optimal actions is confined to the task-defined subset, rather than encompassing the entire action space of the dataset. This operation considerably reduces the learning load of the model during loss minimization, which in turn leads to a streamlined convergence process and enhanced accuracy in the denoising phase. This benefit becomes even more pronounced as the action space becomes larger. ``` Input: Initial input \(x_{0}\), Gt task class \(c\), the condition project function \(Proj()\), total diffusion steps \(N\), diffusion model \(\epsilon_{\theta}\), \(\{\overline{\alpha}_{n}\}_{n=1}^{N}\) 1: apply a binary mask to action dimension in \(x_{0}\) given \(c\) 2:\(a_{0}^{m}=a_{0}[0,1,0...1,0]\)(\(a\) in \(c\) value is '1', otherwise '0') 3:repeat 4:\(n\sim\{1,N\}\) 5:\(\epsilon\sim\mathcal{N}(0,\,\mathbf{I})\) 6:\(x_{n}^{m}\)=\(\sqrt{\alpha_{n}}x_{0}^{m}+\epsilon\sqrt{1-\alpha_{n}}\) 7:\(\hat{x}_{0}^{m}=\epsilon_{\theta}(Proj(x_{n}^{m}),n)\) 8: Take gradient descent step on 9:\(\bigtriangledown_{\theta}\parallel x_{0}^{m}-Proj(\hat{x}_{0}^{m})\parallel^{2}\) 10:until converged ``` **Algorithm 1**Training Process ### Training Our training program consists of two main stages: (1) training of a task class prediction model to extract conditional guidance from start to goal observation as well as action masks; (2) leveraging the masked diffusion model to effectively fit the target action sequence distribution. As mentioned earlier, a binary mask is applied to the action dimensions, directing the denoising model to focus on active actions. In the action sequence distribution fitting stage, we adopt the U-Net architecture [10] to learn the noise prediction model \(\epsilon_{\theta}(x_{n},n)\) on the masked action distribution, as it resembles the stacked denoising autoencoders. By minimizing \(\mathcal{L}_{diff}^{m}\), the model effectively mitigates the impact of randomly introduced noise on \(x_{n}^{m}\). The detailed denoising model training process is shown in Algorithm 1. 
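Below is a hedged Python rendering of one iteration of Algorithm 1, meant only to show the data flow. The denoising network is abstracted as a callable that predicts the clean action block, the per-step layout of \(x\) (task one-hot, action logits, observation features), the noise schedule, and the `opt_step` hook are our assumptions, and in practice the gradient update would be done with an autodiff framework rather than NumPy.

```python
import numpy as np

def project(x, c, o_s, o_g, c_dim, o_dim):
    """Eq. (4): re-impose the conditions -- the task one-hot on every horizon
    step, the start/goal observations only on the first and last steps."""
    x[:, :c_dim] = c
    x[:, -o_dim:] = 0.0
    x[0, -o_dim:] = o_s
    x[-1, -o_dim:] = o_g
    return x

def train_step(model, opt_step, c, o_s, o_g, a0, mask, alpha_bars,
               rng=np.random.default_rng()):
    """One iteration of Algorithm 1 (sketch). a0: (T, A) ground-truth action
    logits, mask: (A,) binary task mask, c: task one-hot, o_s/o_g: start/goal
    features, alpha_bars: cumulative noise schedule of length N."""
    a0m = a0 * mask                                    # step 2: keep only in-task actions
    n = rng.integers(1, len(alpha_bars) + 1)           # step 4: sample a diffusion step
    eps = rng.standard_normal(a0m.shape) * mask        # step 5: noise on unmasked dims only
    ab = alpha_bars[n - 1]
    anm = np.sqrt(ab) * a0m + np.sqrt(1.0 - ab) * eps  # step 6: forward noising (Eq. 2)

    T, A = a0.shape
    xn = np.zeros((T, len(c) + A + len(o_s)))
    xn[:, len(c):len(c) + A] = anm
    xn = project(xn, c, o_s, o_g, len(c), len(o_s))    # condition-projected x_n^m
    a_hat = model(xn, n)                               # step 7: predict the clean (T, A) action block
    loss = np.mean((a_hat * mask - a0m) ** 2)          # Eq. (5): loss only on unmasked actions
    opt_step(loss)                                     # steps 8-9: gradient update (framework-specific)
    return loss
```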
**Inference** During the inference stage, only the initial observation \(o_{s}\) and the target observation \(o_{g}\) are given. The task class is generated through the trained task classifier, eliminating the need for the ground truth task class as in the training phase. Subsequently, Gaussian noise is introduced to the conditions of the observations and masked action dimensions, resulting in the creation of \(x_{n}^{m}\). The acquired denoise model is then employed to conduct denoising \(N\) times for sampling an optimal action sequence. The detailed procedure in the inference stage is shown in Algorithm 2. Figure 2: Overview of our masked diffusion model with task-awareness. A frozen Visual-language model (VLM) generates text embedding of the start and goal states based on action-oriented prompts. An action mask is generated based on task class to restrict the action types. ### Implementation Details The perceptual input to our model is a 1536-dimensional vector that represents the visual features extracted from HowTo100M [11]. For the text representation input, we utilize LLaVA's [12] prompt-extracted text, which is subsequently encoded into a 578-dimensional vector using a DistilBERT [13] base model. All models are trained using a linear warm-up scheme. Throughout our experiments, the training batch size remains constant at 256. All the experiments are conducted using the ADAM optimizer [10] on a setup consisting of 4 NVIDIA RTX A5000 GPUs. Refer to the supplement for more detailed information, such as learning rate and training epochs on different datasets. ## Experiments ### Evaluation Protocol #### Datasets We conduct evaluations of our model on three instructional video datasets: CrossTask [13], NIV [1], and COIN [14]. The CrossTask dataset comprises 2,750 videos spanning 18 different tasks, with an average of 7.6 actions per video. The NIV dataset consists of 150 videos depicting 5 daily tasks, with an average of 9.5 actions per video. The COIN dataset contains 11,827 videos involving 180 different tasks, with an average of 3.6 actions per video. We adopt the standard approach by randomly splitting the data, using 70% for training and 30% for testing [14, 13, 15]. We adhere to the data pre-processing methodology [15] to generate action sequences and select {start, goal} observations. #### Metrics In accordance with prior studies [14, 13, 15], we employ three metrics to assess the performance of our approach: (1) _Success Rate (SR)_: A plan is considered correct only if all actions in the predicted sequence exactly match the corresponding actions in the ground truth. (2) _Mean Accuracy (mAcc)_: It is the accuracy of actions at each individual time step. An action is considered correct if it precisely matches the action in the ground truth at the same time step. 
(3) _Mean Intersection over Union (mIoU)_: It quantifies the overlap between predicted actions and the ground truth by computing the action IoU. Note that _mIoU_ does not consider the order of actions and solely indicates whether the model effectively captures the correct set of steps required to complete the procedure plan. Following [15], we calculate the _mIoU_ metric on each individual sequence, instead of computing it on every mini-batch, as done in prior studies [14, 13]. This is a more stringent condition and allows us to assess the accuracy of predicted actions for each specific sequence independently. In addition, we conduct a comprehensive evaluation of the stochastic nature of our model by employing various probabilistic metrics: (1) _Kullback-Leibler divergence (KL-Div)_ and _Negative Log Likelihood (NLL)_ between the probability distributions of the predicted plans and the corresponding ground truth; (2) _Mode Recall (ModeRec)_ to assess the coverage of ground truth modes in the results, and (3) _Mode Precision (ModePrec)_ to indicate the frequency with which our predicted plans align with the true modes of the data. #### Baselines We include recent procedure planning approaches based on instructional videos as baselines [14, 13, 15, 16, 17, 18, 19]. ### Task Classification Results We intend to improve the task prediction accuracy by employing a combination of visual and text representations along with a transformer model. The results of task prediction performance are shown in Table 1, where different configurations are examined. Our model achieves an improvement of approximately 3% in task classification accuracy on the COIN dataset (with the largest task space). It achieves a slight improvement on CrossTask; and maintains perfect accuracy (100%) on NIV as in other configurations. To verify the influence of task classification on the ultimate accuracy of action planning, we compare the outcomes achieved through the utilization of the MLP as detailed in [15] with the results obtained by incorporating Transformer classifiers as inputs for both PDPP and our model. We observe a positive effect of Transformer, as shown by the results in supplementary section D. ### Comparison with Prior Approaches #### Crosstask (short horizon) We show the main performance results on CrossTask in Table 2. Our model consistently outperforms other approaches in terms of both _SR_ and _mAcc_. Across sequence lengths \(T=3\) and \(4\), our model exhibits a notable _SR_ increase of approximately 2% (absolute change) compared to the previous state-of-the-art (SotA). In terms of _mAcc_, our model showcases significant enhancements, achieving more than 11% improvement at \(T=3\) and around 2.3% at \(T=4\). Regarding _mIoU_, as aforementioned, we follow PDPP [22] to compute it by calculating the mean of every IoU for a single action sequence rather than a mini-batch adopted by [23, 24]. Hence, a direct comparison with [23, 24] is not relevant. Compared to PDPP, our model achieves about \(1.5\%\) improvement in _mIoU_. CrossTask (long horizon)Following [24, 25], we evaluate the performance on predicting plans for longer time horizons, \(T=3,4,5,6\). The results are shown in Table 3. Our model consistently achieves substantial enhancements across all planning horizons, surpassing the performance of previous models. NIV and COINResults on the NIV and COIN datasets are presented in Table 4. 
It is shown that our method demonstrates superior performance on both datasets, surpassing other approaches in terms of _SR_ and _mAcc_ metrics. In particular, on the relatively smaller NIV dataset, our model achieves 1% (\(T=3\)) and 2% (\(T=4\)) increases respectively in _SR_, along with improvements of 0.6% (\(T=3\)) and 1.3% (\(T=4\)) in _mAcc_. On the COIN dataset, which poses the highest level of difficulty, our method achieves a remarkable absolute improvement of 8.1% (\(T=3\)) and 6.9% (\(T=4\)) on _SR_, and 4% (\(T=3\)) and 2.7% (\(T=4\)) on _mAcc_ metrics, respectively. These represent a substantial margin over previous SotA, _i.e._, PDPP. Such a performance boost is also illustrated in Figure 3, which shows the training process on COIN dataset. Our approach features a large margin on SR and a faster learning speed, especially during the initial stage of training. ### Evaluating Probabilistic Modeling To assess the effectiveness of our method on probabilistic modeling, we conduct a comparison between the plan distributions generated by our model and the ground truth distribution of viable plans, following the protocol proposed in [24, 25]. The evaluation is done on CrossTask dataset which is most suitable for this purpose with its higher variations in feasible plans. Results on NIV and COIN datasets are included in the supplementary material. We compare our model with three baselines: (1) a Deterministic baseline established by setting the initial distribution \(\hat{x}_{N}=0\), (2) a Noise baseline achieved by directly sampling from a random distribution using the provided observations and task class condition in a single step, and (3) the original PDPP approach [25]. The outcomes are presented in Table 5. Our model consistently produces the lowest _NLL_ and _KL-Div_ values across all horizons in comparison with the other three models. The results underscore the enhanced proficiency of our model in managing uncertainty. Furthermore, our model exhibits a remarkable capability to generate plans that are both diverse and logical, consistently outperforming the other models in terms of _SR_, _ModePrec_, and _ModeRec_ across all horizons. ### Ablation Studies Effect of text-enhanced representation learningTo validate the efficacy of text-enhanced representation learning within our model, we compare the performance of three setups: (1) the original PDPP model that utilizes only visual representation, (2) a truncated model that uses only text-based representation, and (3) a model that employs joint vision-text representations. The results are listed in Table 6. Apparently, the text-only modality is inferior to the visual-only modality and the vision-text multimodality in representation learning. Importantly, the additional action-aware text embedding does have a positive effect on planning efficacy as indicated by the higher performance of vision-text joint representation than visual-only. This outcome is consistent with the information in Table 1, wherein higher accuracy of task classification is achieved when vision-text joint representation is used. Effect of masked diffusionWe conduct an ablation study to investigate the impact of different masking techniques on performance. In our method, we apply a binary mask to the action dimensions (called hard mask). Gaussian noise is exclusively generated within the unmasked regions, corresponding to the actions relevant to the active task. 
However, a possible adverse effect is that if the task classification is incorrect, there is a substantial likelihood that the action plans are wrong. Hence, we use the confidence score of task prediction to dictate the likelihood of a set of actions being retained, enabling the creation of a "soft" mask that is applied to the action dimensions. We also include a condition where no masking is applied to the action dimensions (w/o mask), resulting in a diffusion model identical to that of PDPP. The _SR_ results are outlined in Table 7 - detailed data for _SR_, _mAcc_, and _mIoU_ can be found in supplementary materials. It is shown that hard masking results in the highest _SR_. In fact, even without applying masking to action dimensions, the "w/o mask" configuration outperforms PDPP, possibly due to improved task class prediction facilitated by text-enhanced representation. Interestingly, soft masking leads to the lowest _SR_, performing worse than both PDPP and the non-masked approach. The possible reason is that with hard masking, action planning is confined within the boundaries of a task. This restriction significantly reduces the action space, allowing for a thorough exploration of the action sequencing within the unmasked subset of actions. With soft masking, the confidence scores of task classification could be ill-calibrated (Guo et al. 2017), which leads to wrong allocation to task-guided action types. \begin{table} \begin{tabular}{c c c c c|c c|c c} \hline \hline & \multicolumn{4}{c}{CrossTask} & \multicolumn{2}{c}{COIN} & \multicolumn{2}{c}{NIV} \\ \cline{2-9} & _T=3_ & _T=4_ & _T=5_ & _T=6_ & _T=3_ & _T=4_ & _T=3_ & _T=4_ \\ \hline VM & 92.4 & 93.0 & 93.4 & 93.2 & 79.4 & 78.9 & 100 & 100 \\ VTM & 92.7 & 93.2 & 93.5 & 93.6 & 81.0 & 80.2 & 100 & 100 \\ VTT & **92.9** & **93.3** & **93.8** & **93.7** & **82.6** & **81.9** & **100** & **100** \\ \hline \hline \end{tabular} \end{table} Table 1: Task classification results. VM: visual representation + MLP classifier; VTM: visual-text representation + MLP classifier; VTT: visual-text representation + Transformer classifier. Figure 3: Success rate during training on COIN dataset. ## Conclusion In this paper, we have introduced a masked diffusion model to deal with the large design space that challenges procedure planning in instructional videos. A simple yet effective masking mechanism is designed in a projected diffusion model to restrict the scope of planning to a subset of actions, as is guided by the task class information. We show that such a binary mask leads to significant improvements in procedure planning with respect to multiple metrics. It also engenders a positive effect on probabilistic modeling to reflect the inherent data distribution. Furthermore, we show the preferable effect of text-enhanced representation learning, which leverages the power of large VLMs and generates action-aware text description simply via prompting, without the need for computationally intensive training or fine-tuning. A direction of future work is to develop a more sophisticated masking scheme based on a well-calibrated task prediction model, so as to allow for a well-balanced compromise between the reduction in dimensions induced by masking and the retention of context relevant to the task. 
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{4}{c}{CrossTask} & \multicolumn{4}{c}{NIV} & \multicolumn{2}{c}{COIN} \\ \cline{2-9} & _T=3_ & _T=4_ & _T=5_ & _T=6_ & _T=3_ & _T=4_ & _T=3_ & _T=4_ \\ \hline PDPP(V) & 37.20 & 21.48 & 13.58 & 8.47 & 31.25 & 26.72 & 21.33 & 14.41 \\ PDPP(T) & 32.18 & 18.86 & 11.47 & 8.15 & 28.33 & 24.87 & 17.63 & 11.35 \\ PDPP(V+T) & 37.72 & 22.07 & 14.03 & 9.04 & 31.73 & 27.41 & 24.46 & 16.02 \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation study on the role of text-enhanced representation learning. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{4}{c}{_T=3_} & _T=4_ & _T=5_ & _T=6_ \\ \cline{2-6} & SR\(\uparrow\) & SR\(\uparrow\) & SR\(\uparrow\) & SR\(\uparrow\) \\ \hline DDN Chang et al. (2020) & 12.18 & 5.97 & 3.10 & 1.20 \\ PlaTe Sun et al. (2021) & 18.50 & 14.00 & 10.00 & 7.50 \\ P3IV Zhao et al. (2022) & 23.34 & 13.40 & 7.21 & 4.40 \\ PPDP Wang et al. (2023) & 37.20 & 21.48 & 13.58 & 8.47 \\ Ours & **39.17** & **23.47** & **15.25** & **10.10** \\ \hline \hline \end{tabular} \end{table} Table 4: Results for prediction horizons T\(\in\){3, 4} on NIV and COIN datasets. ‘Sup.’ means the type of supervision during training. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multirow{2}{*}{Supervision} & \multicolumn{4}{c}{_T=3_} & _T=4_ \\ \cline{3-6} & & SR\(\uparrow\) & mAcc\(\uparrow\) & mIoU\(\uparrow\) & SR\(\uparrow\) & mAcc\(\uparrow\) & mIoU\(\uparrow\) \\ \hline DDN Chang et al. (2020) & V & 12.18 & 31.29 & 47.48 & 5.97 & 27.10 & 48.46 \\ PlaTe Sun et al. (2021) & L & 16.00 & 36.17 & 65.91 & 14.00 & 35.29 & 44.36 \\ Ext-GAIL Bi et al. (2021) & V & 21.27 & 49.46 & 61.70 & 16.41 & 43.05 & 60.93 \\ P3IV Zhao et al. (2022) & L & 23.34 & 49.46 & 73.89 & 13.40 & 44.16 & 70.01 \\ PDPP Wang et al. (2023) & C & 37.20 & 55.35 & 66.57 & 21.48 & 57.82 & 65.13 \\ Ours & C & **39.17** & **66.66** & 68.31 & **23.47** & **60.16** & 66.75 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of benchmarks with planning horizons T\(\in\){3, 4} on CrossTask. The ‘Supervision’ column indicates the type of supervision during training. ‘V’: intermediate visual states; ‘L’: language features; ‘C’: task class. Notably, to get _mIoU_, we compute the average IoU for each individual action sequence, rather than across a mini-batch (in grey font). 
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Hor.} & \multirow{2}{*}{Models} & \multirow{2}{*}{Sup.} & \multicolumn{4}{c}{NIV} & \multicolumn{2}{c}{COIN} \\ \cline{3-8} & & SR\(\uparrow\) & mAcc\(\uparrow\) & mIoU\(\uparrow\) & SR\(\uparrow\) & mAcc\(\uparrow\) & mIoU\(\uparrow\) \\ \hline \multirow{4}{*}{_T=4_} & DDN & V & 18.41 & 32.54 & 56.56 & 13.9 & 20.19 & 64.78 \\ & Ext-GAIL & V & 22.21 & 42.60 & 65.93 & - & - & - \\ & P1PV & L & 24.68 & 49.01 & 74.20 & 15.4 & 21.67 & 76.31 \\ & PDPP & C & 31.25 & 49.26 & 57.92 & 21.33 & 45.62 & 51.82 \\ & Ours & C & **32.35** & **49.89** & **58.90** & **29.43** & **49.50** & 52.20 \\ \hline \multirow{4}{*}{_T=5_} & DDN & V & 15.97 & 27.09 & 53.84 & 11.13 & 17.71 & 68.06 \\ & Ext-GAIL & V & 19.91 & 36.31 & 53.84 & - & - & - \\ & P1V & L & 20.14 & 38.36 & 67.29 & 11.32 & 18.85 & 70.53 \\ & PDPP & C & 26.72 & 48.92 & 59.04 & 14.41 & 44.10 & 51.39 \\ & Ours & C & **28.88** & **50.20** & 59.75 & **21.30** & **46.84** & 52.45 \\ \hline \hline \end{tabular} \end{table} Table 5: Uncertainty and diversity evaluation on CrossTask.
2302.00014
Transient stellar collisions as multimessenger probes: Non-thermal-, gravitational wave emission and the cosmic ladder argument
In dense stellar clusters like galactic nuclei and globular clusters stellar densities are so high that stars might physically collide with each other. In galactic nuclei the energy and power output can be close, and even exceed, to those from supernovae events. We address the event rate and the electromagnetic characteristics of collisions of main sequence stars (MS) and red giants (RG). We also investigate the case in which the cores form a binary and emit gravitational waves. In the case of RGs this is particularly interesting because the cores are degenerate. We find that MS event rate can be as high as tens per year, and that of RGs one order of magnitude larger. The collisions are powerful enough to mimic supernovae- or tidal disruptions events. We find Zwicky Transient Facility observational data which seem to exhibit the features we describe. The cores embedded in the gaseous debris experience a friction force which has an impact on the chirping mass of the gravitational wave. As a consequence, the two small cores in principle mimic two supermassive black holes merging. However, their evolution in frequency along with the precedent electromagnetic burst and the ulterior afterglow are efficient tools to reveal the impostors. In the particular case of RGs, we derive the properties of the degenerate He cores and their H-burning shells to analyse the formation of the binaries. The merger is such that it can be misclassified with SN Ia events. Because the masses and densities of the cores are so dissimilar in values depending on their evolutionary stage, the argument about standard candles and cosmic ladder should be re-evaluated.
Pau Amaro Seoane
2023-01-31T19:00:01Z
http://arxiv.org/abs/2302.00014v2
# Transient stellar collisions as multimessenger probes: ###### Abstract In dense stellar clusters like galactic nuclei and globular clusters stellar densities are so high that stars might physically collide with each other. In galactic nuclei the energy and power output can be close, and even exceed, to those from supernovae events. We address the event rate and the electromagnetic characteristics of collisions of main sequence stars (MS) and red giants (RG). We also investigate the case in which the cores form a binary and emit gravitational waves. In the case of RGs this is particularly interesting because the cores are degenerate. We find that MS event rate can be as high as tens per year, and that of RGs one order of magnitude larger. The collisions are powerful enough to mimic supernovae- or tidal disruptions events. We find Zwicky Transient Facility observational data which seem to exhibit the features we describe. The cores embedded in the gaseous debris experience a friction force which has an impact on the chirping mass of the gravitational wave. As a consequence, the two small cores in principle mimic two supermassive black holes merging. However, their evolution in frequency along with the precedent electromagnetic burst and the ulterior afterglow are efficient tools to reveal the impostors. In the particular case of RGs, we derive the properties of the degenerate He cores and their H-burning shells to analyse the formation of the binaries. The merger is such that it can be misclassified with SN Ia events. Because the masses and densities of the cores are so dissimilar in values depending on their evolutionary stage, the argument about standard candles and cosmic ladder should be re-evaluated. Subject headings:stellar collisions -- gravitational waves -- multimessenger probes ## 1. Motivation Dense stellar systems such as globular clusters and galactic nuclei have stellar densities ranging between a million and a hundred million stars per cubic parsec. In them, relative velocities of the order of \(\sim\) a few 10 km/s in the case of globular clusters and of \(\sim 100-1000\) km/s in the case of galactic nuclei can be reached (Neumayer et al., 2020; Spitzer, 1987; Binney & Tremaine, 2008). In these exceptional conditions, and unlike anywhere else in the host galaxy, collisional effects come into play. With "collisional" we mean in general mutual gravitational deflections which lead to an exchange of energy and angular momentum, but also in particular genuine contact collisions. The possibility that collisions between stars play a fundamental role both in explaining particular observations and in the global influence of dense stellar systems has been studied with dedicated numerical studies (Spitzer & Saslaw, 1966; David et al., 1987a; Sanders, 1970; Benz & Hills, 1987; David et al., 1987b; Davies et al., 1991; Benz & Hills, 1992; Murphy et al., 1991; Lai et al., 1993; Lombardi et al., 1995, 1996; Bailey & Davies, 1999a; Davies et al., 1998; Bailey & Davies, 1999b; Lombardi et al., 2002a; Shara, 2002; Adams et al., 2004; Trac et al., 2007; Dale et al., 2009; Wu et al., 2020; Mastrobuono-Battisti et al., 2021; Vergara et al., 2021). We have chosen to first focus on galactic nuclei. We then address globular clusters, in which the rates are larger due to the smaller relative velocities between the stars participating in the collision (which is of the order of the velocity dispersion). 
For galactic nuclei, we first derive the event rate of these collisions as a function of the host galaxy cusp (section 2) and analyse analytically the non-thermal properties of the outcome of such collisions (sections 3 and 4). This analysis is performed for both main-sequence stars and, later, for red giants (section 6). The electromagnetic analysis reveals that these collisions can mimic, over periods of time, tidal disruption events and also Type Ia supernovae (da Silva, 1993). Our analysis is a dynamical and analytical one, and depends on only two free parameters, whose values should be extracted with dedicated numerical simulations. We extend the analysis to the gravitational radiation phase as emitted by a subset of these collisions, namely those in which the core survives and forms a binary (section 5). Red giants have a very compact nucleus and can always withstand the onslaught of the collision. We find that the number of gravitational wave sources that form is not negligible, and leads to the emergence of a type of source that can be misleading: a source that drastically changes its characteristics within a very short time. In a matter of months, the binary that forms initially appears to have a few solar masses, only to later appear as a supermassive black hole binary. Similarly, the luminosity distance varies tremendously in that short interval of time. Due to the multi-messenger characteristics of this source, the extraction of information is very interesting and complementary. That is, electromagnetic data can help us to break various degeneracies in the analysis of gravitational waves and vice versa. In the particular case of the red giants, the rates are very high and, because the electromagnetic nature of the process very strongly depends on the stage of the evolution of the colliding red giants, if these collisions were confused with supernovae events, which are used as a kind of standard candle, the ladder argument to calculate cosmological distances would be in need of revision. Although galactic nuclei are often left out of supernova searches, it is often difficult if not impossible to discern the nucleus due to a lack of resolution. Moreover, collisions happen more frequently in globular clusters, as mentioned before, which are located off the plane and away from the galactic nucleus, and are hence not excluded in the searches. However, the low relative velocities lead to a different kind of phenomenon: stellar pulsations. In Sec. (7) we find that the collisions in globular clusters can lead to the classical Cepheid pulsation phenomenon. We show that in the adiabatic, spherical case this is a stable phenomenon, and we calculate the associated timescale (sections 7.4 and 7.3). However, ulterior inputs of energy are required if the vibrational or thermal instabilities dissipate the oscillations. These additional inputs of energy can happen if further collisions take place with the same companion star (in the case of binary formation) or with another star, or if internal instabilities lead to them. The classical pulsation problem has been envisaged as another rung in the standard candle classification of the cosmological ladder, so that this must be addressed in more detail than we present here, and will be presented elsewhere. We discuss the supernovae and pulsating star misclassification in the context of the cosmological ladder in Sec. (8). Finally, in Sec. (9) we present a summary of all of the conclusions from our investigations.

## 2.
Event rate derivation The quasi-steady solution for how stars distribute around a massive black hole (MBH) follows an isotropic distribution function in physical space of the form \(\rho(r)\sim R^{-\gamma}\), where \(\rho\) is the stellar density \(\rho\) and \(R\) the radius (Peebles, 1972; Bahcall & Wolf, 1976). This mathematical derivation has been corroborated using numerical techniques (Shapiro & Marchant, 1978; Marchant & Shapiro, 1979, 1980; Shapiro & Teukolsky, 1985; Freitag & Benz, 2001; Amaro-Seoane et al., 2004; Preto et al., 2004) and, recently, a comparison with data from our Galactic Centre yields a very good match between observations, theory and numerical simulations (Baumgardt et al., 2018; Gallego-Cano et al., 2018; Schodel et al., 2018). Therefore, we assume a power-law mass distribution for the numerical density of stars around the MBH, \(n_{*}(R)\propto R^{-\gamma}\), with \(R\) the radius. Following this, we can derive that the enclosed stellar mass around the MBH within a given radius is (see e.g. Amaro-Seoane, 2019) \[M_{*}(R)=M_{\bullet}\left(\frac{R}{R_{\rm infl}}\right)^{3-\gamma}. \tag{1}\] In this last equation \(M_{*}(R)\) is the stellar mass at a radius \(R\), \(M_{\bullet}\) is the mass of the MBH, \(R_{\rm infl}\) is the influence radius of the MBH (i.e. the radius within which the potential is dominated by the MBH) and \(\gamma\) is the exponent of the power law. Hence, the total number of stars at that radius is \[N_{*}(R)=\frac{M_{\bullet}}{m_{*}}\left(\frac{R}{R_{\rm infl}}\right)^{3-\gamma }, \tag{2}\] where \(m_{*}\) is the mass of one star and we are assuming for simplicity that all stars have the same mass and radius \(R_{*}\), so that the stellar mass density at a given radius is \(\rho_{*}(R)=m_{*}\,n_{*}(R)\). Therefore, we have that the numerical density is \[n_{*}(R)=\frac{3-\gamma}{4\pi}\frac{M_{\bullet}}{m_{*}R_{\rm infl}^{3}}\left( \frac{R}{R_{\rm infl}}\right)^{-\gamma}, \tag{3}\] since \(dN_{*}/dR=4\pi R^{2}\,n_{*}\). At the radii of interest, those close to the MBH, within the radius of influence, the typical relative velocity between stars \[V_{\rm rel}(R)=K_{\nu}\sqrt{\frac{GM_{\bullet}}{R}} \tag{4}\] is \(V_{\rm rel}(R)\geqslant V_{\rm esc}\), with \(V_{\rm esc}\) the escape velocity from the stellar surface, \[V_{\rm esc}=\sqrt{\frac{2Gm_{*}}{R_{*}}}, \tag{5}\] and \(K_{\nu}\) depends on \(\gamma\) and is of order unity. The collision rate for one star can be estimated as \[\frac{1}{T_{\rm coll,\,1}(R)}=n_{*}(R)\,V_{\rm rel}(R)\,S, \tag{6}\] with \(S\) the cross-section, \[S=\pi\left(f_{\rm coll}2R_{*}\right)^{2}, \tag{7}\] since we are neglecting the gravitational focusing, because \(V_{\rm rel}(R)\geqslant V_{\rm esc}\), so that \(S\) can be computed geometrically. In practise this means that we are looking at a lower-limit case, since the rates could be slightly enhanced. This is particularly true in globular clusters, where the relative velocity is lower. As stated in the introduction, nonetheless, we are focusing in galactic nuclei, which is a lower-limit case of the general scenario. In this equation, \(f_{\rm coll}\) defines how deep a collision is. Introducing Eq. (7), \(n_{*}(R)\) and \(V_{\rm rel}(R)\) in Eq. 
(6), we have that \[\frac{1}{T_{\rm coll,\,1}(R)}=(3-\gamma)K_{\nu}f_{\rm coll}^{2}\left(\frac{R_{*}}{R_{\rm infl}}\right)^{2}\frac{M_{\bullet}}{m_{*}}\sqrt{\frac{GM_{\bullet}}{R_{\rm infl}^{3}}}\left(\frac{R}{R_{\rm infl}}\right)^{-(\gamma+1/2)} \tag{8}\] The total collisional rate in the cusp around the MBH is \[\Gamma_{\rm coll}=\frac{N_{*}}{2}\frac{1}{T_{\rm coll,\,tot}}, \tag{9}\] since \(N_{*}=4\pi R^{2}n_{*}\) and we take into account that for a collision we need two stars. Therefore \[\Gamma_{\rm coll}=2\pi\int_{R_{\rm min}}^{R_{\rm max}}n_{*}(R)\frac{R^{2}}{T_{\rm coll,\,1}(R)}dR. \tag{10}\] In this integral we choose the maximum radius \(R_{\rm max}\) to be the distance within the influence radius at which \(V_{\rm rel}(R)=V_{\rm esc}(R)\), i.e. \[R_{\rm max}=K_{\nu}^{-2}R_{*}\frac{M_{\bullet}}{m_{*}}, \tag{11}\] and the minimum radius \(R_{\rm min}\) to be the radius which contains on average one star. From Eq. (2) we derive that \[R_{\rm min}=R_{\rm infl}\left(\frac{m_{*}}{M_{\bullet}}\right)^{\frac{1}{3-\gamma}}. \tag{12}\] We note that the interior mass enclosed in \(R_{\rm max}\) is \[M_{*,\rm max}=K_{\nu}^{-2(3-\gamma)}M_{\bullet}\left(\frac{R_{*}}{R_{\rm infl}}\right)^{3-\gamma}\left(\frac{M_{\bullet}}{m_{*}}\right)^{3-\gamma}\approx\] \[M_{\bullet}\left(\frac{\sigma_{\nu}^{2}}{V_{\rm esc}^{2}} \right)\ll M_{\bullet}, \tag{13}\] where \(\sigma_{\nu}\) is the velocity dispersion at large distances from the MBH. This last equation means that \(R_{\rm max}\ll R_{\rm infl}\). We can now integrate Eq. (10), \[\Gamma_{\rm coll}= \,2.12\times 10^{-9}\frac{1}{yr}\frac{(3-\gamma)^{2}}{5-4\gamma}K_{ \nu}\left(\frac{f_{\rm coll}}{0.25}\right)^{2}\left(\frac{R_{*}}{1\,R_{\odot}} \right)^{2}\times\] \[\left(\frac{M_{\bullet}}{10^{6}M_{\odot}}\right)^{5/2}\left(\frac {m_{*}}{1\,M_{\odot}}\right)^{-2}\left(\frac{R_{\rm infl}}{1\,{\rm pc}}\right)^ {-7/2}\times\] \[\left[A(\gamma)\left(\frac{R_{*}}{R_{\odot}}\right)^{-2\gamma+5/2 }\left(\frac{M_{\bullet}}{10^{6}M_{\odot}}\right)^{-2\gamma+5/2}\left(\frac{R _{\rm infl}}{1\,{\rm pc}}\right)^{2\gamma-5/2}\times\right.\] \[\left.\left(\frac{m_{*}}{1\,M_{\odot}}\right)^{2\gamma-5/2}-B( \gamma)\left(\frac{M_{\bullet}}{10^{6}M_{\odot}}\right)^{\frac{2\gamma-5/2}{ 3-\gamma}}\left(\frac{m_{*}}{1\,M_{\odot}}\right)^{\frac{5\gamma-5/2}{3-\gamma }}\right], \tag{14}\] where we have defined \[A(\gamma):= 2.25^{-2\gamma+5/2}10^{4\gamma-5}K_{\nu}^{4\gamma-5}\] \[B(\gamma):= 10^{\frac{12\gamma-15}{3-\gamma}} \tag{15}\] We note that, since \(R_{\rm min}\ll R_{\rm max}\), the rates are dominated at short distances from the MBH, so that the first term in the square brackets of Eq. (14) can in principle be neglected. However, since this could artificially increase the rates, we do not neglect it. We have normalised \(f_{\rm coll}\) to 0.25 because we are interested in collisions which lead to a total disruption of the stars. This situation is achieved when the periastron distance of a gravitational two-body hyperbolic encounter in the centre-of-mass reference frame \(d_{\rm min}\) has the value \[d_{\rm min}=\left(R_{\rm half,1}+R_{\rm half,2}\right), \tag{16}\] with \(R_{\rm half,1}\) the half-mass radius of the first star participating in the collision (and \(R_{\rm half,1}=R_{\rm half,2}\) since we assume they have the same radius and mass). Therefore, for a completely disruptive collision, \(f_{\rm coll}\), which as we explained is a measure of the depth of the impact, is \[f_{\rm coll}\approx\frac{R_{\rm half}}{R_{*}}. \tag{17}\] As we can see in e.g. Fig.
4 of Freitag & Benz (2005) (and see also their Fig. 9), for \(m_{*}=1\,M_{\odot}\), \(R_{*}=1\,R_{\odot}\), and then \(f_{\rm coll}=0.25\). For \(m_{*}=10\,M_{\odot}\), \(R_{*}=6\,R_{\odot}\), and \(f_{\rm coll}=0.2\). As for the influence radius, we use the so-called "mass-sigma" correlation (McConnell et al., 2011; Kormendy & Ho, 2013; Davis et al., 2017) for black hole masses in nearby galaxies, \[\frac{M_{\bullet}}{3\times 10^{8}\,M_{\odot}}\cong\left(\frac{\sigma}{200\,{\rm km \,s}^{-1}}\right)^{5}, \tag{18}\] with \(\sigma\) the velocity dispersion of the stars. This combined with the definition of the influence radius which takes into account the overall effect on the motion of a star by the bulge, including those that have moved away from the MBH, as introduced by Peebles (1972), \[R_{\rm infl}=\frac{GM_{\bullet}}{\sigma^{2}}, \tag{19}\] leads to \[R_{\rm infl}=1.05\,{\rm pc}\times\left(\frac{M_{\bullet}}{10^{6}\,M_{\odot}} \right)^{0.6}, \tag{20}\] and note that for \(M_{\bullet}=4\times 10^{6}\,M_{\odot}\) such as the one in our Galactic Centre, \(R_{\rm infl}=2.5\,{\rm pc}\), which is close to the value observed of \(\sim 3\)pc (Schodel et al., 2014, 2018). Hence, for \(M_{\bullet}=10^{7}\,M_{\odot}\), \(R_{\rm infl}=4.2\,{\rm pc}\) and for \(M_{\bullet}=10^{5}\,M_{\odot}\), \(R_{\rm infl}=0.27\,{\rm pc}\). For a Bahcall-Wolf power-law (Bahcall & Wolf, 1976), \(\gamma=7/4\), taking \(f_{\rm coll}=0.25\), and the default values given in Eq. (14), we obtain that \(\Gamma_{\rm coll,\,6}\cong 10^{-4}{\rm yr}^{-1}\) for a Milky-Way-like nucleus, i.e. with a MBH in this mass range, \(M_{\bullet}=10^{6}\,M_{\odot}\) (as indicated with the sub-index 6). The calculation of the event rate applies to nuclei hosting MBHs with masses between \(\sim 10^{5}-10^{7}\,M_{\odot}\), since for larger MBH masses the relaxation time would exceed a Hubble time, and for lighter MBH masses the MBH is in the intermediate-mass regime and hence cannot be envisaged as fixed in the centre of the potential, but wandering, which renders the calculation much more complicated. For \(M_{\bullet}=10^{7}\,M_{\odot}\), and taking the same parameters as for the \(M_{\bullet}=10^{6}\,M_{\odot}\) case but for the influence radius, we obtain that \(\Gamma_{\rm coll,\,7}\cong 2\times 10^{-4}{\rm yr}^{-1}\), and for \(M_{\bullet}=10^{5}\,M_{\odot}\), \(\Gamma_{\rm coll,\,5}\cong 10^{-5}{\rm yr}^{-1}\). Assuming an observable distance of \(100\,{\rm Mpc}\) for these events, this translates into an observable volume of \(\sim 4.2\times 10^{6}\,{\rm Mpc}^{3}\). Within this volume, and assuming \(10^{-2}\) MBH of \(M_{\bullet}=10^{6}\,M_{\odot}\) per \({\rm Mpc}^{3}\) (see Fig.2 of Kelly & Merloni 2012), we derive a total of \(4.2\times 10^{4}\) sources, i.e. nuclei hosting MBHs with a mass of \(M_{\bullet}=10^{6}\,M_{\odot}\), so that this multiplied by \(\Gamma_{\rm coll,\,6}\) leads to a total event rate of \(\Gamma_{\rm coll,\,6}^{\rm tot}\sim 4.2\,{\rm yr}^{-1}\). For MBHs with masses of \(10^{7}\,M_{\odot}\), the work of Kelly & Merloni (2012) yields \(6\times 10^{-3}\) MBH per \({\rm Mpc}^{3}\), and hence \(\Gamma_{\rm coll,\,7}^{\rm tot}\sim 5\,{\rm yr}^{-1}\). For MBHs with masses of \(10^{5}\,M_{\odot}\), and extrapolating the results of Kelly & Merloni (2012), to about \(10^{-2}\) MBH per \({\rm Mpc}^{3}\) as well, we have that \(\Gamma_{\rm coll,\,5}^{\rm tot}\sim 0.42\,{\rm yr}^{-1}\). 
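As a quick cross-check of these figures, Eq. (10) can be integrated numerically with the fiducial values used above (γ = 7/4, f_coll = 0.25, K_ν = 1, m* = 1 M_⊙, R* = 1 R_⊙). The short Python sketch below is only a reproduction aid, not code used for this work; the variable names and the simple trapezoidal integration are ours.

```python
import numpy as np

# fiducial parameters (SI units)
G, Msun, Rsun, pc, yr = 6.674e-11, 1.989e30, 6.957e8, 3.086e16, 3.156e7
Mbh, mstar, Rstar = 1e6 * Msun, 1.0 * Msun, 1.0 * Rsun
gamma, f_coll, K_nu = 1.75, 0.25, 1.0
R_infl = 1.05 * pc * (Mbh / (1e6 * Msun))**0.6            # Eq. (20)

n_star = lambda R: (3 - gamma) / (4 * np.pi) * (Mbh / mstar) / R_infl**3 * (R / R_infl)**(-gamma)  # Eq. (3)
V_rel  = lambda R: K_nu * np.sqrt(G * Mbh / R)            # Eq. (4)
S      = np.pi * (2 * f_coll * Rstar)**2                  # Eq. (7), geometric cross-section

R_min = R_infl * (mstar / Mbh)**(1 / (3 - gamma))         # Eq. (12)
R_max = Rstar * Mbh / mstar / K_nu**2                     # Eq. (11)

R = np.logspace(np.log10(R_min), np.log10(R_max), 4000)
integrand = 2 * np.pi * n_star(R)**2 * V_rel(R) * S * R**2   # Eq. (10) with 1/T_coll,1 = n* V_rel S
Gamma = np.trapz(integrand, R) * yr                       # collisions per year in one nucleus
print(f"Gamma_coll ~ {Gamma:.1e} per yr")                 # of order 1e-4 /yr, as quoted in the text
```

The integral is dominated by the smallest radii, which is why the result is insensitive to the exact choice of the upper bound.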
Therefore, neglecting the contribution of \(10^{5}\,M_{\odot}\) MBHs, and for a mass range for the MBH between [\(10^{6}\), a \(\rm{few}\,10^{7}M_{\odot}\), we have a total integrated event rate of \(\gtrsim 100\,{\rm yr}^{-1}\) in \(100\,{\rm Mpc}\). In Fig.(1) we show \(\Gamma_{\rm coll,\,6}^{\rm tot}\) and \(\Gamma_{\rm coll,\,7}^{\rm tot}\) for various typical values of \(\gamma\) in a volume of \(100\,{\rm Mpc}\) of radius. ## 3. Energy release During the collision release of nuclear energy is negligible (see Mathis, 1967; Rozyczka et al., 1989). Gravitational energy can also be neglected in the kind of collisions we are considering (very high velocities and \(f\sim 0.2\)). We can also neglect radiative transport, since the merging stars are obviously optically thick while the collision it taking place. During it, the energy transport by radiation is diffusive. In this kind of almost head-on stellar collisions and in our framework of high relative velocities, the colliding stars merge into a single object surrounded by a gaseous structure which is approximately spherical (see e.g. the numerical work of Freitag & Benz 2005). This gaseous cloud will expand at a speed which is equivalent to the average relative speeds one observes at galactic centres harbouring MBHs of masses \(M_{\bullet}=[10^{6}\), a few \(10^{7}]\,M_{\odot}\). In this section, we first estimate the timescale for the energy to diffuse from the centre of the cloud to the surface and the timescale associated for the cloud to become transparent. Then we calculate the total emission of the energy and its time dependency, as well as the luminosity. ### Diffusion of energy: Timescales We estimate the associated timescales for a cloud to diffuse energy to the surface and for it to become fully transparent. We consider it to be transparent when the mean free path of photons is larger than the radius of the cloud. We define the mean free path \(l(t)\) (which changes over time) as the average distance for a photon between two interactions with two electrons at a given time, so that the time to cover it is \(l(t)/c\), with \(c\) the speed of light. Since we are talking about a random-walk process, the average number of steps of length \(l(t)\) for the photon to cover a distance \(R(t)\) (the radius of the cloud, function of time) is \[N(t)=\left(\frac{R(t)}{l(t)}\right)^{2}, \tag{21}\] because the average of the squared distance is proportional to the time in a random walk. We define the diffusion time as this number of steps multiplied by the time to cover the distance between two interactions, so that \[T_{\rm diff}(t)\cong\frac{N(t)\,l(t)}{c}. \tag{22}\] We now calculate the mean free path by estimating the probability \(P_{\rm coll}\) that an electron collides with a photon after a distance \(x\), \[dP_{\rm coll}=S_{\rm eff}\,n\,dx, \tag{23}\] with \(S_{\rm eff}\) is the effective area and \(n\) the numerical density of electrons. Hence, the collisional rate for one electron is \[\Gamma_{\rm e}=\frac{dP_{\rm coll}}{dt}=S_{\rm eff}\,n\,v, \tag{24}\] with \(v\) the relative velocity between the electron and the photon, i.e. \(v=c\). Therefore, the average number of collisions over a distance \(x\) is \[N_{\rm coll}=S_{\rm eff}\,nx. \tag{25}\] By setting \(N_{\rm coll}=1\) in this last equation, we derive the value of \(x\), i.e. the mean free path, \[l=\frac{1}{S_{\rm eff}\,n}. \tag{26}\] Since \(n=\rho_{\rm g}/m\), with \(m\) the mass of one "gas particle" (i.e. 
the proton mass, since we assume that we have completely ionised H) per electron, \[l=\frac{m}{\rho_{\rm g}\,S_{\rm eff}}, \tag{27}\] which allows us to introduce the usual definition of opacity, \(\kappa=S_{\rm eff}/m\). If we assume that the ionisation degree does not change, then \(l\propto 1/\rho_{\rm g}\), and since \(\rho_{\rm g}\simeq M/R(t)^{3}\), we derive that \(l(t)\propto R(t)^{3}\). Therefore, there must be a time at which \(l(t)>R(t)\) and the cloud is transparent, \(t=t_{\rm transp}\). If at that moment there is still enough energy in the form of photons in the cloud, they will be able to escape it instantaneously, even if they are located at the centre of the cloud, in a straight line, without diffusion. I.e. if \(t\) is the time elapsed since the formation of the cloud (i.e. right after the collision), and \(t\ll T_{\rm diff}\), then most of the photons are still trapped in the cloud. Nonetheless, \(t\) obviously increases and \(T_{\rm diff}\) varies in time, so that there might be a moment in which \(t>T_{\rm diff}\) before we reach \(t=t_{\rm transp}\). We need to estimate these timescales. From the previous equations, we have that \[T_{\rm diff}(t)\simeq\frac{\kappa}{c}\frac{M}{R(t)}, \tag{28}\] with \(\kappa=0.04\,{\rm m}^{2}{\rm kg}^{-1}\) (a lower bound for an ionised gas, due to electron scattering). On the right-hand side of this last equation everything is constant except for \(R(t)\), which increases, so that \(T_{\rm diff}(t)\) decreases with time. We can calculate at what time \(t=T_{\rm diff}\) is reached, so as to compare it with \(t=t_{\rm transp}\). An approximation is to set \(T_{\rm diff}=t\) in Eq. (28), so that if we approximate the expansion velocity \(V_{\rm exp}\) by the relative velocity, \(V_{\rm exp}=10^{4}\,{\rm km}\,{\rm s}^{-1}\) (we will elaborate on this choice later), we have that \(t=\sqrt{\kappa M/(V_{\rm exp}c)}\). Hence, \[T_{\rm diff}\sim 0.16\,{\rm yrs}\sim 2\,{\rm months}. \tag{29}\] After reaching this time, approximately half of the total energy contained in the cloud has been released and the remaining half is still trapped in it. If we wait two times this amount of time, half of half the initial energy will still be in the cloud, so that the remaining amount of energy in the cloud goes as \(1/2^{n}\) the initial amount, with \(n\) the number of temporal intervals corresponding to \(T_{\rm diff}\). We note that this assumes that \(T_{\rm diff}(t)\) is the same as \(T_{\rm diff}(0)\). This is of course not true, but it gives us a first rough estimate of the initial timescale for half of the energy to be released. We will improve this approximation in Sec. (3.3).

Figure 1.— Total amount of events per year in a volume of 100 Mpc of radius for two different values of MBHs and for typical values of the power index \(\gamma\). We note that \(\gamma=1.75\) corresponds to the theoretical expectation of a relaxed nucleus for a single-mass population (Peebles 1972; Bahcall & Wolf 1976). We show lower values as an illustration for the dependency of \(\Gamma_{\rm coll}^{\rm tot}\) with \(\gamma\), which is not obvious from Eq. (14). At smaller values of \(\gamma\), \(10^{6}M_{\odot}\) is the upper curve and from \(\gamma\sim 1.625\) the situation reverts and the upper one corresponds to \(10^{7}\,M_{\odot}\).

To calculate at what time \(t_{\rm transp}\) is reached, we substitute \(R(t)=l(t)\), so that, \[R(t)\simeq\sqrt{\kappa\,M}.
\tag{30}\] Adopting the same values as before, we find that \(t_{\rm transp}\sim 9\,\)yr. When the cloud has become transparent, all of the energy will have already been radiated away via diffusion.

### Total emission of energy

The total energy involved in the collision, \(E_{\rm tot}\), is the sum of the binding energies of the two stars which take part in the collision (\(E_{\rm bin}\)) plus the kinetic energy \(E_{\rm kin}\) at infinity. For one of the stars participating in the collision, these values are \[E_{\rm kin} = \frac{\mu}{2}V_{\rm rel}^{2}\] \[E_{\rm bin} = \alpha\,\frac{Gm_{*}^{2}}{R_{*}}, \tag{31}\] with \(\mu:=m_{*,1}m_{*,2}/(m_{*,1}+m_{*,2})\) the reduced mass, and \(\alpha=3/(5-n)\), with \(n=3\) for a Sun-like star (see Chandrasekhar 1942, for the equation and value). We can approximate \(E_{\rm bin}\approx m_{*}V_{\rm esc}^{2}\), so that for the two stars \[E_{\rm tot}\approx-\left(m_{*,1}V_{\rm esc,1}^{2}+m_{*,2}V_{\rm esc,2}^{2} \right)+\mu V_{\rm rel}^{2}. \tag{32}\] As mentioned in Sec. (2), since \(R_{\rm min}\ll R_{\rm max}\), the collisional rate will be dominated at smaller radii. For \(M_{\bullet}=10^{6}\,M_{\odot}\), and for the adopted value of \(\gamma=7/4\), Eq. (12) yields \(R_{\rm min}\sim 10^{-5}\)pc, so that \(V_{\rm rel}\cong 20000{\rm km\,s}^{-1}\). One order of magnitude farther away from the centre in radius, at \(10^{-4}\)pc, \(V_{\rm rel}\cong 6500{\rm km\,s}^{-1}\). For \(M_{\bullet}=10^{7}\,M_{\odot}\), the minimum radius is also \(R_{\rm min}\sim 10^{-5}\)pc but \(V_{\rm rel}\cong 65000{\rm km\,s}^{-1}\). At a distance from the MBH of \(\sim 10^{-4}\)pc, \(V_{\rm rel}\cong 20000{\rm km\,s}^{-1}\). At such high relative velocities, we can ignore the contribution of the binding energy of the stars in Eq. (32). To consider two limiting cases, a \(M_{\bullet}=10^{7}\,M_{\odot}\) at \(R_{\rm min}\sim 10^{-5}\)pc yields \(E_{\rm tot}\approx 42\,\)foe (\(4.2\times 10^{52}\)ergs), while a \(M_{\bullet}=10^{6}\,M_{\odot}\) at a distance of \(10^{-4}\)pc yields \(E_{\rm tot}\approx 0.42\,\)foe (\(4.2\times 10^{50}\)ergs). A "typical" case would range between these two limits; i.e. \(E_{\rm tot}\approx 1\)foe, which is the usual energy release of a supernova (considering \(V_{\rm rel}\cong 10000{\rm km\,s}^{-1}\) at \(10^{-4}\)pc).

### Time evolution of the released energy and power

We define the loss of energy in the cloud as \[\frac{dE}{dt}=-\frac{E}{T_{\rm diff}(t)}, \tag{33}\] with \(T_{\rm diff}(t)\) as given by Eq. (28). The physical meaning of the last equation is that we are identifying \(T_{\rm diff}(t)\), the time for the photons to escape the cloud, as the main energy sink of the cloud and, hence, the right-hand side is negative. Therefore, \[\frac{dE}{E}=-\frac{1}{\xi}\,t\,dt, \tag{34}\] with \(\xi^{-1}:=cV_{\rm exp}/(\kappa M)\). The solution to Eq. (34) is \[E(t)=E(0)\left(\frac{\eta}{1}\right)\exp\left[-\frac{1}{2\,\xi}t^{2}\right]. \tag{35}\] Here \(\eta\) is a parameter quantifying the amount of initial kinetic energy \(E(0)\) that goes into radiation. The value of \(\eta\) depends on the details of the collision and in particular on the slowing down of the shock downstream; i.e. how the shock evolves during the collision will alter the relative velocity of the parts of the stars which have still not collided, and this sets the total efficiency of the conversion of kinetic energy into radiation. See for instance the work of Calderon et al. (2020), in particular their Figs. 4-10.
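The Gaussian decay implied by Eq. (35) can be checked directly by integrating Eq. (33) numerically with \(T_{\rm diff}(t)\) from Eq. (28) and \(R(t)\approx V_{\rm exp}t\). The sketch below is an illustration only; it normalises the cloud mass to 1 M_⊙ (the value that reproduces the ∼2 months of Eq. (29)) and neglects \(R(0)\).

```python
import numpy as np

kappa, c = 0.04, 2.998e8               # opacity [m^2/kg], speed of light [m/s]
M, V_exp = 1.989e30, 1.0e7             # cloud mass ~ 1 Msun [kg], expansion speed 10^4 km/s [m/s]
E0, eta = 1.0e51, 1.0                  # initial energy [erg], efficiency parameter
month = 2.63e6                         # seconds

xi = kappa * M / (c * V_exp)           # [s^2]; T_E = sqrt(xi)
t = np.linspace(0.0, 8 * month, 2000)  # 0 to ~8 months
dt = t[1] - t[0]

# forward-Euler integration of dE/dt = -E/T_diff(t), with T_diff = kappa*M/(c*R) and R = V_exp*t
E_num = np.empty_like(t)
E_num[0] = eta * E0
for i in range(1, t.size):
    T_diff = kappa * M / (c * V_exp * t[i])
    E_num[i] = E_num[i - 1] * (1.0 - dt / T_diff)

E_closed = eta * E0 * np.exp(-t**2 / (2 * xi))   # Eq. (35)
print("T_E ~ %.2f months" % (np.sqrt(xi) / month))
print("max deviation (fraction of E0): %.2e" % np.max(np.abs(E_num - E_closed) / E0))
```

The numerical solution closely tracks the closed form, confirming that the only relevant scale in the decay is \(\sqrt{\xi}\).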
In this work they focus on relatively low velocities and stellar winds but it illustrates the non-linearity of our problem. The derivation of this parameter requires detailed numerical simulations. We now introduce \[T_{\rm E}\equiv\sqrt{\frac{\kappa\,M}{c\,V_{\rm exp}}}=\sqrt{T_{\rm diff}(0) \frac{R(0)}{V_{\rm exp}}}, \tag{36}\] as we can see from Eq. (28). This corresponds to \(t\) in the approximation we did before, to obtain Eq. (29). Indeed, for the values we adopted to derive Eq. (29), we have that \(T_{\rm E}=0.16\,\)yr. We can now rewrite Eq. (34) as \[E(t)=E(0)\left(\frac{\eta}{1}\right)\exp\left[-\frac{1}{2}\,\left(\frac{t}{T_{ \rm E}}\right)^{2}\right]. \tag{37}\] Normalizing to standard values, we have \[E(t)=10^{51}\,{\rm ergs}\left(\frac{E(0)}{10^{51}\,{\rm ergs}}\right)\left( \frac{\eta}{1}\right)\exp\left[-\frac{1}{8}\left(\frac{t}{1\,{\rm month}} \right)^{2}\right]. \tag{38}\] In Fig.(2) we depict this time evolution for an initial energy of \(E(0)=10^{51}\,\)ergs. With Eq. (37) we can obtain the emitted power by deriving this last equation, \[P(t)=-\frac{dE}{dt}=\frac{E(0)}{T_{\rm E}^{2}}\,\left(\frac{\eta}{1}\right)\,t \,\exp\left[-\frac{1}{2}\left(\frac{t}{T_{\rm E}}\right)^{2}\right]. \tag{39}\] Figure 2.— Time evolution of the released energy for four different values of \(\eta\), ranging from 1 (uppermost curve) to 0.1 (lowest curve). We can normalize the equations by defining \(\tau:=t/T_{\rm E}\) and \(P_{\rm norm}\equiv E(0)/T_{\rm E}\), so that \[E(\tau) =E(0)\left(\frac{\eta}{1}\right)\,\exp\left[-\frac{\tau^{2}}{2}\right] \tag{40}\] \[P(\tau) =P_{\rm norm}\,\tau\,\left(\frac{\eta}{1}\right)\,\exp\left[-\frac {\tau^{2}}{2}\right]. \tag{41}\] We note here that \(\tau\) contains the information relative to the scattering length of the environment, in \(\kappa\), since the mean free path \(l=1/(\rho_{\rm p}\kappa)\), as we can see in Eq. (26), so that encoded in \(T_{\rm E}\) in Eq. (37) we have the information about the location of the peak of the distribution, which is, as we derived, after 2 months. Adopting typical values, we can express \(P_{\rm norm}\) as follows, \[P_{\rm norm}\cong\times 10^{44}\,{\rm erg\,s}^{-1}\left( \frac{E(0)}{10^{51}\,{\rm ergs}}\right)\left(\frac{\kappa}{0.04\,{\rm m}^{2}{ \rm kg}^{-1}}\right)^{-1/2}\] \[\left(\frac{M}{1\,M_{\odot}}\right)^{-1/2}\left(\frac{V_{\rm exp} }{10^{4}{\rm km\,s}^{-1}}\right)^{1/2}. \tag{42}\] Therefore, the final equation for the evolution of power with time is \[P(t)\cong 10^{44}{\rm erg\,s}^{-1}\left(\frac{\eta}{1}\right) \left(\frac{t}{1\,{\rm month}}\right)\exp\left[-\frac{1}{8}\left(\frac{t}{1\,{ \rm month}}\right)^{2}\right]\] \[\left(\frac{E(0)}{10^{51}\,{\rm ergs}}\right)\left(\frac{\kappa} {0.04\,{\rm m}^{2}{\rm kg}^{-1}}\right)^{-1/2}\] \[\left(\frac{M}{1\,M_{\odot}}\right)^{-1/2}\left(\frac{V_{\rm exp }}{10^{4}{\rm km\,s}^{-1}}\right)^{1/2}. \tag{43}\] In Fig. (3) we depict this power for various values of \(\eta\). Decreasing \(\eta\) shifts the peak of the power, lowers its maximum and broadens the distribution, as expected from Eq. (33). We have added a line which follows a power-law of time, \(t^{-5/3}\), which corresponds to a stellar tidal disruption (see e.g. Rees 1988). If the observation of the event takes place between the 3rd and 4th month after the collision, it could easily be misinterpreted as a tidal disruption. At later times the curves diverge, so that depending on the observational errors one could discern the two, or not. ## 4. 
Temperature and Spectral Power ### Effective temperature From the previous section, we can now estimate the evolution of the effective temperature of the cloud which expands at a constant velocity \(V_{\rm exp}\). We use the approximation of Stefan-Boltzmann of black body radiation, \(P(t)=\sigma\,T_{\rm eff}^{4}\,\pi\pi R(t)^{2}\), with \(\sigma\) the Stefan-Boltzmann constant and \(T_{\rm eff}\) the effective temperature of the body, and assume that the radius of the cloud coincides with the photosphere. The physical interpretation of the definition of this temperature corresponds to the observed temperature, i.e. what a telescope would measure from the moment of the impact onwards. From Eq. (43) we obtain \[T_{\rm eff}\cong 2.32\times 10^{6}\,{\rm K}\left(\frac{\eta}{1} \right)^{1/4}\left(\frac{t}{1\,{\rm month}}\right)^{1/4}\exp\left[-\frac{1}{2} \left(\frac{t}{1\,{\rm month}}\right)^{2}\right]\] \[\left(\frac{E(0)}{10^{51}\,{\rm ergs}}\right)^{1/4}\left(\frac{ \kappa}{0.04\,{\rm m}^{2}{\rm kg}^{-1}}\right)^{-1/8}\left(\frac{M}{1\,M_{ \odot}}\right)^{-1/8}\] \[\left(\frac{V_{\rm exp}}{10^{4}{\rm km\,s}^{-1}}\right)^{1/8} \left[1+37775\left(\frac{V_{\rm exp}}{10^{4}{\rm km\,s}^{-1}}\right)\left( \frac{t}{1\,{\rm month}}\right)\right]^{-1/2}, \tag{44}\] where we have not neglected the 1 in \(R(t)=R(0)+V_{\rm exp}t\) in the last bracket because this would lead to an artificial value of \(T_{\rm eff}\) at \(t=0\). In Fig. (4) we display the evolution of the effective temperature as a function of time for the values of \(\eta\) of Fig. (3). Since we are dealing with short wavelengths, we can calculate the peak wavelength \(\lambda_{\rm peak}\) of the spectral radiance of the cloud as a function of time using an approximation. This is Wien's displacement law, which relates the absolute tem Figure 4.— Time evolution of \(T_{\rm eff}\) for various values of \(\eta\), following the same order as in Fig. (3). We include two zooms; the top embedded zoom shows in logarithmic scale in the x-axis the whole range of time, from \(10^{-13}\) to 9 months, and the bottom one in linear scale the last few months, from 1 to 9, in logarithmic scale in the y-axis. Figure 3.— Evolution of the power by a stellar disruption of masses \(1\,M_{\odot}\) and \(V_{\rm rad}=10^{4}\,{\rm km\,s}^{-1}\), corresponding to the default values of Eq. (43) for different efficiency parameters \(\eta\). The uppermost curve corresponds to the maximum value of \(\eta\) and the lowermost to the minimum value. We add a power-law curve proportional to \(t^{-5/3}\), which is the typical value one expects from a stellar tidal disruption. perature \(T\) in \(K\) and the peak wavelength as \(T=b/\lambda_{\rm peak}\), with \(b\sim 2.89\times 10^{-3}\,\)m K Wien's displacement constant. In Fig. (5) we show the evolution of \(\lambda_{\rm peak}\) in the different regimes of frequencies as a function of time. ### Kinetic temperature A different definition of temperature is the conversion of kinetic energy into heat as a result of the impact of the stars. This definition will be useful for the derivation of the sound velocity at the innermost region of the outcome of the collision, which will be derived later. 
Assuming an ideal gas, the energy and kinetic temperature of the environment are linked via the usual equation \[E=\frac{3}{2}N\,k\,T_{\rm kin}, \tag{45}\] with and \(k\) the Boltzmann constant, \(N=M_{\rm tot}/\mu\), \(M_{\rm tot}=2\,M_{\odot}\) is the total mass, \(\mu=0.6\,m_{\rm p}=5.05\times 10^{-58}\,M_{\odot}\) the mean molecular mass for fully ionised matter, and \(m_{\rm p}\) the mass of the proton. We adopt this value because it corresponds to the radiative zone of a star with a mass similar to the Sun, where hydrogen and helium constitute most of all elements. In the surface, where the temperature drops significantly, this assumption would be wrong. It follows from Eq. (38) that \[T_{\rm kin}=1.22\times 10^{9}\,{\rm K}\left(\frac{E(0)}{10^{51}\,{\rm ergs}} \right)\left(\frac{\eta}{1}\right)\exp\left[-\frac{1}{8}\left(\frac{t}{1\,{ \rm month}}\right)^{2}\right]. \tag{46}\] We show the evolution of this last equation in Fig. (6). ### Spectral power In the previous sections we have estimated the total amount of energy released, as well as the power and the peak wavelength, which can be used as an approximation to understanding the distribution of energy over different bandwidths. In this section we will derive how the power distributes over different ranges of energy. For that, we first have to obtain the distribution of energy in function of time \(t\) and frequency \(\nu\). Hence, we have to evaluate the following quantity, which we will call the spectral power \[\frac{dE}{dt\,d\nu}=P(t)\,b\,(T_{\rm eff}(t),\nu). \tag{47}\] In this equation the function \(b\,(T_{\rm eff}(t),\nu)\) is the black body spectrum normalised to 1 for \(T_{\rm eff}(t)\) (i.e. the "observable temperature") and \(\nu\). In terms of integration, \(T_{\rm eff}(t)\) can be envisaged as a constant, because we have to integrate in frequencies. I.e. the function corresponds to the spectral radiance of the cloud for frequency \(\nu\) at absolute temperature, Planck's law, but normalised to one, \[b\,(T_{\rm eff}(t),\nu)=\frac{B\,(T_{\rm eff}(t),\,\nu)}{C\,(T_{\rm eff}(t))}. \tag{48}\] Here \(B\,(T_{\rm eff}(t),\,\nu)\) is \[B\,(T_{\rm eff}(t),\,\nu)=\frac{2h\nu^{3}}{c^{2}}\frac{1}{e^{h\nu/(kT)}-1}, \tag{49}\] with \(h\) the Planck constant, \(c\) the speed of light, and we are identifying \(T\equiv T_{\rm eff}(t)\) for clarity. The integral of this equation over the whole range of \(\nu\) does not yield 1, which is why we need to obtain the normalization factor, \[C\,(T_{\rm eff}(t))=\int_{\nu=0}^{\nu=\infty}B\,(T_{\rm eff}(t),\,\nu)\,d\nu. \tag{50}\] If we change the variable \(\alpha=h\nu/(kT)\) so that \(d\alpha=h\nu/(kT)\), we obtain \[C\,(T_{\rm eff}(t))=2\frac{(kT)^{4}}{c^{2}h^{3}}\int_{0}^{\infty}\frac{\alpha ^{3}}{e^{\alpha}-1}d\alpha. \tag{51}\] The integral of Eq. (51) is a special function and a particular case of a Bose-Einstein integral, the Riemann zeta function \(\zeta(s)\), a function of a complex variable \(s\). The integral is analytical and has the solution \[\int_{0}^{\infty}\frac{\alpha^{3}}{e^{\alpha}-1}d\alpha=\zeta(4)\,\Gamma(4), \tag{52}\] with \(\Gamma(n)\) the Gamma function, \(\Gamma(n)=(n-1)!\) if \(n\) is a positive integer. Hence, \(\zeta(4)\,\Gamma(4)=6\zeta(4)=\pi^{4}/15\) and so, Eq. (51) becomes \[C\,(T_{\rm eff}(t))=\frac{2}{15}\,\frac{(Tk\pi)^{4}}{c^{2}h^{3}}. \tag{53}\] Plugging this result into Eq. (48) and using Eq. (49), we derive that \[b\,(T,\nu)=15\left(\frac{h}{\pi kT}\right)^{4}\frac{\nu^{3}}{e^{h\nu/(kT)}-1}. 
\tag{54}\] Therefore, the spectral power of the cloud is \[\nu\,\frac{dE}{dt\,d\nu}=\frac{dE}{dt\,d(\ln\nu)}=\frac{15}{\pi^{4}}\,P(t) \frac{\left[h\nu/(kT)\right]^{4}}{e^{\left[h\nu/(kT)\right]}-1}, \tag{55}\] where we have multiplied Eq. (47) by \(\nu\) to obtain the spectral power in \(\ln\nu\), and \(P(t)\) is given by Eq. (43). In Fig. (7) we depict the spectral power as a function of \(\nu\) for the different values of \(\eta\) taken into consideration. With decreasing \(\eta\) values, the spectral power is obviously lowered but in the range of observable frequencies, i.e. from \(10^{6}\,\)MHz, the values achieve relatively high values. ### Photometric colours and AB magnitude We now display the same information but in a different way. If we define a set a set of passbands (or optical filters), with a known sensitivity to incident radiation, we are in the position of comparing with real data taken from surveys. For that, we first adopt Eq. (55) and remove the factor \(\nu\) on the left-hand-side of the equation, so that we are left with this integral to solve \[\mathcal{C}(t)=\frac{15}{\pi^{4}}P(t)\left[h/(kT(t))\right]^{4}\int_{\nu_{\rm min }}^{\nu_{\rm max}}\frac{\nu^{3}}{e^{\left[h\nu/(kT(t))\right]}-1}\,d\nu, \tag{56}\] where we have identified \(\mathcal{C}\equiv dE/dt\) as the "colour". Depending on the range of frequencies of interest, we will be looking at different bands. In particular, we define the following ranges for the bands of interest (\(\nu\) is given in Hz): U-Band: \(\nu_{\rm min}=7.54\times 10^{14}\), \(\nu_{\rm max}=9.04\times 10^{14}\), B-Band: \(\nu_{\rm min}=6.10\times 10^{14}\), \(\nu_{\rm max}=7.54\times 10^{14}\), G-Band: \(\nu_{\rm min}=5.68\times 10^{14}\), \(7.50\times 10^{14}\), V-Band: \(\nu_{\rm min}=5.04\times 10^{14}\), \(\nu_{\rm max}=5.92\times 10^{14}\), R-Band: \(\nu_{\rm min}=4.13\times 10^{14}\), \(\nu_{\rm max}=5.09\times 10^{14}\). The integral in Eq. (56) is a non-trivial one. However, since the ranges of frequencies that are of our interest are very narrow, what we can do is to approximate the integral by the value of the rectangle delimited by those values. I.e. we simply calculate \[\frac{dE}{dt}=\frac{15}{\pi^{4}}P(t)\frac{\left[h\nu_{\rm avg}/(kT(t))\right] ^{4}}{e^{\left[h\nu_{\rm avg}/(kT(t))\right]}-1}\ln\left(\frac{\nu_{\rm max}}{ \nu_{\rm min}}\right). \tag{57}\] In this expression, \(\nu_{\rm max}\) and \(\nu_{\rm min}\) are determined by the colour of interest and \(\nu_{\rm avg}\) is the characteristic frequency associated with that particular band. We can obtain its value by knowing that the length in nm for the various bands is in the U band \(l=365\) nm, so that \(\nu_{\rm avg}=8.21\times 10^{14}\) Hz, in the B band \(l=445\) nm, and hence \(\nu_{\rm avg}=6.74\times 10^{14}\) Hz, in the G band \(l=464\), \(\nu_{\rm avg}=6.46\times 10^{14}\) Hz, in the V band \(l=551\) nm, \(\nu_{\rm avg}=5.44\times 10^{14}\) Hz and in the R band \(l=658\) nm, \(\nu_{\rm avg}=5.56\times 10^{14}\) Hz. The conversion is straightforward, since \(\nu_{\rm avg}(l)=c/l=3\times 10^{8}/(l\times 10^{-9})\) to obtain Hz. This approximation has an error of about 10% as compared to a numerical integration. In Fig. (8) we show the different evolutions of the photometric indeces as a function of time. In order to derive the absolute magnitude (AB magnitude), we remind the reader that it is usually defined as the logarithm of the spectral flux density which defines a zero point value at 3631 Jy. 
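To illustrate how Eq. (39), the Stefan–Boltzmann relation and the band approximation of Eq. (57) combine into a band-by-band light curve, a minimal sketch is given below. It is for illustration only: \(R(0)=1\,R_{\odot}\), the band edges listed above, and the direct evaluation of \(T_{\rm eff}\) from \(P(t)\) and \(R(t)\) (rather than the fitted prefactor of Eq. (44)) are assumptions we make here.

```python
import numpy as np

# constants and fiducial parameters (SI units)
h, kB, c, sigma_SB = 6.626e-34, 1.381e-23, 2.998e8, 5.670e-8
Rsun, month = 6.957e8, 2.63e6
kappa, M, V_exp = 0.04, 1.989e30, 1.0e7       # m^2/kg, ~1 Msun, 10^4 km/s
E0, eta, R0 = 1.0e44, 1.0, Rsun               # E(0) = 1e51 erg = 1e44 J; R(0) ~ 1 Rsun (assumption)

T_E = np.sqrt(kappa * M / (c * V_exp))        # Eq. (36), ~2 months

def power(t):                                 # Eq. (39), in W
    return eta * E0 / T_E**2 * t * np.exp(-0.5 * (t / T_E)**2)

def T_eff(t):                                 # Stefan-Boltzmann on the photosphere R(t) = R0 + V_exp*t
    R = R0 + V_exp * t
    return (power(t) / (4 * np.pi * sigma_SB * R**2))**0.25

def band_luminosity(t, nu_avg, nu_min, nu_max):
    """Approximate Eq. (57): power radiated in one photometric band, in W."""
    x = h * nu_avg / (kB * T_eff(t))
    return 15 / np.pi**4 * power(t) * x**4 / np.expm1(x) * np.log(nu_max / nu_min)

t = np.arange(1, 9) * month                   # months 1..8 after the collision
L_G = band_luminosity(t, 6.46e14, 5.68e14, 7.50e14)   # G band
for ti, Li in zip(t / month, L_G):
    print(f"t = {ti:.0f} months:  L_G ~ {Li:.2e} W")
```

Converting such band luminosities to fluxes at a chosen distance then yields the AB magnitudes discussed next.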
By defining the spectral flux density as \(\mathcal{F}\), the AB magnitude can be calculated in cgs units as \[m_{\rm AB}=-2.5\log_{10}\mathcal{F}-48.60. \tag{58}\] The bandpass AB magnitude spanning across a continuous range of wavelengths is usually defined in such a way that the zero point corresponds to \(\mathcal{F}\sim 3631\) Jy. Hence, \[m_{\rm AB}\approx-2.5\log_{10}\left[\frac{\int\mathcal{F}(h\nu)^{-1}e(\nu) \,{\rm d}\nu}{3631\int\left(h\nu)^{-1}e(\nu)\,{\rm d}\nu}\right]. \tag{59}\] Figure 5.— _Top, left panel:_ Evolution of the peak wavelength \(\lambda_{\rm peak}\) of the spectral radiance for the cloud. We display the approximate ranges of the spectrum which it will cover in time. The color scheme follows that of Fig.(4), meaning that \(\eta=1\) is the lower curve and the upper one corresponds to \(\eta=0.5\). _Top, right panel:_ Same as the left panel for \(\eta=0.3\) (upper curve) and \(\eta=0.1\) (lower curve). _Bottom, left panel:_ Same as the top, left one, but for different time intervals. We add vertical lines to delimit the different ranges in \(\lambda_{\rm peak}\) in time. In this expression, \(e(\nu)\) is the filter response function and the term \((t\nu)^{-1}\) accounts for the photon-counting device. In Fig. (9) we display the AB magnitude for a typical collision located at a distance of 194.4 Mpc to be able to compare it to the object ZTF19acboexm from the ZTF transient discovery report of Nordin et al. (2019). If this transient had indeed its origin in a stellar collision, then the free parameter responsible for the efficiency of the energy conversion should be of about \(\eta 0.05\). ## 5. Gravitational Waves and Multimessenger Searches If we calculate the binding energy of the cores of the stars which initially are on a hyperbolic orbit and compare it to the total kinetic energy of the system as derived in Sec.(3.2), we obtain that the binding energy is of about one order of magnitude below the total kinetic energy. This is a natural consequence of our choice of the problem, since in this work we are focusing on totally disruptive collisional events, which are the most energetic ones. However, for lower relative velocities, of about \(V_{\rm rel}\leq 2500\,{\rm km\,s}^{-1}\), a fraction of the stellar collisions are such that the inner cores survive the impact and form a temporary binary embedded in a gaseous medium. In this section we will consider a fixed relative velocity of \(V_{\rm rel}=1000\,{\rm km\,s}^{-1}\). With this new value, when evaluating Eq. (32), we find that the total kinetic energy involved is of \(T_{\rm K}\sim 9.94\times 10^{48}\) ergs, while the binding energy of the two stars is of \(E_{\rm bind}\sim 1.57\times 10^{49}\) ergs (i.e. \(\sim 7.6\times 10^{48}\) ergs per star). Therefore, after the collision, one has a gaseous cloud which is expanding very quickly plus two surviving pieces of the stars. If we assume that \(T_{\rm K}\) is distributed equally among the two colliding stars, then each receives an input of \(T_{\rm K}/2=4.97\times 10^{48}\) ergs. This means that after the collision, there would be a leftover of binding energy per star of approximately 40% the initial binding energy of one star. Since the core is the densest part of the star, it stands to reason that this 40% represents the core which is surviving. The core of the Sun has a mass of \(\sim 0.34\,M_{\odot}\). So all we have after the collision is two cores in a gaseous cloud which is expanding. 
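The energy bookkeeping of this paragraph is easy to reproduce. The following sketch (illustrative only) evaluates the kinetic term of Eq. (32) and the approximation \(E_{\rm bin}\approx m_{*}V_{\rm esc}^{2}\) for \(V_{\rm rel}=1000\,{\rm km\,s^{-1}}\), recovering the \(\sim 10^{49}\) erg figures quoted above and a surviving binding-energy fraction of roughly 35–40%.

```python
import numpy as np

G, Msun, Rsun = 6.674e-11, 1.989e30, 6.957e8
m_star, R_star, V_rel = 1.0 * Msun, 1.0 * Rsun, 1.0e6    # two Sun-like stars, V_rel = 1000 km/s

mu = m_star * m_star / (2 * m_star)                      # reduced mass
T_K = mu * V_rel**2                                      # kinetic term of Eq. (32)
V_esc = np.sqrt(2 * G * m_star / R_star)                 # Eq. (5)
E_bind_star = m_star * V_esc**2                          # approximation E_bin ~ m* V_esc^2 of Sec. 3.2

erg = 1e7   # 1 J = 1e7 erg
print(f"T_K           ~ {T_K * erg:.2e} erg")            # ~1e49 erg
print(f"E_bind / star ~ {E_bind_star * erg:.2e} erg")    # ~7.6e48 erg
leftover = E_bind_star - T_K / 2                         # binding energy left if T_K is shared equally
print(f"surviving binding-energy fraction ~ {leftover / E_bind_star:.2f}")  # ~0.35, close to the ~40% quoted
```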
The luminosity of a naked core of a Sun-like star radiates at \(\sim 4\times 10^{33}\) ergs but the total initial kinetic energy radiated right after the collision is of \(\sim 10^{49}\) ergs. We could think that the gaseous cloud will radiate away this energy in such a short timescale that we are left with the two cores which will continue radiating. However, as we will see, the cores will merge before this happens. Therefore we will neglect this extra luminosity of the cores when evaluating the properties of such a "flare" in the following sections. This kind of collisions is a subfraction of the subset of almost head-on collisions, i.e. for small impact parameters (private communication of Marc Freitag, as published in his PhD thesis, but see Freitag & Benz 2005,as well). In this section we will adopt a representative value of \(V_{\rm rel}=10^{3}\,{\rm km\,s}^{-1}\), i.e. one order of magnitude smaller than before, which is of the order of the velocity dispersion in these environments. We note that the derivation of the absolute rates, however, as derived previously, remain the same, since the assumptions we used still hold for our current choice of \(V_{\rm exp}\), even if it is one order of magnitude smaller, as explained in section (2). Nonetheless, Eq. (14) should be multiplied by a fraction number \(f_{\rm bin}\) of those simulations which lead to the temporary formation of a core binary. This is the second free parameter of this article (the first is \(\eta\), responsible for the non-linearity), which would require dedicated numerical simulations since this information is not contained in Freitag & Benz (2005) or elsewhere to the best of our knowledge. In this section we consider a low-velocity disruptive collision which firstly leads to a source of electromagnetic radiation. We rederive the quantities and figures of the previous sections for this smaller value of \(V_{\rm rel}\). Later, we derive the properties of the binary to then address the evolution of the source of gravitational waves and the prospects for its detection because, as we will see, it could mimic a binary of two supermassive black holes in vacuum, although it should be straightforward to tell them apart. ### Electromagnetic signature of low-velocity collisions Because we are interested in the electromagnetic precursor of the gravitational wave, we reproduce the previous figures for the effective temperature, energy release, power output and spectral power for the new value of \(V_{\rm rel}=10^{3}\)\({\rm km\,s}^{-1}\), because they change and could be of interest in a search in observational data. To derive the time evolution of the released energy and power, we must note that Eq. (36) now is \(T_{\rm E}\sim 0.52\)\({\rm yr}\sim 6.2\) months and that \(E(0)\sim 10^{49}\) ergs. Hence, \[E(t) \cong 10^{49}\,{\rm ergs}\left(\frac{E(0)}{10^{49}\,{\rm ergs}} \right)\left(\frac{\eta}{1}\right) \tag{60}\] \[\exp\left[-\frac{13}{2000}\left(\frac{t}{1\,{\rm month}}\right)^{ 2}\right].\] We can see this graphically in Fig. (10). The initial values are significantly lower but the time in which the source is radiating is extended to almost two years in the decay. In Fig. (11) we depict the same as in Fig. (4) but for the new velocity. In Fig. (12) we display a comparison between the two different cases we are treating, the high-velocity one and the low one. As expected, the temperature peak decreases in the case of low velocity, and lasts longer, so that it is shifted towards later times. As for the emitted power, Eq. 
(42) becomes Figure 6.— Evolution of the kinetic temperature as the outcome of the collision with time, given by Eq. (46). We include an embedded zoom of the last few months of evolution and note that in it both axes are in log scale. \[P_{\rm norm}\sim 6.32\times 10^{41}\,{\rm erg\ s^{-1}}\left(\frac{E(0)}{ 10^{49}\,{\rm ergs}}\right)\left(\frac{\kappa}{0.04\,{\rm m^{2}kg^{-1}}}\right)^ {-1/2}\] \[\left(\frac{M}{1\,M_{\odot}}\right)^{-1/2}\left(\frac{V_{\rm exp}} {10^{3}{\rm km\ s^{-1}}}\right)^{1/2}, \tag{61}\] and so, the emitted power in the collision of two stars at low velocity is \[P(t) \sim 6.32\times 10^{41}{\rm erg\ s^{-1}}\left(\frac{\eta}{1} \right)\left(\frac{t}{1\,{\rm month}}\right)\] \[\exp\left[-\frac{13}{2000}\left(\frac{t}{1\,{\rm month}}\right)^{ 2}\right]\] \[\left(\frac{E(0)}{10^{49}\,{\rm ergs}}\right)\left(\frac{\kappa} {0.04\,{\rm m^{2}kg^{-1}}}\right)^{-1/2}\] \[\left(\frac{M}{1\,M_{\odot}}\right)^{-1/2}\left(\frac{V_{\rm exp} }{10^{3}{\rm km\ s^{-1}}}\right)^{1/2}. \tag{62}\] We can see this in Fig. (13). Thanks to this last expression, as explained in the previous section, we can now derive \(T_{\rm eff}\) for the low-velocity collision, Figure 7.— The spectral power as a function of the frequency \(\nu\) for the four different values of the nonlinear parameter \(\eta\) taken into consideration. In each of the panels the different curves correspond to different moments in the evolution of the expanding cloud after the stellar collision. From the right (higher values of \(\nu\)) to the left, we show in dashed lines the first nine tenths of the first month in the evolution, i.e. towards lower frequencies in the first dashed curves there is a time increment of \(1/10\) of a month. The first rightmost solid curve corresponds to the spectral power range one month after the event, the second rightmost one, achieving as expected the maximum value, to the second month, etc. We display eight months in the evolution to show the decrease in spectral power, although we note that \(10^{6}\,{\rm MHz}\) corresponds to the lowest frequency of present instruments. Figure 8: Photometric indeces U, B, G, V and R as a function of time (in months, lower x-axis and days, upper x-axis) for the different values of the parameters \(\eta\). Figure 9: _Left panel:_ AB magnitude as calculated from the theoretical model at a distance of 194.4 Mpc. We give the extreme values that we have adopted in this work for the free parameter \(\eta\), i.e. 1 and also 0.05. _Right panel:_ Zwicky Transient Facility (ZTF) report for 2019-10-07 corresponding to the object ZTF19acbcosen by Nordin et al. (2019). The data taken with ZTFG are marked with squares and the data taken with ZTFR with circles. If the transient was the result of a stellar collision, it would seem to correspond to a value of \(\eta\lesssim 0.05\). \[T_{\rm eff} \cong 1.2\times 10^{6}\,{\rm K}\left(\frac{\eta}{1}\right)^{1/4} \left(\frac{t}{1\,{\rm month}}\right)^{1/4} \tag{63}\] \[\exp\left[-3.25\times 10^{-3}\left(\frac{t}{1\,{\rm month}}\right)^{ 2}\right]\left(\frac{E(0)}{10^{49}\,{\rm ergs}}\right)^{1/4}\] \[\left(\frac{\kappa}{0.04\,{\rm m}^{2}{\rm kg}^{-1}}\right)^{-1/8} \left(\frac{M}{1\,M_{\odot}}\right)^{-1/8}\left(\frac{V_{\rm exp}}{10^{3}{\rm km \,s}^{-1}}\right)^{1/8}\] \[\left[1+3777\left(\frac{V_{\rm exp}}{10^{3}{\rm km\,s}^{-1}} \right)\left(\frac{t}{1\,{\rm month}}\right)\right]^{-1/2},\] Finally, in Fig. (14) we show the corresponding of Fig. (7) but for the lower value of \(V_{\rm exp}\). The Eq. 
(55) needs no modification, but we need to take the correct values for \(T_{\rm eff}\) and \(P(t)\) into account, i.e. Eq. (63) and Eq. (62) respectively. We note how the spectral power is now concentrated over a much shorter span of frequencies. The kinetic temperature can be estimated as in Eq. (46), but this time the values are accordingly lower, \[T_{\rm kin}\cong 1.22\times 10^{7}\,{\rm K}\left(\frac{E(0)}{10^{51}\,{\rm ergs}} \right)\left(\frac{\eta}{1}\right) \tag{64}\] \[\exp\left[-\frac{13}{2000}\left(\frac{t}{1\,{\rm month}}\right)^{ 2}\right].\] In Fig. (15) we depict this evolution. We can see that the values remain higher at later times. ### Point particles in vacuum Let us now consider the evolution of two point particles in perfect vacuum with the masses of the cores starting at a given semi-major axis and evolving only due to the emission of gravitational radiation. Thanks to the the approximation of non-precessing but shrinking Keplerian ellipses of Peters (1964) we can derive an estimate for the associated timescale for a binary of (point) masses \(m_{1}=m_{2}=m_{\rm core}\) and semi-major axis \(a\) to shrink only via emission of gravitational radiation, \[T_{\rm GW}\equiv\frac{a}{|\dot{a}_{\rm GW}|}=\frac{5}{128}\frac{c^{5}\,a^{4} }{G^{3}m_{\rm core}^{3}}\,F(e)^{-1}. \tag{65}\] Figure 11.— Same as Fig. (4) but for \(V_{\rm exp}=10^{3}\) km s\({}^{-1}\). Figure 10.— Same as Fig. (2) but for \(V_{\rm exp}=10^{3}\) km s\({}^{-1}\). Normalising to the values we are using, \[T_{\rm GW}\cong 5\times 10^{8}{\rm yrs}\left(\frac{m_{\rm core}}{0.34M_{\odot}} \right)^{-3}\left(\frac{a}{R_{\odot}/2}\right)^{4}F(e)^{-1}, \tag{66}\] where we have chosen the semi-major axis of the cores to be roughly \(a\sim d_{\rm min}=R_{\odot}/2\), from Eq. (17). We will however see that this initial choice has little to no impact on the merging time when gas is taken into account. We have chosen \(m_{\rm core}=0.34M_{\odot}\) by assuming that the core radius of the Sun is located at about a distance of \(r_{\rm core}\sim 0.2R_{\odot}\), following the data in table 3 of Abraham & Iben (1971), and we note that the correction factor \(Q\) to multiply this timescale introduced by Zwick et al. (2019) can be neglected, because \(Q\sim 1\) in our case. However, as we will see later, the final results are to some extent independent of the choice of initial and final semi-major axes. In the equation we have introduced \[F(e):=(1-e^{2})^{-7/2}\left(1+\frac{73}{24}e^{2}+\frac{37}{96}e^{4}\right). \tag{67}\] Figure 14.— Same as Fig. (7) but for \(V_{\rm exp}=10^{3}\,{\rm km\,s^{-1}}\). The solid lines now range from the first month to the 24th and the dashed lines represent the same fraction of time as in Fig. (7). We add a zoom in of each \(\eta\). Figure 15.— Same as in Fig. (6) but for \(V_{\rm exp}=10^{3}\,{\rm km\,s^{-1}}\). For a very eccentric orbit, \(e=0.9\), \(F(e)^{-1}\sim 2\times 10^{-3}\). I.e. we shorten the timescale by two orders of magnitude. However, even if the eccentricity at binary formation is very large, it circularizes in a very few orbits (see the SPH simulations of Freitag & Benz, 2005). We will hence assume \(F(e)^{-1}=1\). Nevertheless, Eq. (66) is nothing but an _instantaneous_ estimation of the (order of magnitude) time for merger due solely to the emission of gravitational radiation. This means that, for a given, fixed, semi-major axis, we obtain a timescale. 
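Equation (66) can be verified with a few lines; the sketch below is illustrative and assumes the same fiducial core mass and initial separation as in the text.

```python
import numpy as np

G, c, Msun, Rsun, yr = 6.674e-11, 2.998e8, 1.989e30, 6.957e8, 3.156e7
m_core = 0.34 * Msun                        # surviving core mass adopted in the text

def F(e):                                   # Eq. (67)
    return (1 - e**2)**-3.5 * (1 + 73/24 * e**2 + 37/96 * e**4)

def T_GW(a, e=0.0):                         # Eq. (65): instantaneous GW shrinking timescale
    return 5 / 128 * c**5 * a**4 / (G**3 * m_core**3) / F(e)

a0 = 0.5 * Rsun                             # initial separation R_sun/2, as adopted in Eq. (66)
print(f"T_GW(a0)   ~ {T_GW(a0) / yr:.1e} yr")       # ~5e8 yr, cf. Eq. (66)
print(f"T_GW(a0/5) ~ {T_GW(a0 / 5) / yr:.1e} yr")   # the a^4 scaling makes the decay fast at small a
```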
Nonetheless, the axis shrinks as a function of time, so that \(T_{\rm GW}\) will become shorter as well, because it is a function of time. From Eq. (66), and taking into account the original negative sign of Peters (1964), we can derive that \[\int a^{3}\,da=-\frac{128}{5}\frac{G^{3}\,m_{\rm core}^{3}}{c^{5}}\int dt. \tag{68}\] Hence, \[\frac{a^{4}}{4}=-\frac{128}{5}\frac{G^{3}\,m_{\rm core}^{3}}{c^{5}}\,t+{\rm constant}. \tag{69}\] We can obtain the value of the constant by setting \(t=0\), which leads to constant \(=a(0)^{4}/4\). Since we have chosen \(a(0)\equiv a_{0}=R_{\odot}/2\), we derive that the evolution of the semi-major axis of the binary due only to the emission of gravitational waves is \[a(t)\cong \left[\frac{1}{16}\left(\frac{a_{0}}{R_{\odot}/2}\right)^{4}\right.\] \[\left.-4.4\times 10^{-11}\left(\frac{m_{\rm core}}{0.34\,M_{ \odot}}\right)^{3}\left(\frac{t}{1\,{\rm month}}\right)\right]^{1/4}R_{\odot}. \tag{70}\] In Fig. (16) we show the evolution of Eq. (70). Replacing Eq. (70) in Eq. (65) leads to \[T_{\rm GW}(t)\cong 5\times 10^{8}\,{\rm yrs}\left(\frac{m_{\rm core}}{0.34\,M_{ \odot}}\right)^{-3}\times\left[\left(\frac{a_{0}}{R_{\odot}/2}\right)^{4}\right.\] \[\left.-7.04\times 10^{-10}\left(\frac{m_{\rm core}}{0.34\,M_{ \odot}}\right)^{3}\left(\frac{t}{1\,{\rm month}}\right)\right] \tag{71}\] whose evolution we can see in Fig. (17). ### Cores embedded in a gaseous medium For a stellar object of mass \(m_{\rm obj}\) moving through a homogeneous isothermal gaseous medium of constant density \(\rho\) along a straight line with a velocity \(V_{\rm obj}\), Ostriker (1999) derives that for a supersonic motion, the drag force provided by dynamical friction as derived by Chandrasekhar (1943) must be modified and is \[F_{\rm drag}\sim 4\pi\rho\left(\frac{Gm_{\rm obj}}{V_{\rm obj}}\right)^{2}. \tag{72}\] Her results have been confirmed numerically by the work of Sanchez-Salcedo & Brandenburg (1999). Hence, for the velocity of one of the two cores to be decreased by one e-folding in the gaseous cloud, the associated timescale is \[T_{\rm gas}\equiv\frac{V_{\rm core}}{dV_{\rm core}/dt}=\frac{dt}{d\,{\rm ln} V_{\rm core}}\cong\frac{m_{\rm core}V_{\rm core}}{F_{\rm drag}}, \tag{73}\] where \(V_{\rm core}\) is the velocity of the core. The last term in the equation is momentum divided by force, which gives an estimate of order of magnitude for the characteristic timescale, the timescale to change \({\rm ln}\,V_{\rm core}\) by one dex. We normalise it to the relevant values for this work as \[T_{\rm gas}\cong 8.4\times 10^{-6}\,{\rm yrs}\,\,\left(\frac{n}{10^{ 24}\,\,{\rm cm}^{-3}}\right)^{-1}\] \[\left(\frac{m_{\rm core}}{0.34M_{\odot}}\right)^{1/2}\left(\frac{ a}{R_{\odot}/2}\right)^{-3/2}. \tag{74}\] This timescale agrees with the results found by Antoni et al. (2019), in particular their Eq. (37). This is about two orders Figure 16.— Evolution of the semi-major axis of the binary since formation, as described by Eq. (70). The embedded panel allows us to see the evolution of the last \(0.2R_{\odot}\) in units of \(10^{8}\) yrs. The semi-major axis reaches \(0\) (assuming point-particles) at \(t=1.42\times 10^{9}\,{\rm months}\), as can be derived by setting Eq.(70) to zero. Figure 17.— Evolution of the characteristic timescale \(T_{\rm GW}\) as a function of time. The inset allows us to see when it reaches zero. Note that the x-axis in it is in linear scale. of magnitude shorter than the orbital period of the binary with the default values in Eq. 
(74), \(P_{\rm orb}=2\pi\sqrt{a^{3}/(2\times G\,m_{\rm core})}\), \[P_{\rm orb}\sim 1.4\times 10^{-4}\,{\rm yrs}\left(\frac{a}{R_{\odot}/2}\right)^{3/ 2}\left(\frac{m_{\rm core}}{0.34\,M_{\odot}}\right)^{-1/2}, \tag{75}\] which means that the binary would not be able to do one orbit before the cores sink and merge due to the gas. To derive Eq. (74), we have taken as average density that of the Sun, \(\rho_{\odot}\sim 1\,{\rm gr\,cm^{-3}}\), which translates into a numerical density of \(10^{24}{\rm cm^{-3}}\) for the mass of the proton. The amount of gas contained within the orbit can be easily calculated; this is important because, should it be larger than the mass of the cores, then one should use this mass to calculate the orbital velocity. However, for the kind of semi-major axis that we are considering, the mass in gas contained in the orbit of the cores is \(M_{\rm gas,orb}=\bar{\rho}_{\odot}\times V_{\rm gas,orb}\sim 5\times 10^{-3}M_{ \odot}<2\times m_{\rm core}\), with \(V_{\rm gas,orb}\) the volume inside of the orbit and \(\bar{\rho}_{\odot}\) the average solar density in the radiative zone, assumed to be \(\bar{\rho}_{\odot}=10\,{\rm g\,cm^{-3}}\). This means that the velocity to take into account to derive Eq. (74) is \(V_{\rm core}\), as we have done. Nonetheless, this derivation of \(T_{\rm gas}\) does _not_ take into account the fact that the cores are not moving into a straight line, but they form a binary and hence the density wake around them modifies the drag force (see e.g. Sanchez-Salcedo & Brandenburg 2001; Escala et al. 2004; Kim & Kim 2009). If the semi-major axis is smaller than the Bondi accretion radius, \[R_{\rm Bondi}=\frac{2\,Gm_{\rm core}}{C_{\rm s}}, \tag{76}\] with \(C_{\rm s}\) the sound speed of the cloud, one needs to correct the gas density around the cores by multiplying \(n\) in Eq. (74) by \((R_{\rm Bondi}/a)^{3/2}\) (as realised by Antoni et al. 2019). Since we are assuming almost head-on collisions, we have chosen the semi-major axis for the cores to be of about \(R_{\odot}/2\), also motivated by the outcome of the SPH simulations of Freitag & Benz (2005). Assuming an ideal gas, we can estimate \(C_{\rm s}=\sqrt{\gamma_{\rm ad}P/\rho_{\rm g}}\), with \(\gamma_{\rm ad}\) the adiabatic index of the gas, which we assume to be a fully ionized plasma, so that \(\gamma_{\rm ad}=5/3\), and \(P\) the pressure, and so \(C_{\rm s}=\sqrt{\gamma_{\rm ad}T(t)k/m}\), with \(m=0.6\,m_{\rm p}=1.004\times 10^{-27}\,{\rm kg}\) and \(T(t)\) the temperature of the environment. This temperature is _not_ the effective temperature \(T_{\rm eff}(t)\), but the kinetic temperature \(T_{\rm kin}(t)\), i.e. the temperature around the cores in the environment in which they are embedded, whose properties we approximate to be those of the radiative zone in the Sun, in terms of fully ionised matter but also of density, as we will see later. Since we are interested in low-velocity collisions, from Eq. (15), we derive that Eq. (74) is \[T_{\rm gas}\cong 6.4\times 10^{-4}\,\,{\rm yrs} \left(\frac{n}{10^{24}\,{\rm cm^{-3}}}\right)^{-1}\left(\frac{m_{ \rm core}}{0.34\,M_{\odot}}\right)^{-1}\] \[\left(\frac{C_{\rm s}}{20\,{\rm km^{-1}}}\right)^{3}, \tag{77}\] where we have used the value of \(C_{\rm s}\) at \(T_{\rm kin}=5\times 10^{2}\,K\) as an illustrative example. However, \(T_{\rm kin}\) is a function of time, and hence \(C_{\rm s}\) as well, so that plugging in Eq. 
(64), \[C_{\rm s}(t)\cong 5.29\times 10^{2}\,{\rm km^{-1}}\left(\frac{E(0)}{10^{49}\,{ \rm ergs}}\right)^{1/2}\left(\frac{\eta}{1}\right)^{1/2}\] \[\exp\left[-\frac{13}{2000}\left(\frac{t}{1\,{\rm month}}\right)^{ 2}\right]. \tag{78}\] In Fig. (19) we depict its evolution with time. It follows the trend of Fig. (11); i.e. because of the temperature quickly drops, so does \(C_{\rm s}(t)\) too. We note that this value is in agreement with the results of Vorontsov (1989) (Fig. 2) for the Sun at a radius of about \(\sim 1\,R_{\odot}\), with the proviso that the radius is roughly that of the Sun, i.e. at values of \(t\sim 0\), which is our departure assumption. Before we derive the final expression for \(T_{\rm gas}(t)\), we note that the density around the cores is not constant; it will decrease with time, since the gaseous cloud is expanding at \(V_{\rm exp}\). The SPH simulations of Freitag & Benz (2005) show that when the cores form a binary, the gaseous density around them is of about one order of magnitude lower than the density in the cores. To derive the initial value of the density around the cores, i.e. at \(t=0\), we take the Sun as a reference point. Most of its mass is enclosed in the radiative zone, because the convective zone only represents about \(0.3\,R_{\odot}\) and the density in that region is negligible. Following the work of Abraham & Iben (1971), we note that for the mass we have adopted for the cores, \(M_{\rm core}=0.34\,M_{\odot}\), the corresponding radius is of \(R_{\rm core}\sim 0.2\,R_{\odot}\) and, according to their table 3, the corresponding density in that region is of \(\rho\sim 150\,{\rm g\,cm^{-3}}\). Therefore, we will assume that the density around the cores (corresponding to that of the radiative zone) should be of \(\rho_{\rm rad}\sim 15{\rm g\,cm^{-3}}\) (and hence use the tag "rad"), which corresponds to a numerical density of \(n_{\rm rad}\sim 10^{25}\,{\rm cm^{-3}}\). We therefore only consider a radius of \(0.7\,R_{\odot}\), because we are assuming that all mass is in the radiative zone. Taking these considerations into account, plus assuming that the convective zone is fully ionised hydrogen, with the of the proton \(m_{p}\sim 1.7\times 10^{-24}\) g, the time evolution of the numerical density around the cores follows the expression \[n_{\rm rad}(t)\sim 10^{25}\,{\rm cm^{-3}}\left(\frac{M}{1\,M_{\odot}}\right)\] \[\left[\frac{\bf 7}{\bf 10}+\frac{19}{5}\times {\bf 10^{3}}\left(\frac{V_{\rm exp}}{10^{3}{\rm km\,s^{-1}}}\right)\left( \frac{t}{1\,{\rm month}}\right)\right]^{-3}. \tag{79}\] We can see this evolution, as well as the evolution of the physical density, in Fig. (18). In a few months the density decreases significantly, so that assuming a constant value would be wrong. We are now in the position of deriving the time dependency of \(T_{\rm gas}(t)\) by replacing Eq. (78) and Eq. (79) in Eq. (77), \[\begin{split} T_{\rm gas}(t)&\cong 473.7\,{\rm yrs}\left( \frac{M}{1M_{\odot}}\right)^{-1}\left(\frac{m_{\rm core}}{0.34\,M_{\odot}} \right)^{-1}\left(\frac{\eta}{1}\right)^{3/2}\\ &\left(\frac{E(0)}{10^{49}\,{\rm ergs}}\right)^{3}\exp\left[- \frac{39}{2000}\left(\frac{t}{1\,{\rm month}}\right)^{2}\right]\\ &\left[1+\frac{19}{5}\left(\frac{V_{\rm exp}}{10^{3}{\rm km\,s}^ {-1}}\right)\left(\frac{t}{1\,{\rm month}}\right)\right]^{3}\end{split} \tag{80}\] In this result, the power of 2 in the exponential for the time stems from the cooling of the cloud via the sound speed, Eq. (77). This quickly decays, as we can see in Fig. 
(19), and is in power law of 3. The power of 3 in the last term reflects the fact that in our model we assume that the cloud has a volume expanding at a constant rate over time. These are competitive effects responsible for the behaviour of the curve, which we can see see in Fig. (20), where we display Eq.(80). The function initially increases until about 1.5 yrs from the formation of the binary to then decay. The shape of the curve allows us to estimate when the binary will merge. Since \(T_{\rm GW}(t)\gg T_{\rm gas}(t)\), we can ignore the effects of gravitational radiation in the shrinkage of the binary. By evaluating Fig. (20) we can obtain a rough approximation for the binary to merge via gas friction when the elapsed time (i.e. the abscissa, time since the formation of the binary) is larger than \(T_{\rm gas}\) and \(T_{\rm gas}\) is not increasing in time. We see in the inset of the figure that this requirement is met approximately when \(t\sim 2.7\) yrs (for \(\eta=1\)), which corresponds to \(T_{\rm gas}=1\) yr. From that point, i.e. \((x,y)=(2.7,1)\,{\rm yrs}\), (i) \(t>T_{\rm gas}\) and (ii) \(T_{\rm gas}\) is only decreasing in time. Hence, if after \(t\sim 2.7\) yrs the binary has not yet merged, it should do so in about \(T_{\rm mg}\sim 1\) year, as an upper limit, as for all other values of \(\eta\). To derive a more accurate value for the merger time \(T_{\rm mg}\), we need to derive the evolution of the semi-major axis of the binary due to the drag force of the gas. The differential equation can be derived by taking into account that, for a circular orbit, \(V^{2}\propto 1/a\), which means that \(\dot{a}/a=-2\dot{V}/V\). Since we have identified in Eq. (73) \(V/\dot{V}=T_{\rm gas}\), we have that \(\dot{a}/a=-2/T_{\rm gas}\), and hence \[\int_{R_{\odot}/2}^{a_{\rm mg}}a^{-1}\,da=-2\int_{0}^{T_{\rm mg}}T_{\rm gas}^{ -1}(t)\,dt, \tag{81}\] since we are integrating from the initial semi-major axis \(a_{0}=R_{\odot}/2\) to \(a_{\rm mg}\). This final value of the semi-major axis, \(a_{\rm mg}\) is reached when the separation between the cores reaches \(R_{\rm core}\). I.e. \(a_{\rm mg}=0.2\,R_{\rm core}\) \[\ln\left(\frac{a_{\rm mg}=0.2\,R_{\odot}}{a_{0}=R_{\odot}/2}\right)=-2\int_{0 }^{T_{\rm mg}}T_{\rm gas}^{-1}(t)\,dt, \tag{82}\] we need to evaluate the right-hand side of the last equation to find the time \(t\) for which \(a_{\rm mg}=R_{\rm core}\), although, a priori, from Fig. (20), we already predict that this time is of about 1 yr. Figure 19.— Evolution of the sound speed in the cloud as a function of time. We include a zoom in between months 10 and 30, when it drops to zero. The uppermost curve corresponds to \(\eta=1\) and the lowermost to \(\eta=0.1\). We add an inset to show the convergence of the models when the sound speed is zero. Figure 20.— Evolution of \(T_{\rm gas}\) as a function of time, see Eq. (80). The embedded zoom has linear scale in the x-axis ranging from 2.3 years fier the formation of the binary to 3 years. We can see that all of the four models follow a similar behaviour, but the difference between them is not linearly proportional to \(\eta\). Nonetheless, as we already mentioned before, the solution is relatively independent of the initial and final semi-major axis. In the integral, \(T_{\rm gas}\) is given by Eq. (80) and \(T_{\rm mg,\,m}:=T_{\rm mg/}/({\rm month})\), and we introduce \(\tau:=t/({\rm month})\), so that \(d\tau=dt/({\rm month})\). 
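Before evaluating the integral analytically, Eq. (82) can be checked numerically. The sketch below integrates \(2\int_0^{T}dt/T_{\rm gas}(t)\), with \(T_{\rm gas}(t)\) taken directly from Eq. (80) for the default parameters and \(\eta=1\), and finds the time at which it reaches \(\ln(a_0/a_{\rm mg})\); it should come out close to the roughly 2.9 yr merger time obtained analytically below. The function and variable names are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Fiducial normalisations of Eq. (80): M = 1 Msun, m_core = 0.34 Msun,
# E(0) = 1e49 erg, V_exp = 1e3 km/s.  All times are in months.
alpha = 473.7 * 12.0                  # T_gas normalisation in months
b, c_ = 19.0 / 5.0, 39.0 / 2000.0

def T_gas(tau, eta=1.0):
    """Gas-drag timescale of Eq. (80), in months, with tau = t / month."""
    return alpha * eta**1.5 * np.exp(-c_ * tau**2) * (1.0 + b * tau) ** 3

def shrink_log(tau_m, eta=1.0):
    """2 * integral of dt / T_gas(t) from 0 to tau_m, cf. Eq. (81)."""
    val, _ = quad(lambda t: 1.0 / T_gas(t, eta), 0.0, tau_m)
    return 2.0 * val

target = np.log(0.5 / 0.2)            # ln(a_0 / a_mg), Eq. (82)
t_merge = brentq(lambda t: shrink_log(t) - target, 1.0, 45.0)
print(f"T_mg ~ {t_merge:.1f} months ~ {t_merge / 12.0:.2f} yr")
```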
Hence, \[\int_{0}^{T_{\rm mg}}T_{\rm gas}^{-1}(t)\,dt=\frac{1}{\alpha\,\eta}\int_{0}^{T _{\rm mg,\,m}}e^{c\,\tau^{2}}\,(1+b\,\tau)^{-3}\,d\tau. \tag{83}\] We have introduced \(\alpha\equiv 5684.4\) months (see Eq. (80)), \(b\equiv 19/5\), and \(c\equiv 39/2000\). The integral given by Eq. (83) can be solved analytically, as we show in Appendix 1. The result is \[I(x) =\frac{1}{2b}\left[1-\frac{1}{(1+bx)^{2}}\right]+\left(\frac{c}{b ^{3}}+\frac{2c^{2}}{b^{5}}\right)e^{c/b^{2}}\ln(1+bx)\] \[-\frac{1}{2bx}\frac{e^{cx^{2}}-1}{(1+bx)^{2}}-\frac{cx}{b^{2}} \frac{e^{cx^{2}}}{1+bx}\] \[+\sum_{n=1}^{\infty}\frac{n(2n-1)c^{n}}{n!}F_{n}(x). \tag{84}\] With \[F_{n}(x)=\begin{cases}0&n=1\\ \sum_{k=1}^{2n-2}\binom{2n-2}{k}\frac{(-1)^{k}}{k}\left[(1+bx)^{k}-1\right]&n> 1,\end{cases} \tag{85}\] where we have defined \(x\equiv T_{\rm mg}\) for legibility. The solution agrees with standard numerical Gauss-Kronrod quadrature methods to evaluate the value of the integral at different values of \(\tau\). Since \[\ln\left(\frac{a_{\rm mg}=0.2}{a_{0}=0.5}\right)=-0.916291=-\frac{2}{\alpha\, \eta}\int_{0}^{T_{\rm mg,\,m}}I(\tau)\,d\tau, \tag{86}\] with \(I(\tau)\) the integrand of Eq. (83), and \(\ln(a_{\rm mg}/a_{0})=-\ln(a_{0}/a_{\rm mg})\), we plot \(log(a_{\rm mg}/a_{0})\) as a function of \(\tau\) and look for the value at which \[0.916291=\frac{2}{\alpha\,\eta}\int_{0}^{T_{\rm mg,\,m}}I(\tau)\,d\tau, \tag{87}\] to find \(T_{\rm mg,\,m}\). In Fig. (21) we show the evolution of the right-hand side of Eq. (83). We can see that from the month 20th the exponential behaviour dominates the evolution of the function and the integral reaches values as high as \(10^{70}\). Although mathematically correct, this is a result of the infinite summation of Eq. (84), which is physically only realistic up to the moment at which we consider that the binary forms, i.e. at the value of \(\tau\) for which \(a_{0}=0.5\,R_{\odot}\), which is \(\tau=34.3\) months. From that moment upwards, the result of the integral is physically meaningless for our purposes. As a consequence of the exponential behaviour, we note that the result is relatively independent of the initial semi-major axis. More precisely, this means that, if we e.g. mutiply by a factor 3 the initial semi-major axis, the result in the x-axis will be larger by a small factor \(\epsilon\), \[\ln\left(\frac{3\times 0.5\,R_{\odot}}{a_{\rm mg}}\right)=2\times I(T_{\rm mg}+ \epsilon)/(\alpha\,\eta). \tag{88}\] ### Supermassive black hole mimickers The drag force acting on to the cores has a direct impact on the observation of the mass of the source in gravitational waves, as shown by Chen & Shen (2019), more precisely on the chirp mass, as introduced by Cutler & Flanagan (1994) \[M_{\rm chirp}:=\frac{(m_{1}m_{2})^{3/5}}{(m_{1}+m_{2})^{1/5}}, \tag{89}\] which reduces in our case to the following trivial expression, since \(m_{1}=m_{2}=m_{\rm core}\), \[M_{\rm chirp}=\frac{1}{2^{1/5}}m_{\rm core}=0.29\,M_{\odot}. \tag{90}\] On the detector, however, the evolution of the gravitational wave frequency is affected by the timescale in which the gas shrinks the binary in such a way that the observed chirp mass is not given by Eq. (89) but for \[M_{\rm chirp,\,obs}(t)=\left[1+\Lambda(t)\right]^{3/5}M_{\rm chirp}, \tag{91}\] with \(\Lambda(t):=T_{\rm GW}(t)/T_{\rm gas}(t)\). This can be seen from Eq. (3) of Chen et al. 
(2020); Chen & Shen (2019), and is due to the fact that the frequency \(f\) and its time derivative \(f\) now do not evolve solely because of the gravitational radiation (and see also Caputo et al. 2020). In our case, however \(T_{\rm gas}\) is a function of time, given by Eq. (80), and \(T_{\rm GW}(t)\) is given by Eq. (71). The full expression for \(\Lambda(t)\) is Figure 21.— Evolution of \(\alpha\,\eta\,\ln(a_{\rm mg}(\tau)/a_{0})/2\) as a function of \(\tau\) (i.e. in months). In the zoom, with a dashed line, we show the values corresponding to \(\alpha\eta\times\ln(a_{0}/a_{\rm mg})/2\). This corresponds to \(T_{\rm mg,\,m}=34.7\) months, i.e. 2.892 yrs, which is off by a value of \(0.192\) yr from the value predicted by analysing Fig. (20). We can see that \(\tau_{\rm mg}\) varies very little as a function of \(a_{0}\) and \(a_{\rm mg}\), because a big change in distance in the y-axis turns into a small change in the x-axis. This means that the result does not depend (much) on the choice of the initial semi-major axis, which was chosen here to be \(R_{\odot}/2\). We display only the value \(\eta=1\) because the other values are virtually identical. \[\Lambda(t) \cong 10^{6}\left(\frac{M}{1\,M_{\odot}}\right)\left(\frac{m_{\rm core }}{0.34\,M_{\odot}}\right)^{-2}\left(\frac{\eta}{1}\right)^{-3/2}\] \[\left(\frac{E(0)}{10^{49}\,{\rm ergs}}\right)^{-3}\exp\left[\frac{ 39}{2000}\left(\frac{t}{1\,{\rm month}}\right)^{2}\right]\] \[\left[1+\frac{19}{5}\left(\frac{V_{\rm exp}}{10^{3}{\rm km\,s}^{ -1}}\right)\left(\frac{t}{1\,{\rm month}}\right)\right]^{-3}\] \[\left[\,\left(\frac{a_{0}}{R_{\odot}/2}\right)^{4}-7.04\times 10^{ -10}\left(\frac{m_{\rm core}}{0.34\,M_{\odot}}\right)^{3}\left(\frac{t}{1\,{ \rm month}}\right)\,\right] \tag{92}\] From this and (91), we observe in Fig. (22) the increase of the chirp mass as observed by a gravitational-wave detector such as LIGO/Virgo, the Einstein Telescope or LISA (depending on the observed chirp mass). The fact that the chirp mass reaches a minimum to then again increase again to higher values is due to the fact that we are taking into account the Bondi radius, Eq. (76), since the cores will be surrounded by a region of overdensity, a "wake" around them. Since the sound speed decreases over time, as we can see in Fig. (19), \(R_{\rm Bondi}\) increases. Moreover, the semi-major axis decreases with time, and since we are multiplying Eq. (74) by \((R_{\rm Bondi}/a)^{3/2}\), this translates into an increase over time of the chirp mass. An advantage of gravitational wave data analysis is that, since the time evolution of the frequency will be very different as compared to the vacuum case, as we show in this article, so that it will become clear that these sources correspond to stellar collisions. This will be the first evidence. The second one is that the merger will be very different to that of a binary of two black holes because there is no event horizon. Last, and also due to the fact that these objects due have a surface, there will be an afterglow. From the work of Chen & Shen (2019); Chen et al. (2020), the observed distance in gravitational waves due to the same effect has the correction \[D_{\rm obs}(t)=\big{[}1+\Lambda(t)\big{]}D, \tag{93}\] with \(D\) the real distance to the source, as derived in Chen et al. (2020). 
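A quick way to see how strongly the gas biases the recovered parameters is to evaluate Eq. (92) and propagate it through Eqs. (91) and (93). The short sketch below does this for the default normalisations (\(\eta=1\), \(a_0=R_\odot/2\)); the sample times and the 108 Mpc distance (the vacuum horizon estimate adopted below) are illustrative choices.

```python
import numpy as np

def Lambda(tau, eta=1.0, a0_ratio=1.0):
    """T_GW(t)/T_gas(t) of Eq. (92) for the default normalisations
    (M = 1 Msun, m_core = 0.34 Msun, E(0) = 1e49 erg, V_exp = 1e3 km/s);
    tau is the time since binary formation in months."""
    return (1.0e6 / eta**1.5
            * np.exp(39.0 / 2000.0 * tau**2)
            * (1.0 + 19.0 / 5.0 * tau) ** (-3)
            * (a0_ratio**4 - 7.04e-10 * tau))

M_chirp = 0.29       # Msun, Eq. (90)
D = 108.0            # Mpc, vacuum horizon distance adopted below

for tau in (0.01, 1.0, 12.0, 30.0):   # months
    lam = Lambda(tau)
    print(f"t = {tau:5.2f} months:  M_chirp,obs ~ {(1 + lam) ** 0.6 * M_chirp:10.3g} Msun,"
          f"  D_obs ~ {(1 + lam) * D:10.3g} Mpc")
```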
Assuming a vacuum binary of masses \(m_{1}=m_{2}=0.34\,M_{\odot}\), semi-major axis \(R_{\odot}/2\) and a particular value of the eccentricity, \(e=0\), the horizon distance can be estimated to be \(D\sim 108\,{\rm Mpc}\) using the approximant waveform model IMRPhenomPv2 (Khan et al., 2019), a phenomenological model for black-hole binaries with precessing spins, at a flow frequency of \(10\,{\rm Hz}\) with PyCBC (Nitz et al., 2020), an open-source software package designed for use in gravitational-wave astronomy and gravitational-wave data analysis. In Fig. (23) we can see the evolution of \(D_{\rm obs}\) as given by the Eq. (93) with \(D\sim 108\,{\rm Mpc}\). Again, this is a consequence of \(\dot{f}\) being different from what you expect in vacuum. As with the chirp mass, the distance will diverge from what is expected in vacuum very quickly. The big mismatch in the chirp mass and the too large distance to the source, but in particular the frequency evolution represent the identifiers of the actual physical origin of the source; namely two colliding stars instead of a binary of two black holes. ### Polarizations in vacuum and in gas We can relate the polarizations of the waveform amplitude to the chirp mass and the distance to the source in an approximate, Newtonian way as given by the Eqs.(4.30, 4.31, 4.32) of Maggiore (2008), which we reproduce here for convenience. \[h_{+}(\tau) =\frac{1}{r}\left(\frac{GM_{c}}{c^{2}}\right)^{5/4}\left(\frac{5} {c\varsigma}\right)^{1/4}\left(\frac{1+\cos^{2}(t)}{2}\right)\,\cos\left[\Phi (\varsigma)\right]\] \[h_{\times}(\tau) =\frac{1}{r}\left(\frac{GM_{c}}{c^{2}}\right)^{5/4}\left(\frac{5} {c\varsigma}\right)^{1/4}\cos(\iota)\sin\left[\Phi\left(\varsigma\right)\right]. \tag{94}\] In this equations \(\tau\) is our usual definition of \(\tau=t/{\rm month}\), \(M_{c}\) is the chirp mass, \(\varsigma:=(T_{\rm mrg}-\tau)\), \(r\) the distance to the source and \(\iota\) is the inclination to the source. Finally, the phase of the gravitational wave \(\Phi(\varsigma)\) is the following function, Figure 23.— Evolution of the observed distance to the source, \(D_{\rm obs}(t)\) in Mpc as a function of time, following the same nomenclature as in Fig. (22). Figure 22.— The observed chirp mass for a binary of two cores of masses \(0.34\,M_{\odot}\) each in function of time for the usual four values of \(\eta\) (with the highest value in the lowermost curve), as given by Eq. (91). We stop the plot at \(T_{\rm mrg,m}=2.917\,{\rm yrs}\), which corresponds to the coalescence time, as derived previously, and include an embedded zoom corresponding to the range \(10^{-3}\) months (1.8 minutes) to 6 months. \[\Phi(\varsigma)=-2\left(\frac{5GM_{c}}{c^{3}}\right)^{-5/8}\varsigma^{5/8}+\Phi_{0}, \tag{95}\] with \(\Phi_{0}\) the value of \(\Phi(\varsigma=0)\), and \(r\) is the distance to the source, \(D\). The value of the constant of Eq. (95) can be derived by setting \(\tau=T_{\rm mg}\). With this we find that in vacuum, the value of \(\Phi_{0}\) is \[\Phi_{0}\cong-1.44\times 10^{8}. \tag{96}\] Hence, replacing \(T_{\rm mg,\,m}\), we have in vacuum \[\Phi(\tau)\cong-1.56\times 10^{7}\big{(}35-\tau\big{)}^{5/8}+\Phi_{0}, \tag{97}\] with \(\Lambda(\tau)\) given by Eq. (92) we employ our usual definition of \(\tau\equiv t/(1\) month). Taking into account that we have chosen \(D=108\,\)Mpc and Eq. (89), and setting \(T_{\rm mg,\,m}=1.42\times 10^{9}\) months, Eqs. 
(94) become \[h_{+}(\tau) \cong 1.65\times 10^{-25}\Big{[}\big{(}1.42\times 10^{9}-\tau \big{)}\Big{]}^{-1/4} \tag{98}\] \[\times\left(\frac{1+\cos^{2}(\iota)}{2}\right)\,\cos\left[\Phi(\tau )\right],\] \[h_{\times}(\tau) \cong 1.65\times 10^{-25}\Big{[}\big{(}1.42\times 10^{9}-\tau \big{)}\Big{]}^{-1/4}\] \[\times\cos(\iota)\,\sin\left[\Phi(\tau)\right],\] with \(\Phi(\tau)\) given in Eq. (97) and \(\Phi_{0}\) in Eq. (96). In Fig. (24) we display as an example the plus polarization of Eqs. (98) in vacuum. In order to derive an expression for the evolution of the polarizations when we consider the influence of the gas, we analyse the evolution of the semi-major axis of the binary under the influence of the gas, which is given by Eq. (81). In this case, however, we do not integrate up to the merger, i.e. \(a=a_{\rm mrg}\), \(t=T_{\rm mg}\), but up to some semi-major axis \(\hat{a}\) in \(R_{\odot}\) and some time \(\tilde{\tau}\) in units of months. Therefore, we have \[\hat{a}=\left(\frac{R_{\odot}}{2}\right)\exp\left[-\frac{2}{\alpha\,\eta} \int_{0}^{\tilde{\tau}}I(\tau)\,d\tau\right], \tag{99}\] with \(I(\tau)\) given by Eq. (84). Therefore, for each value of \(\tilde{\tau}\), we can derive \(\hat{a}\) and, with it and Eq. (65), we can obtain the time \(\varsigma\) that we need to use in the set of Eqs. (94), \[\varsigma=\frac{5}{128}\frac{c^{5}\,\hat{a}^{4}}{G^{3}m_{\rm core}^{3}}F(e)^{-1}. \tag{100}\] I.e. we are deriving the characteristic timescale for an evolution due to gravitational radiation in a case in which the semi-major axis is shrinking at a rate given by the friction with the gas. In Fig. (25) we show the result, which is the counterpart of Fig. (24). We can see that the time has significantly reduced, as well as the width of the oscillations. ### Characteristic strain in vacuum and in gas So as to compare the vacuum case with the one in which the cores are embedded in the gaseous cloud, we will derive the characteristic strain as approximated by Eq. (10.146) of Maggiore (2018), \[h_{c}(f)=\frac{1}{D}\sqrt{\frac{2}{\pi^{2}}\frac{G}{c^{3}}\frac{dE}{df}}, \tag{101}\] with \(dE/df\) the energy spectrum in the inspiraling phase in the Newtonian approximation, see e.g. Eq. (4.41) of Maggiore (2008), \[\frac{dE}{df}=\frac{\pi^{2/3}}{3G}\frac{(GM_{c})^{5/3}}{1+z}\,f^{-1/3}. \tag{102}\] We have then \[h_{c}(f)=\frac{\sqrt{2/3}}{\pi^{2/3}\,c^{3/2}}\frac{(GM_{c})^{5/6}}{D\sqrt{1+z}}\,f^{-1/6}. \tag{103}\] Figure 24.— Plus polarization of the gravitational wave produced by the cores, assuming an inclination of \(\iota=45^{\circ}\). The grey, background curve corresponds to the vacuum waveform. We add a zoom figure showing the interval \(10^{9}\) months to \(\tau=T_{\rm mg,\,m}=1.42\times 10^{9}\) months. We note that both y-axes need to be multiplied by \(10^{-26}\), as displayed in the left, uppermost corner. The small spikes in the waveform are an artifact of the sampling of the plotting program. Figure 25.— Plus polarization for the binary embedded in gas. We note that, contrary to Fig. (24), the X-axis is in linear scale. Now, the characteristic strain can be expressed in terms of the amplitude in frequency \(A(f)\), the frequency itself \(f\) and its time derivative \(\dot{f}\) as follows (see Eq. 16.21 of Maggiore 2018), \[h_{c}(f)=A(f)\frac{f}{\dot{f}^{1/2}}, \tag{104}\] so the only thing we need to do is to take the ratio of the characteristic strain affected by the gas, \(h_{c}^{\rm g}(f)\), and that in vacuum, \(h_{c}(f)\).
Since the amplitudes and the frequencies are the same, we are left with \[h_{c}^{\rm g}[f(t)] = h_{c}[f(t)]\big{[}\Lambda(t)\big{]}^{-1/2} \tag{105}\] \[= \frac{\sqrt{2/3}}{\pi^{2/3}c^{3/2}}\frac{(GM_{c})^{5/6}}{D\sqrt{1+z}}\big{[}\Lambda(t)\big{]}^{-1/2}\,f(t)^{-1/6},\] with \(\Lambda(t)\) given, as usual, by Eq. (92), and \(f(t)\) the associated frequency of the source, which is a function of time as well and accordingly needs to be evaluated at the same time as \(\Lambda(t)\). This expression, Eq. (105), gives us the instantaneous value of \(h_{c}^{\rm g}[f(t)]\) at a given moment \(t\). To derive \(f(t)\) we need to take into account two things. First, the driving mechanism in the evolution of the binary, as we have seen previously, is the friction of the binary with the gas, rather than the loss of energy via gravitational radiation, so that in Eq. (105) time derivatives must be taken in the context of gas friction. Second, in our derivation of Eq. (81) we used the fact that \(\dot{a}/a=-2\dot{V}/V\) and \(\dot{a}/a=-2/T_{\rm gas}\). Hence, the frequency associated with any GW source can be expressed in the Newtonian limit as \[f=\frac{1}{\pi}\sqrt{\frac{GM_{\rm tot}}{a^{3}}}, \tag{106}\] where \(M_{\rm tot}=2\,m_{\rm core}\) and we are omitting the time dependence. The time derivative can be calculated to be \[\dot{f}_{\rm gas}=-\frac{3}{2\pi}\sqrt{\frac{GM_{\rm tot}}{a^{3}}}\frac{\dot{a}_{\rm gas}}{a}. \tag{107}\] To derive this expression we have used the chain rule and the fact that what induces a change in the semi-major axis is the gas, so that \(da/dt\equiv\dot{a}_{\rm gas}\). I.e. the physical process that induces time changes is the friction with the gas, so that we need to differentiate with respect to time the quantities related to it. Hence, \[\dot{f}_{\rm gas}=3\,\frac{f}{T_{\rm gas}}. \tag{108}\] I.e. we need to solve \[\int f^{-1}\,df=\ln[f(t)]=3\int T_{\rm gas}^{-1}(t^{\prime})\,dt^{\prime}. \tag{109}\] As before, in Eq. (83), \(T_{\rm gas}\) is given by Eq. (80), \(\tau:=t/({\rm month})\), so that \(d\tau=dt/({\rm month})\), and so \[\ln[f/f_{0}]=3\int T_{\rm gas}^{-1}(t^{\prime})\,dt^{\prime}=\frac{3}{\alpha\eta}\int e^{c\,\tau^{2}}\,(1+b\,\tau)^{-3}\,d\tau, \tag{110}\] with the same values of \(\alpha\), \(b\) and \(c\). In this equation, \(f_{0}\) is the initial frequency from which we start to measure the source, and the ratio is \(f/f_{0}\) because it is a positive integral. The result of the previous integral is \(3\,I(\tau)/(\alpha\,\eta)\), with \(I(\tau)\) given by Eq. (84), where \(\tau\) is the time in months since the formation of the binary, i.e. at merger \(\tau=T_{\rm mrg,\,m}\). Therefore, we have that the integrated characteristic strain from the moment of formation of the binary at a frequency \(f_{0}\) to a later given time in months \(\tau\) is \[h_{c}^{\rm g}= \frac{\sqrt{6}}{\pi^{2/3}c^{3/2}}\,\frac{(GM_{c})^{5/6}}{D\sqrt{1+z}}\big{[}\Lambda(\tau)\big{]}^{-1/2} \tag{111}\] \[\times f_{0}^{-1/6}\exp\left[-\frac{I(\tau)}{2\,\alpha\,\eta}\right],\] with \(I(\tau)\) given by Eq. (84) and \(\Lambda(\tau)\) by Eq. (92), as usual. As for \(f_{0}\), we can derive it from the initial semi-major axis of the binary and the masses of the cores. Since the gravitational-wave frequency is twice the orbital frequency, we have that \(f_{0}\cong 4.7\times 10^{-4}\,{\rm Hz}\).
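As a check on the quoted value of \(f_{0}\), the sketch below evaluates Eq. (106) at the formation separation \(a_0=R_\odot/2\) and, purely for illustration, at the merger separation \(a_{\rm mg}=0.2\,R_\odot\); the SI constants and names are our own choices.

```python
import numpy as np

G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8   # SI

m_core = 0.34 * M_sun
M_tot = 2.0 * m_core

def f_gw(a):
    """Gravitational-wave frequency of Eq. (106), twice the orbital frequency."""
    return np.sqrt(G * M_tot / a**3) / np.pi

print(f"f_0 at a = 0.5 R_sun : {f_gw(0.5 * R_sun):.2e} Hz")   # ~4.7e-4 Hz, as quoted above
print(f"f   at a = 0.2 R_sun : {f_gw(0.2 * R_sun):.2e} Hz")   # frequency near the merger separation
```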
We can express the gravitational wave frequency in vacuum of a binary with the same chirp mass as a function of time in months by assuming a Keplerian, circular orbit which shrinks over time via gravitational loss. In the quadrupole approximation and for circular orbits the source orbital frequency \(\nu_{s}\) (given via Kepler's laws) and the gravitational-wave frequency \(\nu_{\rm GW}\) are related via \(\nu_{\rm GW}=2\,\nu_{s}\). We hence can find from the orbital energy and the fact that \(2\pi\,f_{\rm GW}=\nu_{\rm GW}\) that \[f(t)=\frac{1}{\pi}\left(\frac{GM_{c}}{c^{3}}\right)^{-5/8}\left(\frac{5}{256} \frac{1}{\varsigma}\right)^{3/8}, \tag{112}\] with \(\varsigma\coloneqq(T_{\rm mrg}-\tau)\). See e.g. Sec. 4.1 of Maggiore (2008) for an explicit derivation of this result. We now substitute this result in Eq. (103) and obtain that \[h_{c}=\frac{\sqrt{2/3}\left(5/256\right)^{-3/48}}{\pi^{1/2}c^{29/16}}\frac{( GM_{c})^{15/16}}{D\sqrt{1+z}}\left(\frac{1}{\varsigma}\right)^{-3/48}. \tag{113}\] If we adopt \(D=108\,{\rm Mpc}\), \(z=0\), \(M_{c}=0.29\,M_{\odot}\) and introduce Figure 26.— Evolution of the characteristic strain in vacuum, \(h_{c}\), and after the collision, i.e. in a gaseous environment, \(h_{c}^{\rm g}\). The two curves correspond to the latter case for two different initial frequencies \(f_{0}\), while the former is depicted with a dashed, straight line which does not depend on the initial frequency. \(T_{\rm mag}^{m}\coloneqq T_{\rm mag}/{\rm month}\) and \(\tau\), \(h_{\rm c}(t)\cong 4.42\times 10^{-22}\left(T_{\rm mag}^{m}-\tau\right)^{3/48}\). In Fig. (26) we can see the differences in the time evolution of the different characteristic strains. The vacuum case corresponds to a straight line as one would expect, since we are working in the inspiral approximation of the quadrupole for circular orbits (as is the case). The cores embedded in the stellar debris, however, evolve in a very different fashion even for the very short timescales related to the problem (of months). At the initial time we see that the strains differ in about three orders of magnitude, as Eq. (105) suggests for the default values given in Eq. (92). In a similar way, in Fig. (27) we depict the frequency evolution of the two strains. Again, in the very short interval of frequencies, the strain in vacuum does not change significantly, while the one corresponding to the gaseous case has a completely different behaviour. It is interesting to see the propagation in Figs. (26, 27) of the combined effect of the evolution of the speed of sound and the fact that the cloud is expanding over time, as we mentioned in the paragraph following Eq. (80). We present a sketch of a possible strategy to calculate the mismatch between the vacuum- and the gas sources in Appendix 2. ## 6. Red giants So far we have focused on main sequence stars and looked at the high-energy emission and the potential production of an associated gravitational wave source. A particularly interesting kind of star for which the previous analysis can be applied, however, are red giants. This is so because their masses are also of the order of \(1\,M_{\odot}\), even if they have much larger radii. When the red giants collide, they will also be a powerful source of high-energy. 
The presence of a degenerate core at the centre of the star makes it more appealing from the point of view of gravitational radiation, and when the two degenerate cores collide, this will again turn into a strong source of electromagnetic radiation, which has been envisaged as a possible explanation for Type Ia supernovae, such as SN 2006gy (Smith et al. 2007 and see Gal-Yam 2012). We hence would have a precursor electromagnetic signal announcing the gravitational-wave event, followed by a posterior, very violent electromagnetic emission. Contrary to supernovae, red giants come with a different spectrum of masses and radii, and the total mass of the resulting degenerate object would not be constrained by the Chandrasekhar limit. As a consequence, one cannot use them as standard candles. If what is interpreted as a Type Ia supernova is mostly the outcome of two colliding red giants, this would have important implications, as we will see. ### Event rate of collisions between red giants The process of giganterythrotropism, as coined by Peter Eggleton, means that the kind of main sequence stars we have been dealing with in this article will tend to get large and red as they evolve. The main sequence stars we are considering here, of light mass, spend a percentage of their lives in the form of a red giant. To derive the amount of time spent in the different phases, we refer to the work of Vassiliadis & Wood (1993), in which they estimate that the amount of time spent in the first giant branch (FGB) is \(\sim 3.62\times 10^{9}\) yrs, i.e. 24% of the total life of their one-solar mass star of metallicity \(Z=0.016\) in their table 1. Later, the star will reach the asymptotic giant branch (AGB), and during this stage the star's radius can reach as much as \(\sim 215\,R_{\odot}\) (Vassiliadis & Wood 1993). The amount of time spent in the AGB, for a solar-like star, is \(\tau_{\rm AGB}\sim 2.5\times 10^{7}\) yrs according to Vassiliadis & Wood (1993), which in their model represents 0.17% of the total life of the star. In order to be conservative in the derivation of the rates, this means that the event rates, as derived in Eq. (14), must be multiplied by a factor of \(10^{-2}\) to take this into account, since we need two stars. We pick a \(1\,M_{\odot}\) main sequence star, which in its red-giant phase and a few numerical timesteps before the triple-alpha process has a mass of \(M_{\rm RG}=0.953\,M_{\odot}\) and an associated radius of \(R_{\rm RG}=25\,R_{\odot}\)1. We choose these as representative values of our default red giant in the red-giant branch, where the \(1\,M_{\odot}\) main-sequence star will stably fuse hydrogen in a shell for about 10% of its entire life. Footnote 1: P. Eggleton, private communication. This has a significant impact on the geometrical cross-section. As we can see in Eq. (14), this leads to an enhancement factor of \(\sim 600\) without taking into account the first term enclosed in the square brackets, which is, however, basically negligible as compared to the second term in the square brackets, as we discussed in that section. We do lose a small factor in terms of mass but, in total, the rates are significantly enhanced. In Fig. (28) we show the equivalent of Fig. (1) but for the collision of two red giants with the above-mentioned properties.
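Although Eq. (14) itself is not reproduced here, the relative change of its dominant geometrical term can be estimated directly from the numbers just quoted: the cross-section scales with the square of the stellar radius, while the duty-cycle factor \(f_{\rm RG}=10^{-2}\) penalises the requirement that both stars be giants at the time of the collision. The sketch below, with purely illustrative variable names, simply combines these two factors.

```python
# Relative scaling of the geometrical term of the collision rate when two
# Sun-like stars are replaced by two red giants (R_RG = 25 Rsun); illustrative only.
R_ratio = 25.0      # R_RG / Rsun
f_RG = 1e-2         # fraction of time both stars are simultaneously giants

area_boost = R_ratio**2           # ~625, the ~600 enhancement quoted above
net_factor = area_boost * f_RG    # order-of-magnitude net change of the rate
print(f"cross-section enhancement ~ {area_boost:.0f}, net rate factor ~ {net_factor:.0f}")
```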
It is interesting to note that, even though red giants are fully convective, the treatment we have derived in the previous sections regarding the electromagnetic signature still applies to them because the thermodynamics of the gas will not be different from that of the main stars after the collision as soon as the red giants collide, i.e. as soon as they are not in thermodynamical equilibrium. The compact binary forming in the collision will be surrounded by gas in any case. Even if the impact parameter was exactly zero, there will be gas because the merging time of the compact cores due to the gas drag is much shorter than the timescale in which the gas dissipates. However, it is interesting to address the formation of the binary which forms because, as we will see in the next section, it is a particular one. Figure 27.— Same as Fig. (26) but in frequency domain. We include a zoom in for the characteristic strain of the two cores in the gaseous environment between the range of frequencies \(f\in[4.7,4.70094]\times 10^{-4}\) Hz. ### Structure of the red giants The nature of the red giant plays a role however in the evolution of the cores in the resulting gaseous cloud that emerges as a result of the collision. This is important for us because we want to understand what source of gravitational radiation the collision will produce after the collision between the two red giants has taken place, with the proviso that the relative speed does not exceed \(V_{\rm rel}\leq 2500\,{\rm km\,s^{-1}}\), as noted in Sec. (5). For this we need to know (i) the average density of the medium in which the cores will be embedded after the collision, (ii) the density of the H-fusing shell around the cores (see ahead in the text), (iii) the masses of the cores and (iv) an estimate of the initial semi-major axes. We will set the mass of the red giant to \(M_{\rm RG}=0.953\,M_{\odot}\), which comes from the numerical simulation of a \(1\,M_{\odot}\) main-sequence star before reaching the helium flash, where it spends most of its life, and \(R_{\rm RG}=25\,R_{\odot}\) (see previous footnote). In general, a red giant can be envisaged as a self-gravitating, degenerate core embedded in an extended envelope. This is a consequence of the decrease of hydrogen in the inner regions of the star, so that if a main sequence star consumes it, the convective core gives place to an isothermal one. The helium-filled core collapses after reaching a certain maximum (Schonberg & Chandrasekhar 1942) which releases energy that expands the outer layers of the star. However, as proven analytically in the work of Eggleton & Cannon (1991), it is not possible to simply add an envelope fusing H at its base on to a wholly degenerate white dwarf core. One needs to have an (almost) isothermal non-degenerate shell below the fusing shell and above the degenerate core1 The work of Eggleton & Cannon (1991) proves that the fact that a red giant's envelope expands, after shell burning is established, is not related to the nature of the envelope, and even of the burning shell, but to the ostensibly small isothermal non-degenerate shell between the degenerate core and the fusing shell. Footnote 1: We note here that, although the article has in its title “A conjecture” it is in reality a proper theorem, as demonstrated in the appendix of the work. Since we are interested in the collision and characteristics of the cores when they form a binary and eventually merge via emission of gravitational waves, we need to evaluate the properties of this shell. 
We hence consider a red giant as a star with a He-degenerate core, a H-fusing shell around it as the only energy source, transiting through a thin radiative zone to the fully convective, extended envelope. Assuming an ideal gas in the H-fusing shell, the equation of state is \[P=P_{\rm gas}+P_{\rm rad}=\frac{\Re}{\mu}\rho_{\rm sh}T_{\rm sh}+\frac{a}{3}T _{\rm sh}^{4}, \tag{114}\] with \(\rho_{\rm sh}\) the density in the shell and \(T_{\rm sh}\) its temperature, the radiation density constant \(a=7.56\times 10^{-15}\) erg\(/({\rm cm^{3}\,K^{4}})\) and the universal gas constant \(\Re=8.31\times 10^{7}\) erg\(/({\rm K\,g})\). Usually one introduces \(\beta:=P_{\rm gas}/P\), the constant ratio of gas pressure \(P_{\rm gas}\) to total pressure \(P\), so that \(1-\beta=P_{\rm rad}/P\). We can now solve for \(\rho_{\rm sh}\), \[\rho_{\rm sh}=\frac{a\mu}{3\Re}T_{\rm sh}^{3}\frac{\beta}{1-\beta}. \tag{115}\] We hence have to derive an estimate for the temperature to obtain the density. For this we follow the derivation of the gradient of temperature with radius as in e.g. (Kippenhahn & Weigert 1991), their section 5.1.2). We consider the flux of radiative energy \(F\) in spherical symmetry in the shell and make an analogy with heat conduction, so that (see Eq. 5.11 of Kippenhahn & Weigert 1991) \[\frac{dT_{\rm sh}}{dr}=-\frac{\kappa\rho_{\rm sh}\,L}{4\pi acr^{2}T_{\rm sh}^ {3}}, \tag{116}\] where we have absorbed the flux into the luminosity, \(L=4\pi r^{2}F\) and \(\kappa\) is considered again to be constant, but in this case \(\kappa=0.2(1+X)\) for electron scattering. Since \(P_{\rm rad}=aT_{\rm sh}^{4}/3\), \[\frac{dP_{\rm rad}}{dr}=-\frac{1}{4\pi c}\frac{\kappa\rho_{\rm sh}\,L}{r^{2}}. \tag{117}\] The equation of hydrostatic equilibrium is \[\frac{dP}{dr}=-\frac{Gm(r)}{r^{2}}\rho_{\rm sh}, \tag{118}\] and we approximate \(m(r)\sim M_{\rm core}\). Hence \[dP=C\,dP_{\rm rad}, \tag{119}\] with \(C\) constant. We integrate this last equation and take into account that we can neglect the integration constant deep inside the radiative zone, as noted by Paczynski2, so that \(P/P_{\rm rad}=C\equiv 4\pi cGM_{\rm core}/(\kappa L)=1/(1-\beta)=L_{\rm Edd }/L\). where \(L_{\rm Edd}\equiv 4\pi cGM_{\rm core}/\kappa\) is the Eddington luminosity, the maximum luminosity that the source can achieve in hydrodynamical equilibrium (Rybicki & Lightman 1979). If this luminosity was to be exceeded, then radiation pressure would drive the outflow. From Eq. (117) and Eq. (119), we obtain Footnote 2: This approximation is explained in the unpublished work of Bohdan Paczyński. See the small note in Appendix 2. \[\frac{dT_{\rm sh}}{dr}=-\frac{\kappa L\mu}{16\pi c\Re}\left(\frac{\beta}{1- \beta}\right)\frac{1}{r^{2}}. \tag{120}\] Since we have the expression for \((1-\beta)\), Figure 28.— Same as in Fig. (1) but for red giants of masses \(M_{\rm RG}\sim 0.953\,M_{\odot}\) and radii \(R_{\rm RG}=25\,R_{\odot}\), taking into account that we have adopted the occupation fraction in phase space for the two giants to be of \(f_{\rm RG}=10^{-2}\). This stems from the fact that we are only considering giants in the asymptotic giant branch, where they spend about \(0.17\%\) of their life. We do not consider the first giant branch, where they spend up to \(24\%\) of their lifetime in order to derive lower-limit quantities. \[\frac{dT_{\rm sh}}{dr}=-\frac{\mu\beta GM_{\rm core}}{4\Re r^{2}}. 
\tag{121}\] We integrate this equation and neglect the constant of integration for the same reasons as we did previously to find \[T_{\rm sh}=\frac{\mu\beta GM_{\rm core}}{4\Re R_{\rm core}}. \tag{122}\] Finally, we obtain that the density can be expressed as \[\rho_{\rm sh}\cong 6\times 10^{-3}\ {\rm g\,cm^{-3}}\frac{(\beta\,\mu)^{4}}{1- \beta}\left(\frac{M_{\rm core}}{M_{\odot}}\right)^{3}\left(\frac{R_{\rm core}} {R_{\odot}}\right)^{-3}, \tag{123}\] and we note that we have used the radius of the core \(R_{\rm core}\) to normalize the last term, although we are referring to the density in the shell. However, the thickness of the H-fusing shell, \(R_{\rm sh}\) extends only a bit farther than the radius of a white dwarf from the center (we use here the letter \(R\) for the thickness instead of \(T\) because it could be misinterpreted with temperature). This is so because the shell is not (yet) degenerate, but we will also derive the value of \(R_{\rm sh}\) later. We can rewrite Eq. (123) because \(\beta\) is constant in the shell, as we have seen previously, so that it can be approximated with a polytrope of index \(n=3\), thanks to Eddington's quartic equation (Eq. 22 of Eddington 1924), which can be written as \[\frac{1-\beta}{\mu^{4}\beta^{4}}=\frac{a}{3\Re^{4}}\frac{(\pi G)^{3}\,c_{1}^{ 2}}{z_{3}^{6}}M^{2}, \tag{124}\] with \(M\) the total mass of the stellar object, in our case \(M=M_{\rm core}\), and \(z:=Ar\) (\(A\) a constant) the usual dimensionless variable for the radius introduced to derive the Lane-Emden equation. The value of \(z_{3}\) (polytrope of index \(n=3\)) has to be derived numerically, and is \(z_{3}\sim 6.897\) (Chandrasekhar 1939). Finally, the constant \(c_{1}\) can be obtained thanks to the relation between central density and average density which one obtains from the Lane-Emden equation, e.g. Eq. (19.20) of Kippenhahn & Weigert (1991), \(c_{1}=12.93\). Therefore, \[\frac{1-\beta}{\mu^{4}\beta^{4}}\cong 3\times 10^{-3}\left(\frac{M_{\rm core}}{M _{\odot}}\right)^{2}, \tag{125}\] and so, Eq. (123) becomes \[\rho_{\rm sh}\cong 2\times 10^{4}\ {\rm g\,cm^{-3}}\left(\frac{M_{\rm core}}{0. 3\,M_{\odot}}\right)\left(\frac{R_{\rm core}}{3\times 10^{-2}\,R_{\odot}} \right)^{-3}. \tag{126}\] This result is not unexpected, since the density of a white dwarf ranges between \(10^{4}\) and \(10^{7}\ {\rm g\,cm^{-3}}\), and the H-fusing shell supports pressures very close to that of the degenerate core itself. We can obtain the mass enclosed between the radius of the white dwarf (\(R_{\rm WD}\)) and that of the core (\(R_{\rm core}\)) by integrating Eq. (123), \[M_{\rm sh}\cong 2\times 10^{-3}\,M_{\odot}\left(\frac{\beta^{4}}{1-\beta} \right)\left(\frac{M_{\rm core}}{M_{\odot}}\right)^{3}\ln\left(\frac{R_{\rm core }}{R_{\rm WD}}\right). \tag{127}\] From Eq. (125) and \(\mu=0.5\) for pure hydrogen, we have that \[\frac{1-\beta}{\beta^{4}}\sim 1.7\times 10^{-5}\left(\frac{M_{\rm core}}{0.3M_{ \odot}}\right)^{2}, \tag{128}\] so that Eq. (127) can be rewritten as \[M_{\rm sh}\cong 3.2\,M_{\odot}\left(\frac{M_{\rm core}}{0.3M_{\odot}}\right)^{ 3}\ln\left(\frac{R_{\rm core}}{R_{\rm WD}}\right). \tag{129}\] The natural logarithm between the two radii and the total mass means that \(R_{\rm sh}\) is a minor amount that extends beyond the radius of the degenerate core, approached by a white dwarf in our work. Therefore, and to first order, we can consider that the properties of the two degenerate objects taking place in the collisions are those of the He core. 
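The chain Eq. (125) → Eq. (122) → Eq. (115) can be evaluated numerically. The sketch below solves the Eddington quartic for \(\beta\) and then computes the shell temperature and density for the fiducial core (\(M_{\rm core}=0.3\,M_\odot\), \(R_{\rm core}=3\times 10^{-2}\,R_\odot\), \(\mu=0.5\)); the constants are in cgs and the script layout is our own illustrative choice.

```python
import numpy as np
from scipy.optimize import brentq

# cgs constants
G, Msun, Rsun = 6.674e-8, 1.989e33, 6.96e10
a_rad = 7.56e-15          # radiation density constant, erg cm^-3 K^-4
R_gas = 8.31e7            # universal gas constant, erg K^-1 g^-1

mu = 0.5                  # fully ionised hydrogen, as in the text
M_core = 0.3 * Msun
R_core = 3e-2 * Rsun

# Eddington quartic, Eq. (125): (1 - beta) / (mu^4 beta^4) = 3e-3 (M/Msun)^2
rhs = 3e-3 * (M_core / Msun) ** 2
beta = brentq(lambda b: (1.0 - b) / (mu**4 * b**4) - rhs, 0.5, 1.0 - 1e-12)

# Shell temperature and density, Eqs. (122) and (115)
T_sh = mu * beta * G * M_core / (4.0 * R_gas * R_core)
rho_sh = a_rad * mu / (3.0 * R_gas) * T_sh**3 * beta / (1.0 - beta)

print(f"beta   ~ {beta:.6f}")
print(f"T_sh   ~ {T_sh:.2e} K")
print(f"rho_sh ~ {rho_sh:.2e} g cm^-3")   # close to the ~2e4 g cm^-3 of Eq. (126)
```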
The numerical code of Eggleton (1971) allows us to obtain the properties of our fiducial model, which is a \(1\,M_{\odot}\) red giant. In Fig. (29) we show the evolution of the mass and radius of the He core, while in Fig. (30) we depict the evolution of its density. We can see that in particular the mass (and hence the density) varies significantly over the lifetime of the star, while the radius can change by almost one order of magnitude. This means that, when the two degenerate cores form a binary and merge, the properties of the electromagnetic radiation will change considerably depending on which stage of the evolution the red giants are in. In principle we could choose a given mass and radius for the red giants participating in the collision and repeat the whole electromagnetic analysis we have done in the first sections, when we were addressing main sequence stars. This is so because, even if from the point of view of the Eddington standard model of stellar structure a main-sequence star and a red giant are very different (treated as radiative objects and fully convective, respectively), the gaseous debris after the collision will be similar. However, because the masses and radii change so much, we decide not to do this exercise just now because we are not aiming at comparing with observational data in this work. It is likely that later we will follow this idea elsewhere. ## 7. Stellar Collisions in Globular Clusters We have focused so far on galactic nuclei. Covering globular clusters is interesting because the rates are potentially larger due to the smaller relative velocities between the stars participating in the collision, which are of the order of the velocity dispersion, as mentioned in the introduction. Figure 29.— Evolution of the mass and radius of the He core of a red giant which initially had a mass of \(1\,M_{\odot}\). The left Y-axis shows the mass of the core in \(M_{\odot}\) and the right one the radius in \(R_{\odot}\). We can see that, in its evolution, the mass of the core can span three orders of magnitude. Indeed, Table 2 of Baumgardt & Hilker (2018) contains a catalogue of velocity dispersion profiles of 112 Milky Way globular clusters. The average yields \(6.57\,{\rm km\,s^{-1}}\), so that we will fix the relative velocity of the stars participating in the collision to the average velocity dispersion of \(\sigma=7\,{\rm km\,s^{-1}}\). ### Rates While it would be straightforward to repeat the calculations we have presented in Sec. (2) by assuming the presence of an intermediate-mass black hole with a given mass at the centre of the globular cluster, we prefer not to do it. The uncertainty regarding the mass, position (we can no longer assume it to be fixed at the centre of the system, so that the calculations become more complex) and even existence of such objects would make the rate determination exercise too unconvincing. However, to motivate this section, the following is a brief summary of the most relevant work that has been done in this context. The problem of the origin of blue stragglers (Maeder, 1987; Bailyn, 1995; Leonard, 1989) is a good choice to try to infer the number of stellar collisions in globular clusters, since these are very likely the outcome of such collisions. Leonard (1989) derives a collisional rate of \(10^{-8}\,{\rm yr^{-1}}\) assuming that a small fraction of main-sequence stars are in primordial binaries.
If we take the Milky Way as a reference point, then a galaxy should have of the order of 100 globular clusters, so that the rate is of \(10^{-6}\,{\rm yr^{-1}}\) per galaxy. This number might be larger, because collisions of binaries are more important Leonard & Fahlman (1991). It is important to note here that the average number of globular clusters correlates with the mass of the central massive black hole (Burkert & Tremaine, 2010) in early-type galaxies. In their Fig. (1) we can see that this number can go up by many orders of magnitude depending on the mass of the supermassive black hole. For instance, NGC 4594 has about \(2\times 10^{3}\) globular clusters. A few years later, Sigurdsson & Phinney (1995) carried out a detailed theoretical and numerical study of stellar collisions, and their results suggest a rate that ranges between \(10^{-6}\) and \(10^{-4}\) main-sequence stellar collisions per year and galaxy (assuming 100 globular clusters). For the arbitrary reference distance that we have adopted of the order of 100 Mpc, we have many clusters of galaxies such as the Virgo Cluster, with about \(10^{3}\) galaxies, the Coma Cluster (Abell, 1656), also with over \(10^{3}\) identified galaxies, and superclusters such as the Laniakea Supercluster (Tully et al., 2014) with about \(10^{5}\) galaxies and the CfA2 Great Wall (Geller & Huchra, 1989), one of the largest known superstructures, at a mere distance of \(\sim 92\,{\rm Mpc}\). Regardless of what the rates are, if we took an average of 1000 clusters and the larger rate of \(10^{-4}\) of Sigurdsson & Phinney (1995), the number of collisions would be a thousand times larger as compared to 100 clusters per galaxy and the rate of \(10^{-6}\). Although the authors did not address red giant collisions, the much larger cross section and the smaller relative velocities in globular clusters are an evidence that their rates must be, as in the case of galactic nuclei, much larger. ### Low relative velocities and impact parameters Until now we have had the advantage of dealing with collisions that kinematically are very powerful, so that after the collision we have no surviving parts of the star (section 3) or just the core (section 5). However, at a typical relative velocity of \(7\,{\rm km\,s^{-1}}\), the collision will have a much lower impact on the structure of the stars. We are looking at a different scenario. On Sec. (2) we mentioned that we neglect gravitational focusing in the case of galactic nuclei. For globular clusters we cannot do this anymore because of the low relative velocity. The probability of having a collision for a parameter \(d_{\rm min}\), as introduced in Eq. (16) with values ranging between \(d_{1}\) and \(d_{2}\) is \[P_{d_{1}\to d_{2}}=\int_{d_{1}}^{d_{2}}\frac{dP}{d(d_{\rm min})}d(d_{\rm min}), \tag{130}\] where \(f(d_{\rm min})=dP/d(d_{\rm min})\) is the probability density. If we consider a range of \(\Delta d:=d_{2}-d_{1}\ll d_{\rm min}\), then we can approximate the integral by \(P_{d_{1}\to d_{2}}\cong f(d_{\rm min})\Delta d\), as we can see in Fig. (31). When we consider the limit in which \(V_{\rm rel}\gg V_{\rm esc}\), which corresponds to a galactic nucleus, then \(f(d_{\rm min})\propto d_{\rm min}\), which is shown in Fig. (31). We can see that in this case, then, the probability of having a collision with \(d_{\rm min}<d\) is proportional to \(d^{2}\) (i.e. it is proportional to the "surface"). On the contrary, Figure 31.— Probability and probability density as a function of the impact parameter. 
We depict a generic curve and the two limiting cases we are addressing in this study, namely the case in which \(V_{\rm rel}\ll V_{\rm esc}\) and \(V_{\rm rel}\gg V_{\rm esc}\). Figure 30.— Same as Fig. (29) but for the density of the core. In its evolution, the different densities can span over two orders of magnitude. in the case of a globular cluster, \(V_{\rm rel}\ll V_{\rm esc}\), so that all impact parameters have the same probability. What this means is that in a galactic nucleus grazing collisions are more probable than head-on ones, while in a globular cluster a grazing collision and a pure head-on impact have exactly the same probability. The parameters we used in the previous two sections remain the same but for the relative velocity, which allows us to infer that the kinetic energy deposited on to one star (again, assuming that it is distributed equally) is of \(T_{\rm K}/2\sim 2.43\times 10^{44}\,{\rm ergs}\). Hence, after the collision, the star receives an amount of energy equivalent to \(3\times 10^{-3}\%\) its initial binding energy. This amount of energy is small enough so that we can investigate the evolution of one of the stars perturbatively. We will start exploring this situation in its simplest possible form. For that, we consider one collision at a \(d_{\rm min}=\epsilon\left(R_{\rm half,\,1}+R_{\rm half,\,2}\right)\) such that \(\epsilon\gtrsim 1\), which leads to contact between the stars after the first close encounter (when they are not bound). The fact that even if \(d_{\rm min}>R_{\rm half,\,1}+R_{\rm half,\,2}\) leads to a potential collision due to the formation of a binary is because of tidal resonances Fabian et al. (1975), because the cross-section is then as large as 1-2 times that of collisions. The stars we are considering are main-sequence, Sun-like ones. If we consider them (1) to be in hydrostatic equilibrium, (2) to be described by an equation of state of an ideal gas and (3) to be spherical symmetric, then our dynamically stable star reacts on a time given by the hydrostatic timescale \[\tau_{\rm hybrid}\approx\left(\frac{R_{\odot}^{3}}{GM_{\odot}}\right)^{1/2} \approx\frac{1}{2}(G\varrho_{\odot})^{-1/2}, \tag{131}\] where \(\varrho\) is the mean density of the star, which we assume to be like our Sun, so that \(\tau_{\rm hybrid}\approx 30\,{\rm min}\), orders of magnitude shorter than the Kelvin-Helmholtz timescale, which in the case of the Sun is \(\tau_{\rm KH}\sim 1.6\times 10^{7}\,{\rm years}\). This timescale is interesting because it can be envisaged as an approximation to the characteristic timescale of a thermal fluctuation, i.e. a thermal adjustment of the star to a perturbation (in the simplistic picture which we are assuming now, since we do not take into account the internal structure). If we are talking about a red giant of mass \(1\,M_{\odot}\) and a radius \(100\,R_{\odot}\), then \(\tau_{\rm hybrid}\approx 18\,{\rm days}\). ### Dynamical stability in the adiabatic approach Let us consider the collision to induce a small perturbation in the star. After the collision, we will assume for simplification that the energy is equally distributed over all the surface of the star, which therefore becomes denser because it is compressed. Since we are assuming this compression to be adiabatic and homologous, the star will abandon its hydrostatic equilibrium. The pressure in one layer of mass of the star can be obtained by evaluating the integral \(P=\int_{m}^{M}Gm\,dm/(4\,\pi r^{4})\). 
Because of homology and adiabaticity, by inspecting both sides of this equation we obtain that \[\left(\frac{\varrho^{\prime}}{\varrho}\right)^{\gamma_{\rm ad}}=\left(\frac{R^{\prime}}{R}\right)^{-3\gamma_{\rm ad}}, \tag{132}\] where primes represent the values after the collision; i.e. we are dealing with Eq. 25.24 of Kippenhahn & Weigert (1991). This expression tells us that after the collision the star will be dynamically stable in the adiabatic regime if \(\gamma_{\rm ad}>4/3\), because the pressure's growth is more important than the weight's increase. Since we are assuming that the stars participating in the collision are Sun-like, we could draw the conclusion that they are stable after the collision, since one can approach \(\gamma_{\rm ad}=5/3\,(>4/3)\). Indeed, in the case of the Sun the layer affected would be the convective one, located between \(0.7\,R_{\odot}\) and the surface. However, this is a very crude approach to the evaluation of the dynamical stability, which needs to be improved, because the critical value depends on the simplifications we have adopted in this section (with the exception of homology, since the threshold for \(\gamma_{\rm ad}\) is the same for non-homologous scenarios). Moreover, even if the stars are dynamically stable, it is not excluded that they will be vibrationally or secularly unstable. We have addressed the dynamical stability because the associated timescale is the shortest one.

### Adiabatic pulsations after the collision and considerations about binary formation

We have known since 1638 that stars pulsate, thanks to the observations of Mira by Johannes Phocylides Holwarda. Arthur Ritter proposed in 1879 that these variations are due to radial pulsations (Gautschy 1997), and Shapley (1914) suggested that the temperature and brightness of Cepheid variables originated in radial pulsations. Later, Eddington (1917), with his piston analogy, gave a working frame to describe them. In this valve approximation, the radial pulsation period \(\Pi_{r}\) can be estimated by calculating the time that a sound wave needs to cross the star, i.e. \(\Pi_{r}=2R_{\odot}/C_{\rm s}\). We can determine \(C_{\rm s}\) from the pressure \(P\) and (mean) density of the star, \(C_{\rm s}^{2}=\gamma_{\rm ad}\,P/\varrho\), where \(\varrho\) is the average density and \(\gamma_{\rm ad}\) is the adiabatic index, the heat capacity ratio or Laplace's coefficient. It can be envisaged as a measure of the stiffness of the configuration (see e.g. section 38.3 of Kippenhahn & Weigert 1991). Assuming that \(\varrho\) is the actual value of the density throughout the whole star and requiring hydrostatic equilibrium, so that \(dP/dr=-Gm\varrho/r^{2}=-G\left(4\pi r^{3}\varrho/3\right)\varrho/r^{2}=-4\pi G r\varrho^{2}/3\), and requiring that \(P=0\) at \(r=R\), we derive that \(P(r)=2\pi G\varrho^{2}\left(R^{2}-r^{2}\right)/3\). Therefore we can obtain that \[\Pi_{r}=2\int_{0}^{R}\frac{dr}{\sqrt{2\gamma_{\rm ad}\pi G\varrho\left(R^{2}-r^{2}\right)/3}}\approx\sqrt{\frac{3\pi}{2}\left(\gamma_{\rm ad}G\varrho\right)^{-1}}\sim 44.5\,{\rm min}, \tag{133}\] for \(\varrho\sim 5.9\,{\rm g\,cm^{-3}}\) after the first collision. In the adiabatic, spherical approximation, the pulsation is stable and has an associated timescale of about 45 minutes. However, it would be interesting to know if more pulsations can be produced to maintain the rhythm of oscillations typical of the Cepheids. One possible way is further collisions.
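As a quick numerical check of Eqs. (131) and (133), the short sketch below reproduces the quoted values; the densities used are the mean solar density, the mean density of a \(1\,M_{\odot}\), \(100\,R_{\odot}\) giant, and the post-collision value of \(5.9\,{\rm g\,cm^{-3}}\) quoted above.

```python
import math

G    = 6.674e-8                      # cgs
Msun = 1.989e33                      # g
Rsun = 6.957e10                      # cm

def tau_hydro(rho):
    """Hydrostatic timescale, Eq. (131): ~ (G rho)^(-1/2) / 2."""
    return 0.5 / math.sqrt(G * rho)

def pulsation_period(rho, gamma_ad=5.0 / 3.0):
    """Radial pulsation period, Eq. (133): sqrt(3 pi / (2 gamma_ad G rho))."""
    return math.sqrt(1.5 * math.pi / (gamma_ad * G * rho))

rho_sun   = Msun / (4.0 / 3.0 * math.pi * Rsun**3)          # ~1.4 g/cm^3
rho_giant = Msun / (4.0 / 3.0 * math.pi * (100 * Rsun)**3)

print(f"tau_hydro (Sun)        ~ {tau_hydro(rho_sun) / 60:.0f} min")        # ~30 min
print(f"tau_hydro (red giant)  ~ {tau_hydro(rho_giant) / 86400:.0f} days")  # ~19 days
print(f"Pi_r (rho = 5.9 g/cc)  ~ {pulsation_period(5.9) / 60:.1f} min")     # ~45 min
```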
### Maintained pulsations

In this section we will quantitatively elucidate possible ways to produce repeated pulsations in a main-sequence star that is not in the instability strip through dynamical phenomena, i.e. collisions. One first idea is that of recurrent collisions due to the formation of a binary after the first impact. The amount of energy loss per collision is \(2\times\delta\,E\), with \(\delta\,E=T_{\rm K}\sim 2.44\times 10^{44}\,{\rm ergs}\), as we have estimated before. If we just look at the energy, the question of whether the two stars will form a binary is simple to answer. If the stars are initially on a parabolic orbit, the orbital energy of the system, considered as two mass points (i.e. without taking into account the binding energy of each star), is zero at the beginning (since the relative velocity at infinity is zero, as is the gravitational energy). Any collision (in fact even a close pass without any kind of physical contact which produces tidal effects) will convert kinetic energy into thermal energy and thus leave the stars with negative orbital energy, thus forming a binary. The real question is how this binary will evolve once it has formed, and this is not a question which can be solved analytically in detail. It is worth noting, however, that if there is a real contact at the first pericentre passage, a collision, this will make the stars expand, so that further impacts will take place, probably more violent at each successive orbit. The possibility that the binary survives for a long time before the two stars merge is probably low. These considerations are regarding main-sequence stars, whose envelopes are rather dense. In the case of red giants, it is likely that the collisions lead to the ejection of the envelope and we are left with a stable binary consisting of the two cores, which will then follow the previous scheme: evolution via gas drag, detection via gravitational radiation and an afterglow when they eventually collide. A possible first estimate from an energy point of view would be to look at the binding energy of the envelope of the giant, i.e. how much energy leads to an ejection of the envelope, and then compare that energy to the orbital energy decrease from the parabolic trajectory to a circular binary formed by the two cores. This would allow us to estimate the semi-major axis of the final binary, but this reasoning does not involve the impact parameter at all and is hence simplistic. The binding gravitational energy of stars in isolation is hence not a useful quantity for studying the formation of binaries. The interesting point has already been addressed: if the relative orbital energy of the two stars is smaller than the sum of the binding energies, it is impossible to destroy both stars completely. In a globular cluster, it is unlikely for a completely destructive collision to occur, because the relative velocities at infinity are very low. And even if there is enough energy to destroy the stars, we also need a very small impact parameter. Therefore, for main-sequence stars in a globular cluster, most collisions lead to the formation of a binary star that rapidly merges (in the classical meaning, not the relativistic one). A smaller subset of collisions, those with small impact parameters, produce a direct merger.
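As a rough illustration of why capture is essentially guaranteed, the back-of-the-envelope sketch below assumes two \(1\,M_{\odot}\) stars on an initially parabolic orbit that dissipate \(2\,\delta E\sim 4.9\times 10^{44}\,{\rm ergs}\) at the first passage (the masses and the assumed pericentre of \(2\,R_{\odot}\) are illustrative); the resulting orbit is bound and highly eccentric, so further pericentre passages, and hence further impacts, follow.

```python
G, Msun, Rsun, AU = 6.674e-8, 1.989e33, 6.957e10, 1.496e13   # cgs

M1 = M2 = 1.0 * Msun
dE_total = 2.0 * 2.44e44          # ergs dissipated at the first passage (2 x delta E)

# Parabolic orbit: E_orb = 0 initially, so E_orb = -dE_total afterwards,
# and E_orb = -G M1 M2 / (2 a) gives the post-encounter semi-major axis.
a = G * M1 * M2 / (2.0 * dE_total)

d_min = 2.0 * Rsun                # assumed pericentre ~ sum of the two stellar radii
ecc   = 1.0 - d_min / a           # pericentre approximately conserved

print(f"semi-major axis ~ {a / AU:.0f} AU ({a / Rsun:.0f} R_sun)")   # ~18 AU
print(f"eccentricity    ~ {ecc:.4f}")                                # ~0.9995
```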
Very little mass is ejected. But there is a possibility of non-colliding binaries forming due to tidal resonances (Fabian et al. 1975). Hence, it is difficult to assess analytically the duration and potential periodicity of such pulsations originating from stellar collisions. If they are vibrationally unstable, then we need to input a given amount of energy to maintain the pulses, since the oscillations will damp. The input of energy might (i) come from further collisions with the other star, if they build a binary, (ii) from other stars in the cluster or (iii) internally from the structure of the star, if we have amplitudes increasing in time because the vibrational or thermal instability has excited the star. Addressing this problem is out of the scope of this paper, but it is important to note that pulsating stars are also used as another rung in the standard candle ladder, as pointed out by Henrietta Swan Leavitt (Fernie 1969). Since the implications are potentially important, it would be interesting to investigate the collisional pulsating nature of stars in globular clusters. This would not be the first time that there is the need to revisit the cosmic ladder argument due to anomalies found in globular clusters. Indeed, if we consider two stars, one of population I (classical Cepheids) and another of population II in the instability strip, they will pulsate due to the \(\kappa\) mechanism (see e.g. Kippenhahn & Weigert 1991), having different masses but the same radii because they are located at the same place in the Hertzsprung-Russell diagram. The lighter stars have lower \(\varrho\) and, hence, in principle, a longer period than classical Cepheids, even if they have the same luminosity. This is not correct, and the derivation of the correct periods led Baade to realise that the cosmic distance scale was to be multiplied by a factor of 2 (Baade 1944).

### A scheme to study the injection of energy into the star

Because in globular clusters the relative velocity at infinity is lower than the stellar escape velocity, which is of the order of \(500-1000\,{\rm km\,s^{-1}}\), the relative velocity at contact is similar to the thermal velocity of stellar matter. Hence, such collisions are only mildly supersonic and entropy is nearly conserved. The entropic variable \(A\), defined as \(A:=P/\rho^{\gamma_{\rm ad}}\) (with \(P\) the pressure) of a fluid element, is subject to increase because of the heat produced during the shock. Nonetheless, because the speed at contact is similar to the speed of sound in the stars participating in the collision, the shocks must have Mach numbers of about unity and hence produce only weak heating. The important point here is that, for these reasons, the considered fluid element will have a nearly constant entropic variable during the collisional process, as demonstrated by Lombardi et al. (2002a). This allows us to treat the collision with a semi-analytical approach which is derived from the conservation laws of the process. This scheme yields very good results when compared to three-dimensional computer simulations, including shock heating, hydrodynamic mixing, mass ejection, and angular momentum transfer (Lombardi et al. 1996b, 2002a,b, 2003). In Fig. (32) we show the difference between the final (i.e. after the collision) and initial entropic variable, \(\Delta A:=A_{\rm fin}-A_{\rm in}\), as a function of the initial pressure of one of the parent stars, \(P_{\rm in}\). This finding was already presented in Lombardi et al. (2002b), their Fig. 3, using smoothed-particle hydrodynamics.
It is interesting to see that the fluid sorting algorithm gives a result which is very close to what three-dimensional computer simulations yield. We can see that there is a proportionality between both quantities such that \(\log(\Delta A)\propto\log(1/P_{\rm in})\). We can use this correlation to our benefit to understand how a collision will add energy to one of the stars after they have gone through an interaction. In particular \[\log(A_{\rm fin}-A_{\rm in})=b-\log(P_{\rm in}), \tag{134}\] so that \[A_{\rm fin}=A_{\rm in}+\frac{10^{b}}{P_{\rm in}}:=A_{\rm in}+\frac{B}{P_{\rm in}}. \tag{135}\] In this equation, \(b\) is a constant which contains information about the properties of the collision. For instance, the larger \(b\), the more energy will be deposited on to the surface of one of the two stars, and we have defined \(B:=10^{\,b}\), which has units of pressure times \(A\) (i.e. units of \(P^{2}/\rho^{\gamma_{\rm ad}}\)). If we consider a weak interaction, we assume that the entropy will be added instantaneously on to the star, and that the density profile will not change. The final pressure is hence \[P_{\rm fin}=\rho^{\gamma_{\rm ad}}\,A_{\rm fin}=P_{\rm in}+B\,\left(\frac{\rho^{\gamma_{\rm ad}}}{P_{\rm in}}\right), \tag{136}\] and therefore the specific internal energy profile is \[u=\frac{3}{2}\frac{P_{\rm fin}}{\rho}=u_{\rm in}+u_{\rm fin}:=u_{\rm in}+\frac{3}{2}\left(\frac{B}{P_{\rm in}}\right)\rho^{2/3}, \tag{137}\] because we have adopted \(\gamma_{\rm ad}=5/3\). Because the density profile is unchanged, the gravitational potential energy is unchanged as well, which means that only the thermal energy changes, since we are neglecting rotation as a first approach. Therefore, the energy added over the star after the first "hit" is the following integral evaluated over the entire star \[E_{\rm hit}=\int u_{\rm fin}(m)dm=\int 6\pi\left(\frac{B}{P_{\rm in}(r)}\right)\rho(r)^{5/3}r^{2}dr, \tag{138}\] because \(dm=4\,\pi\,\rho(r)\,r^{2}\,dr\) in spherical symmetry, which we are assuming. Eq. (138) allows us to determine \(B\) by evaluating the unperturbed parent star; i.e. we solve the equation setting \(B=1\), and then we can choose \(B\) to be the desired energy input divided by the result of the equation. This scheme allows us to then evaluate the propagation of the energy through the star and the induced pulsations. Unfortunately, the analytical calculations require solving eigenvalue problems of the Sturm-Liouville type to calculate the overtones if we want to consider non-adiabatic, non-radial oscillations, although rotation might help with shearing deformation. Given that we have seen that any impact parameter has the same probability, we consider that it is not worth extending this article any further than we are already doing. We will therefore study this problem separately in a future publication, either analytically or numerically with the energy injection scheme we have outlined in this section.
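A minimal numerical sketch of this calibration is given below. It builds an \(n=3/2\) polytrope as a stand-in for the parent star (an assumption made only for illustration, with an assumed \(0.8\,M_{\odot}\), \(0.8\,R_{\odot}\) star), evaluates Eq. (138) with \(B=1\), rescales \(B\) to a target energy input of order the \(\sim 2.4\times 10^{44}\,{\rm ergs}\) quoted in Sec. 7.1, and returns the injected specific-energy profile of Eq. (137).

```python
import numpy as np

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10      # cgs
gamma = 5.0 / 3.0

def lane_emden(n=1.5, h=1e-4):
    """Integrate the Lane-Emden equation for a polytrope of index n (toy stellar model)."""
    xi, theta, dtheta = h, 1.0 - h**2 / 6.0, -h / 3.0
    xis, thetas = [xi], [theta]
    while theta > 1e-6:
        d2 = -theta**n - 2.0 * dtheta / xi
        dtheta += h * d2
        theta  += h * dtheta
        xi     += h
        xis.append(xi); thetas.append(max(theta, 1e-6))
    return np.array(xis), np.array(thetas), -dtheta

xi, theta, dtheta1 = lane_emden()
xi1 = xi[-1]

M, R  = 0.8 * Msun, 0.8 * Rsun                    # assumed parent star
a     = R / xi1
rho_c = M / (4.0 * np.pi * a**3 * xi1**2 * dtheta1)
K     = 4.0 * np.pi * G * a**2 * rho_c**(1.0 - 1.0 / 1.5) / (1.5 + 1.0)

r   = a * xi
rho = rho_c * theta**1.5
P   = K * rho**gamma

# Calibrate B via Eq. (138): E_hit(B) = B * [integral evaluated with B = 1]
integrand = 6.0 * np.pi * rho**gamma / P * r**2
E_unit    = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))   # E_hit for B = 1
E_target  = 2.43e44                               # ergs deposited in one star
B         = E_target / E_unit

u_add = 1.5 * B / P * rho**(2.0 / 3.0)            # injected specific energy, Eq. (137)
print(f"B = {B:.3e} (cgs), check E_hit = {B * E_unit:.3e} ergs")
print(f"u_add at centre / half radius: {u_add[0]:.3e} / "
      f"{u_add[np.argmin(np.abs(r - 0.5 * R))]:.3e} erg/g")
```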
## 8. The Cosmic Ladder Argument

The event rate of colliding red giants and their observational nature tell us that we might be misinterpreting SNe Ia observations and be wrong in calling what we observe "standard" candles. Also, their collisions in globular clusters might trigger pulsating stars, which are also used as reference points when deriving cosmological scales, as we just pointed out in the last section. There might be ways to tell them apart in the case of the SNe Ia observations, though. One unique observational signature for WD-WD collisions is the double-peak profile of cobalt and iron lines in late-time spectra (also called "nebular spectra") of SNe Ia (Dong et al. 2015). At late times, supernova ejecta become optically thin, so that the line profiles reflect the underlying velocity distributions. Since both cobalt and iron are decay products of Ni56, which is synthesized in the WD-WD merger, the profiles of these Co and Fe nebular lines show the velocity distribution of Ni56 in the ejecta. The authors studied a sample of some 20 well-observed SNe Ia with nebular spectra, and found in the sample 3 objects showing double peaks and an additional one with a flat-top profile (i.e. departing from a single-peak profile). This bimodal velocity distribution is a feature of WD-WD mergers (see e.g. the top panels of their Fig. 5). These results are supported by the work of Kushnir et al. (2013), which shows from two-dimensional simulations of WD-WD mergers that the full range of \(\sim 0.1-1\,M_{\odot}\) of Ni56 can be produced from (exactly head-on) collisions of WDs with masses between \(\sim 0.5-1\,M_{\odot}\). However, other models, such as the one by van Rossum et al. (2016), their Fig. (13), do not predict such double peaks, although, as noted by Dong et al. (2015), the observed line profiles depend on the viewing angle, as well as on other parameters\({}^{8}\), and their data is not homogeneous, statistically speaking. Footnote 8: Dong Subo, personal communication. Another feature, as shown in Dong et al. (2018), is that for SNe Ia at the very low end of the luminosity function, the Ni56 ejecta show a significantly off-center distribution at about \(\sim 1000\,\)km/s, which can be explained by WD-WD mergers with significant mass ratios. We note that sub-Chandrasekhar merger models, as well as the delayed detonation model, can also produce a large off-center distribution, but not a bi-modal distribution. It is interesting to note that Wygoda et al. (2019a,b) also explore the WD-WD merger scenario of Kushnir et al. (2013), and they find that the Ni56 column density distribution of the SNe Ia population can be explained in terms of it. Also, Livneh & Katz (2020) find that the key signatures of SNe Ia near the peak, i.e. the diverse distribution of Si II line widths, which is usually referred to as the so-called "branch plot" and widely used to classify the SNe Ia population, can be explained by asymmetry in ejecta from WD-WD mergers. We note that in supernova searches, galactic nuclei are usually left out from the survey because they are complex systems. However, (i) sometimes the whole galaxy is too small in the data to be able to tell apart the nucleus and (ii) as we have mentioned in the introduction, in this work we are focusing on galactic nuclei to evaluate the lower-number case.

Figure 32.— Difference of the entropic variable as a function of the initial pressure of the stars for the following values of the distance of closest approach: \(d_{\rm min}/(R_{1}+R_{2})=0,\,0.01,\,0.05,\,0.10,\,0.15,\,0.20,\,0.25,\,0.30,\,0.35,\,0.40,\,0.45,\,0.50,\,0.55,\,0.60,\,0.65,\,0.70,\,0.75,\,0.80,\,0.85,\,0.90,\,0.95,\,0.99,\,0.999\). We cannot see the different curves because they all follow the same power-law relation, as given by the black dashed line. In all of the calculations we have assumed \(M_{1}=M_{2}=0.8\,M_{\odot}\), a relative velocity at infinity of \(7\,{\rm km\,s^{-1}}\) and an initial separation normalized to the sum of the parent star radii of 5.
In globular clusters collisions should happen more frequently due to the lower velocity dispersion, which approximately corresponds to the relative velocity of stars in the system. The lower the relative velocity, the more likely that a gravitational deflection ends up in a collision, due to the larger exchange of energy and angular momentum.

## 9 Conclusions

In this work we have made an analytical study of the electromagnetic and gravitational radiation implications of collisions between stars in dense stellar systems such as galactic nuclei and globular clusters, whether main-sequence stars or red giants. In the case of galactic nuclei, we analyse the remaining gaseous cloud which forms after the impact and its electromagnetic features, while taking into account the ulterior dynamical evolution of the gas, which is expanding and cooling down. In particular, we address the time evolution of the released energy and find that it resembles that of a stellar tidal disruption. Since we are interested in the observational prospects of detecting this phenomenon, we also describe the time evolution of the effective temperature, the evolution of the peak wavelength of the spectral radiance, as well as the evolution of the kinetic temperature as the outcome of the collision and the spectral power as a function of the frequency. We find that the electromagnetic traces left by these violent and transient processes strongly resemble, in their time evolution, tidal disruptions but also SNe Ia supernovae. Our complete analysis depends only on two free parameters: one appears in the electromagnetic study and the other in the gravitational-wave one. In the part dedicated to the electrodynamics, the free parameter is responsible for the non-linearity of the collision, i.e. the transmission of the shocks and hence the total efficiency of the conversion of kinetic energy into radiation. The second one, which is relevant for the total rates of gravitational-wave sources, is the number fraction of main-sequence stars whose cores form a binary. We parametrise the solution in terms of the non-linearity parameter and explore four different values. In order to derive this parameter one would need dedicated numerical simulations. From among the colliding stars, a subgroup of them leads to the formation of a binary consisting of their cores. This subgroup is interesting because it leads to the formation of a binary of two objects that is sufficiently massive and compact to detectably emit gravitational waves. We find that the friction exerted by the gas accelerates the approach of the surviving cores and brings them closer to eventually merge, with an electromagnetic afterglow as in the case of merging binaries of neutron stars. Due to the time-varying properties of the gas (which our analytical model takes into account in all calculations), the observed appearance of the gravitational waves is very different from any known source. In particular, two nuclei of very low masses, \(0.34\,M_{\odot}\), will be perceived as two black holes of initially slightly above stellar masses, which later increase to become, apparently, two merging supermassive black holes. Something similar happens to the luminosity distance, which apparently decreases and then increases very significantly. As noted in Sec. (5.4), the fact that the frequency evolution is different from the vacuum one will be the first evidence that these are not black holes emitting gravitational radiation, but a stellar collision.
Later, the absence of an event horizon will make it obvious and, finally, the electromagnetic afterglow will confirm this. In this sense, the gravitational waves are a perfect tool to identify the nature of the source. We sketch in the second appendix a possible strategy to address the gravitational wave data analysis of the collisions. We calculate analytical characteristic strains and polarisations of the nuclei in vacuum, as a reference point, and then derive them in the gaseous case, also analytically. The changes are evident and very pronounced, differing by orders of magnitude, although the overall behaviour in the gas case captures, or rather tries to mimic, the behaviour of gravitational radiation emission. As the gravitational merger time is drastically reduced, electromagnetic and gravitational wave detection go practically hand in hand. This means that the collisions of main-sequence stars and red giants represent two multi-messenger probes that complement each other. This is particularly interesting in the case of red giants, since the core is a degenerate object that will be a more interesting source of gravitational radiation. In the case of red giants, we calculate the importance of the H-burning shell in the process, as this calculation was not found in the literature, to the best of our knowledge. This is important because this layer around the cores could strongly influence the further evolution of the binary of the two degenerate objects. However, we derive that the role of this shell can be disregarded in this study. According to our results, these degenerate cores, which can be envisaged as white dwarfs embedded in the host red giants, have a collisional event rate which can be of up to some hundreds per year within a volume of \(100\,\)Mpc. The properties of the collision will strongly vary as a function of the mass of the cores and the impact parameter, which depends on the radii of the cores. The properties of these collisions are very similar to SNe Ia. In view of the event rate, this could pose a problem for the interpretation of SNe Ia, which are referred to as "standard candles" following the idea of Henrietta Swan Leavitt (Fernie 1969) as a way to derive cosmological distances following the ladder argument. This is because, as we have just explained, stellar collisions are not standard at all. Finally, collisions in globular clusters lead to different phenomena; in particular they might lead to stellar pulsation as in the classic problem of the Cepheids. The periodicity of these pulsations remains to be investigated: the formation of a long-lived binary seems unlikely, but collisions with other stars, or vibrational or thermal instabilities triggered in the interior of the star after the first collision, can be a way to sustain the pulsations. We have shown that the pulsations are stable in the adiabatic, spherical special case, but it is worth investigating (i) the non-adiabaticity of spherical pulsations and (ii) non-radial oscillations, in both the \(\kappa\) and \(\epsilon\) mechanisms. We think this is an interesting question because these pulsations are considered to be another rung in the cosmological ladder and, as noted in Sec. (7.4), a misclassification of these has already had an important impact in the past, also in globular clusters. We have not addressed this for the sake of the length of this article, but this is part of current work and will be presented elsewhere.
Finally, it is worth mentioning that our Galactic Centre is a known region of heightened cosmic ray abundance. Naively, one would have thought that the increase in cosmic ray abundance we observe there would be brought about by a larger abundance of supernovae in this region. However, no such overabundance of supernovae is observed in this region. Furthermore, the quiescence level of the supermassive black hole activity in this region casts doubt on an accretion episode being responsible for the cosmic rays. Consequently, a heightened cosmic ray abundance in galactic nuclei appears peculiar, demanding the existence of a regular non-thermal energy source within this region, which seems natural to link to stellar collisions. We thank Marc Freitag for many discussions, as well as Xian Chen and Dong Subo. We are indebted to Andrew Taylor, Stefan Ohm and Rolf Buhler for their input and, in general, to the THAT group of DESY for an extended visit during 2020-2021 in which part of this work was done. We thank Jeremy Goodman and Jill Knapp for helping us find the origin of the approximation of Bohdan Paczynski used in the estimation of the density. Jakob Nordin pointed us to the Zwicky Transient Facility observational data that seems to match the conceptual idea we have presented. Kostas Tzanavaris suggested using an expansion in powers to solve the integral to derive the coalescence time, which has a faster convergence as compared to the incomplete beta function. This work was supported by the 111 Project under Grant No. B20063 and the National Key R&D Program of China (2016YFA0400702) and the National Science Foundation of China (11721303).

## Appendix 1: Analytical solution of the integral associated to \(T_{\rm GAS}\)

The integral (83) to be computed is the following: \[I=\int_{0}^{T_{\rm merg,\,m}}\frac{e^{c\,\tau^{2}}}{\left(1+b\,\tau\right)^{3}}\,d\tau. \tag{139}\] We now change the notation, \(\tau=t\), \(x=T_{\rm merg,\,m}\), so that \[I(x)=\int_{0}^{x}\frac{e^{ct^{2}}}{(1+bt)^{3}}dt. \tag{140}\] Expand the exponential as a power series of \(t\): \[I(x)=\sum_{n=0}^{\infty}\frac{c^{n}}{n!}\int_{0}^{x}\frac{t^{2n}}{(1+bt)^{3}}dt. \tag{141}\] We now rescale the integration variable in such a way that the limits of the integral are 0 and 1, \[t=xs,\ \ s=t/x,\ \ dt=x\,ds, \tag{142}\] \[I(x)=\sum_{n=0}^{\infty}\frac{c^{n}x^{2n+1}}{n!}\int_{0}^{1}\frac{s^{2n}}{(1+bxs)^{3}}ds=\sum_{n=0}^{\infty}\frac{c^{n}x^{2n+1}}{n!}I_{n}(x), \tag{143}\] where \[I_{n}(x)=\int_{0}^{1}\frac{s^{2n}}{(1+bxs)^{3}}ds. \tag{144}\] At this step, we compute the integral \(I_{0}\): \[I_{0}(x)=\int_{0}^{1}\frac{ds}{(1+bxs)^{3}}=\frac{1}{bx}\int_{0}^{1}\frac{(1+bxs)^{\prime}}{(1+bxs)^{3}}ds=-\frac{1}{2bx}\int_{0}^{1}\left[\frac{1}{(1+bxs)^{2}}\right]^{\prime}ds=\frac{1}{2bx}\left[1-\frac{1}{(1+bx)^{2}}\right]. \tag{145}\] We simplify the integral \(I_{n}\) for \(n\geq 1\) by reducing the power of the denominator, using the method of integration by parts.
\[\begin{split}I_{n}(x)&=\int_{0}^{1}\frac{s^{2n}}{(1+bxs)^{3}}ds=-\frac{1}{2bx}\int_{0}^{1}s^{2n}\left[\frac{1}{(1+bxs)^{2}}\right]^{\prime}ds\\ &=-\frac{1}{2bx}\left[\frac{s^{2n}}{(1+bxs)^{2}}\right]_{s=0}^{s=1}+\frac{n}{bx}\int_{0}^{1}\frac{s^{2n-1}}{(1+bxs)^{2}}ds\\ &=-\frac{1}{2bx}\frac{1}{(1+bx)^{2}}-\frac{n}{(bx)^{2}}\int_{0}^{1}s^{2n-1}\left[\frac{1}{1+bxs}\right]^{\prime}ds\\ &=-\frac{1}{2bx}\frac{1}{(1+bx)^{2}}-\frac{n}{(bx)^{2}}\left[\frac{s^{2n-1}}{1+bxs}\right]_{s=0}^{s=1}+\frac{n(2n-1)}{(bx)^{2}}\int_{0}^{1}\frac{s^{2n-2}}{1+bxs}ds\\ &=-\frac{1}{2bx}\frac{1}{(1+bx)^{2}}-\frac{n}{(bx)^{2}}\frac{1}{1+bx}+\frac{n(2n-1)}{(bx)^{2}}\int_{0}^{1}\frac{s^{2n-2}}{1+bxs}ds.\end{split} \tag{146}\] So, we have to compute the integral \[f_{n}(x)=\int_{0}^{1}\frac{s^{2n-2}}{1+bxs}ds,\ \ n\geq 1. \tag{147}\]

1. The first thing to do is to simplify the denominator, by using the reparametrization \[z=1+bxs,\ \ dz=bx\,ds,\ \ s=\frac{1}{bx}(z-1), \tag{148}\] which gives us \[f_{n}(x)=\frac{1}{(bx)^{2n-1}}\int_{1}^{1+bx}\frac{1}{z}(z-1)^{2n-2}dz. \tag{149}\]

2. Next, we use the binomial theorem to expand the polynomial inside the integral, \[(z-1)^{2n-2}=\sum_{k=0}^{2n-2}\binom{2n-2}{k}(-1)^{k}z^{k}=1+\sum_{k=1}^{2n-2}\binom{2n-2}{k}(-1)^{k}z^{k}. \tag{150}\]

3. We substitute and have that \[\int_{1}^{1+bx}\frac{1}{z}(z-1)^{2n-2}dz=\ln(1+bx)+F_{n}(x), \tag{151}\] where we have introduced the polynomial \[F_{n}(x)=\begin{cases}0&n=1\\ \sum_{k=1}^{2n-2}\binom{2n-2}{k}\frac{(-1)^{k}}{k}\left[(1+bx)^{k}-1\right]&n>1.\end{cases} \tag{152}\]

4. The integral \(f_{n}\) can be expressed via those functions as \[f_{n}(x)=\frac{1}{(bx)^{2n-1}}\left[\ln(1+bx)+F_{n}(x)\right]. \tag{153}\]

We now substitute and have that \[I_{n}(x)=\frac{n(2n-1)}{(bx)^{2n+1}}\left[\ln(1+bx)+F_{n}(x)\right]-\frac{1}{2bx}\frac{1}{(1+bx)^{2}}-\frac{n}{(bx)^{2}}\frac{1}{1+bx}. \tag{154}\] Finally, we combine Eqs. (143), (145) and (154). Note that all terms apart from the one containing the polynomial \(F_{n}\) yield elementary functions: \[\sum_{n=1}^{\infty}\frac{c^{n}x^{2n+1}}{n!}\left[-\frac{1}{2bx}\frac{1}{(1+bx)^{2}}\right]=-\frac{1}{2bx}\frac{1}{(1+bx)^{2}}\sum_{n=1}^{\infty}\frac{c^{n}x^{2n+1}}{n!}=-\frac{1}{2b}\frac{e^{cx^{2}}-1}{(1+bx)^{2}}, \tag{155}\] \[\sum_{n=1}^{\infty}\frac{c^{n}x^{2n+1}}{n!}\left[-\frac{n}{(bx)^{2}}\frac{1}{1+bx}\right]=-\frac{1}{(bx)^{2}}\frac{1}{1+bx}\sum_{n=1}^{\infty}\frac{c^{n}x^{2n+1}}{(n-1)!}=-\frac{cx}{b^{2}}\frac{e^{cx^{2}}}{1+bx}, \tag{156}\] \[\sum_{n=1}^{\infty}\frac{c^{n}x^{2n+1}}{n!}\frac{n(2n-1)}{(bx)^{2n+1}}=\frac{1}{b}\sum_{n=1}^{\infty}\frac{2n-1}{(n-1)!}\left(\frac{c}{b^{2}}\right)^{n}=\frac{1}{b}\sum_{n=0}^{\infty}\frac{2n+1}{n!}\left(\frac{c}{b^{2}}\right)^{n+1}=\frac{c}{b^{3}}e^{c/b^{2}}+\frac{2c}{b^{3}}\sum_{n=1}^{\infty}\frac{1}{(n-1)!}\left(\frac{c}{b^{2}}\right)^{n}=\left(\frac{c}{b^{3}}+\frac{2c^{2}}{b^{5}}\right)e^{c/b^{2}}. \tag{157}\] Thus, \[\begin{split}I(x)&=\frac{1}{2b}\left[1-\frac{1}{(1+bx)^{2}}\right]+\left(\frac{c}{b^{3}}+\frac{2c^{2}}{b^{5}}\right)e^{c/b^{2}}\ln(1+bx)\\ &\quad-\frac{1}{2b}\frac{e^{cx^{2}}-1}{(1+bx)^{2}}-\frac{cx}{b^{2}}\frac{e^{cx^{2}}}{1+bx}+\sum_{n=1}^{\infty}\frac{n(2n-1)\,c^{n}}{n!\,b^{2n+1}}F_{n}(x).\end{split} \tag{158}\]
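As a sanity check of the closed form (158), the short script below compares the truncated series against direct numerical quadrature of Eq. (140); the values \(b=1\), \(c=1/2\), \(x=2\) and the truncation order are illustrative assumptions, not parameters of our model.

```python
import math
from scipy.integrate import quad

b, c, x = 1.0, 0.5, 2.0
N = 40                      # truncation order of the series

def F(n):
    """Polynomial F_n(x) of Eq. (152)."""
    if n == 1:
        return 0.0
    return sum(math.comb(2 * n - 2, k) * (-1.0)**k / k * ((1.0 + b * x)**k - 1.0)
               for k in range(1, 2 * n - 1))

# Closed form, Eq. (158)
series = sum(n * (2 * n - 1) * c**n / (math.factorial(n) * b**(2 * n + 1)) * F(n)
             for n in range(1, N + 1))
I_closed = (
    (1.0 - 1.0 / (1.0 + b * x)**2) / (2.0 * b)
    + (c / b**3 + 2.0 * c**2 / b**5) * math.exp(c / b**2) * math.log(1.0 + b * x)
    - (math.exp(c * x**2) - 1.0) / (2.0 * b * (1.0 + b * x)**2)
    - c * x / b**2 * math.exp(c * x**2) / (1.0 + b * x)
    + series
)

# Direct quadrature of Eq. (140)
I_quad, _ = quad(lambda t: math.exp(c * t**2) / (1.0 + b * t)**3, 0.0, x)

print(f"closed form (158): {I_closed:.10f}")
print(f"quadrature       : {I_quad:.10f}")
```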
When comparing the polarizations in the evolving gaseous cloud and in vacuum, because of the big difference in \(T_{\rm merg,\,m}\) and \(\Lambda(\tau)\), the two polarizations diverge from the beginning. This leads to a significant mismatch of the waveforms. The reality is more complex. On the one hand, we have a real, physical source which is producing the gravitational radiation. We will refer to this source from now on as the "real" source and will use the subscript "r" for it. On the other hand, detectors will receive data for a source which we describe as the "observed" source, for obvious reasons, and use the subscript "o" for it. Finally, in order to extract parameters from the observed source, data analysts will use a theoretical model which assumes that the source is in vacuum. This is our "putative" source, and we will use the subscript "p" for it. The connection between these three different sources is displayed in Fig. (33).

## Appendix X: Neglect of the constant of integration to derive the density of the H-fusing shell

When trying to define the constants of integration of Eq. (119) and Eq. (121) we came across the unpublished notes of Bohdan Paczynski, where he explains that _The constant (...) can be calculated from the matching conditions between the radiative zone and the outer convective envelope, and it is very important near the radiative-convective boundary. However, deep inside the radiative zone the other two terms in the equation (...) become much larger than the constant, and (it) may be neglected._ We found this explanation in the notes of Jill Knapp in Princeton, who told us it was not her work; after looking for the origin, she found out that the connection to the original notes written by Paczynski was Jeremy Goodman. In turn, he explained that "He (Bohdan Paczynski) taught a class in stellar structure to graduate students for many years, which I had the privilege of helping him with in later years." Unfortunately, Jeremy could not find a published version of this derivation by Paczynski, so we acknowledge here the origin of what has led us to the neglect of the constant of integration, which is crucial in defining the analytical expression for the density of the H-fusing shell.
2309.12004
Safe Hierarchical Reinforcement Learning for CubeSat Task Scheduling Based on Energy Consumption
This paper presents a Hierarchical Reinforcement Learning methodology tailored for optimizing CubeSat task scheduling in Low Earth Orbits (LEO). Incorporating a high-level policy for global task distribution and a low-level policy for real-time adaptations as a safety mechanism, our approach integrates the Similarity Attention-based Encoder (SABE) for task prioritization and an MLP estimator for energy consumption forecasting. Integrating this mechanism creates a safe and fault-tolerant system for CubeSat task scheduling. Simulation results validate the Hierarchical Reinforcement Learning's superior convergence and task success rate, outperforming both the MADDPG model and traditional random scheduling across multiple CubeSat configurations.
Mahya Ramezani, M. Amin Alandihallaj, Jose Luis Sanchez-Lopez, Andreas Hein
2023-09-21T12:22:11Z
http://arxiv.org/abs/2309.12004v1
# Safe Hierarchical Reinforcement Learning for CubeSat Task Scheduling Based on Energy Consumption ###### Abstract This paper presents a Hierarchical Reinforcement Learning (HierRL) methodology tailored for optimizing CubeSat task scheduling in Low Earth Orbits (LEO). Incorporating a high-level policy for global task distribution and a low-level policy for real-time adaptations as a safety mechanism, our approach integrates the Similarity Attention-based Encoder (SABE) for task prioritization and an MLP estimator for energy consumption forecasting. Integrating this mechanism creates a safe and fault-tolerant system for CubeSat task scheduling. Simulation results validate the HierRL's superior convergence and task success rate, outperforming both the MADDPG model and traditional random scheduling across multiple CubeSat configurations. ## I Introduction CubeSats have transformed the space industry, providing a cost-effective and efficient way to conduct diverse space missions, from scientific observations to advanced communications [1, 2]. A rising focus is on equipping spacecraft with advanced autonomous decision-making capabilities [3, 4]. Achieving this relies on using automated planning tools to reduce human involvement and effectively handle complex and uncertain environments. Implementing on-board planning mechanisms in spacecraft missions brings substantial benefits, including increased spacecraft availability, heightened reliability, and reduced ground segment operational costs. However, despite their potential, CubeSats face significant task scheduling challenges in distributed systems due to processing limitations [5]. Efficient energy management is a primary concern, given their reliance on limited solar panel-derived energy. Ensuring they operate within these constraints while maintaining high reliability in space underscores the importance of fault tolerance in satellite operations [6]. In CubeSat operations, the criticality of energy management is accentuated by their inherent power limitations [7]. The complexity of the energy consumption issue is compounded by task-dependent variability, especially in observation missions with sophisticated sensor payloads like high-resolution cameras, adaptive sampling, and data transmission. Solving planning problems in this context typically involves a constrained optimization [8]. However, the inherent uncertainties and complexities of space environments, combined with task variability and unpredictability, often surpass the capabilities of traditional tools [9]. One promising solution gaining attention involves applying artificial intelligence to dynamic task scheduling [10]. Artificial intelligence benefits from declining computational costs, abundant data, and advanced algorithms, with Deep Learning (DL) and Reinforcement Learning (RL) playing pivotal roles [11]. RL, a subset of Machine Learning (ML), focuses on training agents to make sequential decisions by interacting with an environment to maximize cumulative rewards [12]. RL has emerged as a crucial paradigm in ML with diverse applications and untapped potential across various domains. One promising application is dynamic task scheduling, where RL algorithms offer significant advantages. The repetitive nature of scheduling decisions aligns well with the data-intensive training methods of RL [13]. Moreover, RL's unique feature is its ability to adaptively make decisions in real-time without requiring a comprehensive environmental model [14, 15]. 
In the literature, the satellite task allocation field's primary focus is on Earth observation (EO) missions, which present a classic example of a challenging multi-objective combinatorial problem [16]. This complexity makes them suitable candidates for solutions using Deep Reinforcement Learning (DRL) methods. Huang et al. [10] formulate EO task scheduling problems by introducing the concepts of visible window (VW) and observation window (OW). The decision variables in this context typically involve continuous parameters specifying the start time of OWs for specific targets, alongside binary variables indicating whether an observation task is scheduled or not. The primary objective of EO task scheduling is to maximize observation profit while respecting constraints. Haijiao et al. [17] introduce dynamic real-time scheduling for image satellites. Here, observation tasks arrive dynamically with associated rewards, and each task is accepted or rejected based on a policy, provided that on-board data storage and timing constraints are satisfied. This dynamic scheduling problem is formulated as a Dynamic and Stochastic Knapsack Problem [18] and tackled using DRL techniques. The interaction between attitude changes and observation tasks introduces a time-dependent scheduling challenge, surpassing the complexity of EO systems [19]. In [20], the optimization goals involve maximizing total observation profit while ensuring image quality, which inherently conflict, as scheduling more targets can boost profit but potentially compromise image quality. To address this, a two-phase algorithm is proposed. In the first phase, a recurrent neural network learns a scheduling policy for selecting tasks, while in the second phase, a Deep Deterministic Policy Gradient (DDPG) algorithm optimizes the choice of OWs to enhance image quality. A more efficient approach by Wei et al. [21] introduces a dual objective, considering both the failure rate of observation requests and the timeliness of scheduled requests. Simulation experiments reveal superior performance and faster training, making it suitable for quicker re-training if needed. The EO scheduling problem becomes notably more intricate when multi-satellite systems, such as satellite constellations, are taken into account. Chong et al. [22] address multi-satellite cooperative task scheduling using RL. The approach considers a set of satellites, each possessing varying on-board resource capabilities. The approach's objective is to assign each task to a specific satellite based on their available individual resources and the required resources of each task. To enhance the cooperative RL policies of each satellite, a hybrid learning approach that incorporates genetic algorithms is developed. While this approach demonstrates commendable performance in terms of cooperative optimization, it does have limitations. Specifically, it determines task acceptance without specifying OWs for task execution and it assumes complete knowledge of the resource requirements for each task, rendering it impractical for real-world implementation. Traditional reinforcement learning algorithms struggle with the complexity and scalability challenges inherent in multi-agent task scheduling. Moreover, fully decentralized approaches suffer from computational inefficiency and slow convergence, and demand consistent inter-agent communication, which may not be feasible due to energy and bandwidth limitations in CubeSat networks.

Figure 1: The scenario diagram of the algorithm.
On the other hand, Hierarchical Reinforcement Learning (HierRL) addresses these challenges by breaking the problem into high-level and low-level decision-making. This bifurcation expedites convergence, enhances robustness against failures, and allows for more efficient computational resource utilization [23], making it a compelling choice for task scheduling in CubeSat swarms. Such a structure can potentially combine the strengths of traditional algorithms with the adaptability of RL and DRL. In this paper, we introduce a novel methodology, shown in Fig. 1, that leverages the power of hierarchical deep reinforcement learning for task scheduling in CubeSats. Our primary contributions include the development of a multilayer perceptron (MLP) estimator for task energy consumption that aids both the low-level algorithm and the high-level algorithm in decision-making. We also present an attention-based encoder for scoring tasks, ensuring that the most critical tasks are addressed first. Our hierarchical decision-making structure, with high-level and low-level policies, acts as a safety mechanism for the algorithm and optimizes decisions at multiple levels. The detailed reward structure accounts for energy consumption, spatial considerations, and deadline adherence. These advancements ensure robust, fault-tolerant satellite operations and enhanced task prioritization, which in essence also helps the algorithm converge faster. The methodology pioneers an energy-conscious perspective in CubeSat task scheduling, fulfilling an evident need in the field.

## II Problem Definition

We examine a network of CubeSats, denoted as \(\mathcal{C}_{1},\mathcal{C}_{2},...,\mathcal{C}_{N}\), operating in Low Earth Orbits (LEO). It is assumed that they are in communication with a ground station, from which they receive tasks. Once relayed, these tasks are aggregated within a centralized shared storage, which is universally accessible to each CubeSat constituting the network. Each CubeSat, \(\mathcal{C}_{i}\), is provisioned with a finite energy storage. Energy is depleted during task execution and any ancillary orbital adjustments. Insufficient energy reserves relegate the CubeSat to a dormant mode until replenishment occurs. The rate of energy recovery is primarily influenced by the CubeSat's solar panel exposure to sunlight, which itself is a function of its orbital position and diurnal cycles, thereby imparting a cyclical pattern to energy availability. The energy replenishment rate \(E_{r_{i}}(t)\) for \(\mathcal{C}_{i}\) over time \(t\) can be mathematically encapsulated as follows: \[E_{r_{i}}(t)=\eta(t)\,A\,I_{Sc}\cos(\theta_{s_{t}}) \tag{1}\] where \(\eta(t)\) is the efficiency of the solar panels over time, ranging between 0 and 1, \(A\) is the effective area of the solar panels exposed to sunlight, \(I_{Sc}\) is the solar constant, which is approximately \(1361\,W/m^{2}\) for LEO [24] but may vary with solar activity, and \(\theta_{s_{t}}\) is the angle between the solar rays and the normal to the plane of the solar panel at time \(t\). The dynamics of the energy storage \(E_{i}(t)\) can thus be expressed as: \[\dot{E}_{i}(t)=E_{r_{i}}(t)-E_{consumed_{i}}(t) \tag{2}\] in which \(E_{consumed_{i}}\) represents the energy consumed for task execution and other activities at time \(t\). By solving this differential equation subject to the initial condition, the CubeSat's energy level at any time \(t\) can be obtained. The computational prowess of each CubeSat is principally characterized by its processor capabilities.
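A minimal sketch of how Eqs. (1)-(2) can be integrated forward in time is given below; the orbital period, panel area, efficiency, eclipse fraction, consumption profile and battery capacity are illustrative assumptions, not the values used in our simulations. The solar term is set to zero during eclipse and the stored energy is clipped at the battery capacity.

```python
import math

I_SC    = 1361.0          # W/m^2, solar constant in LEO
A       = 0.06            # m^2, illustrative panel area (3U CubeSat)
ETA     = 0.28            # illustrative panel efficiency
E_MAX   = 140.0 * 3600.0  # J, illustrative battery capacity (~40 Wh)
T_ORBIT = 5554.0          # s, illustrative LEO orbital period
ECLIPSE = 0.38            # illustrative fraction of the orbit in eclipse

def replenishment(t):
    """Eq. (1): eta * A * I_sc * cos(theta), zero while in eclipse (toy sun-angle model)."""
    phase = (t % T_ORBIT) / T_ORBIT
    if phase > 1.0 - ECLIPSE:
        return 0.0
    theta = math.pi / 2.0 * abs(2.0 * phase / (1.0 - ECLIPSE) - 1.0)
    return ETA * A * I_SC * math.cos(theta)

def consumption(t, task_running):
    return 9.0 if task_running else 2.5   # W, illustrative payload vs idle load

# Explicit Euler integration of Eq. (2) over two orbits
E, dt = 0.6 * E_MAX, 1.0
for step in range(int(2 * T_ORBIT)):
    t = step * dt
    task_running = 1000.0 < (t % T_ORBIT) < 2200.0       # one observation task per orbit
    E += dt * (replenishment(t) - consumption(t, task_running))
    E = min(max(E, 0.0), E_MAX)

print(f"state of charge after two orbits: {100.0 * E / E_MAX:.1f} %")
```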
Each observational task stipulates a unique set of computational prerequisites and a stringent deadline. The system maintains a central shared storage of tasks accessible to all CubeSats, each defined by attributes including priority from the ground station, computational requirements, OW, and spatial coordinates. Each task is exclusive to one CubeSat. A uniqueness constraint for tasks is applied to prevent redundancy, and real-time task status updates (_unassigned, in progress, completed_) ensure constraint satisfaction. The primary goal is to develop a scheduling algorithm optimizing a composite reward function, considering task priority, deadline adherence, resource efficiency, and CubeSat positioning. The algorithm should adapt to dynamic task arrivals, ensuring system flexibility. The objectives are achieved using a Markov game model approach [25].

## III Methodology

We present a HierRL method for the CubeSat scheduling problem that accounts for energy consumption. In the method, an encoder first prioritizes tasks, emphasizing critical attributes that influence the success rate in task scheduling and energy consumption, such as task duration, spatial constraints, memory, and computational requirements. With the aid of an attention mechanism, the encoder assigns scores for prioritizing each task to speed up the convergence of the high-level reinforcement learning. We introduce an MLP-based estimator that uses the encoder's output and task score. By employing similarity-attention-based mechanisms and referencing data from prior task executions stored in the high-level reinforcement learning experience replay, this estimator predicts the energy consumption for each task based on its task ID. Subsequently, tasks are allocated to individual CubeSats based on system states and complexity scores through a high-level reinforcement learning process. If a CubeSat is in a failure status or lacks adequate battery power or time for a task, a low-level reinforcement learning algorithm reassesses and reassigns tasks as needed.

### _Encoder_

The encoder is designed around an attention-based mechanism, focusing on the failure and similarity of past tasks for feature extraction and task prioritization, to assist the reinforcement learning algorithm and the energy consumption estimator in making more accurate decisions.

### _Similarity Attention-based Encoder (SABE)_

The primary objective of the SABE is to extract task features and utilize historical data in the experience replay of the high-level reinforcement learning for prioritizing and labeling tasks. These features are then standardized using MinMax [26] scaling, ensuring they all have equal importance during subsequent computations.

#### Iii-B1 Task Similarity Using Attention

Historical data from the RL's experience replay aids in gauging task similarities based on extracted features. Emphasis is on task ID and on tasks with significant discrepancies between required and actual execution times and between predicted and actual energy consumption. Cosine similarity is utilized to compute the proximity between the feature vectors of new tasks and those stored in the experience replay. Tasks with high discrepancies receive augmented attention weights. More similar tasks get greater attention. An exponential decay mechanism ensures recent tasks have more influence, given by the factor \(e^{-\lambda_{1}t}\), where \(\lambda_{1}\) is the decay rate.
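A minimal sketch of this weighting step is shown below; the function and variable names are our own illustration, not the authors' implementation. It combines cosine similarity between a new task's feature vector and replayed tasks, a boost for replayed tasks with large energy or execution-time mispredictions, and the exponential recency factor \(e^{-\lambda_{1}t}\).

```python
import numpy as np

def attention_weights(new_task, replay_feats, replay_errors, replay_age,
                      lambda_1=0.05, error_boost=1.0):
    """Attention weight per replayed task:
    cosine similarity x (1 + error_boost * normalized discrepancy) x exp(-lambda_1 * age)."""
    new = new_task / np.linalg.norm(new_task)
    feats = replay_feats / np.linalg.norm(replay_feats, axis=1, keepdims=True)
    cos_sim = feats @ new                               # cosine similarity with the new task
    err = replay_errors / (replay_errors.max() + 1e-9)  # normalized discrepancy in [0, 1]
    w = np.clip(cos_sim, 0.0, None) * (1.0 + error_boost * err) * np.exp(-lambda_1 * replay_age)
    return w / (w.sum() + 1e-9)                         # normalized attention weights

rng = np.random.default_rng(0)
feats  = rng.random((6, 4))       # 6 replayed tasks, 4 MinMax-scaled features each
errors = rng.random(6)            # |predicted - actual| energy per replayed task
age    = np.arange(6.0)           # episodes since each task was executed
print(attention_weights(rng.random(4), feats, errors, age).round(3))
```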
#### Iii-B2 Task Classification

A task's complexity score incorporates computational demands \(C_{d}\), the historical failure of similar tasks in energy consumption estimation \(H_{f}\), and duration \(d\). Tasks are categorized into complexity tiers using a threshold-based [27] classification based on these scores. The complexity score \(C_{s}\) is given by \[C_{s}=w_{1}C_{d}+w_{2}H_{f}+w_{3}d \tag{3}\] It should be noted that weighting parameters calibrated to balance various operational aspects are shown by \(w_{i}\) in this paper. In addition, tasks are prioritized based on the ground system priority \(p\), deadline adherence \(D_{a}\), duration \(d\), and the difference between required execution time and actual execution time for similar tasks \(\delta t_{e}\), by \[P_{s}=w_{4}p+w_{5}d+w_{6}\delta t_{e}+w_{7}D_{a} \tag{4}\] Using the TOPSIS method [28], tasks are scored and then classified into complexity and priority tiers, 1-5, based on thresholds.

### _Task Energy Consumption Estimator_

To accurately predict energy consumption before task execution, our methodology leverages an MLP network [29], informed by outputs from the encoder and the high-level experience replay, including the actual energy consumption of previous tasks. The estimator utilizes \(C_{s}\), which includes feedback on previous estimation failures, along with task-specific feature vectors from the encoder. This approach gives more weight to tasks with greater complexity in energy estimation. After each step, we compare predicted energy consumption to actual consumption. Any consistent discrepancies observed serve as a trigger for model re-training. The MLP contains two hidden layers with 128 and 64 neurons, employing ReLU activation. The model is trained with a Mean Squared Error (MSE) loss and an Adam optimizer [30] at a learning rate of 0.001. In addition to the regular estimation, we incorporate a 5% energy safety buffer at the start of the process. This safety margin decreases based on the task estimator's error and complexity score. This approach ensures that there is an energy buffer to accommodate unexpected variables or minor inefficiencies that may occur.
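A minimal sketch of such an estimator using scikit-learn is given below; the feature layout, the synthetic training data standing in for the experience replay, and the safety-buffer handling are illustrative assumptions, while the network size, activation, optimizer and learning rate follow the description above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Features per executed task: [C_s, duration, cpu load, data volume, slew angle]
X = rng.random((500, 5))
# Synthetic "actual energy" values as a placeholder for replayed executions
y = 20.0 * X[:, 1] + 8.0 * X[:, 2] + 5.0 * X[:, 0] * X[:, 4] + rng.normal(0.0, 0.5, 500)

est = MLPRegressor(hidden_layer_sizes=(128, 64), activation="relu",
                   solver="adam", learning_rate_init=0.001,
                   max_iter=2000, random_state=0)
est.fit(X, y)

new_task = rng.random((1, 5))
E_pred = float(est.predict(new_task)[0])

safety = 0.05                       # 5% initial safety buffer, later shrunk with observed error
E_budget = (1.0 + safety) * E_pred
print(f"predicted energy: {E_pred:.2f}  -> budgeted with buffer: {E_budget:.2f}")
```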
### _Hierarchical Reinforcement Learning Framework_

To address the CubeSat task scheduling challenge, we propose a HierRL framework with two primary layers: a high-level policy for global task distribution based on broader constraints, and a low-level policy for real-time monitoring and system adjustment. This lower-level policy handles tasks such as tracking battery levels, meeting deadlines, and monitoring health status, taking corrective actions in the event of failures.

#### Iii-B1 High-Level Task Assigning

The high-level policy is pivotal in task allocation, utilizing the estimator output to speed up convergence and make informed decisions. Specifically, this policy assigns the most suitable task to each CubeSat based on the current state of the system. The state of the system consists of the state of each CubeSat (remaining storage and computational resources, remaining energy level, temperature, and orientation), the required time for each task \(t_{r}\), and the queue of tasks waiting for execution (\(P_{s}\), computational requirement, OW, location, processing status, and the estimated energy consumption from the estimator). Actions include Assign Task, i.e. assigning a new task from the task queue to a specific CubeSat, and Skip Task, i.e. deciding not to assign a task to any CubeSat and keeping it in the queue for later assignment. The reward function for \(\mathcal{C}_{i}\) at discrete time \(t\) is articulated as: \[R_{i}(t)=P_{s}\times w_{9}R_{a_{i}}(t)+w_{10}R_{p_{i}}(t)+w_{11}R_{e_{i}}(t)+w_{12}R_{w_{i}}(t) \tag{5}\] \(R_{e_{i}}(t)=-\alpha\;\Delta\theta_{i}(t)\), the observational efficiency, quantifies the CubeSat's efficiency in orienting itself relative to the observational target, where \(\Delta\theta_{i}\) represents the angular difference between the current orientation and the desired orientation of \(\mathcal{C}_{i}\) and \(\alpha\) is a constant that weighs the energy and time costs of reorientation based on the CubeSat's specifications. \(R_{w_{i}}(t)=t_{f}-(t+\Delta t_{i}(t))\) is known as the OW constraint, which rewards CubeSats that are optimally positioned to complete the observation within the designated window. \(t_{f}\) is the upper limit of the observation window, and \(\Delta t_{i}\) is the time required for \(\mathcal{C}_{i}\) to reorient and perform the observation task. In addition, \(R_{p_{i}}\) penalizes excessive energy usage, thereby fostering energy-efficient operations. The penalty term \(R_{p_{i}}=\sum_{j=1}^{5}R_{p_{i,j}}\) of the reward function is meticulously crafted to penalize the CubeSat for various undesirable actions or states. A prime consideration is energy overconsumption. When a CubeSat's energy consumption surpasses its predefined capacity, a penalty ensues. Let \(E_{e_{i}}(t)\) represent the normalized energy consumed by \(\mathcal{C}_{i}\) for the task, i.e. the energy consumed divided by the maximum energy capacity \(E_{\max}\) of the CubeSat; when \(E_{e_{i}}>1\), the penalty is quantified as \(R_{p_{i,1}}=\beta\left(E_{e_{i}}(t)-1\right)\). In other instances, it defaults to \(E_{e_{i}}(t)\). Moreover, for the precision-timed completion of tasks, penalties are imposed when the CubeSat does not adhere to set deadlines. If the realized completion time \(T_{c}\) overshoots the task's deadline \(D_{a}\), a stringent penalty, encapsulated as \(R_{p_{i,2}}\), is levied. The energy efficiency penalty is another paramount consideration. Any deviation between the expected energy requirement for a task \(E_{e_{i}}\) and the actual energy consumed \(E_{a_{i}}\) is penalized. The energy deviation is expressed as \(\delta E_{i}\left(t\right)=E_{a_{i}}(t)-E_{e_{i}}(t)\). The corresponding penalty, aiming to ensure energy-efficient operations, is articulated as \(R_{p_{i,3}}=-\lambda_{2}E_{e_{i}}(t)\delta E_{i}(t)\). Furthermore, the economical use of computational resources is a keystone in CubeSat operations. In case the computational load \(\xi(t)\) exceeds the computational resource threshold \(\xi_{\max}\), the CubeSat incurs a penalty delineated by \(R_{p_{i,4}}\). Additionally, in the quest to prevent redundant task initiations, penalties are instituted. A CubeSat embarking on a task already in progress by another unit is met with a standard penalty, designated as \(R_{p_{i,5}}\).

#### Iii-B1 Learning Paradigm in High-Level Policy

In tackling the task scheduling problem within a CubeSat swarm, we have opted for the Multi-Agent Deep Deterministic Policy Gradients (MADDPG) method.
The decision is rooted in MADDPG's demonstrated efficacy in maintaining stability within dynamic environments, as well as its aptness for handling scenarios involving multiple agents. Please find the detailed formulation of the actor and critic networks in [31]. In this work, the Actor network is a feed-forward neural network with two layers comprising [256, 128] neurons for each actor. The Critic network has a more complex architecture with three layers of [256, 128, 64] neurons. The activation function for the output layer is SoftMax, while the hidden layers employ Rectified Linear Unit (ReLU) activation functions. The training process is structured in episodes, with the task scheduling undergoing training through 20,000 episodes, each representing ten complete orbit cycles. To maintain stability, the Critic network is updated at each time step upon receiving a new task or completing a current one, whereas the Actor network follows less frequent updates to ensure stability. Significant hyperparameters include learning rates of 0.001 for the Actor and 0.002 for the Critic. A discount factor \(\gamma\) of 0.99 is selected to balance immediate and future rewards effectively. The system incorporates an Ornstein-Uhlenbeck noise process [32] in the action selection mechanism to facilitate the exploration-exploitation trade-off. A replay buffer of size \(10^{6}\) is employed, and mini-batch sizes of 128 are used during training. Following each episode, both the target Critic and Actor networks undergo soft updates with a coefficient of \(\tau=0.005\).

#### Iii-B2 Low-Level Controller Design and Mechanics

The low-level controller works alongside the high-level scheduling mechanism and is implemented using a multi-agent deep Q-learning algorithm. While the high-level MADDPG agent is primarily concerned with initial task assignments and global system efficiency, the low-level MADQN focuses on reassigning tasks when unforeseen circumstances arise, such as CubeSat failure or gross underestimation of resource needs. States include the remaining energy level \(e_{i}\), \(E_{e_{i}}\), the CubeSat status (_operational_ or _in a failure_ state), the current task ID, the difference between \(e_{i}\) and \(E_{e_{i}}\), and \(C_{s}\). Moreover, the action space consists of two primary actions: keeping the task (\(a=0\)) and reallocating the task (\(a=1\)), which involves returning it to the global task queue for potential reassignment to another CubeSat. For the energy-based component of the reward, when the remaining energy surpasses the estimated energy requirement for the task, a positive reward is conferred if the decision is to keep the task. Conversely, a penalty is introduced for unnecessary task reassignments. In situations where the energy is insufficient, a significant penalty is enforced to underline the criticality of energy constraints. Mathematically, the energy-based reward \(r_{e}\) is defined by \[r_{e}=\begin{cases}\lambda&\text{if}\quad e_{i}>E_{e_{i}}(1+C_{s}/5)\ \text{and}\ a=0\\ -\lambda&\text{if}\quad e_{i}>E_{e_{i}}(1+C_{s}/5)\ \text{and}\ a=1\\ -\ell&\text{if}\quad e_{i}\leq E_{e_{i}}(1+C_{s}/5)\ \text{and}\ a=0\end{cases} \tag{6}\] where \(\lambda\) and \(\ell\) are constants that determine the magnitude of rewards and penalties. Moving to the deadline-based reward, if there is ample time to complete the task (adjusted for its priority), the reward mechanism incentivizes retaining the task. However, unwarranted task reassignments in such situations are penalized.
When time constraints are tight, a notable penalty is applied to emphasize the importance of task deadlines. Formally, the deadline-based reward is expressed as \[r_{d}=\begin{cases}\phi&\text{if}\quad D_{a}>t_{r}(1+P_{s}/10)\ \text{and}\ a=0\\ -\phi&\text{if}\quad D_{a}>t_{r}(1+P_{s}/10)\ \text{and}\ a=1\\ \psi&\text{if}\quad D_{a}\leq t_{r}(1+P_{s}/10)\ \text{and}\ a=0\end{cases} \tag{7}\] where \(\phi\) and \(\psi\) are constants dictating the reward and penalty magnitudes, respectively. Furthermore, to account for the operational status of the CubeSat, a failure penalty is incorporated. In the unfortunate event of a CubeSat failure, a stringent penalty is applied, highlighting the gravity of such an event. This penalty, denoted as \(r_{f}\), is given by \[r_{f}=\begin{cases}-\kappa&\text{if the CubeSat has failed}\\ 0&\text{otherwise}\end{cases} \tag{8}\] with \(\kappa\) representing a significant constant value. Aggregating these components, the comprehensive reward of the low-level controller is expressed as \(r_{\text{low-level}}=r_{e}+r_{d}+r_{f}\). #### Training and Learning Paradigm in Low-Level Policy The architecture for each agent features two hidden layers, each with 64 neurons activated by ReLU functions. The output layer corresponds to the dimensions of the low-level action space, comprising either "_Reassigning Task_" or "_Keeping Task_", and is activated by a linear function. Further details and the formulation process can be referenced in [33]. The low-level task reassignment is simulated when a CubeSat either fails or is deemed unsuitable for task execution. The DQN algorithm is trained over 20,000 episodes, each characterized by random failure patterns and energy fluctuations. The Q-network is updated at the end of each episode, and experiences are stored in a replay buffer for off-policy learning. The learning rate is set at 0.001, and the discount factor \(\gamma\) is set to 0.95. The replay buffer size is set at \(5\times 10^{5}\), and mini-batches of 64 are employed for training. The exploration parameter \(\epsilon\) begins at 1.0 and decays to 0.01 over 50,000 steps. #### III-A3 Direct Feedback Mechanism Feedback from the low-level agent refines the high-level decision process in CubeSat task scheduling. This is achieved by quantifying deviations in task metrics: \[F=w_{13}(D_{a}-t-t_{r})+w_{14}(e_{i}-E_{e_{i}})+w_{15}N_{r} \tag{9}\] where \(N_{r}\) counts task reallocations. ## IV Results and Evaluation This section furnishes a meticulous empirical evaluation of the devised HierRL algorithm for CubeSat task scheduling. Comparative performance analyses are conducted with existing benchmarks: the random-policy task assignment algorithm [34] and the MADDPG algorithm. The experiments center on constellations of 3, 4, and 5 CubeSats, each tasked with efficiently scheduling observational tasks. The scenarios encompass varying task quantities, specifically 100, 150, and 200 tasks, to evaluate algorithm performance under different workloads. ### _Experimental Setup_ We conducted a series of numerical simulations using MATLAB 2023 to evaluate the proposed scheduling network thoroughly. Given the lack of a standard benchmark for satellite scheduling, we developed a custom scenario generator. This program selects random ground positions and determines the available observation windows for each satellite using the Systems Tool Kit (STK) software.
### _Results_ #### IV-B1 Evaluation of the Algorithm Our initial focus was on evaluating the feasibility of the encoder and estimator components. During this phase, we examined the convergence of the task's score under different configurations. These configurations included three combinations of encoder attention weights and estimators. Our observations indicated that higher attention weights led to improved task prioritization, and specific configurations exhibited enhanced learning effects. During the initial stages of training, we noticed fluctuations in reward feedback. This is attributable to the model's exploration phase, during which it tries to understand the environment and find an optimal policy. As training progressed, the Actor-Critic models steadily converged to an optimal policy. Given the vital role of the low-level controller in handling unforeseen challenges, we evaluated its efficiency in reassigning tasks. During simulations where a CubeSat faced unexpected failures or severe energy deviations, the low-level DQN exhibited impressive resilience. The Q-values exhibited stabilization over time. This underscores the effectiveness of the low-level DQN in reallocating tasks when deemed necessary. In the preliminary stages of training, the estimator's performance was not optimal, primarily due to a scarcity of data. This limitation slightly impeded the quality of our reward function. However, as training progressed and with each successive episode, the estimator's proficiency improved significantly. Fig. 2 shows the mean cumulative rewards during the training phase of the proposed algorithm, compared with MADDPG and with HierRL without the encoder and estimator. Our enhanced HierRL framework, integrated with an encoder and estimator, demonstrated rapid and efficient convergence during training. Initially, there was a minor computational overhead due to the encoder's task prioritization and the estimator's energy prediction functionalities. However, as training advanced, these components proved invaluable, steering the system towards optimal decisions faster than traditional algorithms. The proposed HierRL (with encoder and estimator) converges fastest, benefiting from the encoder's task prioritization and the estimator's energy prediction. Conversely, HierRL (without encoder and estimator) has a slower convergence rate, and MADDPG converges slowest. By the end of convergence, the proposed HierRL achieves the highest reward, surpassing its counterpart and MADDPG. ### _Comparative Analysis with Existing Algorithms_ To ascertain the superiority of our proposed HierRL methodology, we compared it with the MADDPG-based task scheduling algorithm and random scheduling, assessing metrics like task completion rate and adaptability. #### IV-C1 Average Task Success Fig. 3 highlights the average task success count achieved by the three methodologies (HierRL, random task scheduling, and MADDPG) under examination. Conclusively, HierRL trims the makespan by a minimum of 10% compared to MADDPG, and by at least 15% relative to random scheduling, under identical task and CubeSat configurations. #### IV-C2 Scalability Performance Analysis Fig. 4 displays the makespan comparison for the three algorithms under varying task and CubeSat counts. The makespan, in the context of satellite observational tasks, represents the total time required to complete all tasks. From the figure, our proposed HierRL consistently outperforms both MADDPG and random scheduling across all scenarios.
Notably, as the task count rises relative to the number of CubeSats (e.g., 3 CubeSats with 200 tasks), the efficiency of HierRL becomes even more pronounced. This can be attributed to the hierarchical structure and the safety-oriented low-level DQN, which reallocates tasks effectively during CubeSat failures and lets the high level make global decisions, dividing the responsibilities between the two levels. In scenarios with a higher CubeSat count relative to tasks (e.g., 5 CubeSats with 100 tasks), the makespan difference between HierRL and MADDPG narrows. This suggests that the advantage of the HierRL optimization diminishes as resources (CubeSats) become more abundant. Nonetheless, HierRL still maintains a lead, emphasizing its robustness and adaptability. Random scheduling, lacking any adaptability or learning, consistently lags, showcasing the importance of intelligent scheduling in CubeSat constellations. ## V Conclusion The empirical evaluation validates the proposed hierarchical reinforcement learning algorithm's effectiveness, efficiency, and robustness for CubeSat task scheduling. Enhanced by task scoring via an attention mechanism and an energy-consumption estimator, our model exhibits resilience against network failures and adapts to evolving operational parameters. Therefore, our methodology signifies a groundbreaking advancement in CubeSat swarm task scheduling and resource allocation, warranting further exploration and real-world deployment. Continuous improvement mechanisms, feedback loops, and regular model refinements ensure the system remains cutting-edge and relevant to emerging challenges.
2303.18214
Particle-In-Cell Simulations of Sunward and Anti-sunward Whistler Waves in the Solar Wind
Spacecraft observations showed that electron heat conduction in the solar wind is probably regulated by whistler waves, whose origin and efficiency in electron heat flux suppression is actively investigated. In this paper, we present Particle-In-Cell simulations of a combined whistler heat flux and temperature anisotropy instability that can operate in the solar wind. The simulations are performed in a uniform plasma and initialized with core and halo electron populations typical of the solar wind. We demonstrate that the instability produces whistler waves propagating both along (anti-sunward) and opposite (sunward) to the electron heat flux. The saturated amplitudes of both sunward and anti-sunward whistler waves are strongly correlated with their {\it initial} linear growth rates, $B_{w}/B_0\sim (\gamma/\omega_{ce})^{\nu}$, where for typical electron betas we have $0.6\lesssim \nu\lesssim 0.9$. The correlations of whistler wave amplitudes and spectral widths with plasma parameters (electron beta and temperature anisotropy) revealed in the simulations are consistent with those observed in the solar wind. The efficiency of electron heat flux suppression is positively correlated with the saturated amplitude of sunward whistler waves. The electron heat flux can be suppressed by 10--60% provided that the saturated amplitude of sunward whistler waves exceeds about 1% of background magnetic field. Other experimental applications of the presented results are discussed.
Ilya V. Kuzichev, Ivan Y. Vasko, Anton V. Artemyev, Stuart D. Bale, Forrest S. Mozer
2023-03-31T17:14:26Z
http://arxiv.org/abs/2303.18214v1
# Particle-In-Cell Simulations of Sunward and Anti-sunward Whistler Waves in the Solar Wind ###### Abstract Spacecraft observations showed that electron heat conduction in the solar wind is probably regulated by whistler waves, whose origin and efficiency in electron heat flux suppression is actively investigated. In this paper, we present Particle-In-Cell simulations of a combined whistler heat flux and temperature anisotropy instability that can operate in the solar wind. The simulations are performed in a uniform plasma and initialized with core and halo electron populations typical of the solar wind. We demonstrate that the instability produces whistler waves propagating both along (anti-sunward) and opposite (sunward) to the electron heat flux. The saturated amplitudes of both sunward and anti-sunward whistler waves are strongly correlated with their _initial_ linear growth rates, \(B_{\rm{w}}/B_{0}\sim(\gamma/\omega_{ce})^{\rm{v}}\), where for typical electron betas we have \(0.6\lesssim\nu\lesssim 0.9\). The correlations of whistler wave amplitudes and spectral widths with plasma parameters (electron beta and temperature anisotropy) revealed in the simulations are consistent with those observed in the solar wind. The efficiency of electron heat flux suppression is positively correlated with the saturated amplitude of sunward whistler waves. The electron heat flux can be suppressed by 10-60% provided that the saturated amplitude of sunward whistler waves exceeds about 1% of background magnetic field. Other experimental applications of the presented results are discussed. ## 1 Introduction The early spacecraft measurements at 0.3-5 au [1, 2, 3] and recent Parker Solar Probe (PSP) measurements at 0.1-0.3 au [4] showed that electron heat conduction in the solar wind cannot be described by the Spitzer-Harm law [5]. The reason is that solar wind electrons are only weakly-collisional; the collisional mean free path typically exceeds the inverse gradient scale length of electron temperature in the heliosphere [6, 7, 4]. In accordance with previous observations at 1 au [8, 9, 10], PSP and Helios measurements showed that electron heat flux is bounded by a threshold dependent on local electron beta [11, 4, 12]. The beta-dependent threshold indicates that wave-particle interactions are probably regulating electron heat conduction in the solar wind, and whistler waves were suggested to be the most likely wave activity involved in the regulation process [13, 14, 3]. The modern spacecraft measurements have substantially advanced the understanding of electron heat conduction in the solar wind, but still have not established the heat flux regulation mechanism. Whistler waves involved in the electron heat flux regulation can be of different origin. First, the electron heat flux can be regulated by whistler waves naturally produced by turbulence cascade [15, 16, 17, 18]. While early observations of broadband magnetic field fluctuations between ion and electron kinetic scales were indeed interpreted in terms of whistler waves [19, 20], the modern spacecraft measurements showed that magnetic field turbulence at these scales is dominated by kinetic Alfven waves [21]. The presence of broadband whistler mode fluctuations cannot be entirely ruled out though [22] and the contribution of magnetic field turbulence to the electron heat flux regulation remains to be quantified [17]. 
On the other hand, whistler waves involved in the heat flux regulation can be produced by various electron-driven instabilities [e.g., 23, 14, 24]. The velocity distribution function of electrons in a pristine solar wind consists of a dense thermal core population contributing about 90% of the total electron density and tenuous superthermal halo and strahl populations carrying most of the electron heat flux [e.g., 25, 26, 27, 28]. The core and halo populations are relatively isotropic and can be described in the plasma rest frame by sunward-drifting Maxwell and anti-sunward drifting \(\kappa-\)distributions, respectively. In contrast, the strahl is a highly anisotropic population collimated around the local magnetic field and streaming anti-sunward. It was recently suggested that oblique whistler waves driven by the strahl can potentially regulate the electron heat flux [29, 30]. Numerical simulations showed this instability can indeed suppress the electron heat flux by pitch-angle scattering the strahl and converting it into a more or less isotropic halo [31, 32, 33]. The instability of oblique whistler waves could, in principle, explain the observed radial evolution of halo and strahl densities [25, 26] as well as the observed beta-dependent threshold on the electron heat flux [4, 12]. However, there are currently several indications that this instability _does not_ substantially regulate electron heat conduction in the solar wind. First, PSP and Helios measurements at 0.1-1 au showed the strahl parameters are statistically well below the instability threshold [34]. Second, PSP measurements at 0.1-0.5 au revealed the radial evolution of halo and strahl densities to be inconsistent with the halo being produced via pitch-angle scattering of the strahl [35]. Consistent with that, the recent analysis showed that halo electrons propagating sunward (almost a half of the halo population) originate in the outer heliosphere rather than evolve from the strahl [36]. Third, whistler waves observed in a pristine solar wind at 0.1-1 au usually propagate within a few tens of degrees of the local magnetic field [37, 38, 39, 10, 40, 41, 12]. Oblique whistler waves are indeed present in the solar wind, but they typically occur around stream interaction regions and coronal mass ejections [42, 43]; thus, they are unlikely to be substantially involved in the regulation of electron heat conduction in the solar wind. The fact that whistler waves in the solar wind are typically quasi-parallel stimulates the analysis of their origin and effects. The early theoretical analysis by [23, 44] showed that whistler waves in the solar wind can be produced by the whistler heat flux instability (WHFI). This instability operates when core and halo populations, isotropic or parallel-anisotropic in temperature, drift relative to each other parallel to the local magnetic field; there is a heat flux parallel to the halo drift (but the net current in the plasma frame is zero) and the fastest-growing whistler waves propagate parallel to the heat flux. A strahl population that is drifting anti-sunward does not affect the WHFI, because the unstable whistler waves are resonant only with a fraction of sunward-propagating halo electrons. The recent observations showed that the WHFI indeed operates in the solar wind and produces whistler waves propagating anti-sunward [45].
The recent Particle-In-Cell simulations showed that the WHFI can produce whistler waves with properties consistent with solar wind observations, but _cannot_ regulate the electron heat flux [46], contrary to previous speculations [14, 8, 47]. The fraction of whistler waves produced in the solar wind via the WHFI is still not known [10, 11]. There are indications that whistler waves can also be produced by the instability associated with a perpendicular temperature anisotropy of the halo population [48, 11]. These indications consist of statistically significant observations of the halo population with perpendicular temperature anisotropy [49, 11, 50, 51, 28] and the preferential occurrence of whistler waves in association with an isotropic or perpendicular-anisotropic halo [45, 10, 11]. The recent reports of sunward and anti-sunward propagating whistler waves in the near-Sun solar wind are also of relevance [52, 53]. In this paper, we present Particle-In-Cell simulations of a combined whistler heat flux and temperature anisotropy instability that is potentially operating in the solar wind and capable of producing both sunward and anti-sunward whistler waves. We determine saturation amplitudes of the whistler waves along with their dependence on plasma parameters and demonstrate that these amplitudes can be estimated using _initial_ linear growth rates of the whistler waves. The efficiency of this instability in electron heat flux regulation and other experimental applications of the presented results are discussed. ## 2 Linear instability and simulation setup We use the Particle-in-Cell TRISTAN-MP code [54] and perform 1D3V simulations restricted to whistler waves propagating parallel and anti-parallel to the background magnetic field. Ions are assumed to be an immobile neutralizing background. Electrons are represented by core and halo populations, whose initial velocity distribution functions (VDF) in the plasma frame are described, in the non-relativistic limit relevant to our simulations, by Maxwell distributions \[f_{\alpha}(\mathbf{v})=\mathcal{N}_{\alpha}\exp\left[-\frac{m_{e}(v_{||}-u_{\alpha})^{2}}{2\,T_{\alpha}}-\frac{m_{e}v_{\perp}^{2}}{2\,A_{\alpha}\,T_{\alpha}}\right], \tag{1}\] where \(\alpha=c,h\) correspond to core and halo populations, \(v_{||}\) and \(v_{\perp}\) are velocities parallel and perpendicular to the background magnetic field, \(\mathcal{N}_{\alpha}=n_{\alpha}A_{\alpha}^{-1}\left(m_{e}/2\pi T_{\alpha}\right)^{3/2}\) is the normalization constant, and \(n_{\alpha}\), \(u_{\alpha}\), \(T_{\alpha}\) and \(A_{\alpha}\) are respectively densities, drift velocities, parallel temperatures and temperature anisotropies. The electron current is assumed to be zero, \(n_{c}u_{c}+n_{h}u_{h}=0\). The electron heat flux is parallel to the background magnetic field and carried predominantly by the halo population, \(q_{e}\approx-n_{c}u_{c}T_{h}(3/2+A_{h})\), because the halo is several times hotter than the core population, \(T_{h}/T_{c}\approx 3\)-7 [e.g., 25, 28]. The combination of core and halo populations describes the electron VDF relatively well beyond 0.2 au, where the halo density is several times larger than the strahl density [25, 28, 35]. Although the halo population is better described by a \(\kappa-\)distribution [e.g., 25], we consider a Maxwellian halo to reduce the number of free parameters. The use of a \(\kappa-\)distribution would not affect the critical results of this study.
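The initial condition of Eq. (1) can be sketched numerically as follows, in units where \(m_{e}=1\) and velocities are measured in core thermal speeds. The parameter values follow the text (\(n_{c}/n_{0}=0.85\), \(T_{h}/T_{c}=6\), \(A_{c}=1\)); the core drift value is an arbitrary placeholder, and the halo drift is set by the zero-current condition.

```python
import numpy as np

# Sketch of the core/halo bi-Maxwellian VDFs of Eq. (1) in normalized units
# (m_e = 1, velocities in core thermal speeds, temperatures in units of T_c).

def bi_maxwellian(v_par, v_perp, n, u, T, A):
    """Drifting bi-Maxwellian with parallel temperature T and anisotropy A."""
    norm = n / (A * (2.0 * np.pi * T) ** 1.5)          # N_alpha of Eq. (1)
    return norm * np.exp(-(v_par - u) ** 2 / (2 * T) - v_perp ** 2 / (2 * A * T))

n_c, n_h = 0.85, 0.15          # n_c/n_0 = 0.85 as in the text
T_c, T_h = 1.0, 6.0            # T_h/T_c = 6
A_c, A_h = 1.0, 1.3            # isotropic core; halo anisotropy as in Section 3
u_c = -0.05                    # placeholder core drift (sets the heat flux)
u_h = -n_c * u_c / n_h         # zero net current: n_c u_c + n_h u_h = 0

v_par = np.linspace(-10.0, 10.0, 401)
f_total = (bi_maxwellian(v_par, 0.0, n_c, u_c, T_c, A_c)
           + bi_maxwellian(v_par, 0.0, n_h, u_h, T_h, A_h))
```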
The linear analysis of a combined whistler heat flux and temperature anisotropy instability shows that the growth rate normalized to the electron cyclotron frequency \(\omega_{ce}\) depends on the wavenumber normalized to the electron inertial length \(c/\omega_{pe}\) and the following parameters [48] * \(\beta_{c}=8\pi n_{c}T_{c}/B_{0}^{2}\): core electron beta. * \(n_{c}/n_{0}\): core density relative to total electron density, \(n_{0}=n_{c}+n_{h}\). * \(u_{c}/v_{A}\): core drift velocity in units of Alfven speed, \(v_{A}=B_{0}(4\pi n_{0}m_{p})^{-1/2}\). * \(T_{h}/T_{c}\): ratio of halo and core parallel temperatures. * \(A_{c}\) and \(A_{h}\): core and halo temperature anisotropies. Note that linear stability as well as nonlinear evolution also depend on the ratio between electron plasma and cyclotron frequencies, but this dependence is negligible once \(\omega_{pe}/\omega_{ce}\gg 1\)[46, 48]. In this paper, we keep \(n_{c}/n_{0}=0.85\), \(T_{h}/T_{c}=6\), \(A_{c}=1\), and present numerical simulations at various combinations of \(\beta_{c}\), \(u_{c}/v_{A}\) and \(A_{h}\). The typical values of these parameters in the solar wind are \(\beta_{c}=0.1\)-10, \(|u_{c}|/v_{A}=1\)-7 and \(A_{h}=1.1\)-1.5 [49, 9, 50, 51, 11, 28]. The values of these parameters used in three sets of simulations (25 runs per set) are presented in Table 1. We performed the simulations at \(\omega_{pe}/\omega_{ce}\sim 10\), that is, about ten times smaller than in the realistic solar wind. More precisely, in all simulations, we assumed a core electron temperature \(T_{c}\) of 2 keV, and computed \(\omega_{pe}/\omega_{ce}\) using the following identity, \(\omega_{pe}/\omega_{ce}\equiv\left(\beta_{c}n_{0}/n_{c}\right)^{1/2}\left(m_{e}c^{2}/2T_{c}\right)^{1/2}\); for \(\beta_{c}=0.3\)-3 we have \(\omega_{pe}/\omega_{ce}\approx 7\)-20. In all simulation runs, the length of the simulation box was \(L\approx 105\ c/\omega_{ce}\) or, for \(\beta_{c}=1\), about \(1300\ c/\omega_{pe}\). The temporal and spatial integration steps were 0.09 \(\omega_{pe}^{-1}\) and 0.2 \(c/\omega_{pe}\), both adequate to resolve the expected whistler waves. The number of particles per cell for each population was \(4\cdot 10^{4}\). We preface the presentation of simulation results with a linear stability analysis. Figure 1 presents results of linear stability analysis of whistler waves at fixed values of core electron beta and halo temperature anisotropy (\(\beta_{c}=1\) and \(A_{h}=1.3\)), but various values of electron heat flux determined by core drift velocity \(u_{c}/v_{A}\). Panels (a) and (b) present the dispersion curves and growth rates of whistler waves propagating parallel and anti-parallel to the electron heat flux. When the electron heat flux is absent (\(u_{c}/v_{A}=0\)), _identical_ parallel and anti-parallel whistler waves are unstable due to the halo temperature anisotropy. The presence of electron heat flux breaks the symmetry, resulting in larger growth rates of whistler waves propagating parallel to the electron heat flux. Panels (c)-(e) present the maximum growth rates along with corresponding frequencies and wave numbers of parallel and anti-parallel whistler waves unstable at various values of \(u_{c}/v_{A}\). In the considered range of \(u_{c}/v_{A}\) values, the parameters of the fastest-growing parallel whistler waves barely vary (\(\gamma_{+}/\omega_{ce}\approx 0.01\), \(\omega_{+}/\omega_{ce}\approx 0.1\) and \(k_{+}c/\omega_{pe}\approx 0.34\)).
In contrast, the maximum growth rate of anti-parallel whistler waves monotonically decreases from \(\gamma_{-}/\omega_{ce}\approx 0.01\) to \(10^{-3}\); the frequency and wave number monotonically decrease by a factor of a few. \begin{table} \begin{tabular}{|c|c|c|c|} \hline run sets & \(\beta_{c}\) & \(-u_{c}/v_{A}\) & \(A_{h}\) \\ \hline I & 0.3 & 1.5:1.5:7.5 & 1.1:0.1:1.5 \\ \hline II & 1 & 1.5:1.5:7.5 & 1.1:0.1:1.5 \\ \hline III & 3 & 1.5:1.5:7.5 & 1.1:0.1:1.5 \\ \hline \end{tabular} \end{table} Table 1: The electron parameters for simulation sets I–III: each set consists of 25 simulation runs performed at a fixed value of core electron beta \(\beta_{c}\) and 25 pairs of core drift velocity \(u_{c}/v_{A}\) and halo temperature anisotropy \(A_{h}\); the values of \(A_{h}\) are from 1.1 to 1.5 with a step of 0.1, while the values of \(u_{c}/v_{A}\) are from \(-1.5\) to \(-7.5\) with a step of 1.5. In all simulation runs the relative core electron density was \(n_{c}/n_{0}=0.85\), the ratio of halo and core parallel temperatures was \(T_{h}/T_{c}=6\), and the core electron population was isotropic, \(A_{c}=1\). ## 3 Results of simulations at \(\beta_{c}=1\) and \(A_{h}=1.3\) Figure 2 presents results of a simulation run performed at \(\beta_{c}=1\), \(A_{h}=1.3\) and \(u_{c}/v_{A}=-3\). We consider the dynamics of the magnetic field \(\delta\mathbf{B}(x,t)=\delta B_{y}(x,t)\hat{y}+\delta B_{z}(x,t)\hat{z}\) perpendicular to the background magnetic field \(B_{0}\hat{x}\). Panel (a) presents the magnetic field magnitude \(\delta B(x,t)/B_{0}\) and demonstrates the growth of magnetic field fluctuations propagating both parallel and anti-parallel to the electron heat flux. Using the Fourier transform, \(\delta\mathbf{B}(x,t)=\int\delta\mathbf{B}_{k\omega}\ e^{i(kx-\omega t)}\ dkd\omega\), we decompose magnetic field fluctuations into those propagating parallel and anti-parallel to the electron heat flux, \(\delta\mathbf{B}(x,t)=\delta\mathbf{B}_{+}(x,t)+\delta\mathbf{B}_{-}(x,t)\), where \(\delta\mathbf{B}_{+}(x,t)=\int_{\omega/k>0}\delta\mathbf{B}_{k\omega}\ e^{i(kx-\omega t)}\ dkd\omega\) and \(\delta\mathbf{B}_{-}(x,t)=\int_{\omega/k<0}\delta\mathbf{B}_{k\omega}\ e^{i(kx-\omega t)}\ dkd\omega\). Both \(\delta\mathbf{B}_{+}\) and \(\delta\mathbf{B}_{-}\) have right-hand polarization (not shown here) and correspond to parallel and anti-parallel whistler waves expected based on linear stability analysis (Figure 1). Panels (b) and (c) show that over the computation time parallel and anti-parallel whistler waves reach peak amplitudes of about \(0.1B_{0}\) and \(0.05B_{0}\), respectively. Figure 3 presents averaged amplitudes and growth rates of the parallel and anti-parallel whistler waves. Panel (a) presents the temporal evolution of magnetic field amplitudes averaged over the simulation box, \(\langle\delta B_{\pm}\rangle=\left[L^{-1}\int_{0}^{L}|\delta\mathbf{B}_{\pm}|^{2}\ dx\right]^{1/2}\), and shows that within the computation time parallel and anti-parallel whistler waves saturate at amplitudes \(B_{w}^{+}/B_{0}\approx 0.04\) and \(B_{w}^{-}/B_{0}\approx 0.02\). Panel (b) presents the temporal evolution of whistler wave growth rates computed as \(d/dt\left[\ln\langle\delta B_{\pm}\rangle\right]\). The initial growth rates of parallel and anti-parallel whistler waves are respectively around \(0.01\) and \(0.003\ \omega_{ce}\), both consistent within a few tens of percent with linear stability results (Figure 1c).
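A minimal sketch of the diagnostics just described is given below: the field is split by the sign of \(\omega/k\), box-averaged, and differentiated to estimate the growth rate. It assumes the field is stored as a complex array \(\delta B_{y}+i\delta B_{z}\) on a uniform \((t,x)\) grid; the sign bookkeeping follows numpy's FFT phase convention rather than the code actually used for the simulations.

```python
import numpy as np

# Sketch of the decomposition into parallel (omega/k > 0) and anti-parallel
# (omega/k < 0) propagating fluctuations, the box-averaged amplitude
# <delta B> = [mean_x |delta B|^2]^{1/2}, and the growth rate d/dt ln<delta B>.

def split_by_propagation(dB, dx, dt):
    """dB: complex array (delta B_y + 1j*delta B_z) with shape (nt, nx)."""
    nt, nx = dB.shape
    spec = np.fft.fft2(dB)
    f_t = np.fft.fftfreq(nt, d=dt)[:, None]
    f_x = np.fft.fftfreq(nx, d=dx)[None, :]
    # With numpy's exp(-2*pi*i*f*n) convention, a mode exp[i(k x - omega t)]
    # appears at f_t = -omega/(2*pi) and f_x = +k/(2*pi), so omega/k > 0
    # corresponds to f_t * f_x < 0; modes with f_t = 0 or f_x = 0 are dropped.
    dB_plus = np.fft.ifft2(np.where(f_t * f_x < 0, spec, 0.0))
    dB_minus = np.fft.ifft2(np.where(f_t * f_x > 0, spec, 0.0))
    return dB_plus, dB_minus

def box_averaged_amplitude(dB):
    """<delta B>(t) on a uniform grid, approximating [L^{-1} int |dB|^2 dx]^{1/2}."""
    return np.sqrt(np.mean(np.abs(dB) ** 2, axis=1))

def growth_rate(amplitude, dt):
    """Finite-difference estimate of gamma(t) = d/dt ln <delta B>."""
    return np.gradient(np.log(amplitude), dt)
```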
Figure 4 presents results of simulation runs performed at \(\beta_{c}=1\), \(A_{h}=1.3\) and various values of \(u_{c}/v_{A}\) indicated in panels (c)-(e) in Figure 1. In all these simulation runs, parallel and anti-parallel whistler waves saturated within the computation time and we computed the averaged amplitudes \(B_{w}^{+}\) and \(B_{w}^{-}\) reached by the end of each simulation run. Panel (a) shows that the saturated amplitude \(B_{w}^{+}\) of parallel whistler waves is around \(0.04B_{0}\) and varies by less than several tens of percent over the considered range of \(u_{c}/v_{A}\) values. In contrast, the saturated amplitude \(B_{w}^{-}\) of anti-parallel whistler waves monotonically decreases from \(0.025\) to \(0.005B_{0}\). Interestingly, according to panel (a) the dependencies of the saturated amplitudes \(B_{w}^{+}\) and \(B_{w}^{-}\) on \(u_{c}/v_{A}\) are almost identical to those of the _initial_ linear growth rates \(\gamma_{+}\) and \(\gamma_{-}\). Panel (b) demonstrates that the ratio \(B_{w}^{+}/B_{w}^{-}\) is closely correlated with \(\gamma_{+}/\gamma_{-}\) and the best power-law fit is \(B_{w}^{+}/B_{w}^{-}\approx(\gamma_{+}/\gamma_{-})^{0.73}\). This relation naturally predicts lower saturation amplitudes of anti-parallel whistler waves compared to parallel whistler waves, because the former always have lower _initial_ linear growth rates in our model (Figure 1). Figure 5 presents the temporal evolution of the electron heat flux in the considered simulation runs. We demonstrate the electron heat flux variation \(\delta q_{e}(t)=\left[q_{e}(t)-q_{e}(0)\right]/q_{e}(0)\) in percent, where \(q_{e}(t)\) is the electron heat flux averaged over the simulation box and \(q_{e}(0)\) is its initial value. The electron heat flux suppression is most efficient, \(\delta q_{e}\approx-10\%\), in the simulation run with \(u_{c}/v_{A}=-1.5\), while the efficiency drops to about \(1\%\) at \(u_{c}/v_{A}=-7.5\). There is a natural positive correlation between the efficiency of electron heat flux suppression and the amplitude of anti-parallel whistler waves (shown further below), because \(B_{w}^{-}/B_{0}\) is larger for smaller values of the core electron drift velocity \(|u_{c}|/v_{A}\) (Figure 4a). Note that an electron heat flux suppression of \(10\%\) is relatively large compared to the few-percent variation observed in the simulations of a pure whistler heat flux instability [46]. ## 4 Results of all simulations Figure 6 presents averaged amplitudes \(B_{w}^{+}\) and \(B_{w}^{-}\) of parallel and anti-parallel whistler waves for all the 75 simulation runs (Table 1). Note that we demonstrate a whistler wave amplitude only if the initial linear growth rate of the whistler wave is larger than \(10^{-3}\ \omega_{ce}\); otherwise the computation time of \(5000\ \omega_{ce}^{-1}\) is insufficient for whistler waves to saturate. For this reason the number of points corresponding to parallel and anti-parallel whistler waves in panels (a)-(c) can be different and also less than 25. Panels (a)-(c) show that at a fixed core electron beta \(\beta_{c}\) the saturated amplitudes are larger for larger halo temperature anisotropy \(A_{h}\) and increase by a factor of a few between \(A_{h}=1.1\) and \(1.5\). Also, both parallel and anti-parallel whistler waves tend to saturate at larger amplitudes for larger core electron betas; for identical anisotropies the saturated amplitudes increase by a factor of a few between \(\beta_{c}=0.3\) and \(3\).
The observed dependencies of the saturated amplitudes on the halo temperature anisotropy and other plasma parameters can actually be inferred from a more fundamental relation to be presented below. Figure 7 shows that at every fixed core electron beta the saturated amplitudes of parallel and anti-parallel whistler waves are well correlated with their _initial_ linear growth rates. The observed trends can be fitted to power-law functions \[B_{w}^{\pm}/B_{0}=C_{\pm}(\gamma_{\pm}/\omega_{ce})^{\nu_{\pm}}, \tag{2}\] where the best-fit parameters \(C_{\pm}\) and \(\nu_{\pm}\) are indicated in the panels and presented in Table 2. The power-law indexes and multipliers corresponding to parallel and anti-parallel whistler waves are different and vary with core electron beta; in the considered range of core electron beta, the power-law indexes vary in a relatively narrow range, \(0.6\lesssim\nu_{\pm}\lesssim 0.9\). The fundamental relation between the saturated amplitude and the initial linear growth rate naturally predicts larger amplitudes of parallel and anti-parallel whistler waves for larger anisotropies and core electron betas, because the increase of these parameters results in larger linear growth rates [48]. This relation also indicates that anti-parallel whistler waves are expected to saturate at lower amplitudes than parallel whistler waves, because the presence of electron heat flux results in smaller growth rates of the former (Figure 1). Note that Eq. (2) naturally explains the correlations reported in the previous section in Figure 4. Figure 8 demonstrates the efficiency of electron heat flux suppression quantified by the relative heat flux variation \(\delta q_{e}\) reached at the saturation stage. We present \(\delta q_{e}\) only if both parallel and anti-parallel whistler waves saturated within the computation time. Panel (a) shows that electron heat flux suppression is within about 10% at low temperature anisotropies (\(A_{h}\lesssim 1.1\)), but can be as large as 60% at \(A_{h}=1.5\). At a fixed halo temperature anisotropy, the electron heat flux suppression is more efficient at larger core electron betas. The efficiency of electron heat flux suppression is expected to correlate with the saturated whistler wave amplitudes, since the latter are positively correlated with both electron beta and temperature anisotropy (Figure 5). Panel (b) shows that electron heat flux suppression \(\delta q_{e}\) is positively correlated with the saturated amplitude of anti-parallel whistler waves. The heat flux suppression is within 10% at \(B_{w}^{-}/B_{0}\lesssim 0.01\), but can be as large as 10-60% at \(B_{w}^{-}/B_{0}\gtrsim 0.01\). The electron heat flux suppression \(\delta q_{e}\) is also correlated with \(B_{w}^{+}/B_{0}\) (not shown here), but this correlation does not have direct experimental applications (see Section 5). We address spectral properties of the whistler waves by computing power spectral densities of parallel and anti-parallel whistler waves over the saturation stage, \(\text{PSD}_{\omega}^{\pm}=\langle\left|\int\delta\mathbf{B}_{\pm}(x,t)e^{i\omega t}dt\right|^{2}\rangle\), where the integration is over a time period where the instability has saturated, while \(\langle\cdot\rangle\) stands for spatial averaging over the simulation box. Gaussian fitting was used to determine the central wave frequency and the spectral width of the power spectral densities \(\text{PSD}_{\omega}^{\pm}\).
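The spectral characterization just described can be sketched as a simple Gaussian fit to a power spectral density; the synthetic spectrum below is a placeholder standing in for the simulation output, and scipy's least-squares fitter is used instead of whatever routine was actually employed.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the Gaussian fit used to extract the central frequency and the
# spectral width of a saturated-stage power spectral density (placeholder data).

def gaussian(w, amp, w0, dw):
    return amp * np.exp(-0.5 * ((w - w0) / dw) ** 2)

def central_frequency_and_width(omega, psd):
    p0 = [psd.max(), omega[np.argmax(psd)], 0.1 * (omega[-1] - omega[0])]
    (amp, w0, dw), _ = curve_fit(gaussian, omega, psd, p0=p0)
    return w0, abs(dw)

omega = np.linspace(0.0, 0.3, 301)         # frequency in units of omega_ce
psd = gaussian(omega, 1.0, 0.12, 0.04)     # synthetic spectrum (assumed shape)
w0, dw = central_frequency_and_width(omega, psd)
print(f"omega_0/omega_ce ~ {w0:.3f}, relative width Delta_omega/omega ~ {dw / w0:.2f}")
```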
Figure 9a demonstrates that both parallel and anti-parallel whistler waves have comparable relative spectral widths, \(\Delta\omega_{\pm}/\omega_{\pm}\sim 0.3\)-0.8. It is noteworthy that the relative spectral widths tend to be larger for larger core electron betas and are positively correlated with the initial linear growth rate of the whistler waves. The frequency of the saturated whistler waves is consistent within a few tens of percent with the frequency of the initially fastest-growing whistler waves (not shown here). ## 5 Discussion We presented the first Particle-In-Cell simulations of a combined whistler heat flux and temperature anisotropy instability, which generalize our previous simulations of a pure whistler heat flux instability with an isotropic halo population [46]. In contrast to the pure whistler heat flux instability capable of producing only whistler waves propagating parallel to the electron heat flux (anti-sunward), the combined whistler heat flux and temperature anisotropy instability produces both whistler waves propagating parallel (anti-sunward) and anti-parallel (sunward) to the electron heat flux. We showed that the saturated amplitudes of the whistler waves are correlated with their initial linear growth rates, \(B_{w}/B_{0}\approx C(\gamma/\omega_{ce})^{\nu}\), where the parameters \(C\) and \(\nu\) are slightly different for sunward and anti-sunward whistler waves. For typical solar wind conditions considered in our simulations the power-law index varies in a relatively narrow range, \(0.6\lesssim\nu\lesssim 0.9\) (Table 2). A similar scaling relation, though revealed using a few simulation runs, was reported for anti-sunward whistler waves produced by the pure whistler heat flux instability [46]. Whistler waves in our simulations saturated at \(B_{w}/B_{0}\sim 0.01\), because we required sufficiently high initial linear growth rates, \(\gamma/\omega_{ce}\gtrsim 10^{-3}\), to save computational resources (Figure 7). \begin{table} \begin{tabular}{|c|c|c||c|c|} \hline \(\beta_{c}\) & \(C_{+}\) & \(\nu_{+}\) & \(C_{-}\) & \(\nu_{-}\) \\ \hline 0.3 & 0.44 & 0.66 & 1.76 & 0.91 \\ \hline 1 & 1.2 & 0.82 & 0.53 & 0.66 \\ \hline 3 & 1.84 & 0.9 & 0.39 & 0.58 \\ \hline \end{tabular} \end{table} Table 2: The best-fit parameters of the power-law fit given by Eq. (2) between the saturated amplitudes of sunward and anti-sunward whistler waves and their initial linear growth rates. The best power-law fits are demonstrated in Figure 7. Whistler waves with such high amplitudes rarely occur in the solar wind; the observed amplitudes are typically around \(10^{-3}B_{0}\)[10, 55, 41] and, hence, correspond to initial growth rates of about \(10^{-4}\omega_{ce}\). Nevertheless, we believe the revealed scaling relation is valid in a wide range of growth rates and its predictions can be compared with solar wind observations. Since linear growth rates of both sunward and anti-sunward whistler waves are larger for larger electron beta and temperature anisotropy [48], we expect whistler wave amplitudes to be positively correlated with these parameters. The corresponding positive correlations were indeed observed in the solar wind [10]. We also showed that the whistler waves have relatively large spectral widths positively correlated with electron beta (Figure 9a). Spacecraft observations revealed similar values of the spectral width and its positive correlation with the electron beta [10].
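As a quick consistency check of the statement above that observed solar wind amplitudes of about \(10^{-3}B_{0}\) correspond to initial growth rates of about \(10^{-4}\omega_{ce}\), the snippet below inverts Eq. (2) using the \(\beta_{c}=1\), parallel-propagation best-fit values from Table 2; this is only an order-of-magnitude estimate.

```python
# Invert Eq. (2): gamma/omega_ce = (B_w / (C * B_0))**(1/nu), using the
# beta_c = 1 best-fit values for parallel (anti-sunward) waves from Table 2.
C_plus, nu_plus = 1.2, 0.82
bw_over_b0 = 1e-3                                 # typical observed whistler amplitude
gamma_over_wce = (bw_over_b0 / C_plus) ** (1.0 / nu_plus)
print(f"gamma/omega_ce ~ {gamma_over_wce:.1e}")   # ~2e-4, i.e. of order 1e-4
```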
The revealed scaling relation (2) _does not_ correspond to whistler wave saturation via nonlinear trapping of cyclotron-resonant electrons, which would occur once the bounce frequency of trapped electrons, \(\omega_{T}\approx\left(kv_{\perp}eB_{w}/m_{e}c\right)^{1/2}\), is comparable to the initial linear growth rate \(\gamma\), where \(v_{\perp}\sim(2T_{h}/m_{e})^{1/2}\) is a typical perpendicular speed of resonant electrons [56, 57, 58]. The saturation via nonlinear trapping would result in \(B_{w}/B_{0}\propto(\gamma/\omega_{ce})^{2}\), which is inconsistent with the observed scaling relation. The nonlinear trapping does not occur when whistler waves have a sufficiently large spectral width or low amplitude such that the resonant velocity \(v_{||}=(\omega-\omega_{ce})/k\) is distributed in a range much wider than the nonlinear resonance width \(\omega_{T}/k\)[59, 57]. In this case the nonlinear evolution and saturation of whistler waves can be described within the quasi-linear theory. Thus, the quasi-linear description applies provided that \[\frac{\partial}{\partial\omega}\left(\frac{\omega-\omega_{ce}}{k}\right)\Delta\omega\gg\frac{\omega_{T}}{k},\] which can be rewritten as follows \[\frac{\Delta\omega}{\omega}\gg\left(\frac{B_{w}}{B_{0}}\right)^{1/2}\left(\frac{\beta\ \omega}{\omega_{ce}-\omega}\right)^{1/4}, \tag{3}\] where \(\beta=\beta_{h}n_{0}/n_{h}\) and \(\beta_{h}=8\pi n_{h}T_{h}/B_{0}^{2}\) is the halo electron beta. We computed the ratio between the left- and right-hand sides of Eq. (3) for sunward and anti-sunward whistler waves observed in the simulations. Figure 9b shows that both sunward and anti-sunward whistler waves have sufficiently large spectral widths or low amplitudes for the quasi-linear description to be applicable at initial growth rates realistic of the solar wind. The same statement was shown to be valid for whistler waves actually observed in the solar wind [10]. Note that the quasi-linear description may fail at growth rates exceeding \(0.01\omega_{ce}\), which are not realistic of solar wind plasma according to typically observed amplitudes of \(10^{-3}B_{0}\). The derivation of the scaling relation (2) using quasi-linear computations will be presented in a separate paper, where its validity in the case of both Maxwell and \(\kappa-\)distributions of the halo population will be demonstrated. The combined whistler heat flux and temperature anisotropy instability is likely operating in the solar wind. Among strong indications for its operation are statistically significant observations of the halo population with perpendicular temperature anisotropy [49, 11, 28] and the preferential occurrence of whistler waves in association with an isotropic or perpendicular-anisotropic halo [45, 10, 11]. The operation of this instability may seem to be in conflict with recent reports of predominantly anti-sunward whistler waves in the solar wind [41]. In fact, the small and still uncertain occurrence of sunward whistler waves can have several causes. First, whistler waves in the solar wind are indeed produced by other instabilities, including a whistler heat flux instability with isotropic or parallel-anisotropic halo electrons [45] and a somewhat similar instability associated with a deficit of sunward electrons [40]; both these instabilities produce only anti-sunward whistler waves.
Second, even in the presence of a perpendicular halo temperature anisotropy, the electron heat flux breaks the symmetry between sunward and anti-sunward whistler waves, resulting in smaller growth rates and, hence, smaller saturated amplitudes of the former; therefore, the presence of sunward whistler waves is more likely to be obscured in magnetic field spectra by solar wind turbulence. Our previous simulations showed that anti-sunward whistler waves produced by the pure whistler heat flux instability are not efficient in electron heat flux suppression [46]. In contrast, the combined whistler heat flux and temperature anisotropy instability can be more efficient in electron heat flux suppression, especially at sufficiently large core electron beta and halo temperature anisotropy; at \(\beta_{c}\gtrsim 3\) and \(A_{h}\gtrsim 1.3\) the electron heat flux can be suppressed by up to 30-60% (Figure 8a). Note that more efficient suppression of the electron heat flux at higher electron betas is consistent with spacecraft observations [e.g., 8, 10, 4], while the effect of the halo anisotropy has not been addressed experimentally yet. Importantly, the efficiency of electron heat flux suppression is positively correlated with the amplitude \(B_{w}/B_{0}\) of sunward whistler waves and the electron heat flux can be suppressed by 10-60% at \(B_{w}/B_{0}\gtrsim 0.01\) (Figure 8b). This correlation allows the observed amplitude of sunward whistler waves to serve as an indicator of the efficiency of electron heat flux suppression. Since whistler waves in the solar wind have typical amplitudes below a few percent of the background magnetic field (a fraction of them is sunward), we expect the typical efficiency of electron heat flux suppression to be within about 10%. The recent reports of sunward whistler waves with amplitudes of \(0.1B_{0}\)[52, 53] indicate, however, that the electron heat flux can occasionally be suppressed by more than 60%. Note that the amplitude of anti-sunward whistler waves cannot similarly indicate the efficiency of electron heat flux suppression, since anti-sunward whistler waves are also produced by other instabilities that proved to be inefficient in electron heat flux suppression [45, 46]. In conclusion, the combined whistler heat flux and temperature anisotropy instability is very likely operating in the solar wind and capable of producing both sunward and anti-sunward whistler waves. The presented Particle-In-Cell simulations revealed correlations between whistler wave properties (amplitude and spectral width) and various plasma parameters, which are consistent with previous solar wind observations. This instability can be efficient in electron heat flux suppression and the amplitude of sunward whistler waves can serve as an indicator of the efficiency. We expect future spacecraft measurements to reveal the occurrence and amplitude of sunward whistler waves and allow establishing the contribution of this instability to the electron heat flux suppression in the solar wind. ## Acknowledgments The work of I.K., I.V. and A.A. was supported by NASA grants 80NSSC21K0581 and 80NSSC23K0100. We would like to acknowledge high-performance computing support from Cheyenne (doi:10.5065/D6RX99HX) provided by NCAR's Computational and Information Systems Laboratory, sponsored by NSF grant No. 1502923. I.V. thanks the International Space Science Institute, Bern, Switzerland, for supporting the working group on "Heliospheric Energy Budget: From Kinetic Scales to Global Solar Wind Dynamics".
Figure 1: The results of linear stability analysis of a combined whistler heat flux and temperature anisotropy instability at fixed core electron beta and halo temperature anisotropy (\(\beta_{c}=1\) and \(A_{h}=1.3\)), but various values of the electron heat flux set by core drift velocity \(u_{c}/v_{A}\). Panels (a) and (b) present dispersion curves (\(\omega/\omega_{ce}\) vs. \(kc/\omega_{pe}\)) and growth rates (\(\gamma/\omega_{ce}\) vs. \(kc/\omega_{pe}\)) of whistler waves propagating parallel (\(k>0\)) and anti-parallel (\(k<0\)) to the electron heat flux, where \(\omega_{ce}\) and \(\omega_{pe}\) are respectively electron cyclotron and plasma frequencies. Panels (c)–(e) present the growth rate, frequency and wave number of the fastest-growing parallel and anti-parallel whistler waves at various values of core drift velocity \(u_{c}/v_{A}\). The green bars in panels (c)–(e) indicate \(u_{c}/v_{A}\) values used in simulation runs presented in Section 3. Figure 2: The results of a simulation run performed at \(\beta_{c}=1\), \(A_{h}=1.3\) and \(u_{c}/v_{A}=-3\). Panel (a) presents the magnitude of magnetic field \(\delta\mathbf{B}(x,t)=\delta B_{y}(x,t)\hat{y}+\delta B_{z}(x,t)\hat{z}\) perpendicular to background magnetic field \(B_{0}\hat{x}\). Panels (b) and (c) demonstrate the magnitude of magnetic fields \(\delta\mathbf{B}_{+}(x,t)\) and \(\delta\mathbf{B}_{-}(x,t)\) corresponding to whistler waves propagating parallel and anti-parallel to the electron heat flux. The magnetic field fluctuations were decomposed into those propagating parallel and anti-parallel to the electron heat flux using the Fourier transform (Section 2). Figure 4: The results of simulation runs performed at \(\beta_{c}=1\) and \(A_{h}=1.3\), but various values of core electron drift velocity \(u_{c}/v_{A}\) (Section 3). Panel (a) presents saturated amplitudes \(B_{w}^{+}/B_{0}\) and \(B_{w}^{-}/B_{0}\) of parallel and anti-parallel whistler waves along with their _initial_ linear growth rates \(\gamma_{+}/\omega_{ce}\) and \(\gamma_{-}/\omega_{ce}\) also shown in Figure 1. Panel (b) demonstrates that the ratios \(B_{w}^{+}/B_{w}^{-}\) and \(\gamma_{+}/\gamma_{-}\) are closely correlated, \(B_{w}^{+}/B_{w}^{-}\approx(\gamma_{+}/\gamma_{-})^{0.73}\). Figure 3: Panel (a) presents the temporal evolution of averaged magnetic field magnitudes of parallel and anti-parallel whistler waves observed in the simulation run shown in Figure 2; the magnetic field magnitudes were averaged over the simulation box, \(\langle\delta B_{\pm}\rangle=\left[L^{-1}\int_{0}^{L}|\delta\mathbf{B}_{\pm}|^{2}\,dx\right]^{1/2}\). Panel (b) presents the corresponding growth rates, \(\gamma(t)=d/dt\left[\ln\langle\delta B_{\pm}\rangle\right]\). Figure 5: The temporal evolution of the electron heat flux in simulation runs performed at \(\beta_{c}=1\) and \(A_{h}=1.3\), but various values of core electron drift velocity \(u_{c}/v_{A}\) (Section 3). The panel presents the relative electron heat flux variation in percent, \(\delta q_{e}=100\%\cdot\left[\,q_{e}(t)/q_{e}(0)-1\,\right]\), where \(q_{e}(t)\) is the electron heat flux averaged over the simulation box. Figure 6: The results of all the 75 simulation runs performed at various values of core electron beta \(\beta_{c}\), halo temperature anisotropy \(A_{h}\) and core electron drift velocity \(u_{c}/v_{A}\) (Table 1). Each of panels (a)–(c) presents saturated amplitudes of parallel and anti-parallel whistler waves observed in 25 simulation runs performed at \(\beta_{c}=0.3,1\) and 3.
Note that we present the amplitude of a whistler wave provided that its initial linear growth rate is larger than \(10^{-3}\omega_{ce}\); otherwise the computation time of \(5000\omega_{ce}^{-1}\) is insufficient for the whistler waves to saturate in our simulations. Figure 8: Panel (a) presents the relative electron heat flux variations reached by the end of our 75 simulation runs performed at various values of core electron beta \(\beta_{c}\), halo temperature anisotropy \(A_{h}\) and core electron drift velocity \(u_{c}/v_{A}\) (Table 1). Panel (b) shows the electron heat flux variation versus the saturated amplitude of whistler waves propagating anti-parallel to the electron heat flux (sunward whistler waves). Note that we only present results of those simulation runs where sunward whistler waves had initial growth rates larger than \(10^{-3}\omega_{ce}\) and could saturate over the computation time. Figure 7: Each of panels (a)–(c) presents saturated amplitudes of parallel and anti-parallel whistler waves observed in 25 simulation runs performed at \(\beta_{c}=0.3,1\) and 3. The saturated amplitudes \(B_{w}/B_{0}\) of the whistler waves are correlated with their initial linear growth rates \(\gamma/\omega_{ce}\) and the best power-law fits are indicated in the panels; the best-fit parameters are also presented in Table 2. Figure 9: Panel (a) presents the relative spectral widths \(\Delta\omega/\omega\) of parallel (anti-sunward) and anti-parallel (sunward) whistler waves observed over the saturation stage in the simulations performed at various background plasma parameters (Table 1). The spectral widths \(\Delta\omega\) and central frequencies \(\omega\) were computed using Gaussian fittings of the spectra. Panel (b) presents the ratio between the left- and right-hand sides of Eq. (3) computed separately for sunward and anti-sunward whistler waves and plotted versus their initial linear growth rates. The fact that this ratio is larger than one implies that the nonlinear evolution and saturation of the whistler waves can be described within quasi-linear theory.
2301.00222
Quarkonia production in ultra-peripheral PbPb collisions at LHCb
Measurements of coherent charmonium production cross sections together with their ratio in ultra-peripheral PbPb collisions are studied at a nucleon-nucleon centre-of-mass energy of $5.02\,\mathrm{TeV}$, the differential cross-sections are measured as a function of rapidity and transverse momentum, separately. The photo-production of \jpsi mesons at low transverse momentum is studied in peripheral PbPb collisions, which confirms coherent \jpsi production in hadronic collisions. These latest results significantly improve previous measurements and are compared with some theoretical predictions.
Xiaolin Wang
2022-12-31T15:27:45Z
http://arxiv.org/abs/2301.00222v1
# Quarkonia production in ultra-peripheral PbPb collisions at LHCb ###### Abstract Measurements of coherent charmonium production cross sections together with their ratio in ultra-peripheral PbPb collisions are studied at a nucleon-nucleon centre-of-mass energy of 5.02 TeV; the differential cross-sections are measured as functions of rapidity and transverse momentum separately. The photo-production of \(J\!/\!\psi\) mesons at low transverse momentum is studied in peripheral PbPb collisions, which confirms coherent \(J\!/\!\psi\) production in hadronic collisions. These latest results significantly improve previous measurements and are compared with some theoretical predictions. ## 1 Introduction The LHCb detector is a single-arm forward spectrometer covering the pseudorapidity range \(2<\eta<5\)[1, 2], offering precise vertex reconstruction, high particle momentum resolution, and excellent particle identification capability. Measurements of quarkonia production in (ultra-)peripheral heavy-ion collisions play an important role in studying the photon-nucleus interaction, the mechanism of vector meson production, and the partonic structure of nuclei. Meanwhile, coherent photo-production would provide an excellent laboratory to study the nuclear shadowing effects and the initial state of heavy ion collisions at small-\(x\) at the LHC [3]. ## 2 Charmonia production in ultra-peripheral PbPb collisions at LHCb [4] Ultra-peripheral collisions (UPCs) occur when the impact parameter is larger than the sum of the radii of two incoming nuclei [5]. In such collisions the two ions interact electromagnetically: the \(J\!\!/\!\psi\) and \(\psi(2S)\) mesons are produced from the colorless reaction of a photon from one nucleus and a Pomeron from the other. If the photon interacts with a Pomeron emitted by the whole nucleus, there would be no nucleus break-up, and this is called coherent production, which is the process we are going to study. The charmonia candidates are reconstructed through the \(J\!/\!\psi\,\to\mu^{+}\mu^{-}\) and \(\psi(2S)\to\mu^{+}\mu^{-}\) decay channels using a PbPb data sample corresponding to an integrated luminosity of \(228\pm 10\,\mu\)b\({}^{-1}\), which was collected by the LHCb experiment in 2018. According to the characteristics of UPCs, only events with low activity are kept in the selection. The signal extraction then proceeds in two steps. A fit to the dimuon invariant mass spectrum is first performed to estimate the non-resonant background yields within the \(J\!/\!\psi\,\) and \(\psi(2S)\) mass windows, then fits to the \(J\!/\!\psi\,\) and \(\psi(2S)\) transverse momentum distributions of selected candidates are performed to determine the coherent production events from the inclusive charmonia yields, as shown in Fig. 1. The measured differential cross sections as a function of \(y^{*}\) and \(p_{\rm T}^{*}\) of coherent \(J\!/\!\psi\,\) and \(\psi(2S)\) are shown in Fig. 2, where the starred notation indicates that the observable is defined in the nucleus-nucleus centre-of-mass frame. The differential cross-section ratio of coherent \(\psi(2S)\) to \(J\!/\!\psi\,\) production is calculated as a function of rapidity for the first time and shown in Fig. 3. Compared with theoretical predictions, which are grouped as perturbative-QCD calculations [6, 7] and colour-glass-condensate (CGC) models [8, 9, 10, 11, 12, 13, 14, 15], the measurements are found to be in agreement with most of the prediction curves in general.
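For orientation, the coherent yields obtained from such fits are typically converted into differential cross-sections with a normalization of the form below, where \(N_{\mathrm{coh}}\) is the extracted coherent yield, \(\varepsilon_{\mathrm{tot}}\) the total efficiency, \(\mathcal{L}_{\mathrm{int}}\) the integrated luminosity, \(\mathcal{B}\) the dimuon branching fraction and \(\Delta y^{*}\) the rapidity bin width; this is the generic relation rather than the exact convention of the LHCb analysis. \[\frac{\mathrm{d}\sigma^{\mathrm{coh}}}{\mathrm{d}y^{*}}=\frac{N_{\mathrm{coh}}}{\mathcal{L}_{\mathrm{int}}\,\varepsilon_{\mathrm{tot}}\,\mathcal{B}(J\!/\!\psi\to\mu^{+}\mu^{-})\,\Delta y^{*}}\]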
production yields versus transverse momentum as shown in Fig. 5. For the phenomenological models, one scenario does not consider the destructive effect due to the overlap between the two nuclei, whereas the other takes it into account. In general, the trends of \(J\!/\!\psi\) photo-production measurements and theoretical predictions are consistent. Meanwhile, the two theoretical curves do not show a significant difference because the collisions are peripheral and the nuclear overlapping effect is expected to increase in more central collisions. Currently, the measurement is limited to low values of \(N_{part}\) due to detector limitations, and the measurement of \(p_{\rm T}\) is consistent with coherent \(J\!/\!\psi\) photo-production; the peak value of transverse momentum is around 60 \(\,\)MeV/\(c\). Figure 1: Fit to the invariant mass distribution of dimuon candidates (top) and the \(\ln(p_{\rm T}^{*2})\) distribution fit of dimuon candidates within the \(2.0<y^{*}<4.5\) range for \(J\!/\!\psi\) candidates (left) and \(\psi(2S)\) candidates (right), where the starred notation indicates that the observable is defined in the nucleus-nucleus centre-of-mass frame. Figure 3: Differential cross-section ratio of \(\psi(2S)\) to \(J\!/\!\psi\) as a function of \(y^{*}\), compared to theoretical predictions, which are separated into perturbative QCD calculations (red lines) and colour-glass-condensate models (blue lines). Figure 2: Differential cross-section as a function of \(y^{*}\) (top) and \(p_{\rm T}^{*}\) (bottom) for coherent \(J\!/\!\psi\) (left) and \(\psi(2S)\) (right) production. The results are compared to theoretical predictions, which are grouped as perturbative-QCD calculations (red lines) and colour-glass-condensate models (blue lines). ## 4 Conclusion We report the results of coherent \(J\!/\!\psi\,\) and \(\psi(2S)\) production cross sections in UPCs, as well as their ratio. It is the first measurement of coherent \(\psi(2S)\) production and of the \(\psi(2S)\) to \(J\!/\!\psi\,\) ratio in the forward rapidity region for UPCs at the LHC, and the differential cross section of coherent \(J\!/\!\psi\,\) and \(\psi(2S)\) production in PbPb UPCs is also measured as a function of \(p_{\rm T}^{*}\) for the first time. It is also the most precise measurement of \(J\!/\!\psi\,\) production in UPCs. The production of photo-produced \(J\!/\!\psi\,\) mesons in peripheral PbPb collisions is measured, and it is by far the most precise measurement of coherent \(J\!/\!\psi\,\) photo-production in peripheral collisions by LHCb.
2309.04666
Derivation of Instrument Requirements for Polarimetry using Mg, Fe, and Mn lines between 250 and 290 nm
Judge et al. (2021) recently argued that a region of the solar spectrum in the near-UV between about 250 and 290 nm is optimal for studying magnetism in the solar chromosphere due to an abundance of Mg II, Fe II, and Fe I lines that sample various heights in the solar atmosphere. In this paper we derive requirements for spectropolarimetric instruments to observe these lines. We derive a relationship between the desired sensitivity to magnetic field and the signal-to-noise of the measurement from the weak-field approximation of the Zeeman effect. We find that many lines will exhibit observable polarization signals for both longitudinal and transverse magnetic field with reasonable amplitudes.
A. G. de Wijn, P. G. Judge, R. Ezzeddine, A. Sainz Dalda
2023-09-09T02:55:52Z
http://arxiv.org/abs/2309.04666v1
Derivation of Instrument Requirements for Polarimetry using Mg, Fe, and Mn lines between 250 and 290 nm ###### Abstract Judge et al. (2021) recently argued that a region of the solar spectrum in the near-UV between about \(250\,\mathrm{nm}\) and \(290\,\mathrm{nm}\) is optimal for studying magnetism in the solar chromosphere due to an abundance of Mg ii, Fe ii, and Fe i lines that sample various heights in the solar atmosphere. In this paper we derive requirements for spectropolarimetric instruments to observe these lines. We derive a relationship between the desired sensitivity to magnetic field and the signal-to-noise of the measurement from the weak-field approximation of the Zeeman effect. We find that many lines will exhibit observable polarization signals for both longitudinal and transverse magnetic field with reasonable amplitudes. A. G. de Wijn, P. G. Judge, R. Ezzeddine, A. Sainz Dalda ## 1 Introduction The region between \(250\) and \(290\,\mathrm{nm}\) of the solar spectrum contains the well-known Mg ii h and k lines but also a large number of Fe i and Fe ii lines. In particular, there are many strong Fe ii lines that sample various heights in the solar atmosphere. Judge et al. (2021) argue that these lines are highly promising for studying magnetism in the solar chromosphere based on a broad evaluation of possible diagnostics of magnetic field in the solar chromosphere. In order to evaluate the practical use of this region of the spectrum, they analyzed the signal levels expected in the Mg ii k line by synthesizing polarized spectra using the HanleRT code (del Pino Aleman et al., 2020) over a grid of line-of-sight angles and magnetic field strengths for a given field inclination and azimuth angle. They find that the combined Hanle and Zeeman effects produce measurable signals for this line in many geometries. In particular, they note that diagnostics of vector field with strengths of \(5\) to \(50\,\mathrm{G}\) are achievable for observation angles greater than \(45^{\circ}\), as a result of the Hanle effect in Stokes \(Q\) and \(U\). However, they do not evaluate _observability_ of magnetic field diagnostics using the many Fe ii lines, which underpin the value of this region of the spectrum for diagnostics of magnetic field in the chromosphere. In this paper we evaluate the signal-to-noise ratio (SNR) required to observe a signature of longitudinal or transverse magnetic field with a given field strength under the assumption of the weak-field approximation of the Zeeman effect. We do not treat the Hanle effect in our study. Judge et al. (2021) note that work on the Hanle effect in the Fe ii lines is underway, and will be reported elsewhere. However, the Stokes \(V\) signal is unaffected by the Hanle effect, and therefore the signal strength of the longitudinal component of the magnetic field can be estimated using the methods in this paper. In addition, many science cases will require observations on the disk or of magnetic field with strengths considerably exceeding the critical Hanle field strength. In those situations, the Hanle effect has a negligible contribution to the polarization signal, and therefore the methods used in this paper apply.
We will first discuss briefly the weak-field approximation and its applicability, and then derive formulae to relate the error on a measurement of the magnetic field to the SNR of an observation for both the longitudinal and transverse components. We will derive parameters in several different ways to illustrate possible ways one may approach a similar problem for other spectral lines. Finally, for specific lines in this particular spectral region, we will investigate the instrumental effect of limited spectral resolution, and illustrate the method through an example calculation. ## 2 Analysis The Zeeman splitting of a spectral line is given by \[\Delta\lambda_{\mathrm{B}}=s\,B\,\lambda_{0}^{2}, \tag{1}\] where \(B\) is the magnetic field strength and \(\lambda_{0}\) is the rest wavelength of the line. We here work with vacuum wave lengths in units of \(\rm nm\) and the magnetic flux density \(B\) in units of \(\rm G\), and therefore have \(s=4.67\times 10^{-11}\,\rm nm^{-1}\,G^{-1}\). If the Zeeman splitting is much smaller than the Doppler width of the line, it is possible to apply a perturbative scheme to the radiative transfer equations and derive expressions for the circular and linear polarization signals in terms of the first and second derivative of the intensity profile, respectively (Landi Degl'Innocenti & Landi Degl'Innocenti, 1973). Landi Degl'Innocenti & Landolfi (2004) note that for iron lines in the visible spectrum that sample the photosphere, this approximation is valid up to \(\rm kG\) field strengths. At shorter wavelengths, the weak-field approximation is applicable for stronger field strengths because Zeeman splitting scales with the square of the wavelength, while Doppler broadening scales linearly. In addition, lines that form in the chromosphere have larger Doppler broadening than those that form in the photosphere. For these reasons we can safely assume the weak-field approximation is applicable for magnetic field up to at least \(\rm kG\) strength for the lines we consider here. The circular and linear polarization signals as a function of wavelength are \[V(\lambda) =-\Delta\lambda_{\rm B}\,\bar{g}\,\frac{\partial I(\lambda)}{ \partial\lambda}\,\cos\theta, \tag{2}\] \[L(\lambda) =-\frac{1}{4}\Delta\lambda_{\rm B}^{2}\,\bar{G}\,\frac{\partial^ {2}I(\lambda)}{\partial\lambda^{2}}\,\sin^{2}\theta, \tag{3}\] where \(\bar{g}\) and \(\bar{G}\) are the effective Lande factors for longitudinal and transverse magnetic field, respectively, \(I(\lambda)\) is the intensity, and \(\theta\) is the inclination of the magnetic field with respect to the line-of-sight. For convenience, we write \(B_{\parallel}=B\,\cos\theta\) and \(B_{\perp}=B\,\sin\theta\). An interpretation of the Stokes profiles effectively combines the signal over a wavelength range \(\Delta\lambda\). We are therefore interested in the integrated absolute signals \(\mathcal{V}\) and \(\mathcal{L}\), \[\mathcal{V} =\int_{\Delta\lambda}|V(\lambda)|\] \[=s\,\bar{g}\,\lambda_{0}^{2}\,\left|B_{\parallel}\right|\,\int_{ \Delta\lambda}\left|\frac{\partial I(\lambda)}{\partial\lambda}\right|\,{\rm d}\lambda, \tag{4}\] \[\mathcal{L} =\frac{1}{4}\,s^{2}\,\bar{G}\,\lambda_{0}^{4}\,B_{\perp}^{2}\, \int_{\Delta\lambda}\left|\frac{\partial^{2}I(\lambda)}{\partial\lambda^{2}} \right|\,{\rm d}\lambda. 
\tag{5}\] Our goal is to evaluate the expected errors \(\sigma_{B\parallel}\) on \(B_{\parallel}\) and \(\sigma_{B\perp}\) on \(B_{\perp}\) in some way that can be readily estimated, such as a function of the SNR of the intensity measurement that can be determined from a flux budget calculation. We therefore also define the integrated intensity signal, \[\mathcal{I}=\int_{\Delta\lambda}I(\lambda)\,{\rm d}\lambda, \tag{6}\] and note that the uncertainties \(\sigma_{\mathcal{V}}\) of \(\mathcal{V}\) and \(\sigma_{\mathcal{L}}\) of \(\mathcal{L}\) are related to the uncertainty \(\sigma_{\mathcal{I}}\) of \(\mathcal{I}\) through the modulation efficiencies \(\epsilon_{I}\), \(\epsilon_{Q}\), \(\epsilon_{U}\), and \(\epsilon_{V}\) in Stokes \(I\), \(Q\), \(U\), and \(V\)(del Toro Iniesta & Collados, 2000), \[\sigma_{\mathcal{V}} =\frac{\epsilon_{I}}{\epsilon_{V}}\,\sigma_{\mathcal{I}}, \tag{7}\] \[\sigma_{\mathcal{L}} =\frac{\epsilon_{I}}{\epsilon_{L}}\,\sigma_{\mathcal{I}}, \tag{8}\] where we have written \(\epsilon_{L}\) for the modulation efficiency of the linear polarization signal of interest. Since Eq. 4 relates \(\mathcal{V}\) to \(B_{\parallel}\), we can express the uncertainty \(\sigma_{B\parallel}\) as the uncertainty of \(\mathcal{V}\), and subsequently of \(\mathcal{I}\) by substituting Eq. 7, \[\sigma_{B\parallel} =\frac{1}{s\,\bar{g}\,\lambda_{0}^{2}}\,\left(\int_{\Delta \lambda}\left|\frac{\partial I(\lambda)}{\partial\lambda}\right|\,{\rm d} \lambda\right)^{-1}\,\sigma_{\mathcal{V}} \tag{9}\] \[=\frac{1}{s\,\bar{g}\,\lambda_{0}^{2}}\,\frac{\epsilon_{I}}{ \epsilon_{V}}\,\left(\int_{\Delta\lambda}\left|\frac{\partial I(\lambda)}{ \partial\lambda}\right|\,{\rm d}\lambda\right)^{-1}\,\sigma_{\mathcal{I}}. \tag{10}\] Note that we have used the property of the WFA that \(I\) does not depend on \(B\). If we can express the integral of \(|\partial I(\lambda)/\partial\lambda|\) in terms of \(\mathcal{I}\), then \(\sigma_{B\parallel}\) can be expressed in the SNR of the intensity measurement, \(\mathcal{I}/\sigma_{\mathcal{I}}\). Equivalently, propagating the error in \(B_{\perp}\) through Eq. 5 and substituting Eq. 8, we find \[\sigma_{B\perp} =\frac{2}{s^{2}\,\bar{G}\,\lambda_{0}^{4}}\,\frac{1}{B_{\perp}}\, \left(\int_{\Delta\lambda}\left|\frac{\partial^{2}I(\lambda)}{\partial\lambda^ {2}}\right|\,{\rm d}\lambda\right)^{-1}\,\sigma_{\mathcal{L}} \tag{11}\] \[=\frac{2}{s^{2}\,\bar{G}\,\lambda_{0}^{4}}\,\frac{1}{B_{\perp}}\, \frac{\epsilon_{I}}{\epsilon_{P}}\,\left(\int_{\Delta\lambda}\left|\frac{ \partial^{2}I(\lambda)}{\partial\lambda^{2}}\right|\,{\rm d}\lambda\right)^{-1} \,\sigma_{\mathcal{I}}. \tag{12}\] In this case, we want to express the integral of \(|\partial^{2}I/\partial\lambda^{2}|\) in terms of \(\mathcal{I}\). We note, however, that \(\sigma_{B\perp}\) is a function of \(B_{\perp}\), and therefore the same SNR in \(\mathcal{I}\) will yield a different measurement error depending on the strength of the field being measured. Notably, \(\sigma_{B\perp}\) is infinite for \(B_{\perp}=0\). In practice, \(\mathcal{I}\) will be the sum of a series of discrete measurements. Each pixel samples the signal weighted with some point spread function, which causes cancellation of some amount of signal. This effect is discussed in more detail in Sect. 3. We assume here that the instrument is a spectrograph that samples the spectrum critically with a resolution \(R\). However, the analysis for a different type of instrument, such as a wavelength-tunable imager, is analogous. 
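As a quick numerical illustration of the weak-field condition discussed above (the Zeeman splitting of Eq. 1 staying below the Doppler width of the line), the following short sketch uses an assumed, representative chromospheric Doppler velocity of \(10\,\mathrm{km\,s^{-1}}\); it is not a calculation taken from the paper.

```python
# Numerical check of the weak-field condition behind Eqs. 2-3: the Zeeman
# splitting (Eq. 1) should remain below the Doppler width of the line.
# The 10 km/s Doppler velocity is an assumed, representative value.

s = 4.67e-11        # nm^-1 G^-1 (constant from Eq. 1)
lambda0 = 280.0     # nm, Mg II h
v_doppler = 10.0    # km/s (assumption)
c = 2.998e5         # km/s

doppler_width = lambda0 * v_doppler / c          # ~9.3e-3 nm
for B in (100.0, 500.0, 1000.0):                 # field strengths in G
    zeeman = s * B * lambda0**2                  # Eq. 1
    print(f"B = {B:6.0f} G: splitting {zeeman*1e3:5.2f} pm, "
          f"ratio to Doppler width {zeeman/doppler_width:.2f}")
```

Even at kilogauss field strengths the splitting stays below this Doppler-width estimate, consistent with the applicability argument above.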
The number of measurements \(N\) that span the wavelength range \(\Delta\lambda\) is given by \[N=\frac{2R\,\Delta\lambda}{\lambda_{0}}. \tag{13}\] There are several ways to approach expressing the first and second derivatives of \(I(\lambda)\) in terms of itself. We will examine a simple approximation of the gradient in terms of the intensity and a characteristic wavelength interval, and a numeric calculation from simulated data or from observations of the intensity spectrum. ### Characteristic Wavelength Interval This simple approximation equates the gradient to the ratio of the intensity and some characteristic wavelength interval, \[\left|\frac{\partial I(\lambda)}{\partial\lambda}\right|\approx\frac{I(\lambda)}{\Delta\lambda}. \tag{14}\] Substituting Eq. 14 in Eq. 10, we find \[\sigma_{B\parallel}=\frac{\Delta\lambda}{s\,\bar{g}\,\lambda_{0}^{2}}\,\frac{\epsilon_{I}}{\epsilon_{V}}\,\frac{\sigma_{\mathcal{I}}}{\mathcal{I}}. \tag{15}\] We approximate the pixel SNR, \[I\approx\frac{1}{N}\,\mathcal{I},\quad\sigma_{I}\approx\frac{1}{\sqrt{N}}\,\sigma_{\mathcal{I}}. \tag{16}\] If the measurement noise is dominated by photon statistics, as is often the case, that property is preserved also for the average intensity \(I\), i.e., \(\sigma_{I}\) is approximately the square root of \(I\), and therefore the SNR \(I/\sigma_{I}\) is the average SNR of the measurement over the wavelength range \(\Delta\lambda\). Substitution of Eqs. 16 and 13 in Eq. 15 yields \[\sigma_{B\parallel}=\frac{1}{s\,\bar{g}\,\lambda_{0}}\,\sqrt{\frac{\Delta\lambda}{2R\,\lambda_{0}}}\,\frac{\epsilon_{I}}{\epsilon_{V}}\,\frac{\sigma_{I}}{I}. \tag{17}\] We apply this method to estimate the sensitivity of the Mg ii h line. For this line, we have \(\bar{g}=1.33\), and \(\lambda_{0}=280\,\mathrm{nm}\). We estimate \(\Delta\lambda=0.03\,\mathrm{nm}\) (equivalent to a velocity of \(\pm 15\,{\rm km\,s^{-1}}\), see Eq. 25). We now find \[\sigma_{B\parallel}=7.58\times 10^{5}\,\frac{1}{\sqrt{R}}\,\frac{\sigma_{I}}{I}\,\mathrm{G}. \tag{18}\] For example, if we evaluate this equation for an instrument with \(R=30\,000\) and require \(\sigma_{B\parallel}\leq 8\,\mathrm{G}\), we find that the SNR in \(I\) must be at least \(547\). Figure 1: Blue lines: synthetic spectrum in two NUV windows that contain Fe i and Fe ii lines of interest. Orange lines: measured flux from the 1983 Air Force Geophysics Laboratory (AFGL) balloon measurement. The synthetic spectrum shows very good agreement with the measured spectrum for the Fe i and Fe ii lines. ### Observed or Simulated Intensity Spectra The estimation in Sect. 2.1 is obviously crude as it relies on a good estimate of \(\Delta\lambda\), which is difficult without some prior knowledge of the line profile. A much better estimate can be derived from intensity spectra that were observed or computed with a numerical radiative transfer code such as RH (Uitenbroek, 2001) or TURBOSPECTRUM (Alvarez & Plez, 1998; Plez, 2012) using a model atmosphere. We can numerically calculate the integral of the absolute value of the gradient as a fraction of \(\mathcal{I}\), \[\gamma=\frac{1}{\mathcal{I}}\,\int_{\Delta\lambda}\left|\frac{\partial I(\lambda)}{\partial\lambda}\right|\,\mathrm{d}\lambda. \tag{19}\] Using again Eqs. 16 and 13, we now have \[\sigma_{B\parallel}=\frac{1}{s\,\bar{g}\,\lambda_{0}^{2}}\,\frac{1}{\gamma}\sqrt{\frac{\lambda_{0}}{2\Delta\lambda}}\,\frac{\epsilon_{I}}{\sqrt{R}\,\epsilon_{V}}\,\frac{\sigma_{I}}{I}.
\tag{20}\] We now define the sensitivity factor for the error on the longitudinal field, \[\Gamma=\gamma\,s\,\bar{g}\,\lambda_{0}^{2}\,\sqrt{\frac{2\Delta\lambda}{ \lambda_{0}}} \tag{21}\] that captures the properties of a spectral line. Lines with larger \(\Gamma\) have higher sensitivity to longitudinal field, i.e., a requirement for a particular sensitivity of \(B_{\parallel}\) can be met with a measurement with lower SNR. Similarly for the transverse component of the magnetic field we can numerically calculate the integral of the second derivative, \[\chi=\frac{1}{\mathcal{I}}\,\int_{\Delta\lambda}\left|\frac{\partial^{2}I( \lambda)}{\partial\lambda^{2}}\right|\,\mathrm{d}\lambda, \tag{22}\] and find \[\sigma_{B\perp}=\frac{1}{s^{2}\,\bar{G}\,\lambda_{0}^{4}}\,\frac{1}{\chi}\, \sqrt{\frac{2\lambda_{0}}{\Delta\lambda}}\,\frac{1}{B_{\perp}}\,\frac{ \epsilon_{I}}{\sqrt{R}\,\epsilon_{L}}\,\frac{\sigma_{I}}{I}. \tag{23}\] Therefore, we define the sensitivity factor for the error on the transverse field, \[\mathrm{X}=\chi\,s^{2}\,\bar{G}\,\lambda_{0}^{4}\,\sqrt{\frac{\Delta\lambda} {2\lambda_{0}}}. \tag{24}\] We use the above procedure to calculate \(\Gamma\) and \(\mathrm{X}\) for the lines listed in Judge et al. (2021) from a synthetic spectrum and observations from the IRIS mission (De Pontieu et al., 2014). Figure 1 shows the synthetic spectrum calculated with TURBOSPECTRUM from a custom Sun-like MARCS model atmosphere (Gustafsson et al., 2008) and using a line list adopted from the VALD database (Piskunov et al., 1995; Ryabchikova et al., 2015). This spectrum was calculated under the assumption of local thermal equilibrium (LTE), but shows very good agreement with measured spectra for the Fe I and Fe II lines, such as the AFGL balloon measurements (Hall & Anderson, 1991) shown also in Fig. 1. The synthesis includes \(102\,691\) lines from \(115\) atomic and \(10\) molecular species between \(256\) and \(285\,{\rm nm}\). The MARCS model atmosphere does not include a chromospheric temperature rise. However, using a more realistic atmospheric model does not necessarily result in a more realistic synthetic spectrum. A model atmosphere like FAL-C (Fontenla et al., 1993) would result in emission peaks in the cores of strong lines that are not observed in the AFGL spectra. These peaks are the result of the assumption of LTE that is not valid in the line cores, and the peaks would not be present if non-LTE physics (e.g., scattering and partial redistribution) were included in the spectral synthesis. We can expect the intensity in the cores of strong lines to saturate in this synthesis (see, e.g., the upper-left panel of Fig. 2). The resultant line core profile is nearly flat and exhibits only a small gradient and second derivative, and hence little circular or linear polarization signal is produced in the presence of magnetic field. Therefore, the analysis presented here based on this synthesis will produce lower values of \(\Gamma\) and \(\mathrm{X}\) than one based on a synthesis that incorporates pertinent non-LTE effects and uses a more realistic atmospheric model. IRIS observes the solar spectrum around the Mg II h and k lines that also includes the Mn I lines used by Ishikawa et al. (2021) to infer longitudinal magnetic field from data from the CLASP2 flight (Narukage et al., 2016; Tsuzuki et al., 2020). We choose a data set of NOAA AR12957 taken on March 4, 2022 around 10 UT. This observation contains a region of plage, the edge of a sunspot, and some more quiet areas. 
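The numerical procedure of Eqs. 19–24 is straightforward to sketch. In the snippet below the Gaussian absorption profile stands in for a synthetic or observed intensity spectrum, and the line parameters are placeholders rather than values from the paper.

```python
import numpy as np

# Sketch of the procedure in Eqs. 19-24: estimate gamma, chi and the
# sensitivity factors Gamma and X from a sampled intensity spectrum.
# The Gaussian absorption profile and the line parameters below are
# placeholders, not values taken from the paper.

s = 4.67e-11                 # nm^-1 G^-1
lambda0 = 280.353            # nm (Mg II h, used only as an example)
g_eff, G_eff = 1.33, 1.33    # effective Lande factors (placeholders)
v = 12.5                     # km/s, velocity defining the window
c = 2.998e5                  # km/s
dlam = 2 * v / c * lambda0   # Eq. 25: integration window in nm

wav = np.linspace(lambda0 - dlam / 2, lambda0 + dlam / 2, 201)
intensity = 1.0 - 0.8 * np.exp(-0.5 * ((wav - lambda0) / 0.01) ** 2)

dI = np.gradient(intensity, wav)     # dI/dlambda
d2I = np.gradient(dI, wav)           # d2I/dlambda2

I_int = np.trapz(intensity, wav)                 # Eq. 6
gamma = np.trapz(np.abs(dI), wav) / I_int        # Eq. 19
chi = np.trapz(np.abs(d2I), wav) / I_int         # Eq. 22

Gamma = gamma * s * g_eff * lambda0**2 * np.sqrt(2 * dlam / lambda0)   # Eq. 21
X = chi * s**2 * G_eff * lambda0**4 * np.sqrt(dlam / (2 * lambda0))    # Eq. 24

print(f"gamma = {gamma:.3g} nm^-1, chi = {chi:.3g} nm^-2")
print(f"Gamma = {Gamma:.3g} G^-1,  X = {X:.3g} G^-2")
```

The same routine applied to a real synthetic or observed spectrum yields the values reported in Table 1.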
The left panel of Fig. 3 shows the intensity of the Mg II k core. We could compute the longitudinal and transverse sensitivity factors for every pixel in the map. However, we would overestimate the factors due to measurement noise that creates spurious signals in first and second derivative of the intensity profile. We therefore use "representative profiles" (RPs) from the IRIS\({}^{2}\) database (Sainz Dalda et al., 2019). An RP is an average of many similar line profiles, and therefore has very low noise, so that we can accurately determine the first and second derivative of the intensity. There are 160 RPs in this map. More than half the pixels are represented by the most popular 25 RPs, and only 3 RPs represent less than 100 pixels each. We note that we can express \(\Delta\lambda\) in terms of a velocity \(v\), \[\Delta\lambda=2\,\frac{v}{c}\lambda_{0}. \tag{25}\] We use \(v=12.5\,\mathrm{km\,s^{-1}}\) for the Fe I, Fe II, and Mg II lines, and \(v=7.5\,\mathrm{km\,s^{-1}}\) for the Mn I lines. These velocities give reasonable integration intervals and more or less correspond to typical sound speed estimates for the chromosphere and photosphere. The values for \(\Gamma\) and X are not strongly dependent on the choice of \(v\), since the wings of the lines tend not to contribute significant polarization signal (see Fig. 2). However, other nearby spectral lines may contribute spurious polarization signal in the wavelength window. We therefore limit the window to the nearest local maximum of the intensity spectrum to reduce contamination by other lines (see Fig. 2 panels in the middle row left and center column). Table 1: Longitudinal and transverse field sensitivity factors for prominent lines in the solar chromospheric spectrum (columns: \(\lambda_{0}\) (nm), \(\lambda_{0,\mathrm{air}}\) (nm), \(\Delta\lambda\) (pm), Ion, \(\log\tau_{0}\), \(g\), \(\Gamma\) (\(10^{-5}\,\mathrm{G^{-1}}\)), \(G\), X (\(10^{-8}\,\mathrm{G^{-2}}\)), Blend). \(\Delta\lambda\) denotes the integration window. Landé g-factors are computed using LS coupling. Wavelength in air is included for easy reference against Table 1 in Judge et al. (2021). Results are given in Table 1 and shown in Figs. 3 and 4. The line profiles were visually evaluated and qualitatively categorized as suffering from blends with varying severity given in the "Blend" column. The \(\Gamma\) and X values of a line with a major or severe blend are likely affected and overestimate the true values, as the blends create additional gradients. The values for the Mg ii and Mn i lines in the table are derived from the spectrum of the most popular RP that represents \(10\,697\) pixels (3.2% of the FOV). We processed each RP, and map the longitudinal and transverse sensitivity factors back to pixels in the FOV, shown in the center and right panels of Fig. 3, respectively. Figure 4 shows the cumulative probability density functions for the longitudinal and transverse sensitivity factors. The most popular RP is around the 70% and 85% percentiles for \(\Gamma\) and X, respectively. We select a series of Fe ii lines, a single Fe i line, and the Mg ii h and k lines for closer study.
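For reference, Eq. 25 ties the integration window to the chosen velocity; a small check with the velocities quoted above gives the window widths in picometres (the wavelengths used below are taken from the lines discussed in the text).

```python
# Integration windows implied by Eq. 25 for the velocities quoted above.
c = 2.998e5                                    # speed of light in km/s

def window_pm(v_kms, lambda0_nm):
    return 2 * v_kms / c * lambda0_nm * 1e3    # Eq. 25, converted to pm

print(window_pm(12.5, 280.353))   # Mg II h with v = 12.5 km/s   -> ~23 pm
print(window_pm(12.5, 260.018))   # Fe II 260.018 nm             -> ~22 pm
print(window_pm(7.5, 280.0))      # v = 7.5 km/s (Mn I lines), evaluated near 280 nm
```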
We pick Fe i and Fe ii lines that are preferentially unaffected by blends, sample at optical depths down to the photosphere in roughly equal steps of \(\log\tau\), have large \(\Gamma\) and X, and are nearby one another in the spectrum. This selection procedure results in nine lines in two distinct wavelength regions of interest: between Fe ii \(260.018\,\)nm and Fe ii \(262.245\,\)nm, and between Fe i \(274.488\,\)nm and Mg ii h \(280.353\,\)nm. Figure 2 shows the synthetic spectra for these lines, together with the first and second derivatives. Table 1: _(continued)_ ## 3 Instrumental Effects In practice, all measurements will be affected by instrumental effects. As already mentioned in Sect. 2, \(\mathcal{I}\) will be the sum of a series of discrete measurements that sample the signal weighted with some line spread function (LSF). We evaluate here how this affects the required measurement sensitivity. For the sake of simplicity, we assume each measurement is affected by the same LSF \(\rho(\lambda)\). Equations 6, 19, and 22 then become \[\mathcal{I}^{\prime} =\int_{\Delta\lambda}(\rho*I)(\lambda)\,\mathrm{d}\lambda, \tag{26}\] \[\gamma^{\prime} =\frac{1}{\mathcal{I}^{\prime}}\,\int_{\Delta\lambda}\left|\left(\rho*\frac{\partial I}{\partial\lambda}\right)(\lambda)\right|\,\mathrm{d}\lambda, \tag{27}\] \[\chi^{\prime} =\frac{1}{\mathcal{I}^{\prime}}\,\int_{\Delta\lambda}\left|\left(\rho*\frac{\partial^{2}I}{\partial\lambda^{2}}\right)(\lambda)\right|\,\mathrm{d}\lambda, \tag{28}\] where \(*\) denotes convolution. It is straightforward to implement this in the numerical analysis presented in Sect. 2.2. The LSF of a spectrograph depends on its specific configuration (Casini & de Wijn, 2014). The LSF of a typical spectrograph operating in the Littrow condition that also satisfies the "pixel-matching" condition, i.e., the projected width of the slit is equal to the width of a camera pixel, is approximately Gaussian after accounting for sampling. We therefore choose to model the LSF as a Gaussian function. Figure 2: Intensity, and first and second derivative of intensity for selected lines. Signal loss factors are shown as a function of spectral resolution for a collection of Fe i and Fe ii lines and the Mg ii lines in Fig. 5. Stokes \(V\) signals are less affected than linear polarization signals at reasonable spectral resolution. Linear polarization signal loss varies considerably from line to line, but generally lines that form deeper in the atmosphere have narrower profiles that require higher spectral resolution to achieve the same loss factor. The Fe ii \(260.018\,\mathrm{nm}\) line is affected by a blend that at spectral resolutions below about \(40\,000\) starts to contaminate the polarization signal, causing a spurious rise in the \(L\) signal loss factor. ## 4 Example SNR Calculation We now show an example using the above calculations to derive measurement requirements, i.e., SNR on Stokes \(I\), for a hypothetical instrument that observes the nine spectral lines previously selected. Substituting \(\gamma^{\prime}\) and \(\chi^{\prime}\) for \(\gamma\) and \(\chi\) in Eqs.
21 and 24 to account for instrument spectral resolution yields \[\Gamma^{\prime} =2\,\gamma^{\prime}\,s\,\bar{g}\,\lambda_{0}^{2}\,\sqrt{\frac{v} {c}} \tag{29}\] \[\mathrm{X}^{\prime} =\chi^{\prime}\,s^{2}\,\bar{G}\,\lambda_{0}^{4}\,\sqrt{\frac{v} {c}}. \tag{30}\] The errors on the longitudinal and transverse magnetic field are given by \[\sigma_{B\parallel} =\frac{1}{\Gamma^{\prime}}\,\frac{\epsilon_{I}}{\sqrt{R}\, \epsilon_{V}}\,\frac{\sigma_{I}}{I}, \tag{31}\] \[\sigma_{B\perp} =\frac{1}{\mathrm{X}^{\prime}}\,\frac{1}{B_{\perp}}\,\frac{ \epsilon_{I}}{\sqrt{R}\,\epsilon_{L}}\,\frac{\sigma_{I}}{I}. \tag{32}\] We assume the instrument has a spectral resolution of \(30\,000\) and calculate \(\Gamma^{\prime}\) and \(\mathrm{X}^{\prime}\) for spectral lines of interest. Figure 4: Cumulative distribution function of the longitudinal and transverse sensitivity factors \(\Gamma\) and \(\mathrm{X}\) for the Mg ii h and k lines. The \(\Gamma\) and \(\mathrm{X}\) values and the cumulative probabilities for the most popular RP (also given in Table 1) are indicated by vertical and horizontal dotted lines, respectively. Figure 3: IRIS map used to calculate the longitudinal and transverse sensitivity factors for Mg ii and Mn i lines. Left panel: intensity in the core of the Mg ii k line. Center panel: longitudinal field sensitivity factor \(\Gamma\). Right panel: transverse field sensitivity factor \(\mathrm{X}\). A reasonable estimate for a near-optimal, balanced modulator is \(\epsilon_{I}/\epsilon_{V}\approx\epsilon_{I}/\epsilon_{L}\approx 1.8\)(e.g., Tomczyk et al., 2010). We thus set the factors \(\epsilon_{I}/(\sqrt{R}\,\epsilon_{V})\) and \(\epsilon_{I}/(\sqrt{R}\,\epsilon_{L})\) to be equal to \(0.01\). Finally, we have to estimate \(B_{\perp}\) in order to evaluate Eq. 32. We choose a stepped function for \(B_{\perp}\) starting at \(200\,\mathrm{G}\) at the top of the chromosphere to \(50\,\mathrm{G}\) in the photosphere based on simulations of a magnetic flux rope and a sheared arcade (M. Rempel, private communication). As an example, we calculate the required SNR \(\sigma_{I}/I\) required to detect the transverse field and a \(20\,\mathrm{G}\) longitudinal field with \(2.5\,\sigma\) significance. The results are summarized in Table 2. We note that the SNR values are the average pixel SNR over the spectral window between the vertical dotted lines in the panels of Fig. 2. Generally, we observe that lines that form higher in the atmosphere require higher SNR. To demonstrate that these SNR requirements are achievable, we also calculate the SNR of a \(12\,\mathrm{s}\) integration by such a hypothetical instrument with a \(30\,\mathrm{cm}\) aperture, 2.5% throughput, and \(1\,\arcsec\) spatial resolution. The deep Fe ii line at \(260.018\,\mathrm{nm}\) drives the instrument requirements. The Mg ii lines achieve higher SNR than that line because of the increased intensity in the 2V and 2R peaks. ## 5 Conclusion We have investigated the lines in the wavelength region between about \(250\) and \(290\,\mathrm{nm}\) identified by Judge et al. (2021) as being optimal for studying magnetism in the solar chromosphere. We have derived equations and procedures to quantify the sensitivity of spectral lines to magnetic field through the Zeeman effect based on observed or synthetic intensity spectra. While we have applied these methods to the spectral region suggested by Judge et al. (2021), they are applicable to any spectral line. 
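As a companion to the example calculation of Sect. 4, the sketch below inverts Eqs. 31 and 32 to obtain the Stokes-\(I\) SNR required for a \(2.5\sigma\) detection. The \(\Gamma^{\prime}\), \(\mathrm{X}^{\prime}\) and \(B_{\perp}\) values are placeholders rather than the line-specific numbers tabulated in the paper, and the factor \(\epsilon_{I}/(\sqrt{R}\,\epsilon)=0.01\) is the one adopted in the text.

```python
# Invert Eqs. 31-32 for the Stokes-I SNR needed for a 2.5-sigma detection.
# Gamma_p, X_p and B_perp are placeholder values for a single line; the
# paper tabulates line-specific numbers in Tables 1 and 2.

n_sigma = 2.5
eps_factor = 0.01       # epsilon_I / (sqrt(R) * epsilon_V or epsilon_L), as adopted in the text

Gamma_p = 3.0e-6        # G^-1, placeholder longitudinal sensitivity factor
X_p = 5.0e-9            # G^-2, placeholder transverse sensitivity factor
B_par = 20.0            # G, longitudinal field to detect (Case 2)
B_perp = 150.0          # G, assumed transverse field (Case 1)

# Eq. 31 with sigma_Bpar <= B_par / n_sigma:
snr_long = n_sigma * eps_factor / (Gamma_p * B_par)
# Eq. 32 with sigma_Bperp <= B_perp / n_sigma:
snr_trans = n_sigma * eps_factor / (X_p * B_perp**2)

print(f"required SNR, longitudinal ({B_par:.0f} G): {snr_long:.0f}")
print(f"required SNR, transverse ({B_perp:.0f} G):  {snr_trans:.0f}")
```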
An example calculation shows that observations with an instrument with a spectral resolution of \(30\,000\) need to reach Stokes-\(I\) SNRs of a few hundred, which are achievable for an instrument with a \(30\,\mathrm{cm}\) aperture and 2.5% throughput at \(1\,\arcsec\) spatial resolution with an integration time of \(12\,\mathrm{s}\). We thus conclude that this region of the solar spectrum in the near-UV yields observable polarization signals with suitable diagnostic potential for studies of chromospheric magnetism through the Zeeman effect. We have not evaluated the _interpretability_ of observations of these lines. That work requires a more complex approach of synthesizing spectra from known model atmospheres, degrading those spectra as if they were observed by a hypothetical instrument, and attempting to recover the model parameters through interpretation using the WFA or with inversion codes like DeSIRe (Ruiz Cobo et al., 2022), STIC (de la Cruz Rodriguez et al., 2019), or TIC (Li et al., 2022). The Mg ii lines have been studied and used for diagnostics of chromospheric magnetism in recent years (e.g., del Pino Aleman et al., 2016; Manso Sainz et al., 2019; Ishikawa et al., 2021; Centeno et al., 2022; Rachmeler et al., 2022; Afonso Delgado et al., 2023; Li et al., 2023). These efforts should be continued and expanded to include the Fe i and Fe ii lines identified in this work. A first step in this direction was recently taken by Afonso Delgado et al. (2023b), who studied the magnetic sensitivity of Fe ii between \(250\) and \(278\,\mathrm{nm}\) using a many-level model atom and realistic physics with the HanleRT code. They note that observations of the solar spectrum are required to study the effects of UV line blanketing and validate the atomic data, in particular the rate of inelastic collisions with electrons. We assert, based on the results presented in this paper, that this region of the solar spectrum holds great promise and that instrumentation to observe it should be developed. \begin{table} \begin{tabular}{c c c|r r|r|r} \hline \hline \(\lambda_{0}\) & Ion & \(\log\tau_{0}\) & \(B_{\perp}\) (G) & Case 1 & Case 2 & Instrument \\ \hline \(260.018\) & Fe ii & \(-1.39\) & \(150\) & \(539\) & \(419\) & \(553\) \\ \(261.185\) & Fe ii & \(-4.23\) & \(50\) & \(185\) & \(127\) & \(771\) \\ \(261.265\) & Fe ii & \(-1.79\) & \(150\) & \(263\) & \(228\) & \(644\) \\ \(261.460\) & Fe ii & \(-2.20\) & \(100\) & \(351\) & \(184\) & \(581\) \\ \(262.119\) & Fe ii & \(-3.75\) & \(75\) & \(80\) & \(112\) & \(955\) \\ \(262.245\) & Fe ii & \(-2.76\) & \(100\) & \(26\) & \(65\) & \(700\) \\ \(274.488\) & Fe i & \(-5.51\) & \(50\) & \(108\) & \(76\) & \(971\) \\ \(279.635\) & Mg ii & \(0.00\) & \(200\) & \(818\) & \(768\) & \(1593\) \\ \(280.353\) & Mg ii & \(-0.30\) & \(200\) & \(739\) & \(724\) & \(1436\) \\ \hline \end{tabular} \end{table} Table 2: Required SNR to detect magnetic field with \(2.5\sigma\) significance. Case 1: detection of transverse field with strength \(B_{\perp}\). Case 2: detection of \(B_{\parallel}=20\,\mathrm{G}\). The Instrument column lists the projected performance of a hypothetical instrument. See text for details. Figure 5: Signal loss factor of SNR resulting from instrumental smearing as a function of the instrument spectral resolution. ###### Acknowledgements. The material is based upon work supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under Cooperative Agreement No.
1852977. AdW acknowledges support by the National Aeronautics and Space Administration under Grant 80NSSC21K1792 issued through the Heliophysics Flight Opportunity Studies program. R.E. acknowledges support from NSF grant AST-2206263. IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research Center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. The following acknowledgements were compiled using the Astronomy Acknowledgement Generator ([https://astrofrog.github.io/acknowledgment-generator/](https://astrofrog.github.io/acknowledgment-generator/)). This research has made use of NASA's Astrophysics Data System, NumPy (van der Walt et al., 2011), matplotlib, a Python library for publication quality graphics (Hunter, 2007), SciPy (Virtanen et al., 2020), and the IPython package (Perez & Granger, 2007). ## Appendix A Comments This version of the article has been accepted for publication after peer review but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at [https://doi.org/10.3847/1538-4357/ace041](https://doi.org/10.3847/1538-4357/ace041).
2309.10613
An overview of time series point and interval forecasting based on similarity of trajectories, with an experimental study on traffic flow forecasting
The purpose of this paper is to give an overview of the time series forecasting problem based on similarity of trajectories. Various methodologies are introduced and studied, and detailed discussions on hyperparameter optimization, outlier handling and distance measures are provided. The suggested new approaches involve variations in both the selection of similar trajectories and assembling the candidate forecasts. After forming a general framework, an experimental study is conducted to compare the methods that use similar trajectories along with some other standard models (such as ARIMA and Random Forest) from the literature. Lastly, the forecasting setting is extended to interval forecasts, and the prediction intervals resulting from the similar trajectories approach are compared with the existing models from the literature, such as historical simulation and quantile regression. Throughout the paper, the experimentations and comparisons are conducted via the time series of traffic flow from the California PEMS dataset.
İlker Arslan, Can Hakan Dağıdır, Ümit Işlak
2023-09-19T13:41:15Z
http://arxiv.org/abs/2309.10613v1
An overview of time series point and interval forecasting based on similarity of trajectories, with an experimental study on traffic flow forecasting ###### Abstract The purpose of this paper is to give an overview of the time series forecasting problem based on similarity of trajectories. Various methodologies are introduced and studied, and detailed discussions on hyperparameter optimization, outlier handling and distance measures are provided. The suggested new approaches involve variations in both the selection of similar trajectories and assembling the candidate forecasts. After forming a general framework, an experimental study is conducted to compare the methods that use similar trajectories along with some other standard models (such as ARIMA and Random Forest) from the literature. Lastly, the forecasting setting is extended to interval forecasts, and the prediction intervals resulting from the similar trajectories approach are compared with the existing models from the literature, such as historical simulation and quantile regression. Throughout the paper, the experimentations and comparisons are conducted via the time series of traffic flow from the California PEMS dataset. ## 1 Introduction A time series is a list of data points that are indexed by time. Examples of times series include the stock price of a given company, hourly electricity demand, retail sales of a product and the daily temperature at a given city. The problems related to time series forecasting has become of extreme importance in recent decades due to their wide applications. We refer to the texts [12] and [15] for two general references on time series analysis. The purpose of this manuscript is to give an overview of a certain time series forecasting methodology in detail, namely, the use of similar trajectories. The experiments below will be based on the well studied problem of traffic flow forecasting. The data set we use is the traffic flow data of California PEMS data, and the relevant details are explained in Section 2.1. Our forecasting approach follows the lines of [17] in which they use similar trajectories to obtain a point forecast for the next time step. The method in cited work can be summarized as follows. In order to provide a point (or, interval) forecast for time \(T+1\) when you are at time \(T\in\mathbb{N}\), first look for (windows of) trajectories in the past that are similar to the recent observations up to time \(T\). Then after obtaining some candidates from these similar trajectories, ensemble them via an appropriate function to reach at a final forecast. Since the process requires the selection and assembly of nearest neighbors, the method is sometimes known to be the \(K\)-NN, \(K\) referring to the number of similar trajectories that are used in obtaining the forecast. Focusing on the traffic flow case, note that one has seasonal patterns such as daily, weekly and monthly seasonality, and these need to be taken into account in selection of the nearest neighbors, and relevant discussions will be included below. We refer to the papers [17], [40], [44] for the use of similarity of trajectories in traffic flow. Among these, as noted earlier, [17] should be emphasized in our case since our setup will primarily use the same reasoning behind it. Of course the idea here is quite natural and has emerged in various other fields other than the traffic flow. For a general discussion of the topic centered around economics, but also applicable to other fields, see [8]. 
The papers [9] and [10] are again on similarity based approaches for predictions, with more technical notes included. Trajectory similarity is used in various other fields as well; see [38] for prediction of remaining lifetime, [24] for location prediction and [47] for predicting daily tourist arrivals as examples. The literature is too vast to list here, but let us note that most of the work on time series forecasting in this direction appears under the title \(K\)-NN. Although our main interest here is not the particular case of traffic flow forecasting, we mention some pointers to the literature in this area too for the sake of completeness. One classical approach in traffic flow forecasting is the use of econometric models, such as ARIMA and its variations. See, for example, [5], [22] and [23] for relevant work. Both machine learning models and deep learning models have been used extensively in recent years. Let us point out the papers [23] on SVR, [19] on ANN and [37] on gradient boosting as some exemplary work. Also note that these can be combined with ARIMA and related models in an additive decomposition manner ([25], [45]). Regarding the deep learning models, we mention a few works based on LSTM: [20], [35], and [43]. Lastly, note that [3], [23] and [27] provide comparisons on different aspects of traffic flow forecasting. Besides the point forecast problem in time series, we will also be interested in interval forecasts below. Although these intervals have not been studied in the traffic flow case, they are widely used in several fields in order to supplement the point forecasts. As an example, electricity price forecasting is a popular topic in which prediction intervals are widely used, and obtaining such intervals in this case is quite challenging due to the spiky behavior of the corresponding time series. See [28] and [29] for relevant discussions. There are various methods for obtaining prediction intervals in the literature, for example, the historical method, bootstrapping and distributional prediction intervals [29]. To the best of our knowledge, the direct use of similar trajectories for prediction intervals attempted in this manuscript is new, and we were not able to find such references either in the field of traffic flow or in other ones. Overall, our experiments showed that the point forecasting methods based on similarity of trajectories are competitive with various models used in the literature. We will provide the comparison results between AR, ARIMA, Random Forest and a basic version of the similarity method in Section 3.2. Our experiments actually included various other models such as gradient boosting, ANN and SVR, but we will not include a comprehensive list, since our main purpose is to experiment with, discuss and compare the methodologies within the similarity of trajectories framework. As will be clear from the upcoming sections, the similarity approach has extensive flexibility in it, and the theoretical classification of problems that can be handled with a certain given method can be interesting. Due to the presence of several options in the design of the algorithms here, the computational cost may or may not be satisfactory. The proposed methodologies in general are grounded in the premise that historical patterns sharing similar characteristics can provide valuable insights for estimating future trajectories. Similarity models benefit from the observations from both distant history and outlier (or minority) segments of historical data.
With this ability, similarity models can utilize the diversity in the historical data without sacrificing the overall accuracy. In addition to the accuracy and robustness of our forecasts, analyzing similarity models may enhance our understanding of the underlying system dynamics. The contributions of the current work can be summarized as follows: 1. The basics of point and interval forecasting based on similarity of trajectories are explained in detail in a simple setup in Section 3. We hope that this exemplary case gives a clear description of the method to a general audience. 2. Various point forecast mechanisms based on similarity of trajectories are discussed. In particular, the multi-hourly model and the local regression approaches, which are to be described at length, provide the best results among our experiments. The designs are flexible by nature and seem to be open to further improvement. Moreover, the forecasts obtained via the direct use of similar trajectories help in terms of interpretability. 3. Results corresponding to various distances are collected and listed together, and are compared with the methodologies suggested in the current manuscript. Some models outside of our approach are also included to make the position of the present attempts clear. 4. Experiments with multi-step forecasts based on similarity are conducted, and the results seem to be promising, especially under the setting of seasonal similarity. 5. The similarity of trajectories approach is used to obtain prediction intervals in the time series framework, complementing the point forecasts. Comparisons for this case with other standard methods are included as well. A new algorithm for obtaining prediction intervals is introduced, which is a mixture of the so-called historical simulation and the similarity of trajectories methods. The rest of the paper is organized as follows. The next section gives a description of the traffic flow data set we use, and it further contains the relevant background on the distances of interest, the forecast evaluation metrics, and sample quantiles. In Section 3, we stop for a case analysis based on the weighted \(\ell^{2}\) distance. Section 4 is devoted to discussions on the selection and handling of the candidate trajectories that are to be used for forecasting purposes. The point forecast methods handled in this paper are listed in Section 5; the results of our experiments for point forecasts are also given in this part. Section 6 includes the approaches for interval forecasts based on similarity, and also provides the experimentation results for the prediction intervals. We conclude the paper with a short discussion in Section 7. ## 2 Background ### Problem Description and Data set Given the first \(t\) instances \(X_{1},X_{2},\ldots,X_{t-1},X_{t}\) of a time series \(X\), the general problem we are interested in is to provide a forecast for \(X_{t+1}\). Using similarity of trajectories, this problem can be approached as follows. First, the time series data is transformed into a trajectory-based representation. (Trajectories can be thought of as finite subsequences corresponding to consecutive indices of the time series; the trajectories to be considered here will be of the same length.) Secondly, similar trajectories are identified using \(K\)-NN. Then, the succeeding observations of similar trajectories, which are to be called _candidates_, are used for forecasting.
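To make this pipeline concrete, the following minimal sketch (our own illustration, not code from the paper) builds fixed-length trajectories from a univariate series, retrieves the \(K\) nearest past trajectories using a plain Euclidean distance, and averages their successors to produce a one-step forecast.

```python
import numpy as np

def knn_forecast(series, L=8, K=10):
    """One-step-ahead forecast from the K nearest past trajectories.

    A minimal sketch of the similarity-of-trajectories idea: compare the
    latest window of length L against all earlier windows, take the K
    closest ones (Euclidean distance here), and average their successors.
    """
    x = np.asarray(series, dtype=float)
    query = x[-L:]
    # all past windows x[i:i+L] whose successor x[i+L] is observed
    idx = np.arange(len(x) - L)
    windows = np.stack([x[i:i + L] for i in idx])
    successors = x[idx + L]
    dists = np.linalg.norm(windows - query, axis=1)
    nearest = np.argsort(dists)[:K]      # the K nearest neighbours ("candidates")
    return successors[nearest].mean()    # simple ensemble: the mean candidate

# toy usage on a noisy series with a daily pattern (96 = 24 h of 15-min steps)
rng = np.random.default_rng(0)
t = np.arange(2000)
flow = 300 + 150 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 20, t.size)
print(knn_forecast(flow, L=8, K=10))
```

Later sections vary both the distance used to rank the neighbors and the function that assembles the candidates into a final forecast.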
In our experiments, we use real world traffic data from California Freeway Performance Measure Systems (PeMS). In particular, the experiments are done with the flow data for Station 312757. The data contains five-minute intervals of observations such as total traffic flow and average speed. The use of 15-minute long intervals is advocated in the literature since using 15 minutes instead of 5 smoothens out the noise while keeping the data meaningful [23]. In our case, we achieve this by combining 3-tuples of non-overlapping five-minute intervals by taking their sum. During this study we use the data of 20 months duration from October 2020 to June 2022. The data split is designed to contain at least a year long observations in each split to capture seasonality. We split the data into four parts: tune reference, tune query, test reference and test query. Tune splits are used for optimizing the hyperparameters and test splits are used for measuring the performance. Tune split, containing tune reference and tune query, ranges from October 2020 to February 2022 and test split ranges from February 2021 to June 2022. Due to the nature of similarity based models, query and reference splits are introduced. Query split contains the observations that are desired to be estimated. For each observation in the query split, a reference set, consisting of past observations, is defined. Reference splits contain past observations ranging along at least 12 and at most 16 months long time interval. Tune query split ranges from October 5, 2021 to February 5, 2022 and for each observation in the query split, a reference split is defined to range from October 5, 2020 to the date (and time) of that observation. Test query split ranges from February 2022 to June 2022 with reference split starting from February 2021, again to the date (and time) of that observation. Here we emphasize that the size of reference sets differ for each observation. To be more concrete, for two observations at November 10, 2021, 10:30 AM and January 3, 2022, 6:00 AM, two different reference sets are defined. The former reference set ranges from October 5, 2020 to November 10, 2021, 10:30 AM and the latter reference set again starts from October 5, 2020 but ends in January 3, 2022, 6:00 AM. The following figure demonstrates the data split done in our study. Note that the significant change in traffic flow around October 2020 is due to the relaxation of Covid-19 measures. Figure 1: Weekly traffic flow data The missing data in our study is handled in two different ways. First, when the length of consecutive missing data is less than one hour, the corresponding flow data from previous week is substituted. Secondly, for longer consecutive periods (which actually only occurs in February 2022) we use the average of the same days and hours of previous three weeks. ### Distances In this subsection we define certain distance functions that can be used for measuring the distance between two trajectories. Before moving further let us provide some references that contain more details about the distances discussed here, and several other ones. [32] compares various distances, and discuss their use in the clustering framework. [34] gives an overview of distances such as Frechet distance, the dynamic time warping, longest common subsequences. The paper contains four experimental studies for comparison purposes. [33] and [36] are two other references that discuss various aspects of measures for similarity of trajectories. 
Now, let us provide the definitions of some common distances between trajectories, and discuss them briefly. Below \(\mathbf{x}=(x_{1},\ldots,x_{L})\) and \(\mathbf{y}=(y_{1},\ldots,y_{L})\) are considered to be \(L\in\mathbb{N}\) instances of a real valued time series. **1. \(\ell^{p}\) distance and related.** For \(p\geq 1\), \(p\)_-distance_ between \(\mathbf{x}\) and \(\mathbf{y}\) is defined to be \[d_{p}(\mathbf{x},\mathbf{y})=\left(\sum_{i=1}^{L}|x_{i}-y_{i}|^{p}\right)^{1/ p}.\] \(\|\mathbf{x}-\mathbf{y}\|_{p}\) is sometimes written for \(d_{p}(\mathbf{x},\mathbf{y})\). In this notation the norm of \(\mathbf{x}\) is \(\|\mathbf{x}\|_{p}\). Two special cases should be noted. When \(p=2\), \(\ell^{p}\) distance is known as the **Euclidean distance** between \(\mathbf{x}\) and \(\mathbf{y}\), perhaps the most well-known distance in the literature. Some authors Figure 2: Data split also consider the **squared Euclidean distance** \[d_{2}^{2}=\sum_{i=1}^{L}|x_{i}-y_{i}|^{2}\] in order to avoid dealing with the square roots, but that will not be an issue for us. The second special case of the \(\ell^{p}\) distance we shall mention is the **Manhattan distance** corresponding to \(p=1\): \[d_{M}(\mathbf{x},\mathbf{y})=\sum_{i=1}^{L}|x_{i}-y_{i}|.\] Manhattan distance is also known as \(\ell^{1}\) distance and the taxicab distance. Computations involving Manhattan distance are faster than the Euclidean distance due to absence of taking square roots. We lastly note the weighted version of the Euclidean distance. Given weights \(w_{i}\in(0,1)\) adding up to \(1\), the corresponding **weighted Euclidean distance** is defined by \[d_{w}(\mathbf{x},\mathbf{y})=\left(\sum_{i=1}^{L}w_{i}|x_{i}-y_{i}|^{2} \right)^{1/2}.\] Our preliminary analysis of the similarity based forecasting methods in Section 3 will be using the weighted Euclidean distance, where the weights will be selected in a linear way according to the recentness of observations [17]. In present notations, considering \(\mathbf{x}\) and \(\mathbf{y}\) as the compared trajectories, these weights can be written as \(w_{i}=\frac{i}{L(L+1)/2}\). **2.** sup **distance** is defined by \[d_{\infty}(\mathbf{x},\mathbf{y})=\max_{i=1,\ldots,L}|x_{i}-y_{i}|.\] Other names for the sup distance include the maximum distance and Chebyshev distance. Since the sup distance looks for uniformly similar patterns, it can be difficult to find similar trajectories in our setting below. In case one is interested in only the similarity of certain coordinates of \(\mathbf{x}\), \(\mathbf{y}\), one may consider the maximum in previous definition over a subset \(S\) of \(\{1,\ldots,L\}\). Below we will focus on a specific case which is to be called the head-tail distance. Letting \(\ell_{1},\ell_{2}\) be two positive integers for which \(\ell_{1}+\ell_{2}\leq L-1\), the (\(\ell_{1}\)-\(\ell_{2}\)) **head-tail distance**, based on some underlying distance \(d\) (such as Euclidean), between the given two windows is defined by \[d_{HT}(\mathbf{x},\mathbf{y})=\sum_{i=1}^{\ell_{1}}d(x_{i},y_{i})+\sum_{i=L- \ell_{2}+1}^{L}d(x_{i},y_{i}).\] Note that one may further prefer to choose the distances in these two summations differently. **3. Cosine distance** is defined to be \[d_{\text{cos}}(\mathbf{x},\mathbf{y})=1-\frac{\sum_{i=1}^{L}x_{i}y_{i}}{\| \mathbf{x}\|_{2}\|\mathbf{y}\|_{2}}.\] Writing \(\theta\) for the angle between \(\mathbf{x}\) and \(\mathbf{y}\), \(d_{\text{cos}}(\mathbf{x},\mathbf{y})=1-\cos\theta\). 
The cosine distance is known to work effectively in higher dimensions, and it is used in areas such as natural language processing and text mining. A closely related distance is **Pearson correlation** defined by \[d_{\text{Pearson}}(\mathbf{x},\mathbf{y})=1-\frac{\sum_{i=1}^{L}(x_{i}-\overline {x})(y_{i}-\overline{y})}{\left(\sum_{i=1}^{L}(x_{i}-\overline{x})^{2}\right) ^{1/2}\left(\sum_{i=1}^{L}(y_{i}-\overline{y})^{2}\right)^{1/2}}\] Note that correlation distance is not a metric. When \(\overline{x}=\overline{y}=0\), \(\|x\|_{2}=\|y\|_{2}=1\), \(d_{\text{Pearson}}(\mathbf{x},\mathbf{y})\) simplifies to \(1-\sum_{i=1}^{L}x_{i}y_{i}\). Then under the same assumptions, we have \(d_{\text{Pearson}}(\mathbf{x},\mathbf{y})=d_{\text{cos}}(\mathbf{x},\mathbf{y})\), and also \[d_{2}^{2}(\mathbf{x},\mathbf{y})=\sum_{i=1}^{L}(x_{i}-y_{i})^{2}=\sum_{i=1}^{L }x_{i}^{2}+\sum_{i=1}^{L}y_{i}^{2}-2\sum_{i=1}^{L}x_{i}y_{i}=2d_{\text{Pearson}}( \mathbf{x},\mathbf{y}).\] Let us note that another distance similar to Pearson correlation can be defined by using Spearman correlation. The main difference from Pearson correlation is that the variables of interest in this case are rank-ordered. **4. Canberra distance** is defined to be \[d_{\text{Canberra}}(\mathbf{x},\mathbf{y})=\sum_{i=1}^{L}\frac{|x_{i}-y_{i}|}{ |x_{i}|+|y_{i}|}.\] This distance is a weighted version of the \(\ell^{1}\) distance, where the weights help robustness to outliers. It is used in various areas including clustering and classification, and has several variations which we will not discuss here. **5. Others.** There is vast list of other similarity measures in the literature that can be adapted to measure similarity of trajectories. These include but are not limited to mean character difference, Frechet distance, coefficient of divergence, matching subsequence counts, dynamic time warping, longest common subsequence ([2], [18], [32], [31], [34]). In our study we had some preliminary experiments with the longest common subsequences approach. The length of the longest common subsequences is a classical measure of similarity for discrete sequences. It is used, for example, in molecular biology for DNA comparison, computer science for binary sequence comparison, among others. See [4], [30] and [31] for various applications of the longest common subsequences as well as algorithmic discussions. Recent theoretical advances on the common subsequences problem can be traced in [13]. The theoretical background can also be carried over to a score function setting which can be used to treat continuous data. See [11] for relevant discussions and references. Following the discussion in [34], the variant of the longest common subsequences for comparison of continuous trajectories can be defined as a dynamic programming algorithm as follows. Letting \(\mathbf{x}\) and \(\mathbf{y}\) be as before, the length of the longest common subsequence \(LC(\mathbf{x},\mathbf{y})\) is defined by \[LC(\mathbf{x},\mathbf{y})=\begin{cases}0,&\text{if $\mathbf{x}$ or $\mathbf{y}$ is empty},\\ 1+LC(\mathbf{x}[1:k-1],\mathbf{y}[1:m-1]),&\text{if $|x_{k}-y_{m}|<\epsilon$ and $|k-m|\leq\delta$},\\ \max\{LC(\mathbf{x}[1,k-1],\mathbf{y}),LC(\mathbf{x},\mathbf{y}[1:m-1])\},& \text{otherwise}.\end{cases}\] Here, the values of the \(\epsilon\) and \(\delta\) are to be chosen by the researcher. The choice of \(\delta=L-1\) yields the natural analogue of the longest common subsequences in a continuous setup. 
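The recursion above translates directly into a small (quadratic-time) dynamic program; the sketch below is only meant to make the definition concrete and is not tuned for efficiency.

```python
import numpy as np

def lcs_length(x, y, eps, delta=None):
    """Length of the longest common subsequence of two real-valued
    trajectories, following the recursion in the text: elements match
    when |x_k - y_m| < eps and |k - m| <= delta."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if delta is None:
        delta = max(len(x), len(y)) - 1      # the "natural" choice delta = L - 1
    # dp[k, m] = LCS length of the prefixes x[:k] and y[:m]
    dp = np.zeros((len(x) + 1, len(y) + 1), dtype=int)
    for k in range(1, len(x) + 1):
        for m in range(1, len(y) + 1):
            if abs(x[k - 1] - y[m - 1]) < eps and abs(k - m) <= delta:
                dp[k, m] = 1 + dp[k - 1, m - 1]
            else:
                dp[k, m] = max(dp[k - 1, m], dp[k, m - 1])
    return dp[len(x), len(y)]

# small usage example: three of the four entries can be matched within eps
print(lcs_length([1.0, 2.0, 3.0, 4.0], [1.1, 2.9, 4.2, 6.0], eps=0.3))
```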
Our basic experiments with the longest common subsequences yielded poor performance, and we therefore do not list them below. A variant of this subsequence approach is the dynamic time warping for which relevant discussions can be found in [34] and [2]. We hope to get back to the longest common subsequence, dynamic time warping and their variations in more detail in a subsequent work. ### Evaluation metrics In this subsection we briefly describe the evaluation metrics for point and interval forecasts that are used in this manuscipt. **Point forecasts.** Below we will be using the mean absolute error (MAE) and the mean absolute percentage error (MAPE) for the evaluation of our point forecasts. One may refer to [15] and [16] for general discussions on the point forecast evaluation metrics. MAE corresponding to the forecasts \(\hat{y}_{1},\ldots,\hat{y}_{m}\) for the realized sequence \(y_{1},\ldots,y_{m}\) is defined by \[MAE=\frac{1}{m}\sum_{i=1}^{m}|\hat{y}_{i}-y_{i}|.\] The mean absolute percentage error (MAPE) is defined by \[MAPE=\frac{1}{m}\sum_{i=1}^{m}\left|\frac{\hat{y}_{i}-y_{i}}{y_{i}}\right|.\] Some researchers in the traffic literature use other related metrics such as \(MAE_{100}\) for the traffic flow prediction, which basically ignores the time instances \(t\) for which \(y_{t}<100\). For example, see [23]. We do not consider a similar strategy here since minimal flow in our case is not too small, and making use of such a metric does not make much difference than using the standard \(MAE\). Once point forecasts are obtained via two distinct models, one may also like to know whether one method is significantly different than (better or worse) the other one. For this purpose we will be using the Diebold-Mariano test, see [6] and [7] for details. **Interval forecasts.** The two criteria used for comparison of performance of prediction interval mechanisms below are (i) Unconditional coverage, and (ii) Winkler score. Let us now briefly go over these. * **Unconditional coverage.** The simplest metric one may consider for a prediction interval is the unconditional coverage, which is just the proportion of samples in test data covered by our prediction intervals. Formally, given the prediction intervals \([\hat{L}_{t},\hat{U}_{t}]\), \(t=1,\ldots,S\), with the corresponding actual values \(y_{1},\ldots,y_{S}\), the unconditional coverage is defined to be \[\text{UC}=\frac{1}{S}\sum_{t=1}^{S}\mathbf{1}(y_{t}\in[\hat{L}_{t},\hat{U}_{t }]),\] where \(\mathbf{1}\) is the characteristic function giving \(1\) if the input is true, and \(0\) otherwise. For \(\alpha\in(0,1)\), a \((1-\alpha)100\%\) prediction interval construction will be considered to work _well_ if the UC is _close_ to \((1-\alpha)100\%\). Note that there is also the conditional coverage of prediction intervals which also penalizes the clustering of \(0\)'s and \(1\)'s (for the sequence \(\mathbf{1}(y_{t}\in[\hat{L}_{t},\hat{U}_{t}])\)). Such clusterings may occur in presence of data with significant outliers. The electricity price forecasting with its spiky behavior is one such example, see [29]. We will not be reporting on conditional coverage below. * **Winkler score.** Besides the obvious coverage probabilities, one other natural criteria to consider is the length of prediction interval. One naturally wishes to have the prediction intervals as short as possible (keeping the UC around the desired level). For this purpose we will be using Winkler score and and penalize the longer prediction intervals [41]. 
Before giving the definition for the Winkler score of a prediction interval, let us note that the constructions of our prediction intervals are symmetric around the point forecasts. Now, for \(\alpha\in(0,1)\), for the time series \(y_{t}\), \(t\geq 1\), and for the corresponding prediction intervals \([\hat{L}_{t},\hat{U}_{t}]\), we follow [29] and define the corresponding Winkler score by setting \[W_{t}=\begin{cases}\hat{U}_{t}-\hat{L}_{t}&\text{if }y_{t}\in[\hat{L}_{t},\hat{U}_{t}],\\ \hat{U}_{t}-\hat{L}_{t}+\frac{2}{\alpha}(\hat{L}_{t}-y_{t})&\text{if }y_{t}<\hat{L}_{t} \\ \hat{U}_{t}-\hat{L}_{t}+\frac{2}{\alpha}(y_{t}-\hat{U}_{t})&\text{if }y_{t}>\hat{U}_{t}. \end{cases}\] As is clear from the definition, Winkler score penalizes the cases where the actual value of the time series is outside the prediction interval, and in this case, it is proportional to the distance between the actual value and the closest point on the prediction interval. We may then define the average Winkler score over the time interval \(1,\ldots,S\) to be \[\overline{W}=\frac{1}{S}\sum_{t=1}^{S}W_{t}.\] It is also clear from these definitions that a lower Winkler score is an indicator of a better prediction interval performance. ### Sample Quantile For our prediction interval models, we need to estimate upper and lower quantiles from a sample. There are multiple ways to define the corresponding sample quantiles, and in this section we fix one that is to be used throughout the paper. When the sample size is small (e.g. less than 50), a precise calculation of the quantile would be more desirable. Due to this, the (precise) sample quantile is defined as follows. Assume we have an ordered sample \(\{a_{1},a_{2},\ldots,a_{N}\}\) of size \(N\), such that \(a_{i}\leq a_{i+1}\) for each \(i\). Then \(i\)-th element of the sample \(a_{i}\), corresponds to the \(\frac{i}{N+1}\)-th quantile. With that reasoning we define the \(q\)-th sample quantile in the following way. For \(q\in[0,1]\), \[a_{(q)}=\left\{\begin{array}{ll}a_{q(N+1)},&\mbox{ if }q(N+1)\in\mathbb{Z}^{ \geq 0}\\ c_{1}a_{\lfloor q(N+1)\rfloor}+c_{2}a_{\lceil q(N+1)\rceil},&\mbox{ otherwise}\end{array}\right.\] where \(c_{1}=1-(q(N+1)-\lfloor q(N+1)\rfloor)\) and \(c_{2}=q(N+1)-\lfloor q(N+1)\rfloor\). ## 3 Analysis for weighted Euclidean case This section contains a detailed analysis of the general methodology in this paper in a specific setting. Here, following [17], we specialize on the case where the similar trajectories are obtained via a weighted variant of \(\ell^{2}\) distance as explained in Section 2.2 (In particular, the weights will be \(w_{i}=i/(L(L+1)/2)\), \(i=1,\ldots,L\)). In later sections, there will be various approaches in use, but they share certain common characteristics. Focusing on a special case will retain us repeating several issues, and we hope that the study in this section will clearly expose the ideas which are involved in general. Let us start by recalling the general reasoning of the similarity of trajectories approach. First, we generate trajectories based on lagged observations of traffic flows. Then we identify the nearest neighbors for each trajectory and save the corresponding flow values as candidates. These candidates should closely resemble the flow that we aim to predict, but as usual they may contain outliers, which are to be handled before obtaining point and interval forecasts via the candidate sets. After the outlier removal is done, the remaining candidates are used to arrive at the final forecasts. 
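A compact sketch of this pipeline may help fix ideas. The snippet below is illustrative Python only (the concrete choices of distance function, outlier handling and hyperparameters are the subject of the rest of this section); the weights are those of the weighted \(\ell^{2}\) distance mentioned above, and `trim` is a hypothetical constant standing in for the outlier-removal step.

```python
import numpy as np

def weighted_euclidean(x, y, w):
    return np.sqrt(np.sum(w * (x - y) ** 2))

def similarity_forecast(history, L, K, trim=2):
    """One-step-ahead forecast for the series `history` (1-d array):
    build all past trajectories of length L, find the K whose shape is
    closest to the most recent L observations, drop the `trim` smallest
    and largest candidate targets, and average the rest."""
    history = np.asarray(history, dtype=float)
    w = np.arange(1, L + 1) / (L * (L + 1) / 2)      # weights of Section 3
    query = history[-L:]
    # past trajectories and the value that followed each of them
    trajs = np.array([history[i:i + L] for i in range(len(history) - L)])
    targets = history[L:]
    dists = np.array([weighted_euclidean(t, query, w) for t in trajs])
    nearest = np.argsort(dists)[:K]
    candidates = np.sort(targets[nearest])
    if trim > 0:
        candidates = candidates[trim:-trim]          # symmetric tail removal
    return candidates.mean()
```

Multi-step forecasts follow the same pattern, with the candidate targets taken the corresponding number of steps ahead of each trajectory.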
The following diagram provides a broad description of the methods we use throughout this paper. ### Obtaining the Nearest Neighbors The selection of the nearest neighbors is one of the most critical parts of historical similarity methods. In order to have promising results, it is crucial that the nearest neighbors share a common and meaningful pattern. Furthermore, the number of nearest neighbors used must be sufficient to provide a reliable picture. Keeping these in mind, we must select optimal values for the three primary hyperparameters: * Window size, \(L\), for defining the trajectories, * Distance function, \(d\), for measuring similarity, * Number of nearest neighbors, \(K\), for obtaining as many neighbors as required in order to capture the relevant relationships. Recall that we fixed the distance measure to a weighted Euclidean distance throughout this section. The time windows considered in this section range from 30 minutes to 5 hours (i.e., 2 to 20 past observations), resulting from the 15-minute intervals in our data set. We save up to 200 nearest neighbors for forecasting purposes. Further hyperparameter selection discussions will be given in the sequel. Now, before proceeding further, let us include an independent note on the distribution of nearest neighbors. One may expect to find seasonal or other distinguishable patterns in the historical search, but our experiments do not reveal any visible pattern at all. In particular, in Figure 4, we present an exemplary distribution of the 10 nearest neighbors for 100 specific consecutive trajectories (to find the nearest neighbors in this specific case, we set the trajectory length to 16 and use the weighted Euclidean distance). Although most of the nearest neighbors were from more recent observations, we observed that there were several candidates from as far back as a year ago. This may imply that extending the time span of the data may be beneficial for our models. Figure 3: Flowchart showing the general similarity of trajectories methodology ### Benchmarks In this section we divert from the main discussion in order to describe our benchmark models. We have chosen four methods for this purpose, namely, Naive (or, Random Walk), Autoregressive model (AR), Autoregressive Integrated Moving Average (ARIMA) and Random Forest. The naive approach simply uses the current flow value as the forecast for the next time step (\(\hat{X}_{t}=X_{t-1}\)). This method sets the baseline performance results and any proposed method is expected to outperform it. A slightly more complicated version of the standard naive method is the seasonal naive, in which the forecast is based on an averaging with respect to certain previous time instances (such as previous days or weeks), taking seasonality into account. Since the results of the seasonal naive were not close to the ones that are to be described below, we will only provide the ones for the standard naive approach. AR and ARIMA models are econometric models that are widely used in time series analysis and forecasting. AR models use previous values of the target variable to forecast future values. In particular, if the series of interest is \(X_{t}\), then the AR(\(p\)) model can explicitly be written as \(X_{t}=\sum_{i=1}^{p}\alpha_{i}X_{t-i}+\epsilon_{t}\), where \(\epsilon_{t}\) is white noise and \(\alpha_{1},\ldots,\alpha_{p}\) are the parameters of the model. 
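As an illustration of how such an autoregressive benchmark could be fit and used for rolling one-step-ahead forecasts, consider the following sketch; it assumes Python's statsmodels package, which is an assumption made here for concreteness rather than a description of how the reported benchmark numbers were actually produced.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def ar_one_step_forecasts(train, test, p=9):
    """Rolling one-step-ahead AR(p) forecasts: at each step the model is
    re-estimated on all data observed so far (expanding window) and the
    next value is predicted.  Simple, but slow for long test sets."""
    history = list(train)
    preds = []
    for actual in test:
        fit = AutoReg(np.asarray(history, dtype=float), lags=p).fit()
        # one-step-ahead, out-of-sample prediction
        preds.append(fit.predict(start=len(history), end=len(history))[0])
        history.append(actual)
    return np.array(preds)
```

The same loop can in principle be reused for the (S)ARIMA benchmarks by swapping in the corresponding statsmodels model class.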
On the other hand, ARIMA and SARIMA models combine autoregressive and moving average components to account for trends and seasonal patterns in the data. These models are capable of capturing data patterns and dependencies, such as day-of-week effects and hourly variations, and have the potential to provide relatively accurate predictions even with a limited data. As a result, AR and ARIMA serve as robust and simple benchmarks methods for our research. We refer to [12] and [14] for further details. Random Forest (RF) is widely recognized for its effectiveness in capturing nonlinear relationships within high-dimensional data. In short-term traffic flow forecasting literature, Random Forest has demonstrated competitive performance as an ensemble learning method [46]. Given the resemblance in methodological approaches between our proposed model and Random Forest, together with the demonstrated efficacy of Random Forest in previous applications, we selected it as the primary benchmark here. The benchmark results we obtained are summarized in the following table. Figure 4: A sample distribution of the nearest neighbors Note that in Table 1 and below, the result on left-hand-side shows the tune performance and the one on right-hand-side the test performance. Now we continue with some comments on the results of benchmark models, and on some notes relevant to their implementation. First let us mention the setup for ARIMA and AR models which are seen to improve the naive model. For the AR model, we identified the optimal value of \(p\) to be 9, yielding a mean absolute error (MAE) of 52.62. The ARIMA model, with optimal values of \(p=9\), \(d=0\), and \(q=3\), exhibited superior performance with an MAE of 51.83. Finally, the SARIMA model demonstrated the best performance among the three benchmarks, with optimal hyperparameters \(p=9,d=0,q=3,P=3,D=0,Q=2\), and seasonal frequency \(S=96\), achieving an MAE of 46.08. The results highlight the importance of seasonality. Random Forest models consist of several hyperparameters. We refer to _ranger_ R package [42] for naming and detailed description. In our models, the "mtry" hyperparameter that determines the number of features randomly sampled at each split, ranges from 4 to 11. Meanwhile the "min.node.size" hyperparameter, determining the minimal node size to split at, varies from 2 to 20. There are various studies regarding the "num.trees" hyperparameter that determines the number of trees in each forest. These studies imply that higher values simply yield better performance. However, our experiments indicate that the improvement in performance becomes insignificant after the number of trees exceed 150. Therefore, for our experimentation, the maximum value for "num.trees" in the grid search is set to 200. Also, the models trained with minimizing mean absolute error performed better, compared to the ones that minimize mean square error. We had two main approaches for the random forest models. The first one only used lagged observations as the features. We tried various lengths for window size and stopped at 30 since no further improvement was observed in the accuracy. The first approach yielded an MAE of 47.01, performing better than non-seasonal autoregressive benchmarks but worse than SARIMA, implying a motivation for a random forest model with seasonal parameters (SRF). In the second approach, we also included observations from one day and one week before. Again we conducted a similar grid search for window sizes and other hyper-parameters. 
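To illustrate the feature construction behind this seasonal variant (recent lags together with the observations from one day and one week earlier), a sketch along the following lines could be used. It relies on scikit-learn's RandomForestRegressor as a stand-in for the ranger implementation we actually used, so the hyperparameter names only roughly correspond, and all numerical values are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

DAY, WEEK = 96, 96 * 7          # 15-minute data: 96 observations per day

def make_features(series, n_lags=30):
    """Design matrix of recent lags plus same-time observations from one
    day and one week before; rows start once a full week of history exists."""
    series = np.asarray(series, dtype=float)
    X, y = [], []
    for t in range(WEEK, len(series)):
        lags = series[t - n_lags:t]
        seasonal = [series[t - DAY], series[t - WEEK]]
        X.append(np.concatenate([lags, seasonal]))
        y.append(series[t])
    return np.array(X), np.array(y)

# synthetic stand-in for the training series, purely to make the sketch run
rng = np.random.default_rng(0)
train_series = 1000 + 50 * rng.standard_normal(WEEK + 500)

# illustrative fit; ranger's mtry / min.node.size roughly correspond to
# max_features / min_samples_leaf here
X_train, y_train = make_features(train_series)
rf = RandomForestRegressor(n_estimators=200, max_features=8,
                           min_samples_leaf=5, criterion="absolute_error")
rf.fit(X_train, y_train)
```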
SRF performed best among other benchmarks with MAE of 45.85. Lastly, the train/test split of benchmark models differ from the splits of similarity models. We keep the size of test split the same, 4 months, and use 12 months of observations for training.1 Footnote 1: As an additional note, using larger sets for training resulted in worse test scores. ### Point Forecasts: Similarity of Trajectories There are various issues to be considered in providing point forecasts via similarity of trajectories. In this section, focusing on the aforementioned weighted Euclidean distance used in this section, we give a general overview of these, by focusing on hyperparameter tuning, outlier handling and multi-step forecasting. \begin{table} \begin{tabular}{c c c} \hline \hline Model & MAE & MAPE \\ \hline Naive & 54.74 / 56.73 & 5.35 / 5.36 \\ AR & 50.34 / 52.62 & 4.92 / 4.94 \\ ARIMA & 49.34 / 51.83 & 4.82 / 4.83 \\ SARIMA & 42.09 / 46.08 & 4.18 / 4.39 \\ RF & 43.58 / 46.85 & 4.35 / 4.40 \\ Seasonal RF & 42.91 / **45.99** & 4.31 / **4.32** \\ \hline \hline \end{tabular} \end{table} Table 1: Benchmark results for point forecast **Tuning Hyperparameters.** Hyperparameter optimization is a crucial step in forecasting problems. Recall in our case that the hyperparameters are the distance function \(d\), the number of neighbors selected \(K\) and the window size \(L\). Since the distance function is already fixed, we are left with adjusting \(K\) and \(L\). Similar tuning is required to be done for the forecasting mechanisms described in Section 5, but not all the details will be presented there. In our treatment below in this section, the optimal values of window size \(L\) is searched from \(2\) to \(20\). Then for each \(L\), average of \(K\) candidates from nearest neighbors is used as the forecast, where \(K\) ranges from \(5\) to \(200\). The following heat map summarizes our experiment results for this range of \(K\) and \(L\). In Figure 5, the left hand side shows a general view, and then with zooming in on the right hand side we see a nicely shaped region containing the optimal hyperparameters. This region suggests us to make a second grid search with more frequent values for \(K\) between \(10\) and \(50\), and their pairings with \(9\leq L\leq 17\). Note that we were conducting this initial experimentation on the tuning split, so the results differ from the final results given in Section 5. In Figure 6, we observe that \(L=14\) provides the best results and that there is a steep increase in error after \(L>15\) for all values of \(K\). Then, while searching for the optimal \(K\) value, a distinguishable tick pattern can be observed in Figure 7. These observations suggest finalizing the hyperparameter search, and set \(L=14\) and \(K=25\) as the optimal values. While analysing the other aspects of the similarity models, we will mainly be using the optimal hyperparameters. However, we will include results with other values of \(K\) and \(L\) whenever it contributes to the discussion. Figure 5: Initial Grid Searches for \(K\) and \(L\). Left figure shows a wider search. Right figure focuses on the better performing hyperparameters. Figure 6: Best \(L\) values The following table shows the result of the similarity of trajectories method discussed in this section along with the previously listed benchmark results. Recall that this basic similarity approach was based on using the weighted Euclidean distance and the arithmetic mean of the candidates as forecast. 
Table 2 shows that the similarity of trajectories model outperform autoregressive models (without seasonal parameters) but it performs worse than both Random Forest models and SARIMA. This leads to a motivation for using seasonality in similarity models. **Handling Outliers.** Extreme values among candidates may lead to biased predictions. Just like any other method, removing these outliers can improve the performance of forecasting based on similarity of trajectories. In our specific case based on traffic data, it is observed that the flow values of candidates for certain observations have an interesting distribution. Figure 8 below shows one such example. \begin{table} \begin{tabular}{c c c} \hline \hline Model & MAE & MAPE \\ \hline Naive & 54.74 / 56.73 & 5.35 / 5.36 \\ AR (9) & 50.34 / 52.62 & 4.92 / 4.94 \\ ARIMA (9,0,3) & 49.34 / 51.83 & 4.82 / 4.83 \\ SARIMA (9,0,3), (3,0,2) & 42.09 / 46.08 & 4.18 / 4.39 \\ RF & 43.58 / 46.85 & 4.35 / 4.40 \\ Seasonal RF & 42.91 / **45.99** & 4.31 / **4.32** \\ Similarity (L = 14, K = 25) & 45.46 / 47.84 & 4.44 / 4.50 \\ \hline \end{tabular} \end{table} Table 2: Preliminary comparison of similarity approach with the best benchmark results Figure 7: Best \(K\) values the candidate distribution and addressing this characteristic may be helpful while handling the outliers. [17] uses winsorization for smoothening the extreme values, and they observe that this process improves their forecasting results. The details of winsorization are discussed in Section 4.3. Let us note here that this method is computationally appealing and computational efficiency should be kept as a priority as this procedure is an extra step. The definition of winsorization deals only with the largest and smallest values, but as seen in Figure 8, there can be many outliers that need to be taken into account. With this in mind, we experimented with various other approaches towards the outlier values. In particular, it is observed that using the z-score to capture outliers is slightly more successful than winsorization in our setting. Besides this, a straightforward approach which we call "Symmetric Tail Removal" is described in Section 4.3, again providing small improvements in results. Below is a comparison of outlier removal methods with respect to the weighted Euclidean distance, where optimal hyperparameters are used. Let us note that since some of the outliers are removed, the optimal value of \(K\) may be subject to change. More specifically, here \(L\) is fixed to be \(14\) and a grid search among various values of \(K\) is conducted. The results in Table 3 suggest that the use of simple outlier removal methods may improve the performance slightly. Although we tried several distinct values for \(K\), we have seen that the optimal value tends to be around \(25\). We observe that symmetric tail removal is the best method for our case. In the following discussions and the next sections we will only use this method. **Multi-step forecasting.** For obtaining multi-step ahead forecasts, a similar strategy to the one-step ahead forecast is used, but the candidates are chosen according to the number of steps of interest. Since similarity models are non-parametric and they rely on use of patterns, one may hope Figure 8: Exemplary distribution of candidate flow values. Red line shows the observed flow. 
\begin{table} \begin{tabular}{c c c c} \hline \hline Outlier Method & \(L\) & \(K\) & MAE \\ \hline None & 14 & 25 & 45.46 / 47.84 \\ Winsorization & 14 & 21 & 45.31 / 47.71 \\ Z-Score & 14 & 27 & 45.21 / 47.54 \\ **Sym-Tail-Remove** & 14 & 25 & 45.10 / **47.39** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of outlier methods that a successful similarity model perform well even when the step-size is huge. Table 4 demonstrates that the non-seasonal similarity based model performs well when the step size is 2. However, the performance rapidly falls as the step size increases. This could be an indication that the model under consideration misses important patterns from the data. On the other hand, the table contains the results from a "seasonal similarity" model, details of which to be discussed in Section 5.3.1, showing Seasonal Similarity model outperforms all of the benchmarks as the step size increases. ### Prediction Intervals Instead of giving a point forecast, providing a prediction interval may be desirable in many scenarios. To the best of our knowledge, the similarity of trajectories approach has not been explored in the literature in this aspect yet. The purpose of this section is to describe one particular strategy for such a construction. The idea is simple and can be described as follows: After obtaining the candidates, sort them by their value in ascending order and match each observation with the corresponding sample quantile. Afterwards, the required prediction interval is generated using the corresponding quantile forecasts. Note that we use the definition of the sample quantile given in Section 2.4. Before proceeding further, let us mention the benchmark methods that are to be used in the prediction interval framework. A possible natural approach here is to pass from point forecasts to prediction intervals by making use of the sample standard deviation of the errors. A second possible option is training the models with quantile loss function. There are of course several other directions for obtaining prediction intervals, but our experiments for benchmarking purposes will be based on seven variations around these. Among these seven, three of them use point forecast benchmarks to generate prediction intervals and the other four are Quantile Autoregression (QAR) [21], Seasonal Quantile Autoregression (SQAR), Quantile Random Forest (QRF) [26] and Seasonal Quantile Random Forest (SQRF) that use quantile loss function. Note that the prediction interval from ARIMA models are only obtained by using sample variance of the point forecast errors.2 Footnote 2: Training ARIMA (or moving average models) with quantile loss function does not have a standard definition and we could not find any discussion in the literature. Also, calculation of moving average part using the regression error from a quantile forecast does not seem to be practical. The optimal hyperparameters for quantile autoregressive models are given in Table 5. QRF and SQRF models are trained simultaneously with their point forecast version using _ranger_ R package [42]. The details regarding hyperparameters can be found on Page 12, and the R package follows [26] for obtaining quantile predictions. We measure performance of these models by using the unconditional coverage and Winkler score defined in Section 2.3. 
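For reference, the quantile-based interval construction just described, together with the unconditional coverage and Winkler score of Section 2.3, can be transcribed as follows; this is an illustrative Python sketch rather than the evaluation code used for the tables below.

```python
import numpy as np

def sample_quantile(sample, q):
    """q-th sample quantile with the (N+1) convention of Section 2.4;
    the index is clipped to the available range at the extremes."""
    a = np.sort(np.asarray(sample, dtype=float))
    N = len(a)
    h = q * (N + 1)
    lo = int(np.clip(np.floor(h), 1, N))
    hi = int(np.clip(np.ceil(h), 1, N))
    c2 = h - np.floor(h)
    return (1.0 - c2) * a[lo - 1] + c2 * a[hi - 1]

def interval_from_candidates(candidates, alpha=0.05):
    """Prediction interval obtained directly from the candidate targets
    of the nearest trajectories, as described above."""
    return (sample_quantile(candidates, alpha / 2),
            sample_quantile(candidates, 1 - alpha / 2))

def unconditional_coverage(lower, upper, actual):
    lower, upper, actual = map(np.asarray, (lower, upper, actual))
    return np.mean((actual >= lower) & (actual <= upper))

def winkler(lower, upper, actual, alpha=0.05):
    """Average Winkler score of the intervals [lower_t, upper_t] (Section 2.3)."""
    lower, upper, actual = map(np.asarray, (lower, upper, actual))
    s = (upper - lower).astype(float)
    below, above = actual < lower, actual > upper
    s[below] += (2 / alpha) * (lower[below] - actual[below])
    s[above] += (2 / alpha) * (actual[above] - upper[above])
    return s.mean()
```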
Table 5 shows that the use of quantile loss function provides significantly \begin{table} \begin{tabular}{c c c c} \hline \hline Model & MAE (Step-2) & MAE (Step-3) & MAE (Step-5) \\ \hline Naive & 78.160 / 79.889 & 98.890 / 101.991 & 143.850 / 146.936 \\ AR & 64.325 / 67.552 & 73.414 / 77.103 & 106.138 /111.294 \\ ARIMA & 58.702 / 59.905 & 61.027 / 63.243 & 88.823 / 92.032 \\ RF & 53.975 / 56.480 & 59.056 / 62.658 & 70.866 / 76.639 \\ SRF & 53.571 / 54.836 & 58.260 / 60.568 & 69.863 / 73.694 \\ Similarity & 53.743 / 54.852 & 59.125 / 60.764 & 80.308 / 83.789 \\ Seasonal Similarity & 49.812 / 51.072 & 54.297 / 55.516 & 61.596 / 62.197 \\ \hline \end{tabular} \end{table} Table 4: Multi-step ahead forecasts better results. In particular, Seasonal Random Forest trained with quantile loss function. Note that (SD) represents the models that are obtained with adding standard deviation to point forecast models and the numbers near QAR and SQAR represent the optimal autoregressive parameters. **Hyperparameter Tuning.** The optimal value of the window size \(L\) is expected to be similar to the point forecasting case. However, the value of the number of nearest neighbors \(K\) may require to be larger in order to capture the underlying distribution better. The initial grid search is done similar to the point forecasting case, with \(L\) between \(2\) and \(20\), and \(K\) between \(10\) and \(200\). Figure 9 demonstrates that the unconditional coverage stabilizes for \(K\geq 70\). Also the Winkler score does not appear to change significantly for \(K\geq 50\), for all values of \(L\). The best Winkler score is obtained for \(L=9\) and \(K=60\). \begin{table} \begin{tabular}{c c c} \hline \hline Model & Unconditional Coverage & Winkler Score \\ \hline Naive (SD) & 0.9665 / 0.9665 & 426.75 / 420.46 \\ AR (SD) & 0.9671 / 0.9633 & 391.38 / 389.16 \\ ARIMA (SD) & 0.9421 / 0.9393 & 378.12 / 373.05 \\ QAR (9) & 0.9655 / 0.9591 & 368.91 / 369.95 \\ SQAR (8)(6) & 0.9450 / 0.9386 & 337.50 / 366.81 \\ QRF & 0.9291 / 0.9548 & 341.08 / 321.89 \\ SQRF & 0.9349 / 0.9625 & 329.34 / 314.48 \\ \hline \end{tabular} \end{table} Table 5: Benchmark results for prediction interval forecasts Figure 9: Unconditional coverage and Winkler score Table 6 shows the comparison between the best similarity model and benchmark models. We see that quantile random forest models are better than the basic similarity approach of this section. A more detailed treatment will be given in Section 6. ## 4 Obtaining and Processing Candidates ### The design of the query set and reference sets Let \(L\in\mathbb{Z}^{+}\) be the window size and \(T\in\mathbb{Z}^{+}\) be the size of the traffic flow data \(\{x_{1},x_{2},\ldots,x_{T}\}\). In this section, by a _target_, we mean a _candidate_ in our previous setup. Now we design the time series data as trajectories and their targets as follows: \[x_{1}^{\text{traj}}=(x_{1},x_{2},\ldots,x_{L}),\ y_{1}^{\text{ targ}}=x_{L+1}\] \[x_{2}^{\text{traj}}=(x_{2},x_{3},\ldots,x_{L+1}),\ y_{2}^{\text{ targ}}=x_{L+2}\] \[\vdots\] \[x_{T-L}^{\text{traj}}=(x_{T-L},x_{T-L+1},\ldots,x_{T-1}),\ y_{T-L }^{\text{targ}}=x_{T}\] We define the query test set as \(Q_{\text{test}}=\{[x_{i}^{\text{traj}},y_{i}^{\text{targ}}]:s\leq i\leq T-L\}\) and the query tune set as \(Q_{\text{tune}}=\{[x_{i}^{\text{traj}},y_{i}^{\text{targ}}]:u\leq i\leq s-1\}\), where \(s\) and \(u\) are chosen in our case to be natural numbers satisfying \(T-L-s=s-1-u\) so that we have \(|Q_{\text{tune}}|=|Q_{\text{test}}|\). 
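A sketch of this bookkeeping is given below (illustrative Python; indices are 0-based, in contrast to the 1-based notation above, and the one-year truncation of the reference sets is represented by a generic `max_history` parameter).

```python
import numpy as np

def build_pairs(series, L):
    """All (trajectory, target) pairs: in the notation of the text,
    trajectory i is (x_i, ..., x_{i+L-1}) and its target is x_{i+L}."""
    series = np.asarray(series, dtype=float)
    trajs = np.array([series[i:i + L] for i in range(len(series) - L)])
    targets = series[L:]
    return trajs, targets

def split_queries(n_pairs, test_size):
    """Index sets for the query tune and test sets, chosen so that
    |Q_tune| = |Q_test| and the two cover disjoint, consecutive stretches
    of time, with the most recent data used for testing."""
    test_idx = np.arange(n_pairs - test_size, n_pairs)                   # Q_test
    tune_idx = np.arange(n_pairs - 2 * test_size, n_pairs - test_size)   # Q_tune
    return tune_idx, test_idx

def reference_indices(q, max_history=None):
    """Reference set of a query q: every pair strictly before it, possibly
    truncated to roughly one year of history as described in the text."""
    start = 0 if max_history is None else max(0, q - max_history)
    return np.arange(start, q)
```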
Note that the data in \(Q_{\text{tune}}\) and \(Q_{\text{test}}\) correspond to disjoint sets of time intervals. We train models through \(Q_{\text{tune}}\) by tuning hyperparameters/parameters and measure accuracies on the test set \(Q_{\text{test}}\). Predictions are based on the previously observed trajectories. We pick one trajectory (i.e. \(x_{\cdot}^{\text{traj}}\)) from \(Q_{\text{tune}}\) and define a reference set for it. We also pick one trajectory from \(Q_{\text{test}}\) and again define a reference set for it. For this purpose, we let \(R_{q}^{\text{tune}}=\{[x_{i}^{\text{traj}},y_{i}^{\text{targ}}]:1\leq i\leq q-1\}\) a reference set for each \(q\in\{u,u+1,\ldots,s-1\}\) and \(R_{q}^{\text{test}}=\{[x_{i}^{\text{traj}},y_{i}^{\text{targ}}]:w\leq i\leq q ^{{}^{\prime}}-1\}\) a reference set for each \(q^{{}^{\prime}}\in\{s,s+1,\ldots,T-L\}\), where we choose \(w\) satisfying \(s-1=T-L-w\). Choosing \(w\) a particular value subject to this condition guarantees the fact that we consider trajectories with dates going back to at most one year for both the trajectories in the query tune set and the query test set. In fact, putting \(w=1\) leads to no problem in practice. However, we preferred to see the results by considering trajectories through a history of equal lengths for both the query tune set and the query test set. Let \(q\) be in \(\{u,u+1,\ldots,s-1\}\) and \(q^{\prime}\) be in \(\{w,w+1,\ldots,T-L\}\) and let \(Z_{i}=[x_{i}^{\text{traj}},y_{i}^{\text{targ}}]\). The following figure demonstrates the construction of these sets. \begin{table} \begin{tabular}{c c c} \hline \hline Model & Unconditional Coverage & Winkler Score \\ \hline Naive (SD) & 0.9665 / 0.9665 & 426.75 / 420.46 \\ AR (SD) & 0.9671 / 0.9633 & 391.38 / 389.16 \\ ARIMA (SD) & 0.9421 / 0.9393 & 378.12 / 373.05 \\ QAR & 0.9655 / 0.9591 & 368.91 / 369.95 \\ SQAR & 0.9450 / 0.9386 & 337.50 / 366.81 \\ QRF & 0.9291 / 0.9548 & 341.08 / 321.89 \\ SQRF & 0.9349 / 0.9625 & 329.34 / 314.48 \\ Similarity & 0.9501 / 0.9451 & 333.85 / 329.50 \\ \hline \hline \end{tabular} \end{table} Table 6: Prediction interval results, similarity approach included. ### Obtaining the Nearest Trajectories Below we let \([x_{q}^{\text{traj}},y_{q}^{\text{targ}}]\in Q_{\text{tune}}\). In general, we are willing to forecast \(y_{q}^{\text{targ}}\) and the prediction provided by the similarity model is denoted by \(y_{q}^{\text{pred}}\). In this brief subsection we define the necessary notation in order to explain the selection of the nearest trajectories rigorously. Let \(K\in\mathbb{Z}^{+}\) be the number of closest trajectories drawn from \(R_{q}^{\text{tune,traj}}=\{x_{i}^{\text{traj}}:1\leq i\leq q-1\}\) with respect to some _distance_ function \(d\). We set \(D_{q}=\{d(x_{q}^{\text{traj}},x):x\in R_{q}^{\text{tune,traj}}\}\) and denote by \(\{d_{q,s}\}_{s=1}^{K}\) the \(K\) smallest numbers of \(D_{q}\) and \(\{x_{s}^{q,\text{near-traj}}\}_{s=1}^{K}\subset R_{q}^{\text{tune,traj}}\) the \(K\) trajectories which are closest to the trajectory \(x_{q}^{\text{traj}}\). So, \(d_{q,s}=d(x_{q}^{\text{traj}},x_{s}^{q,\text{near-traj}})\) for \(s=1,2,\ldots,K\). Here, without loss of generality we consider \(d_{q,s}\leq d_{q,s+1}\) for \(1\leq s\leq K-1\). 
Finally, that gives us a sub-collection of the reference set \(R_{q}^{\text{traj}}\) defined by \(C_{q}=\{[x_{s}^{q,\text{near-traj}},y_{s}^{q,\text{near-targ}}]:1\leq s\leq K\}\), where the \(y_{s}^{q,\text{near-targ}}\)'s are the targets of the \(x_{s}^{q,\text{near-traj}}\)'s, equipped with the distances \(\{d_{q,s}\}_{s=1}^{K}\) corresponding to \(x_{q}^{\text{traj}}\). ### Handling Outliers Once the nearest neighbors are selected, outlier detection and removal may, as usual, help in obtaining better forecasts. Smoothing the possible outliers via _winsorization_ is used in [17] in the setting of similar trajectories. Below we briefly discuss _tail removal_ along with winsorization, although we experimented with other approaches as well. **Winsorization:** We recall the definition of winsorization from [17]. Assuming the candidate values for our forecast are given by \(\{f_{1},f_{2},\ldots,f_{K}\}\) where \(f_{i}\leq f_{i+1}\) for each \(i\), the corresponding winsorized candidates are defined by \[f_{i}^{(w)}=\left\{\begin{array}{rl}f_{i+1},&\text{ if }f_{i}=\min(f)\\ f_{i-1},&\text{ if }f_{i}=\max(f)\\ f_{i},&f_{i}\notin\{\min(f),\max(f)\}\end{array}\right.\] Simply put, this procedure replaces the smallest and largest values of the candidates with the values adjacent to them. Another natural approach to outliers is the following. **Tail Removal:** The method removes the smallest \(r_{1}\) and the largest \(r_{2}\) candidates. The values of \(r_{1}\) and \(r_{2}\) can be set in the following ways. * (Constant) \(r_{1}=c_{1}\) and \(r_{2}=c_{2}\) for some properly chosen \(c_{1},c_{2}\). * (Percentile) \(r_{1}=\lfloor\gamma_{1}K\rfloor\), \(r_{2}=\lfloor\gamma_{2}K\rfloor\) for some positive \(\gamma_{1},\gamma_{2}\) such that \(\gamma_{1}+\gamma_{2}<1\). When \(\gamma_{1}=\gamma_{2}\), these will be called the symmetric constant and symmetric percentile approaches. ## 5 Methodologies for point forecasting Recall from Section 4.2 how we obtain \(C_{q}=\{[x_{s}^{q,\text{near-traj}},y_{s}^{q,\text{near-targ}}]:1\leq s\leq K\}\), which is the collection of nearest trajectories together with the distances \(\{d_{q,s}\}_{s=1}^{K}\) with respect to some distance function \(d\). We use these for point forecasting in various ways. A basic method for reaching \(y_{q}^{\text{pred}}\) is just a simple averaging: \[y_{q}^{\text{pred}}=\frac{1}{K}\sum_{s=1}^{K}y_{s}^{q,\text{near-targ}}.\] One perspective in our approaches below is to provide forecasts using weighted means of the targets \(y_{s}^{q,\text{near-targ}}\), where the weights are obtained in specific ways that are to be described in Section 5.1. Another one is making use of local regression, which is discussed in Section 5.2. Before proceeding further into these, let us compare the performance of using different distance functions and forecasting via just the arithmetic mean of the candidates.3 Footnote 3: In Table 7 (s) means seasonal filtering and \(\mathcal{R}\) represents the radius, both of which are to be described in Section 5.3.1. To compare different distance functions, first a grid search for the hyperparameters is conducted on the tune set. Even values between 2 and 20 were possible candidates for the hyperparameter \(L\), and the optimal \(K\) value is searched from the set \(\{10,15,20,\ldots,200\}\). For the seasonally filtered models, the \(\mathcal{R}\) hyperparameter ranges from 0 to 6. Table 7 shows that the model using the weighted Euclidean distance is clearly the best among those listed. 
Also, one may observe that applying seasonal filters improves the performance regardless of the distance function used. In the following sections, unless otherwise is stated, the distance function used for obtaining the candidates will be the weighted Euclidean function. ### Weights for closest trajectories * In general, one would expect that if a trajectory from reference set is closer than another one, then its target should be weighted more. Instead of taking the aritmetic mean of corresponding targets we may consider specific weights and take the weighted average: \[y_{q}^{\text{pred}}=\sum_{s=1}^{K}w_{s}y_{s}^{q,\text{near-targ}},\] \begin{table} \begin{tabular}{c c c c} \hline \hline Distance & best hyperp. & MAE & MAPE \\ \hline Canberra & \(L=8,K=25\) & 46.80 / 48.69 & 4.58 / 4.60 \\ Correlation & \(L=20,K=10\) & 146.02 / 130.58 & 13.89 / 11.82 \\ Euclidean & \(L=8,K=30\) & 46.40 / 48.40 & 4.54 / 4.57 \\ Head-Tail & \(L=8,K=75\) & 48.16 / 49.45 & 4.68 / 4.65 \\ Maximum & \(L=8,K=30\) & 47.01 / 49.08 & 4.59 / 4.62 \\ W. Euclidean & \(L=14,K=25\) & 45.46 / **47.84** & 4.44 / **4.50** \\ \hline Canberra (s) & \(L=6,K=25,\mathcal{R}=3\) & 45.66 / 46.45 & 4.64 / 4.40 \\ Correlation (s) & \(L=20,K=20,\mathcal{R}=2\) & 90.98 / 82.46 & 9.22 / 8.13 \\ Euclidean (s) & \(L=8,K=20,\mathcal{R}=3\) & 45.41 / 46.66 & 4.60 / 4.40 \\ Head-Tail (s) & \(L=8,K=20,\mathcal{R}=1\) & 46.97 / 47.33 & 4.80 / 4.49 \\ Maximum (s) & \(L=8,K=20,\mathcal{R}=5\) & 46.08 / 47.21 & 4.65 / 4.47 \\ W. Euclidean (s) & \(L=10,K=25,\mathcal{R}=3\) & 44.59 / **45.82** & 4.51 / **4.32** \\ \hline \hline \end{tabular} \end{table} Table 7: Accuracy comparison with respect to different distances. The bottom half of the table uses seasonal filtering. where \(w_{s}>0\) and \(\sum\limits_{s=1}^{K}w_{s}=1\). We define the weights via a non-decreasing function \(f:\{1,\ldots,K\}\rightarrow\mathbb{R}\) by setting \(w_{s}=\frac{f(K-s+1)}{S}\), where \(S=\sum_{x=1}^{K}f(x)\). For instance, if \(f=1\), then the prediction is just the arithmetic mean of the targets. Our experiments involved the following choices of the functions: \(f_{1}(x)=1,\ f_{2}(x)=x,\ f_{3}(x)=\sqrt{x},\ f_{4}(x)=\ln(1+x)\) and \(f_{5}(x)=\ln^{2}(1+x)\). * Instead of choosing weights uniformly, we may also customize them according to the distances: \[y_{q}^{\text{pred}}=\sum_{s=1}^{K}w_{q,s}y_{s}^{q,\text{near-targ}},\] where \(w_{q,s}=\frac{g(d_{q,s})}{\sum\limits_{s=1}^{K}g(d_{q,s})}\) with \(g\) an arbitrary positive decreasing function on \([0,\infty)\). This is based on the intuition that the closer a trajectory is the greater weight of its target should be. The functions experimented consisted of the following: \(g_{1}(x)=\frac{1}{x+0.01},\ g_{2}(x)=\frac{1}{\sqrt{x}+0.01},\ g_{3}(x)=\frac {1}{x\sqrt{x}+0.01}\) and \(g_{4}(x)=\frac{1}{x^{2}+0.01}\). * We may also determine the weights of \(y_{s}^{q,\text{near-targ}}\)'s using linear regression. The linear regression is used in the following form: \[y=\sum_{s=1}^{K}w_{s}y_{s}+w_{K+1}.\] We find the minimizers as weights \(\tilde{w}_{1},\ldots,\tilde{w}_{K},\tilde{w}_{K+1}\) and then consider: \[y_{q}^{\text{pred}}=\left[\sum_{s=1}^{K}\tilde{w}_{s}y_{s}^{q,\text{near-targ }}\right]+\tilde{w}_{K+1}\] The training is processed using the tune query set. Then, we measure the trained model via test query set. ### A local regression approach We may apply linear regression to a local data in contrast to the one discussed in the previous section. 
Given \(x_{q}^{\text{traj}}\), we consider \(C_{q}=\{[x_{s}^{q,\text{near-traj}},y_{s}^{q,\text{near-targ}}]:1\leq s\leq K\}\) and set up a linear regression model on \(C_{q}\). If we denote by \(x_{s}^{q,\text{near-traj}}(t)\) the \(t^{th}\) component of the vector \(x_{s}^{q,\text{near-traj}}\) for \(s\in\{1,2,\ldots,K\}\), and \(x_{q}^{\text{traj}}(t)\) the \(t^{th}\) component of the vector \(x_{q}^{\text{traj}}\), then we indeed minimize the following function \[L(w_{1},w_{2},\ldots,w_{L},w_{L+1})=\sum_{s=1}^{K}\bigg{|}\bigg{(}\sum_{t=1}^{ L}w_{t}x_{s}^{q,\text{near-traj}}(t)\bigg{)}+w_{L+1}-y_{s}^{q,\text{near-targ}} \bigg{|}^{2},\] and use the prediction: \[y_{q}^{\text{pred}}=\bigg{(}\sum_{t=1}^{L}\tilde{w}_{t}x_{q}^{\text{traj}}(t) \bigg{)}+\tilde{w}_{L+1},\] where \(\tilde{w}_{1},\tilde{w}_{2},\ldots,\tilde{w}_{L},\tilde{w}_{L+1}\) are the optimum weights. ### Variations #### 5.3.1 Filtering & Seasonality An immediate variation of the basic method can be given by first using a filter on the overall observed trajectories in the corresponding reference set, and then doing a search for the similar trajectories over the set of remaining candidate trajectories. One particular strategy that can be used is as follows. Start by fixing a trajectory size, say \(L_{1}\), and a distance function \(d_{1}\), and choose similar trajectories with respect to \(d_{1}\). Calling the collection of chosen trajectories \(R\), we may then select the nearest trajectories from \(R\) with respect to another distance function \(d_{2}\) (and with possibly a different window size \(L_{2}\)). An important related example would be the case where \(R\) holds a certain seasonal characteristics. In general, the set \(R\) may consist of the trajectories corresponding to previous days, or weeks. This is implicitly done in [17] where the candidates are only selected from the same time of the previous days. Extending their work, we define a seasonal filter with a radius hyperparameter, which we denote by \(\mathcal{R}\). When \(\mathcal{R}=0\), the set \(R\) below will only contain the observations from the same hour and minute of the day. However, as expected, this filter results with a set \(R\) with very few trajectories. This is especially undesirable for making interval forecasts. To increase the number of possible candidates, \(\mathcal{R}\) is introduced. This hyperparameter loosens the seasonality condition via appending \(R\) by adding the consecutive observations of the observations in \(R\). As an example, when \(\mathcal{R}\) is 0, for an observation made at 15:00, set \(R\) would only contain the observations made at 15:00 on previous days. However, setting \(\mathcal{R}\) to 1 would add the observations made at 14:45 and 15:15 to set \(R\). Our experiments show that such a modification improves the results significantly, especially in interval forecasting. #### 5.3.2 Hourly Model Forecasting The underlying patterns in different parts of the day may vary from each other and the optimal hyperparameters may change with respect to these underlying patterns. Thus, using multiple models to make forecasts in different parts of the data could improve overall performance. There are numerous ways to split the data. Time of the day is a simple and reasonable way to split in case of forecasting traffic flow. Our experiments reveal that such partitioning and using different models for each hour improves the performance significantly. 
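For reference, the two forecasting rules introduced above (the distance-based weighting of Section 5.1 and the local regression of Section 5.2) can be written compactly as follows; the sketch is illustrative Python, with \(g_{4}\) chosen only as an example weight function.

```python
import numpy as np

def distance_weighted_forecast(near_targets, near_dists,
                               g=lambda d: 1.0 / (d ** 2 + 0.01)):
    """Weighted mean of the nearest targets, with weights g(d_{q,s})
    normalised to sum to one (here g = g_4 from Section 5.1)."""
    w = np.array([g(d) for d in near_dists], dtype=float)
    w /= w.sum()
    return float(np.dot(w, near_targets))

def local_regression_forecast(query_traj, near_trajs, near_targets):
    """Least-squares fit of the target on the trajectory components over
    the K nearest neighbours (with intercept), evaluated at the query
    trajectory, as in the loss function of Section 5.2."""
    K, L = near_trajs.shape
    A = np.hstack([near_trajs, np.ones((K, 1))])          # intercept column
    coef, *_ = np.linalg.lstsq(A, near_targets, rcond=None)
    return float(np.dot(np.append(query_traj, 1.0), coef))
```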
In particular, in our experiments below, for each hour of the day, hyperparameters of the best performing similarity models are selected on tune set. Let us provide a brief discussion on the details of experimentation. We used 10-fold cross validation to choose the optimal hyperparameters. For simplicity we fixed the distance to be the weighted Euclidean, used seasonal filtering, and searched for optimal \(L\), \(K\) and \(\mathcal{R}\) values from 2 to 20, 10 to 200, and 0 to 6, respectively. The grid can be extended to contain other distances. To give a motivation for using multi-hourly models, we provide an hour by hour comparison of some similarity models with different hyperparameters. Figure 10 displays an example of performance differences between two different similarity models in various hours of the day. Overall, the model with optimal hyperparameters (shown by red line) performs better. However, in some hours of the day the other model exhibits significantly better performance. ### Results Table 9 below shows the results of implementations of Model-1, Model-2, Model-3, the Local regression, and their seasonal versions, and lastly the Multi-hourly model, by running a grid search through hyperparameters \(L\) and \(K\) shown in Table 8 below. The Euclidean (E) and the weighted Euclidean (WE) distances were used for Model-1, Model-2, Model-3, the Local Regression and their seasonal versions Model-1-S, Model-2-S, Model-3-S and Local R.-S. The weighted Euclidean is the only one used for the multi-hourly model.4 Footnote 4: In the following tables, \(\mathcal{R}=0\) except the multi-hourly model. Figure 10: Hourly performance comparison of different similarity models (Point Forecast). It is seen from Table 9 that the local regression (with similarity) and the multi-hourly model performs better than both the benchmark and other similarity based methods. In particular, let us note that the similarity approach of Model-1-S was used in [17]. \begin{table} \begin{tabular}{c c} \hline \hline Model & Hyperparameter range \\ \hline Model-1 & \(L=2,3,\ldots,20\); \(K=10,20,\ldots,100\); \(f_{1},f_{2},f_{3},f_{4},f_{5}\); E, WE \\ Model-2 & \(L=2,3,\ldots,20\); \(K=10,20,\ldots,100\); \(g_{1},g_{2},g_{3},g_{4}\); E, WE \\ Model-3 & \(L=2,3,\ldots,19,20\); \(K=10,20,\ldots,800\); E, WE \\ Local R. & \(L=2,3,\ldots,20\); \(K=400,410,\ldots,800\); E, WE \\ Model-1-S & \(L=2,3,\ldots,20\); \(K=10,20,\ldots,100\); \(f_{1},f_{2},f_{3},f_{4},f_{5}\); E, WE \\ Model-2-S & \(L=2,3,\ldots,20\); \(K=10,20,\ldots,100\); \(g_{1},g_{2},g_{3},g_{4}\); E, WE \\ Model-3-S & \(L=2,3,\ldots,20\); \(K=10,20,\ldots,360\); E, WE \\ Local R.-S & \(L=2,3,\ldots,20\); \(K=100,110,\ldots,360\); E, WE \\ Multi-hourly model & \(L=2,3,\ldots,20\), \(K=10,20,\ldots,200\), \(\mathcal{R}=0,1,\ldots,6\); WE \\ \hline \end{tabular} \end{table} Table 8: Hyperparameters used for grid search \begin{table} \begin{tabular}{c c c c} \hline \hline Model & best hyperp. & MAE & MAPE \\ \hline Naive & - & 54.74 / 56.73 & 5.35 / 5.36 \\ AR & \(p=9\) & 50.34 / 52.62 & 4.92 / 4.94 \\ ARIMA & \(p=9,d=0,q=3\) & 49.34 / 51.83 & 4.82 / 4.83 \\ SARIMA & \(p=9,d=0,q=3,P=3,D=0,Q=2\) & 42.09 / 46.08 & 4.18 / 4.39 \\ RF & See page 12 & 43.58 / 46.85 & 4.35 / 4.40 \\ Seasonal RF & See page 12 & 42.91 / 45.99 & 4.31 / 4.32 \\ Model-1 & \(f_{2},L=14,K=40\), WE & 45.17 / 47.69 & 4.47 / 4.59 \\ Model-2 & \(g_{4},L=13,K=40\), WE & 45.08 / 47.59 & 4.39 / 4.47 \\ Model-3 & \(L=15,K=780\), WE & 43.82 / 49.88 & 4.33 / 4.70 \\ Local R. 
& \(L=19,K=670\), E & 44.19 / 46.74 & 4.33 / 4.39 \\ Model-1-S & \(f_{2},L=4,K=30\), WE & 43.23 / 44.39 & 4.43 / 4.23 \\ Model-2-S & \(g_{3},L=4,K=40\), WE & 43.21 / 44.36 & 4.43 / 4.23 \\ Model-3-S & \(L=4,K=350\), WE & 43.01 / 45.39 & 4.40 / 4.32 \\ Local R.-S & \(L=5,K=260\), E & 41.13 / **43.16** & 4.08 / **4.09** \\ Multi-hourly model & Multiple Models & 42.88 / **43.28** & 4.35 / **4.09** \\ \hline \end{tabular} \end{table} Table 9: Best results of the models for certain choices of hyperparameters. One may also like to see whether this conclusion can be shown to hold via a statistical test. When we used the Diebold-Mariano test to compare the benchmarks, we observed that the seasonal random forest model is significantly the best benchmark model. Comparing Model-1-S with the benchmarks revealed that Model-1-S is significantly better than the naive, ARIMA and random forest models. However, we observed no superiority when comparing Model-1-S with SARIMA and the seasonal random forest. The multi-hourly model significantly outperforms the benchmarks and Model-1-S. Finally, the seasonal local regression model exhibited superior performance against all of the other models shown in Table 9. Figure 11: Diebold-Mariano test results. Dark green means column-model is better than row-model. Black means no significant superiority can be deduced. ## 6 Prediction intervals based on similarity The purpose of this section is to discuss our approaches to obtaining prediction intervals using similarity of trajectories. The section begins with a general discussion of existing methods in the literature, and then introduces two approaches which we call ST (Similarity Trajectory) and MDST (Model Dependent Similarity Trajectory). ### Background Let \(X_{t}\), \(t\geq 1\), be the time series of interest. One simple and standard strategy for obtaining a prediction interval is the historical simulation approach. In this case, there is some underlying model (e.g., ARIMA, LSTM,...) that provides point forecasts for the next time instance. Let us write \(F_{t}\) for the point forecasts provided by this model. Now suppose that we are at some time \(T\in\mathbb{N}\), and that we would like to obtain a prediction interval for time \(T+1\) based on observations corresponding to times \(1,\ldots,T\).5 For some predetermined \(L\in\mathbb{N}\), the historical simulation approach begins by considering the actual values \(X_{t}\), and the model forecasts \(F_{t}\), for \(t\in\{T-L+1,\ldots,T\}\), and looks at the corresponding errors \(\epsilon_{t}=F_{t}-X_{t}\). Then, for given \(\alpha\in(0,1)\), in order to obtain a \((1-\alpha)100\%\) prediction interval for \(X_{T+1}\), we choose the sample quantiles \(\epsilon(\alpha/2)\), \(\epsilon(1-\alpha/2)\) of the sequence \(\epsilon_{T-L+1},\ldots,\epsilon_{T}\). Then the \((1-\alpha)100\%\) prediction interval is given by \[(F_{T+1}+\epsilon(\alpha/2),F_{T+1}+\epsilon(1-\alpha/2)).\] We may summarize the discussion on historical simulation prediction intervals, which we call HS, as follows: 1. Consider the errors \(\epsilon\) made once a specific model is used for point forecasts. 2. Fix some window size \(L\), and consider the most recent \(L\) errors. Choose the corresponding \(\alpha/2\) and \(1-\alpha/2\) sample quantiles. 3. Construct the prediction interval by adding these quantiles to the point forecasts provided by the model. The choice of \(L\) in this standard setup can be done by using grid search. 
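A minimal sketch of the HS construction reads as follows (illustrative Python; `past_forecasts` stands for the \(F_{t}\) produced by whatever underlying point-forecast model is used, and numpy's quantile is used in place of the sample quantile of Section 2.4 for brevity).

```python
import numpy as np

def hs_interval(past_actuals, past_forecasts, next_forecast, L=60, alpha=0.05):
    """Historical-simulation (HS) interval for the next time step: take the
    last L errors e_t = F_t - X_t of the underlying point-forecast model and
    shift its next forecast by their alpha/2 and 1 - alpha/2 quantiles."""
    errors = (np.asarray(past_forecasts[-L:], dtype=float)
              - np.asarray(past_actuals[-L:], dtype=float))
    lo, hi = np.quantile(errors, [alpha / 2, 1 - alpha / 2])
    return next_forecast + lo, next_forecast + hi
```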
Also let us note that a more reasonable way to choose the errors to compute the quantiles could be the use of seasonality. For example, instead of looking at the previous \(L\) errors, one could go back further, and look at the \(L\) errors corresponding to that instance of previous \(L\) days. Such an approach will be called the seasonal historical simulation (HS-S). Below, we will be using the HS method as a benchmark method in our comparisons. However, there are various other techniques for obtaining prediction intervals that are not discussed below. These include but are not restricted to the distributional prediction interval approach, bootstrap method, and the quantile regression averaging (QRA). Among these, QRA, in which the regressors are the forecasts of certain individual models, is of particular interest to us. In an independent work [1] in preparation, we combine the basic ideas behind QRA and the similar trajectories, and introduce some novel methodologies for providing prediction intervals. ### Similarity based prediction intervals In this subsection we propose two methods for obtaining prediction intervals. The following flowchart provides a general guideline for the tools that are used in these two approaches: ST and MDST. The first method we propose is analogous to the case of similarity based point forecasting, and it will be called the similarity of trajectories (ST) approach. It consists of the following steps: 1. Fix a distance function, a window size, and choose \(K\) most similar trajectories in the past. 2. Consider the set of possible candidates \(\{f_{1},\ldots,f_{K}\}\), corresponding to these \(K\) similar trajectories, as the actual forecast. 3. Compute the \(f(\alpha/2)\), \(f(1-\alpha/2)\) sample quantiles and form the prediction interval as \[(f(\alpha/2),f(1-\alpha/2)).\] Note that the described method is different than the historical simulation approach in the sense that it is independent of any model assumptions. No probabilistic assumptions are made, and the given methodology is totally deterministic in nature. When the trajectories are selected among a filtered data according to seasonality (Section 5.3.1), the method will be called ST-S, and when the forecasting is done on an hourly basis as in Section 5.3.2, it will be called multi-hourly model. Beyond the ST approach, one may wonder how we may combine the HS and ST methods for obtaining prediction intervals. Towards this direction, we introduce and experiment on the following strategy, which we call the model dependent similarity trajectory (MDST): 1. Consider the error sequence \(\epsilon\) discussed in HS approach. 2. Fix a distance function, a window size, and choose \(K\) most similar trajectories in the past, with respect to the corresponding part of the error sequence. 3. Consider the set of possible candidates \(\{\epsilon_{1},\ldots,\epsilon_{K}\}\) for the errors. 4. Form the prediction interval by adding the \(\alpha/2\) and \(1-\alpha/2\) sample quantiles of the candidate errors to the point forecast provided by the model. When daily seasonal seasonality is involved, the method will be called MDST-S. This seasonal variation includes the trajectories only starting from the same time of the day as possible candidates, as described in Section 5.3.1. This approach improves the performance significantly as can be seen in the Table 10. 
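The two constructions can be sketched side by side as follows (illustrative Python; the nearest-neighbour search is the one described in Section 4.2, abbreviated here, and `dist` is any of the distance functions of Section 2.2).

```python
import numpy as np

def knn_indices(trajs, query, K, dist):
    d = np.array([dist(t, query) for t in trajs])
    return np.argsort(d)[:K]

def st_interval(trajs, targets, query_traj, K, dist, alpha=0.05):
    """ST: quantiles of the candidate targets of the K nearest trajectories."""
    idx = knn_indices(trajs, query_traj, K, dist)
    lo, hi = np.quantile(targets[idx], [alpha / 2, 1 - alpha / 2])
    return lo, hi

def mdst_interval(error_trajs, error_targets, recent_error_traj,
                  next_point_forecast, K, dist, alpha=0.05):
    """MDST: run the same similarity search on trajectories of the *errors*
    of an underlying point-forecast model, and shift that model's next
    forecast by the quantiles of the candidate errors."""
    idx = knn_indices(error_trajs, recent_error_traj, K, dist)
    lo, hi = np.quantile(error_targets[idx], [alpha / 2, 1 - alpha / 2])
    return next_point_forecast + lo, next_point_forecast + hi
```

The seasonal variants ST-S and MDST-S amount to restricting `trajs` and `error_trajs` to the seasonally filtered reference set of Section 5.3.1 before the search.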
Figure 12: Summary of ST and MDST ### Results The purpose of this section is to experiment on the performance of using similarity of trajectories in obtaining interval forecasts. Table 10 compares the benchmark and similarity based models using unconditional coverage and the Winkler score. The models are trained to make interval forecasts with 95% coverage. While unconditional coverage shows the accuracy of the models in terms of covering 95% of the observations, the Winkler score is helpful for comparing these models with each other. The results from Table 10 suggests that the best approach among listed is the multi-hourly model which is the ST-S method in which forecast models are obtained in an hourly basis. None of the other similarity models were better than seasonal random forest but we have seen that the MDST-S and seasonal random forest model exhibit competitive performance with each other. \begin{table} \begin{tabular}{c c c c} \hline \hline Model & Best Hyperp. & Unconditional Coverage & Winkler Score \\ \hline Naive & — & 0.9665 / 0.9665 & 426.75 / 420.46 \\ QAR & \(p=9\) & 0.9655 / 0.9591 & 368.91 / 369.95 \\ SQAR & \(p=8,P=6\) & 0.9450 / 0.9386 & 337.50 / 366.81 \\ QRF & See page 16 & 0.9443 / 0.9548 & 315.75 / 321.89 \\ SQRF & See page 16 & 0.9433 / 0.9625 & 304.95 / 314.48 \\ HS & \(L=60\) & 0.9145 / 0.9168 & 388.46 / 374.24 \\ HS-S & \(L=60\) & 0.9165 / 0.9208 & 335.26 / 327.11 \\ ST & \(L=9,K=60\) & 0.9427 / 0.9358 & 333.85 / 329.50 \\ ST-S & \(L=4,K=150,\mathcal{R}=5\) & 0.9499 / 0.9457 & 349.43 / 317.32 \\ Multi-hourly Model & Multiple Models & 0.9498 / 0.9424 & 335.99 / 298.44 \\ MDST & \(L=8,K=220\) & 0.9301 / 0.9308 & 340.35 / 326.20 \\ MDST-S & \(L=8,K=220,\mathcal{R}=6\) & 0.9410 / 0.9406 & 320.53 / 314.56 \\ \hline \end{tabular} \end{table} Table 10: Prediction interval performance comparison Figure 13: Diebold Mariano Test - Hour based comparison for prediction intervals. Also, one may also like to see whether these conclusions can be shown to hold via a statistical test. Following [29], we used Diebold-Mariano test for comparing the performance of interval forecasts via their Winkler score. First, we conducted the test on the whole data (as in Section 5.4). The test resulted in multi-hourly model significantly outperforming all other models and among the benchmarks, seasonal random forest was the best model. Also, MDST-S turns out to outperform all models except multi-hourly model and seasonal random forest, and no conclusions could be drawn between MDST-S and seasonal random forest. Secondly, using the hour based Diebold-Mariano test approach, again similar to [29], we have seen that multi-hourly model performs better than all other models in most hours of the day. As demonstrated in Figure 13, following the multi-hourly model, MDST-S, ST-S and seasonal random forest were the successful ones. Lastly, let us note that multi-hourly model was superior against seasonal random forest in only 13 hours of the day, and it was significantly worse in 1 hour of the day. So the test resulted inconclusive for 10 hours. The competition between the two models in these hours of the day motivates the search for possible conditions where similarity models work well and the ones open to improvement. ## 7 Conclusion The literature on time series forecasting, and in particular on traffic flow forecasting, is very rich, with various approaches and methodologies. 
Our motivation in this work was to have a general look at the use of similarity of trajectories, and to discuss its possible variations. The experiments consisted of obtaining prediction intervals together with point forecasts. Our conclusions above can be summarized as follows: * The methods used in this manuscript seem to yield competitive results against more sophisticated models such as random forests. The similarity of trajectories approach has great flexibility and provides the chance to explain the forecast by a direct reference to historic behavior. * Using local regression and including seasonality improve the point forecast results significantly. However, each additional step comes with new hyperparameters and computational issues, and these can be analyzed in an upcoming work. * Multi-step forecasts using seasonal similarity seem to perform very well, and this can be further analyzed in a separate experimental study. * Obtaining prediction intervals based on similar trajectories in the same manner seems to be a natural choice, and it yielded results comparable to the ones we used as benchmarks, but we believe that the performance of the proposed strategies is open to improvement. * Furthermore, in point forecasting, we may customize local regression by replacing linear regression with other machine learning algorithms (e.g., gradient boosting, support vector machines) for the locally obtained data. In conclusion, the methods we surveyed are flexible and open to further improvements. Our plan in subsequent work is to focus on certain aspects of the problem and develop the current strategies from different perspectives via the use of techniques from the statistics and machine learning literature. **Acknowledgements** The authors would like to thank Elif Yilmaz for helpful discussions. The second and third authors are partially supported by BAP grant 20B06P.
2302.14707
Two qubits in one transmon -- QEC without ancilla hardware
We show that it is theoretically possible to use higher energy levels for storing and controlling two qubits within a superconducting transmon. This is done by identifying energy levels as product states between multiple effective qubits. As a proof of concept we realise a complete set of gates necessary for universal computing by numerically optimising control pulses for single qubit gates on each of the qubits, entangling gates between the two qubits in one transmon, and an entangling gate between two qubits from two coupled transmons. The optimisation considers parameters which could make it possible to validate this experimentally. With these control pulses it is in principle possible to double the number of available qubits without any overhead in hardware. The additional qubits could be used in algorithms which need many short-lived qubits, such as syndrome qubits in error correction, or to embed effective higher connectivity in qubit networks.
Alexander Simm, Shai Machnes, Frank K. Wilhelm
2023-02-28T16:18:00Z
http://arxiv.org/abs/2302.14707v2
# Two qubits in one transmon - QEC without ancilla hardware ###### Abstract We show that it is theoretically possible to use higher energy levels for storing and controlling two qubits within a superconducting transmon. This is done by identifying energy levels as product states between multiple effecitve qubits. As a proof of concept we realise a complete set of gates necessary for universal computing by numerically optimising control pulses for single qubit gates on each of the qubits, entangling gates between the two qubits in one transmon, and an entangling gate between two qubits from two coupled transmons. The optimisation considers parameters which could make it possible to validate this experimentally. With these control pulses it is in principle possible to double the number of available qubits without any overhead in hardware. The additional qubits could be used in algorithms which need many short-living qubits such as syndrom qubits in error correction or by embedding effecitve higher connectivity in qubit networks. ## I Introduction We are currently in the NISQ era of quantum computing in which different platforms are under active development and processors with over 50 qubits are being realised [1; 2]. Further progress needs both an increase in the numbers of accessible qubits an an increase in fidelity of operations on these qubits. While the fidelity of the qubits under gate operations, especially two-qubit entangling gates is the primary bottleneck, increasing the number of qubits also presents challenges due to overhead in control and readout hardware. A usual approach for realising qubits is using a quantum subsystem with several energy levels and restricting the computational subspace to two of them. All other states are discarded and transitions to them are considered leakage that needs to be avoided. In a superconducting device with Josephson junctions the nonlinear potential allows to address individual transitions between states which effectively separates the ground and first excited state from higher excited states. In addition to cooling and decoupling the system from the environment, control pulses are engineered to prevent leakage out of this computational subspace. In contrast to discarding all except two states of the system, there are several ideas how to exploit them. First, higher excited states can be used as transient states for gate operations (for example bSWAP gates [3] or multi-qubit gates [4]). In this case the computational subspace stays limited to two states. Another option is to include higher excited states into the computational subspace by redefining what the fundamental computational unit is: the qubit is replaced by an N-level qudit[5]. Qudits have been shown to be controllable [6] and gates have been realised, including an entangling gate between two qudits up to dimension 5 [7]. A disadvantage of this approach is that known algorithms might not be easily adaptable for qudits. Another application is to use nonlinear resonators as continuous variables, i.e., fundamentally change the encoding scheme [8; 9; 10; 11]. We propose a scheme to stay with the concept of qubits but to use more than two energy levels for storing more than one qubit in a superconducting device. This is simply done by relabeling the energy levels as product states, i.e., interpreting a subspace of dimension \(2^{n}\) and usually composed as a direct sum as a direct product of Hilbert spaces. 
As a proof of concept we use four levels in a model for superconducting transmons (ground state and three excited states) to store two qubits. This concept makes sense, if the resulting effective qubit lattice has the appropriate connectivity, i.e., if we can make gates between different qubit realised in distinct transmons. We show that it is possible to realise a complete set of gates necessary for universal computing by numerically optimising control pulses for single qubit gates on each of the qubits, entangling gates between the two qubits within one transmon, and an entangling gate between two qubits from two coupled transmons. With these control pulses it is in principle possible to double the number of available qubits without any overhead in hardware, although the signal generation is more complicated than in usual two-level systems, as will be explained below. The additional qubits could be used in algorithms which need many qubits with large local connectivity, such as qubits in error correction. This is not as extreme as molecular quantum computing as proposed in Ref. [12] which aims at putting many qubits into a single degree of freedom. This paper complements the work [13] as it is giving explicit pulse constructions. The paper is organized as follows: In section II we review the model of the transmon and the numerical methods we used for finding optimal pulse shapes. Section III and IV explain the results for single- and two-qubit gates within a transmon. Entangling gates between two and four qubits in two coupled transmons as well as the method we used to find those gates are explained in section V. All parameter values and plots of the propagators can be found in appendix A. Model ### Transmon and drive Hamiltonian We consider a superconducting device containing a single Josephson junction that is described by the Hamiltonian [14] \[H_{0}=4E_{c}n^{2}-E_{J}\cos(\phi) \tag{1}\] where \(n\) is the number of Cooper pairs on the effective capacitance and \(E_{c}\) is the charge energy required to add an electron there. The Josephson junction contributes the cosine potential with energy \(E_{J}\) and the flux \(\phi\). In the transmon limit, \(E_{J}\gg E_{c}\), the cosine can be expanded around its minimum at \(\phi=0\). Up to the quartic term this leads to an anharmonic oscillator \[H_{0}=\omega a^{\dagger}a+\frac{\lambda}{2}a^{\dagger}a^{\dagger}aa \tag{2}\] where \(a,a^{\dagger}\) are the bosonic operators for the Cooper pairs, \(\omega=\sqrt{8E_{c}E_{J}}-E_{c}\) is the resonance frequency between ground and first excited state, and \(\lambda=-E_{c}=E_{2}-2E_{1}<0\) denotes the anharmonicity. In this notation the approximation is valid in the limit \(\lambda\ll\omega\), which is justified for transmons that operate in the range of \(\omega\sim 3\ldots 6\) GHz and \(|\lambda|\sim 100\ldots 300\) MHz. In the following, this anharmonic oscillator will simply be called the transmon [15]. In addition to the drift \(H_{0}\), the full system \(H(t)=H_{0}+H_{d}(t)\) contains the time-dependent drive \[H_{d}=Av(t)(a+a^{\dagger}) \tag{3}\] which models the microwaves used to control superconducting qubits. It consists of a constant amplitude scale factor \(A\) and a dimensionless time-dependent function \[v(t)=s(t)\cos(\omega_{d}t+\phi_{d})=s(t)\left(I\cos(\omega_{d}t)-Q\sin(\omega_ {d}t)\right) \tag{4}\] with constant inphase component \(I=\cos(\phi_{d})\) and quadrature component \(Q=\sin(\phi_{d})\). 
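For concreteness, the drift and drive terms in eqs. (2)-(4) can be written down numerically as in the short sketch below. This is an illustration only, not the code used for the results: the five-level truncation, the specific parameter values (chosen within the ranges quoted above), and the convention \(\hbar=1\) are assumptions of the sketch, with the anharmonicity entered with the negative sign of eq. (2).

```python
import numpy as np

# Minimal sketch of the transmon drift and drive Hamiltonians, eqs. (2)-(4).
# Units: hbar = 1, angular frequencies in rad/s. Parameter values are
# illustrative (within the ranges quoted in the text), not the paper's.
DIM = 5                                 # 4 computational levels + 1 leakage level
omega = 2 * np.pi * 5.0e9               # |0>-|1> transition frequency
lam = -2 * np.pi * 300e6                # anharmonicity lambda = -E_c < 0

a = np.diag(np.sqrt(np.arange(1, DIM)), k=1)   # bosonic lowering operator
ad = a.T                                        # a is real, so a^dagger = a^T

H0 = omega * ad @ a + 0.5 * lam * ad @ ad @ a @ a      # drift, eq. (2)

def v(t, s, omega_d, phi_d):
    """Dimensionless drive signal v(t) = s(t) cos(omega_d t + phi_d), eq. (4)."""
    return s(t) * np.cos(omega_d * t + phi_d)

def H_drive(t, A, s, omega_d, phi_d):
    """Drive Hamiltonian H_d(t) = A v(t) (a + a^dagger), eq. (3)."""
    return A * v(t, s, omega_d, phi_d) * (a + ad)
```

Passing a Gaussian envelope for `s` and a carrier resonant with a chosen transition reproduces, qualitatively, the kind of pulse that is optimised later in the paper.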
The system is thus driven by a carrier signal at a frequency \(\omega_{d}\) with additional phase shift \(\phi_{d}\). The signal is shaped by an envelope \(s(t)\) which will be the main object of optimisation. Typical intuitive choices for the envelope are Gaussians or piecewise constant functions. On occasion, the DRAG scheme is used to eliminate unwanted transitions out of the computational subspace or between qubit subspaces [16]. It transforms the envelope function into \(s(t)\to s(t)-i\delta\frac{\delta(t)}{\lambda}\) where \(\delta\ll 1\) is a free parameter that scales the correction term. In order to store two qubits we use the four lowest energy levels (Fig. 1) identified as the product states of two qubits. We will denote the qubits as \(Q_{1}\) (\(Q_{2}\)), referring to the left (right) qubit in the ket \(|Q_{1}Q_{2}\rangle\). States in this relabeled 2-qubit basis will be written as normal kets, while states in the transmon's eigenbasis will be written with a bar. The mapping is therefore \(|\bar{0}\rangle=|00\rangle\), \(|\bar{1}\rangle=|01\rangle\), \(|\bar{2}\rangle=|10\rangle\), and \(|\bar{3}\rangle=|11\rangle\). Higher excited states \(|\bar{n}\rangle\), \(n>3\), are considered leakage and will only be denoted by the eigenstate ket. In the simulations one additional fifth level \(|\bar{4}\rangle\) was included to simulate leakage out of the computational subspace. The energy eigenstates \(|\bar{n}\rangle\) have eigenvalues \(E_{n}=n\omega-\frac{n(n-1)}{2}\lambda\). The Bohr frequency between two levels will be denotes as \(\omega_{m\to n}=|E_{n}-E_{m}|\). This labelling is compatible with the usual computational subspace in the lowest two levels. As long as the additional qubit is not needed, algorithms can fall back to using two levels, effectively assuming that \(Q_{1}\) is in state \(|0\rangle\). Projected onto the computational subspace, the system can be written in the basis \(\{\mathbb{1},\sigma_{x},\sigma_{y},\sigma_{z}\}^{\otimes 2}\) of \(SU(4)\), i.e. in tensor products of Pauli matrices and the unit matrix. The Hamiltonian, eq. (2), then becomes \[\begin{split} H_{0}&=\left(\frac{3}{2}\omega+ \lambda\right)\mathbb{1}\otimes\mathbb{1}\ -\ \frac{1}{2}(\omega+\lambda)\mathbb{1}\otimes Z\\ &-\ (\omega+\lambda)Z\otimes\mathbb{1}\ +\ \frac{\lambda}{2}Z\otimes Z \end{split} \tag{5}\] while the drive, eq. (3), is \[\begin{split} H_{d}&=Av(t)\Big{[}\frac{1}{2}(1+ \sqrt{3})\mathbb{1}\otimes X\ +\ \frac{1}{2}(1-\sqrt{3})Z\otimes X\\ &+\ \frac{1}{\sqrt{2}}(X\otimes X+Y\otimes Y)\Big{]}\end{split} \tag{6}\] This structure does not lend itself to an easy and straightforward adaptation of simple and intuitive control strategies. We thus rather resort to optimal control [17; 18]. ### Numerical optimisation In a finite system, calculating the propagator \(U(T)=\mathbb{T}\exp\left(-i\int_{0}^{T}H(t)dt\right)\) for a given drive Hamiltonian \(H_{d}\) and a gate time \(T\) is usually only feasible numerically. Figure 1: Labeling of the energy levels in the computational subspace (modified from [14]) Assuming the Hamiltonian is (or can be approximated as being) piece-wise constant, this is a matter of matrix exponentiation and multiplication along the time ordering described by \(\mathbb{T}\). We are attempting to solve the inverse problem of finding a drive Hamiltonian \(H_{d}\) that generates a given gate. 
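As a check of the forward problem just described, the time-ordered propagator for a piecewise-constant Hamiltonian can be assembled from matrix exponentials. The sketch below assumes a user-supplied callable `h_of_t` returning the full matrix \(H(t)=H_{0}+H_{d}(t)\), with a default step size matching the 0.02 ns resolution quoted later in the text; it is not the implementation used in the paper.

```python
import numpy as np
from scipy.linalg import expm

def propagator(h_of_t, T, dt=0.02e-9):
    """Time-ordered propagator for a piecewise-constant H(t) with hbar = 1:
    U(T) = exp(-i H(t_N) dt) ... exp(-i H(t_1) dt)."""
    U = np.eye(h_of_t(0.0).shape[0], dtype=complex)
    for t in np.arange(0.0, T, dt):
        U = expm(-1j * h_of_t(t) * dt) @ U   # newest time slice multiplies from the left
    return U

# Relabelling of the lowest four eigenstates as two-qubit product states:
# |0bar> = |00>, |1bar> = |01>, |2bar> = |10>, |3bar> = |11>.
def level_to_qubits(n):
    assert 0 <= n <= 3, "levels above |3bar> are leakage, not computational states"
    return divmod(n, 2)          # returns the bit pair (Q1, Q2)

assert [level_to_qubits(n) for n in range(4)] == [(0, 0), (0, 1), (1, 0), (1, 1)]
```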
This is done with numerical optimisation: start with an initial guess for \(H_{d}\) that contains enough parameters, calculate \(U\), and optimise the parameters with respect to a goal function \(g(U,G)\) that calculates the distance between \(U\) and the ideal propagator (i.e. the desired gate) \(G\). If not otherwise mentioned, the goal function \(g=1-F\) was used with the fidelity \(F=\frac{1}{4}tr(U^{\dagger}G)\) where \(d\) is the system's dimension. The fidelity is only evaluated in the computational subspace with \(d=4\), which in particular means that relative phases in higher excited states are being ignored. If the optimisation reaches a global (or local, but good enough) minimum of \(g\), the drive Hamiltonian realises the ideal gate \(G\) up to a small error. For all following results the numerical optimisation was done using the \(C^{3}\) software [19] with gradient-based optimisation (L-BFGS). All simulations are done assuming a closed system at zero temperature without noise. In addition to the optimisation algorithm, the \(C^{3}\) software simulates the chain of devices which would create the drive function \(s(t)\) in a lab. The chain of devices includes a local oscillator (LO) that generates the carrier signal \(\cos(\omega_{d}t+\phi_{d})\). In the following, \(\omega_{d}\) will be chosen to be resonant to a desired transition between two energy eigenstates. Additionally, a waveform generator (AWG) generates the envelope \(s(t)\), which is mixed with the carrier signal into the drive function (4). The set of optimisable parameters thus contains the scaling amplitude of the drive \(A\), the gate time \(T\), the drive frequency \(\omega_{d}\) and a phase shift \(\phi_{d}\), the strength of the DRAG correction \(\delta\), and all parameters that specify the envelope \(s(t)\). In order to compare the optimised drive frequencies with the resonances of the model we numerically calculated the Stark shifted eigenvalues of the full Hamiltonian (12). Since the drive amplitude is not constant, the Stark shift changes over time. We therefore used the Hamiltonian at \(\frac{T}{2}\) which, in case of the Gaussian envelopes, contains the strongest shift. Although this procedure only approximates the actual Stark shift, it is sufficient for comparing the optimised frequencies to the resonances, especially because the Stark shift is only in the order of a few MHz. ### Alignment of phases Under time evolution with \(U_{0}=e^{-iH_{0}t}\) the free system picks up kinetic phases which need to be corrected for the gates. In contrast to the case of two level systems, there is no transformation into a rotating frame that removes both the drift and the oscillation of the drive. A transformation \(T=e^{-iH_{0}t}\) removes the drift, but creates higher order terms in the drive, because the drive does not commute with the anharmonicity. Instead we can do the transformation \(T=e^{-i\omega ta^{\dagger}a}\) with the harmonic part of the drift [20]. This results in \[\tilde{H} =T^{\dagger}HT+iT^{\dagger}\dot{T} \tag{7}\] \[=\frac{\lambda}{2}a^{\dagger}a^{\dagger}aa\ +\ As(t)(a+a^{ \dagger}) \tag{8}\] where the oscillation of the drive disappeares after a rotating wave approximation (compare eq. (4)) but the anharmonicity remains. The second and third excited state still pick up phases under time evolution. We thus have the option of actively correcting phases with the drive Hamiltonian or of choosing gate times at which the phases of the computational subspace align. 
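One plausible reading of the goal function described above, restricted to the computational subspace, is sketched below. Whether the trace is taken with its modulus or its real part after removing the global phase is an assumption of this sketch; the convention implemented in the optimisation software may differ.

```python
import numpy as np

def subspace_fidelity(U, G, levels=(0, 1, 2, 3)):
    """Gate fidelity evaluated only on the computational subspace,
    F = |tr(U_c^dagger G)| / d with d = len(levels).
    Taking the modulus discards the global phase; relative phases of higher
    excited (leakage) states are ignored because they are projected out."""
    Uc = np.asarray(U)[np.ix_(levels, levels)]
    return abs(np.trace(Uc.conj().T @ np.asarray(G))) / len(levels)

def goal(U, G):
    """Goal function g = 1 - F minimised by the pulse optimiser."""
    return 1.0 - subspace_fidelity(U, G)
```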
The latter could be achieved by allowing the gate time \(T\) to be numerically optimised which leads to times at which at least some of the phases vanish. In order to have all phases of the excited states aligned with the ground state we need to find \(n,m\in\mathbb{Z}\) such that in the rotating frame (frequencies are in units of \(\frac{2\pi}{s}\)) \[E_{2}T =\lambda T=n\] \[E_{3}T =3\lambda T=3\lambda m \tag{9}\] \[E_{3}T =3(\omega-\lambda)T=l\] The first condition fixes the time to \(T=\frac{n}{\omega}\), while the last two lead to \(m=(2-\frac{\lambda}{\omega})n\) and \(l=3(1-\frac{1}{\omega})n\). Unless \(n\) is large enough so that \(\frac{\lambda}{\omega}n\) is integer, the phases will in general only partially align and need to be actively corrected. For the chosen values of \(\omega=5\) GHz, \(\lambda=300\) MHz, however, \(n,m,l\) are integer for \(T\) being a multiple of 10 ns. If this seems arbitrary, the argument works the other way around as well: using a tunable transmon it would be possible to fix \(T\) and optimise its anharmonicity until the phases (mostly) align. Perfect alignment of the drift phases is spoiled by the AC-Stark shift caused by the drive. For some gates the shift is in the range of several MHz which, depending on the gate time, can correspond to almost a full rotation. However, since the amplitude and the carrier frequencies are variable parameters for the optimisation we expect the results to already be adjusted to the shifted energies. ## III Single qubit gates The simplest gates that we wish to realise are single-qubit gates on each of the qubits within one transmon, which corresponds to splitting up \(SU(4)\) into \(SU(2)\otimes SU(2)\). For the simulation of all of these gates, we used parameters \(\omega=5\) GHz and \(\lambda=300\) MHz which are in the typical range for transmons. As described in section II.3 these values force us to use gate times \(T\) in multiples of 10 ns unless we are actively correcting the phases. For all single qubit gates, the envelopes are unnormalised Gaussians \[s(t)=\exp\left(-\frac{(t-t_{0})^{2}}{2\sigma^{2}}\right) \tag{10}\] that contain \(t_{0}\) and \(\sigma\) as optimisable parameters. This means that they are not considered to be piece-wise constant but continuous (up to the simulation's resolution of \(0.02\) ns). Initial values for the optimisation are \(t_{0}=\frac{T}{2}\) and \(\sigma=\frac{T}{5}\). The gates do not depend on the actual shape of the envelope, but the action \(\int_{0}^{T}H_{d}(\tau)d\tau\) depends on the integrated area under the envelopes. Gates could thus be made shorter by using rectangular or flat-top Gaussian shapes, which we did not attempt here. All optimised parameter values are listed in Appendix A. ### Gates on qubit \(Q_{2}\) For gates on \(Q_{2}\) the two subspaces spanned by \(\{\ket{\bar{0}},\ket{\bar{1}}\}\) and \(\{\ket{\bar{2}},\ket{\bar{3}}\}\) need to be treated separately and transitions between them have to be avoided (see fig. 1). Thus, two transitions \(\ket{\bar{0}}\leftrightarrow\ket{\bar{1}}\) and \(\ket{\bar{2}}\leftrightarrow\ket{\bar{3}}\) need to be addressed simultaneously. This was not possible with one carrier signal and a simple envelope. We chose to use a total of two drive signals. 
Formally, we replaced the drive Hamiltonian (3) by \[H_{d}=\sum_{k=1,2}A_{k}s_{k}(t)\cos\left(\omega_{d}^{(k)}t+\phi_{d}^{(k)} \right)\,(a+a^{\dagger}) \tag{11}\] This means that there is a carrier signal and envelope for each frequency that needs to be driven, and both signals are superposed in the end, which allows controlling both transitions individually. From an experimental perspective, a single QI mixer is sufficient, with the LO set between 4.4 and 5 GHz, and the required shifts are handled by adding shifts to the envelope. Both carrier frequencies were chosen resonant to \(\omega_{0\to 1}=5\) GHz and \(\omega_{2\to 3}=4.4\) GHz, respectively. Since phases are already corrected by the fixed gate time and sufficiently narrow peaks allow driving the chosen transitions only, the \(\mathds{1}\otimes X(\pi/2)\) gate with \(X(\pi/2)=e^{i\frac{\pi}{4}\sigma_{x}}=\frac{1}{\sqrt{2}}(\mathds{1}+\sigma_{ x})\) is the easiest to realise. After optimisation, the infidelity in the subspace is \(1-F<10^{-6}\). The corresponding propagator is depicted in figure 2. A \(Y(\pi/2)\) gate on \(Q_{2}\) can be realised the same way since the dis Figure 2: \(X(\pi/2)\) gate on qubit \(Q_{2}\). Colours indicate the phase and areas are proportional to the absolute amplitude. **Left:** The optimised propagator \(U\). Dotted black squares indicate the absolute values of the ideal gate. **Right:** the difference \(Ue^{i\phi}-G\) between \(U\) and the ideal gate \(G\). The phase difference \(\phi=\arctan(U_{0,0})-\arctan(G_{0,0})\) corrects the global phase which is ignored in the optimisation. **Bottom:** Normalised spectral amplitude of the optimised pulse. Dotted lines correspond to resonances of the model. tinction between \(X\)- and \(Y\)-rotations is only a matter of shifting the phase \(\phi_{d}\) by \(\frac{\pi}{2}\). Although on superconducting qubits \(Z\)-gates can be realised as virtual gates by means of shifting the phase between two gates [21], this would have a different effect in the \(SU(2)\otimes SU(2)\) basis and it was necessary to implement the gate explicitly. In this case it is only necessary to introduce relative phases on the diagonal of the propagator. We realised this by driving both transitions \(|0,0\rangle\leftrightarrow|0,1\rangle\) and \(|1,0\rangle\leftrightarrow|1,1\rangle\) to a full \(2\pi\) rotation during which the phases were corrected by the relative phases of the two drive signals. Again an infidelity \(1-F<3\cdot 10^{-5}\) was reached. ### Gates on qubit \(Q_{1}\) Gates on qubit \(Q_{1}\) need transition between next-to-nearest-neighbour energy levels \(|\bar{0}\rangle\leftrightarrow|\bar{2}\rangle\) and \(|\bar{1}\rangle\leftrightarrow|\bar{3}\rangle\). Driving the frequencies \(\omega_{0\to 2}\) and \(\omega_{1\to 3}\), which are above 9 GHz, caused many unwanted 2-photon transitions which we could not remove by optimisation. Similarly, driving the three nearest-neighbour transitions \(\omega_{0\to 1}\), \(\omega_{1\to 2}\), and \(\omega_{2\to 3}\) did not result in a high fidelity because the two intended transitions overlap in the middle (in \(|\bar{1}\rangle\leftrightarrow|\bar{2}\rangle\)). The best fidelity was achieved with two-photon transitions by driving \(\frac{\omega_{0\to 2}}{2}\) and \(\frac{\omega_{1\to 2}}{2}\) with a strong amplitude. Since these frequencies are at least \(\frac{\lambda}{2}\) distant from all nearest-neighbour transitions, they are sufficiently off-resonant to not cause single-photon absorption. 
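A compact way to build such a superposed drive from the Gaussian envelopes of eq. (10) and the two-tone form of eq. (11) is sketched below. The amplitudes and phases in the usage example are placeholders rather than the optimised values from the appendix; only the two carrier frequencies (resonant with \(\omega_{0\to 1}=5\) GHz and \(\omega_{2\to 3}=4.4\) GHz) and the initial envelope parameters \(t_{0}=T/2\), \(\sigma=T/5\) are taken from the text.

```python
import numpy as np

def gaussian(t, t0, sigma):
    """Unnormalised Gaussian envelope s(t), eq. (10)."""
    return np.exp(-(t - t0) ** 2 / (2.0 * sigma ** 2))

def multi_tone_signal(t, tones):
    """Superposed drive of eq. (11): sum_k A_k s_k(t) cos(omega_d^k t + phi_d^k).
    `tones` is a list of (A, s, omega_d, phi_d) with s a callable envelope."""
    return sum(A * s(t) * np.cos(w * t + phi) for (A, s, w, phi) in tones)

# Placeholder example: two carriers resonant with omega_{0->1} and omega_{2->3}
T = 40e-9                                    # gate time, a multiple of 10 ns
tones = [
    (1.0, lambda t: gaussian(t, T / 2, T / 5), 2 * np.pi * 5.0e9, 0.0),
    (1.0, lambda t: gaussian(t, T / 2, T / 5), 2 * np.pi * 4.4e9, 0.0),
]
signal_at_10ns = multi_tone_signal(10e-9, tones)
```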
In addition to a larger amplitude the optimisation resulted in non-vanishing phases \(\phi_{d}\). As before we only realised the \(X(\pi/2)\otimes\mathbbm{1}\) and \(Z(\pi/2)\otimes\mathbbm{1}\) gates, which resulted in errors \(1-F<2\cdot 10^{-3}\) and \(1-F\approx 0.026\), respectively. Their propagators are depicted in figures 1 and 2. As an example, the error of the \(Z(\pi/2)\)-gate, which has the lowest fidelity of all single qubit gates, can be seen in figure 3. The main contributions are phase errors on the diagonal, probably resulting from imperfect rotations by \(2\pi\) which are needed for the \(Z\)-gate. In addition, there are unwanted 3-photon absorptions from \(|00\rangle\) to \(|11\rangle\), most likely due to the large drive amplitude. The errors in all other gates are similar. ## IV Entangling gates between \(Q_{1}\) and \(Q_{2}\) Entangling 2-qubit gates are typically more difficult to realise than single qubit gates, have lower fidelity, and need longer gate times. The biggest advantage of our approach is that an entangling gate between the two qubits in one transmon is easy to facilitate. As can be seen in the Hamiltonian (5), the third term is an Ising-type coupling between both qubits which corresponds to an iSWAP gate \(\exp\bigl{(}i\frac{\pi}{4}(X\otimes X+Y\otimes Y)\bigr{)}\). We can use this by driving the middle transition \(|\bar{1}\rangle\leftrightarrow|\bar{2}\rangle\) to a \(\frac{\pi}{2}\) rotation, which is even simpler than the single-qubit gates because it needs only one carrier frequency as long as the phases are being aligned. For this we again chose the gate time of \(T=40\) ns. The final propagator is shown in figure 5. With a fidelity of \(1-F<2\cdot 10^{-3}\) it is comparable to the single-qubit gates. Although the imperfect rotation is the largest absolute error (fig. 5 right), a non-perfect alignment of the phase of the \(|11\rangle\) state is an additional source of error and would put an upper limit on the fidelity even if the rotation was perfect. For further improvement this could be corrected by a second drive between \(|\bar{2}\rangle\leftrightarrow|\bar{3}\rangle\) or \(|\bar{3}\rangle\leftrightarrow|\bar{4}\rangle\) doing a full \(2\pi\) rotation. Similarly, a \(\sqrt{iSWAP}\) gate is possible by reducing the amplitude \(A\). In addition to the iSWAP gate, it is possible to create an entangling gate that transforms every basis state into a maximally entangled Bell pair. To realise this, a superposition of three drive signals on all neighbouring transitions \(\omega_{0\to 1}\), \(\omega_{1\to 2}\), and \(\omega_{2\to 3}\) is necessary. The upper and lower drives at 5 and 4.4 GHz force a \(\pi\)-rotation, thus adding the same phase on all four levels which becomes an irrelevant global phase. At the same time the middle signal drives a \(\frac{\pi}{2}\) rotation at 4.7 GHz which, on its own, would realise a \(\sqrt{iSWAP}\) gate. Due to the simultaneous rotations of the other levels, however, this creates the same amplitude on the \(|00\rangle\leftrightarrow|11\rangle\) matrix elements. The resulting propagator, a "double-iSWAP" gate, can be seen in figure 6. Because of the larger drive amplitudes, the errors are considerably larger than in the iSWAP gate with \(1-F<5\cdot 10^{-3}\) Figure 3: Error of the \(Z(\pi/2)\) gate on qubit \(Q_{1}\) computed as the difference \(Ue^{i\phi}-G\) between the optimised gate \(U\) and the ideal gate \(G\). 
The phase difference \(\phi=\arctan(U_{0,0})-\arctan(G_{0,0})\) corrects the global phase which is ignored in the optimisation. ## V Entangling gates between two transmons #### Model Hamiltonian The missing piece for a universal set of gates is at least one entangling gate between the qubits in two coupled transmons. For this we extend the model Hamiltonian to \[H = \underbrace{\sum_{i=1,2}\left(\omega_{i}a_{i}^{\dagger}a_{i}+ \frac{\lambda}{2}a_{i}^{\dagger}a_{i}a_{i}\right)}_{H_{0}}+\underbrace{\sum_{i= 1,2}A_{i}v_{i}(t)(a_{i}+a_{i}^{\dagger})}_{H_{d}}\] \[+ \underbrace{J(a_{1}+a_{1}^{\dagger})(a_{2}+a_{2}^{\dagger})}_{H_ {J}} \tag{12}\] where the drift and drive Hamiltonian \(H_{0}\) and \(H_{d}\) for each of the transmons individually is the same as defined by (2) and (3). In order to avoid degeneracies within the lowest 5 energy levels, the parameters for the first transmon are chosen to be \(\omega_{1}=5\) GHz and \(\lambda_{1}=300\) MHz like in section IV, while the second transmon is operating at \(\omega_{2}=4.5\) GHz and \(\lambda_{2}=250\) MHz. We assume the transmons to be capacitively coupled with a coupling strength \(J\ll\omega,\lambda\), which leads to the transverse coupling axis in (12). Here, \(J=20\) MHz was used. Due to the coupling the drift propagator \(U_{0}\) contains many off-diagonal terms that we were not able to remove by optimisation. We therefore chose to work in the dressed basis, the eigenbasis of the coupled Hamiltonian \(H_{0}+H_{J}\), in which the propagator is diagonal but the states still accumulate phases over time. This would also be the readout basis if readout was slow. In contrast to the single transmon, it was not possible to find a gate time \(T\) at which all phases align. We thus need to find a drive pulse that is able to correct all phases and, depending on the desired gate, add non-diagonal terms to the propagator. Also, between the 25 energy levels of the combined system there are 150 unique resonances, many of which would be degenerate in the uncoupled system. The degeneracies are lifted by the coupling and are separated only by a few megahertz, turning this into a rather formidable control problem. Figure 4: iSWAP gate between \(Q_{1}\) and \(Q_{2}\). Colours indicate the phase and areas are proportional to the absolute amplitude. **Left:** The optimised propagator \(U\). Dotted black squares indicate the absolute values of the ideal gate. **Right:** the difference \(Ue^{i\phi}-G\) between \(U\) and the ideal gate \(G\). The phase difference \(\phi=\arctan(U_{0,0})-\arctan(G_{0,0})\) corrects the global phase which is ignored in the optimisation. **Bottom:** Normalised spectral amplitude of the optimised pulse. Dotted lines correspond to resonances of the model. #### Construction of pulses by reducing the number of frequencies To find an entangling gate with a reasonably high fidelity in this system we chose to start with a pulse shape that has sufficiently many degrees of freedom for achieving arbitrary high fidelity. After this, we successively reduced the number of freedoms and reoptimised in order to find a simpler pulse shape. First, we chose a long time of \(T=1\mu\)s in order to have very narrow frequency peaks of approximately \(T^{-1}=1\) MHz which allows driving individual resonances and avoids unwanted transitions. Second, instead of two frequencies as in the single qubit case, we used a superposition of many frequencies and had the optimisation algorithm figure out which frequencies it needs. 
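To make the dressed basis concrete, the coupled drift of eq. (12) can be diagonalised directly, as in the sketch below. Frequencies are entered as plain frequencies in GHz (factors of \(2\pi\) omitted), the five-level truncation per transmon is an assumption of the sketch, and the anharmonicities carry the negative sign of eq. (2) even though the text quotes their magnitudes.

```python
import numpy as np

DIM = 5  # levels kept per transmon (assumption of this sketch)

def transmon(omega, lam, dim=DIM):
    """Single-transmon drift of eq. (2); returns (H0, a)."""
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)
    return omega * a.T @ a + 0.5 * lam * a.T @ a.T @ a @ a, a

# Parameters quoted in Sec. V, with the anharmonicity sign of eq. (2)
H1, a1 = transmon(5.0, -0.300)   # transmon 1: 5 GHz, 300 MHz
H2, a2 = transmon(4.5, -0.250)   # transmon 2: 4.5 GHz, 250 MHz
J = 0.020                        # capacitive coupling, 20 MHz

I = np.eye(DIM)
H0 = np.kron(H1, I) + np.kron(I, H2)
HJ = J * np.kron(a1 + a1.T, a2 + a2.T)      # transverse coupling term of eq. (12)

# Dressed basis: eigenbasis of the coupled drift H0 + HJ
energies, dressed_states = np.linalg.eigh(H0 + HJ)
```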
Formally this means that the drive Hamiltonian \(H_{d}^{(i)}\) for each qubit \(i\) in (12) was replaced by (compare to (11)) \[H_{d}^{(i)}=\sum_{k=1}^{N}A_{k}\cos\left(\omega_{d}^{(k)}t+\phi_{d}^{(k)} \right)(a_{i}+a_{i}^{\dagger}) \tag{13}\] where \(i\) labels the transmon and \(N\) is the number of individual frequencies. Setting \(v_{i}(t)=1\) corresponds to a rectangular envelope. While the cross-resonance gate was mainly facilitated by driving transmon 1 with a strong amplitude, transmon 2 was also driven with an initially weak amplitude two orders of magnitude below transmon 1 to allow for the correction of unwanted transitions. We started with \(N=5000\) frequencies equally separated between 2.5 GHz and 5.5 GHz, which covers all resonances of the model, and allowed the optimiser to adjust the frequencies \(\omega_{d}^{(k)}\), phases \(\phi_{d}^{(k)}\), and amplitudes \(A_{k}\). When the optimisation converged to a high fidelity, \(N\) was reduced by removing frequencies with the lowest amplitudes \(A_{k}\) (assuming that those have the least effect on the propagator) and the optimisation was started again. This successively boiled the drive Hamiltonian down to a few necessary frequencies. #### Realised gates Since the single-qubit gates on \(Q_{2}\) were more simple than on \(Q_{1}\), we chose to implement cross-resonance entangling gates between the \(Q_{2}\) qubits of both transmons. In this case both \(Q_{1}\) qubits are spectators whose state needs to stay fixed during the gate. The simplest choice for an entangling gate was the diagonal controlled-Z gate because the propagator is already diagonal in the dressed basis. For this gate, the fidelity for each number of frequency components \(N\) during the reduction is shown in figure 5, where the same procedure was also done for shorter gate times of \(T=200\), \(300\), and \(500\) ns. During the reduction the fidelity continually decreases. As can be expected, shorter gate times reach worse fidelities because the broadened spectral peaks cause unwanted transitions. Depending on the desired fidelity, 10 or less frequency components can be sufficient to realise a CZ gate. The propagator with \(1-F\approx 0.081\) that is created by 10 frequencies is depicted in Fig. 6, including the model's resonances in the vicinity of the frequencies. #### 4-qubit gates In additional to 2-qubit gates with two spectators, the setup makes it possible to realise 4-qubit gates. Simple choices are triple-controlled gates in which a Pauli matrix acts on the fourth qubit only if the other three are in the state \(|111\rangle\), such as controlled-Z (CCCZ) and controlled-NOT (CCCX). The propagators were calculated with the same method as above. As can be seen in Fig. (5), Figure 5: **Left:** Fidelity of the CZ gate depending on the number of frequency components. Starting from the right (200 frequencies) the number of frequencies was successively reduced and the gate was reoptimised after each step. This only includes the fidelities of \(F>0.9\). **Right:** Fidelities of 4-qubit gates CCCZ and CCCX compared to the 2-qubit gates CZ and CX. the 4-qubit gates reach similar fidelities as the 2-qubit controlled gates with the same number of frequencies. ## VI Conclusion Using optimised pulse shapes we showed that it is possible to store and control two logical qubits within a transmon, effectively doubling the number of available qubits. 
The gate times of single-qubit and two-qubit gates in the transmon are comparable to usual gates in two-level systems and could be made even shorter when considering the conditions for phase alignment. Additionally, the necessary pulses with two superposed frequencies should be simple enough to be realisable experimentally. One main advantage of this approach is the fast, high-fidelity intra-transmon entangling gate. One possible caveat of our approach is the shorter decoherence time of higher excited levels. Simulating the model as an open system could check whether the gates are fast enough to be usable. Entangling gates between logical qubits in two coupled transmons are more difficult to realise, need a pulse shape composed of many frequencies, and have lower fidelity. In this case, the main caveat is the long gate time. Our example with a gate time of \(1\,\mu\)s reaches high fidelities but is too long for real systems with noise. Although the fidelity drops with decreasing gate times, shorter gates can still be possible. It might also be that some values of the gate time are preferable to others. There are several ways in which the gate fidelity could be improved, for example by optimising piece-wise constant signals instead of analytical envelope functions or by using tunable couplers instead of cross-resonance gates at fixed coupling. In general, it remains to be shown whether the coupled system is fully controllable. A further possibility is to assemble desired gates out of single-qubit rotations and the four-qubit entangling gates, which have a comparably good fidelity even at shorter gate times. Also, using the entanglement fidelity or Makhlin invariants as the optimiser's goal function (as described in [22]) could lead to high-fidelity entangling gates other than usual choices like the CZ gate. While working on this we became aware of two other groups who have been working on a similar idea: [13], [23]. This project was funded by the German Federal Ministry of Education and Research within the program "quantum technologies - from basic research to the market" within the project GeQCos (contract number 13N15680). Figure 6: CZ gate between the \(Q_{2}\) qubits of two coupled transmons that is created by \(N=10\) frequencies with an infidelity of \(1-F\approx 0.035\). **Top**: Normalised spectral amplitude of the optimised pulse on transmon 1. Dotted lines with labels correspond to Stark-shifted resonances of the model. **Left**: The optimised propagator \(U\). Dotted black squares indicate the absolute values of the ideal gate.
2309.06071
Muon-electron scattering at NNLO with McMule
A recently proposed experiment, MUonE, aims to extract the hadronic vacuum polarisation contribution to the muon g-2 from muon-electron scattering at low energy. The extrapolation requires that both experimental and theoretical uncertainties do not exceed 10 ppm. This corresponds, at least, to next-to-next-to-leading-order (NNLO) QED corrections to $e \mu \to e \mu$. I will discuss the implementation of a Monte Carlo integrator for this process in the McMule framework arXiv:2212.06481, which provides infrared-safe differential results at said order in QED. An approximation of the MUonE setup provides some phenomenological results and sheds light on the need for beyond-NNLO corrections, which are currently under study within McMule.
Marco Rocco
2023-09-12T09:09:36Z
http://arxiv.org/abs/2309.06071v1
# Muon-electron scattering at NNLO with McMule ###### Abstract: A recently proposed experiment, MUonE, aims to extract the hadronic vacuum polarisation contribution to the muon \(g-2\) from muon-electron scattering at low energy. The extrapolation requires that both experimental and theoretical uncertainties do not exceed 10 ppm. This corresponds, at least, to next-to-next-to-leading-order (NNLO) QED corrections to \(e\mu\to e\mu\). I will discuss the implementation of a Monte Carlo integrator for this process in the McMule framework [1], which provides infrared-safe differential results at said order in QED. An approximation of the MUonE setup provides some phenomenological results and sheds light on the need for beyond-NNLO corrections, which are currently under study within McMule. Introduction Excluding new physics, the hadronic vacuum polarisation (HVP) contribution to the muon anomalous magnetic moment, \(a_{\mu}=(g-2)/2\), is generally referred to as the source of the long-standing discrepancy between experimental measurements [2, 3, 4] and Standard Model (SM) predictions. Currently, there is not an agreement among the latter, as predictions using lattice QCD differ from those employing data-driven dispersive calculations. In contrast to dispersive predictions [5], whose discrepancy with the experiment amounts up to 5\(\sigma\), a recent calculation in lattice QCD [6] drastically reduces the discrepancy. In this scenario, a different approach to a data-driven calculation, expressed e.g. by the MUonE experiment [7, 8], is decisive to further disentangle the problem. Traditional dispersive calculations rely on experimental inputs from \(e^{+}e^{-}\to\) hadrons, measured in the time-like region (\(s>0\)) at energies around 1-10 GeV, where numerous hadronic resonances hamper the experimental precision. On the other hand, the MUonE experiment consists of a 160 GeV muon beam colliding on a fixed target of atomic electrons, in a pure \(t\)-channel. From the measurement of the scattering angles of muons and electrons, \(\theta_{\mu}\) and \(\theta_{e}\), in elastic events, the HVP contribution to the running of the electromagnetic coupling, \(\alpha(t(x)<0)\), can be reconstructed via a template fit in the space-like region, where no resonance hampers the measurement. The formula [8] \[a_{\mu}^{\rm HVP}=\frac{\alpha}{\pi}\int_{0}^{1}\mathrm{d}x(1-x)\;\Delta \alpha^{\rm had}(t(x))\,,\qquad t(x)=-\frac{x^{2}\,m_{\mu}^{2}}{1-x}\,, \tag{1}\] yields the HVP contribution to the muon anomaly. Figure 1 shows a simulation of the running of \(\alpha\), split into a leptonic and a hadronic part. The former is computed perturbatively, while the second employs the library alphaQED[9]. The kinematics of MUonE is particularly favourable, as the area accessible to the experiment covers most of the HVP contribution in Figure 1, i.e. corresponding to higher values of \(x\), or equivalently smaller values of \(\theta_{e}\). Further, this allows to define a normalisation region, where the HVP signal is much lower. However, the accuracy of the total experimental and theoretical error should not exceed 10 ppm, as the signal of the experiment is \(\mathcal{O}(10^{-3})\), and the HVP needs to be extracted with a precision below one percent, in order to match the statistical error of the other evaluations. On the theoretical side, there has been a coordinated effort [10] aiming at developing two completely independent Monte Carlo event generators for muon-electron scattering. 
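For orientation, eq. (1) is straightforward to evaluate numerically once \(\Delta\alpha^{\rm had}(t)\) is available, for instance from alphaQED or from the MUonE template fit. The sketch below encodes only the master integral itself; the hadronic running is left as an external, user-supplied input, and the numerical constants are standard values rather than numbers taken from this contribution.

```python
import numpy as np
from scipy.integrate import quad

M_MU = 0.1056583755          # muon mass in GeV
ALPHA = 1.0 / 137.035999084  # fine-structure constant

def t_of_x(x):
    """Space-like momentum transfer t(x) = -x^2 m_mu^2 / (1 - x) from eq. (1)."""
    return -x * x * M_MU * M_MU / (1.0 - x)

def a_mu_hvp(delta_alpha_had):
    """a_mu^HVP = (alpha/pi) * int_0^1 dx (1 - x) Delta_alpha_had(t(x)), eq. (1).
    `delta_alpha_had` is a user-supplied callable of t (in GeV^2); it is not
    provided here and must come from data (alphaQED, lattice, or MUonE)."""
    integrand = lambda x: (1.0 - x) * delta_alpha_had(t_of_x(x))
    value, _ = quad(integrand, 0.0, 1.0)
    return ALPHA / np.pi * value
```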
At least an NNLO calculation in QED is mandatory to reach the precision required, and the results at this order suggest the need for the resummation of large logarithms and the calculation of (the dominant part of) the N\({}^{3}\)LO corrections. In addition to higher-order QED effects, nuclear effects as well as pion and lepton-pair production have to be taken into account [11], but will not be considered in this contribution. The Mesmer Monte Carlo provides the complete set of electroweak NLO corrections [12], as well as QED NNLO corrections [13, 14], using an approximation for genuine two-loop four-point topologies. In order to cope with infrared divergences photon-mass regularisation combined with a slicing approach is employed. In parallel, the McMule framework [15] offers another efficient environment to reach the desired level of accuracy. Section 2 discusses the implementation in McMule of muon-electron scattering at NNLO in QED, which can serve as a Monte Carlo generator (at present, integrator) for the MUonE experiment. A subset of the results is presented in Section 3, before concluding in Section 4. ## 2 McMule for MUonE The implementation of muon-electron scattering at NNLO in the McMule framework, along with results and discussion, is presented in greater detail in [1] and [16]. Here follows a summary, with a particular focus on phenomenology. The general idea behind McMule is the adaptation of techniques developed and employed in the context of higher-order perturbative QCD, to higher-order studies in QED. For example, McMule uses dimensional regularisation and a generalisation of the FKS method [17, 18] to any order in the electromagnetic coupling, FKS\({}^{\ell}\)[19], for the subtraction of soft divergences. In fact, MUonE observables are not collinear safe, and therefore strongly depend on fermion-mass effects. Thus, for fully differential predictions, taking those into account is strictly necessary. At the same time, lepton masses regulate collinear divergences, leaving soft divergences only. Further, the same machinery developed to handle one- and multi-loop integrals can be adapted to QED, where the presence of additional scales, such as the lepton masses, makes loop integrations more difficult. For one-loop problems, McMule employs OpenLoops[20, 21], which proved to be remarkably stable except for phase-space regions where a photon is particularly soft, or a pseudo-collinear configuration leads to the presence of large logarithms. A good numerical stability is recovered using next-to-soft (NTS) stabilisation [22, 23] for such regions, i.e. employing the leading- and next-to-leading-soft terms in the photon-energy expansion of the relevant matrix Figure 1: HVP contribution to the running of \(\alpha\) for space-like kinematics, computed as the NLO correction to muon-electron scattering due to HVP insertions, as a function of \(x\) and \(\theta_{e}\). The contribution due to leptonic VP insertions is also given for comparison, along with the kinematic region accessible to MUonE. element, instead of the full matrix element. With two-loop diagrams no automatic procedure is able to deal with generic processes. It is then necessary to resort to external results that consider the particular process of interest. These results are often built for QCD, where the mass of the light quarks can be neglected. Thus, McMule employs calculations where the mass of the fermion is neglected, but can recover those neglected effects via massification [24, 25, 26]. 
A more detailed discussion about the methods used in McMule (and a validation of them) can be found in the original muon-electron scattering paper [1], or elsewhere in these proceedings [27]. For the purpose of the present contribution, it is sufficient to say that muon-electron scattering at NNLO is implemented as follows. Contributions at NLO (real and virtual) and NNLO (double-real, real-virtual and double-virtual) are divided into photonic and fermionic. The latter are those containing a fermionic vacuum polarisation insertion, and were calculated using the hyperspherical method [28], then validated by a second calculation done via a dispersive method [29]. In general, both photonic and fermionic contributions can be subdivided into three gauge-invariant subsets according to their formal leptonic charge. As for the sample diagrams below, contributions where photon radiation only attaches to the electron (muon) line are called electronic (muonic), all other contributions are referred to as mixed. In the McMule framework, each contribution can be computed separately, allowing the user to study the impact of different classes and to infer possible hierarchies among them. Double-virtual electronic and muonic contributions were computed with full mass dependence, using the analytic expressions for the heavy quark form factors of [30], while mixed double-virtual contributions, which involve genuine two-loop four-point topologies, were computed applying massification to the results of [31, 32], which employ the master integrals computed in [32, 33, 34]. This is the only approximation made in the McMule prediction, amounting to the neglect, at NNLO, of terms that are polynomially suppressed in the electron-mass expansion of the double-virtual contribution. NTS stabilisation was then applied to all real-virtual contributions, in order to achieve the desired numerical stability. As shown in the original paper, the use of NTS expansions results in the neglect of terms that are much below the 10 ppm requirement by MUonE. A number of internal and external tests were carried out in order to validate the results, cf. Section 4 of [1]. Here we comment on the comparison of the mixed contributions to the photonic NNLO correction, which have been calculated both in Mesmer and in McMule. Since the two frameworks employ different scheme to handle IR divergences, a comparison between the two results represents a completely independent check. As the calculation in [14] is complete up to the mixed two-loop contribution, it is possible to compare the mixed NLO correction to \(\mu e\to\mu e\gamma\) which is physical and corresponds to the double-real and real-virtual contributions to muon-electron scattering. In order to check the numerical stability of the real-virtual implementation, small photon energy cuts of \(\{10^{-6}\), \(10^{-5}\), \(10^{-4}\}\times\sqrt{s}/2\) were used. Perfect agreement was found between the two codes for the total cross section as well as for differential distributions, as shown in Figure 2, at sub-percent level. ## 3 Results This section presents some results for muon-electron scattering at NNLO, with the characteristics of the MUonE experiment in mind. 
The kinematics is defined by \[e^{-}(p_{1})\,\mu^{\pm}(p_{2})\to e^{-}(p_{3},E_{e},\theta_{e})\,\mu^{\pm}(p_{ 4},E_{\mu},\theta_{\mu})+\{\gamma(k_{1})\,\gamma(k_{2})\}\,, \tag{2}\] with the outgoing electron and muon energy, \(E_{e}\) and \(E_{\mu}\), and the electron and muon scattering angle with respect to the beam axis, \(\theta_{e}\) and \(\theta_{\mu}\). The muon beam energy is set to 160 GeV, consistent with the M2 beam line at CERN North Area [35]. A cut is imposed on the energy of the outgoing electron, \(E_{e}>1\) GeV, which is equivalent to a cut on the minimal value of \(|t|\), in order to cure the singular behaviour of \(\mathrm{d}\sigma/\mathrm{d}t\sim t^{-2}\), where \(t\) is the usual Mandelstam invariant. A cut on \(\theta_{\mu}\) can be Figure 2: Top: NNLO double-real and real-virtual contributions to muon-electron scattering as differential distributions w.r.t. \(\theta_{e}\), computed by Mesmer (yellow) and McMale (blue). A small photon energy cut of \(10^{-6}\times\sqrt{s}/2\) was used. Bottom: Ratio between the two predictions. The larger oscillation is unphysical, corresponding to the zero crossing of the distributions. used to remove most of the background. Hence, for the results shown here, \(\theta_{\mu}>0.3\) mrad was also required. Given this kinematical setup, McMule is able to produce differential distributions for any infrared-safe observable. At present, it does not generate events and can only act as a Monte Carlo integrator. However, the possibility to generate events will be available in the near future [36]. In this contribution, the focus is on differential distributions w.r.t. the scattering angle of the electron, as this is the main interest of the MUonE experiment, in particular for a beam of negative muons. The whole set of results is instead presented in the original paper and publicly available at the McMule Zenodo repository [37]. Figure 3 and 4 show, in the upper panel, the LO and (N)NLO angular distributions, and, in the lower panel, the \(K\) factor for the NLO and NNLO distributions, defined as \[K^{(i)}-1=\frac{\sigma_{i}}{\sigma_{i-1}}\;, \tag{3}\] where \(\sigma_{k}=\sum_{i=0}^{k}\sigma^{(i)}\) is given by the sum of the order-by-order contributions, \(\sigma^{(i)}\), to the N\({}^{k}\)LO integrated cross section. In addition, the \(K\) factor of the signal is also shown, corresponding to the hadronic part of the NLO fermionic contribution. (N)NLO corrections amount up to 20% (0.2%), particularly for small electron scattering angles, or equivalently for large electron energies, where photon emission is forced to be soft. In this kinematical configuration, the signal is completely outweighed by the NLO correction, and turns out to be of the same order or smaller than the NNLO correction. Further, the enhancement observed in the small-\(\theta_{e}\) region suggests the need for a more reliable description of the region where large logarithms cause such behaviour. In order to achieve a well-defined extraction of the signal, not hampered by such dominant QED corrections in the background, a possible way to proceed is to discriminate elastic scattering events Figure 3: Top: LO (green) and NLO (blue) differential cross section w.r.t. \(\theta_{e}\). Bottom: \(K\) factors of the NLO correction and the signal of the MUonE experiment (pink), i.e. the NLO HVP contribution. from the otherwise kinematically allowed radiative events and processes. 
This can be obtained in terms of the elasticity constraint that relates muon and electron scattering angles in the absence of photons, shown in Figure 5. For example, the requirement \(0.9<\theta_{\mu}/\theta_{\mu}^{\rm el}<1.1\), where \(\theta_{\mu}^{\rm el}\) is the muon scattering angle as defined in Figure 5 as a function of the electron scattering angle, can act as a veto for hard radiation. The angular distributions in the presence of this additional elasticity cut are displayed in Figure 6. As expected from an evenly-distributed soft enhancement, the \(K\) factor is significantly reduced and flattened. In this context, the NLO correction can be subtracted more efficiently, and the signal can be extracted on top of the NNLO correction, which is now, in general, smaller. However, such a kinematical constraint is not ideal from the experimental perspective. It would cut off many events, yielding issues in terms of statistics, and would also complicate the estimate of systematic uncertainties, as it would lead to a complex practical implementation. At present, the alternative Figure 4: Top: LO (green) and NNLO (red) differential cross section w.r.t. \(\theta_{e}\). Bottom: \(K\) factors of the NLO and NNLO correction, and the signal of the MUonE experiment (pink), i.e. the NLO HVP contribution. Figure 5: \(\theta_{\mu}\) as a function of \(\theta_{e}\) for elastic events. The light-blue band corresponds to the elasticity cut. proposed by the experiment is to employ a template fit to extract the HVP, as discussed in [38]. Nonetheless, a study with the elasticity cut is still of theoretical interest. ## 4 Conclusions and outlook We have reviewed the implementation in the McMule framework of muon-electron scattering at NNLO in QED, as well as some of the results presented therein. This corresponds to the first result at NNLO for a two-to-two process in QED with two different non-vanishing masses on the external lines. A more detailed description of the methods employed can be found elsewhere in these proceedings [27]. The MUonE experiment may benefit from these results for the extraction of the HVP contribution to the muon \(g-2\). The McMule effort is part of a bigger theoretical effort [10], whose aim is to provide the most precise prediction for muon-electron scattering, in order to match the 10 ppm precision goal. The magnitude of the NNLO corrections at differential level, around \(10^{-3}\), is still too large compared to said goal. Thus, higher-order predictions beyond NNLO can certainly help in that direction. Furthermore, it is mandatory to make an effort towards a more reliable description of the region where radiation leads to an enhancement through large logarithms. An N\({}^{3}\)LO prediction (or at least the dominant part of it) and a more precise description of large-log regions, through resummation or via the implementation of a parton shower, are on the agenda of an ongoing effort, started with a series of workstops in 2022 and 2023 [39, 40]. AcknowledgementI acknowledge support from the Swiss National Science Foundation (SNSF) under grant 200020_20738. A huge thank you to all the colleagues of the original paper [1], upon which this contribution is mainly based. Figure 6: Top: LO (green) and NNLO (red) differential cross section w.r.t. \(\theta_{e}\). Bottom: \(K\) factors of the NLO and NNLO correction, and the signal of the MUonE experiment (pink), i.e. the NLO HVP contribution. All curves are obtained after applying the elasticity cut discussed in the text.
2309.05550
Multiplierless Design of High-Speed Very Large Constant Multiplications
In cryptographic algorithms, the constants to be multiplied by a variable can be very large due to security requirements. Thus, the hardware complexity of such algorithms heavily depends on the design architecture handling large constants. In this paper, we introduce an electronic design automation tool, called LEIGER, which can automatically generate the realizations of very large constant multiplications for low-complexity and high-speed applications, targeting the ASIC design platform. LEIGER can utilize the shift-adds architecture and use 3-input operations, i.e., carry-save adders (CSAs), where the number of CSAs is reduced using a prominent optimization algorithm. It can also generate constant multiplications under a hybrid design architecture, where 2-and 3-input operations are used at different stages. Moreover, it can describe constant multiplications under a design architecture using compressor trees. As a case study, high-speed Montgomery multiplication, which is a fundamental operation in cryptographic algorithms, is designed with its constant multiplication block realized under the proposed architectures. Experimental results indicate that LEIGER enables a designer to explore the trade-off between area and delay of the very large constant and Montgomery multiplications and leads to designs with area-delay product, latency, and energy consumption values significantly better than those obtained by a recently proposed algorithm.
Levent Aksoy, Debapriya Basu Roy, Malik Imran, Samuel Pagliarini
2023-09-11T15:35:02Z
http://arxiv.org/abs/2309.05550v2
# Multiplierless Design of High-Speed Very Large Constant Multiplications ###### Abstract In cryptographic algorithms, the constants to be multiplied by a variable can be very large due to security requirements. Thus, the hardware complexity of such algorithms heavily depends on the design architecture handling large constants. In this paper, we introduce an electronic design automation tool, called leiger, which can automatically generate the realizations of very large constant multiplications for low-complexity and high-speed applications, targeting the ASIC design platform, leiger can utilize the shift-adds architecture and use 3-input operations, i.e., carry-save adders (CSAs), where the number of CSAs is reduced using a prominent optimization algorithm. It can also generate constant multiplications under a hybrid design architecture, where 2-and 3-input operations are used at different stages. Moreover, it can describe constant multiplications under a design architecture using compressor trees. As a case study, high-speed Montgomery multiplication, which is a fundamental operation in cryptographic algorithms, is designed with its constant multiplication block realized under the proposed architectures. Experimental results indicate that leiger enables a designer to explore the trade-off between area and delay of the very large constant and Montgomery multiplications and leads to designs with area-delay product, latency, and energy consumption values significantly better than those obtained by a recently proposed algorithm. very large constant multiplication, shift-adds design, compressor trees, high-speed design, area optimization, Montgomery multiplication, cryptography ## I Introduction The Montgomery modular multiplication [1] is an essential operation in cryptographic algorithms, such as RSA [2], elliptic curve cryptography (ECC) [3], and supersingular isogeny key encapsulation (SIKE) [4]. Since these algorithms operate on large prime numbers, e.g., 2048, 521, and 768 in RSA, ECC, and SIKE, respectively, the operands of the Montgomery multiplication are generally divided into smaller multiple bits, so that reasonable sizes of multiplication and addition operations can be used to compute the modular multiplication in acceptable latency [5, 6]. Note that the size of these multiple bits in the very large constant and input variable has a significant impact on the hardware complexity of the Montgomery multiplication design and thus, the exploration of values of these parameters is important to find the design, which fits perfectly in a low-complexity and high-speed application [6]. The Montgomery multiplication includes the multiplication of a very large prime number by an input variable, called the _very large constant multiplication_ (VLCM) operation. There is only the algorithm of [7], which aims to reduce the hardware complexity of the VLCM operation under the shift-adds architecture using only shift and addition/subtraction operations. To do so, it uses techniques, which maximize the sharing of common subexpressions among constant multiplications. However, it does not consider the high-speed realization of the VLCM operation, which is essential for high-performance cryptographic algorithms. To the best of our knowledge, there exist no algorithms proposed for the low-complexity and high-speed realization of the VLCM operation. 
Thus, in this paper, we introduce an electronic design automation (EDA) tool, called leiger, which can describe the high-speed design of the VLCM operation taking into account the area and targeting the ASIC design platform under three different architectures: (i) the shift-adds architecture using carry-save adders (CSAs), denoted as SA-CSA; (ii) the shift-adds architecture using 2-input adders/subtractors and 3-input CSAs at different stages, denoted as SA-Hybrid; (iii) the design architecture using compressor trees, denoted as CT. The very large constants are divided into smaller coefficients under the shift-adds architectures and the number of 2-input operations and CSAs is reduced using optimization algorithms [8, 9, 10, 11]. The input variable is partitioned into smaller bits under the CT architecture and compressor trees are used to add the multiples of very large constants. Moreover, leiger can automatically generate the entire Montgomery multiplication including its high-speed VLCM operation implemented under a given design architecture. Thus, the main contributions of this paper are two-fold: (i) it proposes design architectures to realize the VLCM operation for high-speed applications, incorporating prominent algorithms to reduce its complexity; (ii) it introduces high-speed Montgomery multiplication designs including the VLCM operation realized under the proposed architectures. It is observed from the experimental results that the exploration of the number of bits used in partitioning the very large constant and input variable in the VLCM operation is crucial while finding a low-complexity and high-speed design. It is shown that when compared to leiger, the algorithm of [7] leads to VLCM operation and Montgomery multiplication designs with \(4.3\times\) and \(1.3\times\) larger area-delay product (ADP) values, respectively. The rest of this paper is organized as follows: Section II presents the background concepts. The proposed design archi tectures for the high-speed realization of the VLCM operation are described in Section III. Experimental results are given in Section IV and finally, Section V concludes the paper. ## II Background ### _Constant Multiplication_ The multiplication of constants by the variable \(x\) can be written as the realization of constants by simply eliminating the variable. For example, \(3x=x\ll 1+x=(1\ll 1+1)x\) can be written as \(3=1\ll 1\). These notations will be used interchangeably in this paper. Since constants are determined beforehand and the realization of a multiplier in hardware is expensive in terms of area, the multiplication of constants by a variable is generally realized under the shift-adds architecture using only shift and addition/subtraction operations [12]. Note that shifts can be realized using only wires, which represent no hardware cost. Thus, the optimization problem is to find the minimum number of adders/subtractors that are required to realize the constant multiplications. Note that this is an NP-complete problem [13]. In a straight-forward way, the digit-based recoding (DBR) technique [14] initially defines the constants under a number representation, e.g., binary, and then, for each nonzero digit in the representation of constant, it shifts the input variable based on the digit position and adds/subtracts the shifted variables according to the digit values. As an example, consider the constant multiplications \(51x\) and \(55x\). 
The decomposition of constants under the binary representation are given as follows: \[51x= (110011)_{bin}x=x\ll 5+x\ll 4+x\ll 1+x\] \[55x= (110111)_{bin}x=x\ll 5+x\ll 4+x\ll 2+x\ll 1+x\] leading to a solution with 7 operations as shown in Fig. 1(a). The number of operations in a shift-adds design is generally reduced by sharing the partial products. To do so, many efficient common subexpression elimination (CSE ) and graph-based (GB) algorithms have been introduced. The CSE algorithms [8, 9] initially define the constants under a number representation. Then, in an iterative fashion, they identify all possible subexpressions, which can be extracted from the nonzero digits in representations of constants, choose the "best" subexpression, generally the most common, and replace this subexpression with its realization. The GB algorithms [10, 15] are not restricted to any number representation and find the "best" intermediate constants, which enable to realize the constant multiplications with a small number of operations. For our example, the GB algorithm of [10] finds a solution with 3 operations as shown in Fig. 1(b). In a shift-adds realization, an adder/subtractor is assumed to be a 2-input operation, which can be implemented using a low-complexity ripple carry adder (RCA) as shown in Fig. 2(a). However, in high-speed applications, CSA is preferred to RCA [16]. CSA has three inputs and two outputs, i.e., sum (S) and carry (C), and an \(n\)-bit CSA includes \(n\) full adders (FAs) as shown in Fig. 2(b). Note that the delay of CSA is equal to the gate delay of an FA, independent of the input bit-width. The sum and carry outputs together form the computation, which can be obtained by adding these outputs using a fast adder at the end of the whole process. The DBR technique can find a shift-adds realization of constant multiplications using CSAs in a similar fashion. For our example, 5 CSAs are required as shown in Fig. 1(c). Efficient CSE algorithms have also been proposed to reduce the number of CSAs [11, 17]. They iteratively determine all possible 3-term subexpressions and choose the "best" one to be shared among constant multiplications. For our example, the CSE algorithm of [11] finds a solution with 3 CSAs as shown in Fig. 1(d). ### _Montgomery Multiplication_ Cryptographic algorithms like ECC are based on finite field arithmetic, which involves modular multiplication between large integers. Montgomery multiplication [1] allows us to perform modular reduction without computing any trial division. Algorithm 1 presents the constant time version of the Montgomery algorithm [18], where \(M\) is the given prime modulus and \(M^{\prime}\) and \(\overline{M}\) are precomputed constants. It performs word-wise multiplication between \(a_{i}\) and \(B\), where each \(a_{i}\) is \(r\) bits long, and involves the multiplication of the constant \(\overline{M}\) by the variable \(q_{i}\), which is the primary focus of this work. There have been multiple works that focus on developing efficient architectures for the Montgomery multiplication, including systolic array-based architectures [19, 20]. In this paper, we focus on the redundant number system (RNS)-based implementation of the Montgomery multiplication [21]. Note that RNS allows us to perform large integer arithmetic efficiently without long carry propagation. The architecture of Fig. 2: Addition architectures: (a) RCA; (b) CSA. Fig. 
1: Multiplierless design of \(51x\) and \(55x\): (a) DBR technique [14] using 2-input operations; (b) GB algorithm of [10]; (c) DBR technique [14] using 3-input operations; (d) CSE algorithm of [11]. the RNS-based Montgomery multiplication is shown in Fig. 3. The input \(A\) is represented as a radix-\(r_{1}\) redundant number with \(1\) extra redundant bit for each word of dimension \(r_{1}\). The input \(B\) and the constant \(\overline{M}\) are represented as radix-\(r_{2}\) redundant numbers. Note that \(m_{a}\) and \(m_{b}\) in Fig. 3 are given as \(\lceil m/r_{1}\rceil\) and \(\lceil m/r_{2}\rceil\), respectively, where \(m\) is the bit-width of the prime modulus \(M\). In our ASIC design, the values of \(r_{1}\) and \(r_{2}\) are determined based on the parallel realization of multiplications. The _Multiplication_ and _Multiplication & Accumulation_ blocks compute \(a_{i}\cdot B\) and \(S_{i}+q_{i}\cdot\overline{M}\) in a carry-save form, respectively. The output of the former block is combined with the output of the latter block after shifting via the _4:2 compressor_ circuit. The _base converter_ module converts the result of the carry-save form into radix-\(r_{2}\) redundant form and the _mod_ block realizes the modulo operation when the modulus is \(2^{r_{1}}\). ## III High-Speed Design of the VLCM Operation In this section, we describe the realization of the VLCM operation under three design architectures, namely SA-CSA, SA-Hybrid, and CT, and introduce our EDA tool leiger. ### _SA-CSA Architecture_ Given \(n\) very large constants i.e., \(lc_{1},lc_{2},\ldots,lc_{n}\), in hexadecimal format and the number of bits in partition, i.e., \(p\), their shift-adds realization is obtained in three stages: (i) partitioning; (ii) realization of coefficients; and (iii) realization of equations. These stages are described in the following sections. #### Iii-A1 Partitioning Each very large constant \(lc_{i}\), \(1\leq i\leq n\), is divided into \(p\) bits, starting from the least significant bit and its \(p\)-bit coefficients, \(c_{1},c_{2},\ldots,c_{d}\), where \(d=\lceil w_{i}/p\rceil\) and \(w_{i}\) is the bit-width of \(lc_{i}\), are determined as \(lc_{i}=\sum_{j=1}^{d}lc\lceil jp-1:(j-1)p\rceil 2^{(j-1)p}=\sum_{j=1}^{d}c_{j}2^{(j-1)p}\). The coefficients other than zero are stored as integers in a set called \(T\) without repetition. Shift values of these coefficients are also computed based on the locations of these coefficients in \(lc_{i}\) and stored in a set called \(U\). Finally, the realization of each very large constant is written as an equation in the form of a summation of coefficients in set \(T\) based on their shift values in set \(U\), assuming that the multiplication of these coefficients by the input variable is realized using CSAs. Fig. 4 presents an example of the multiplierless realization of the VLCM operation. The partitioning step is given in Fig. 4(a) when \(p\) is 8. Note that \(S\&Ct_{i}\), \(1\leq i\leq|T|\) denotes the S and C outputs of CSA realizing \(t_{i}\). #### Iii-A2 Realization of Coefficients The CSE algorithm of [11] is applied to find the shift-adds realization of coefficients in the set \(T\) with a small number of CSAs. Fig. 4(b) presents its solution with 4 CSAs. #### Iii-A3 Realization of Equations Initially, the common subexpressions in equations are found using the CSE algorithm of [11] and are replaced by their realizations. For our example, there is a single subexpression \(exp_{0}\) as shown in Fig. 4(c). 
Since it has 4 inputs, 2 CSAs are required, implemented as \(S\&Cexp_{0}=C19\ll 8+Saux+Caux\) with \(S\&Caux=S19\ll 8+C21+S21\). Then, final equations are realized using CSAs considering the sizes of CSAs as shown in Fig. 4(c). For our example, each final equation requires 4 CSAs. Thus, the realization of the VLCM operation under the SA-CSA architecture requires a total of 14 CSAs, 4 for the coefficients, 2 for the common subexpression in equations, and 8 for the final equations as shown in Fig. 4(d). The realizations of equations are depicted using a 4-input operation, which actually includes 2 CSAs, for the sake of clarity in Fig. 4(d). ### _SA-Hybrid Architecture_ To reduce the hardware complexity under the SA-CSA architecture, rather than CSAs, 2-input adders/subtractors can be used in the realization of coefficients at the second stage and in the realization of common subexpressions at the third stage, Fig. 4: Realization of the VLCM operation under the SA-CSA architecture: (a) partitioning; (b) coefficients; (c) equations; (d) implementation. Fig. 3: RNS-based design of the Montgomery multiplication. where the number of operations can be optimized using the GB algorithm of [10] and CSE algorithms of [8, 9], respectively. The final equations at the third stage can be realized using CSAs. Thus, the number of terms in the final equations, and consequently, the number of CSAs, can be reduced. Fig. 5 presents the stages in the realization of the VLCM operation under the SA-Hybrid architecture for our example in Fig. 4. The realization of coefficients and the common subexpression requires 5 and 1 adders/subtractors, respectively. Also, each final equation requires 1 CSA. Thus, 6 2-input adders/subtractors and 2 CSAs are required as shown in Fig. 5(d). Note that the sign value shown in 2-input operations denotes the operation type in this figure. ### _CT Architecture_ For each very large constant \(lc_{i}\), \(1\leq i\leq n\), the variable \(x\) is partitioned into \(r\) bits and the multiples of \(lc_{i}\) between 0 and \(2^{r}-1\), i.e., \(0,lc_{i},2lc_{i},\ldots,(2^{r}-1)lc_{i}\), are generated. For each partition of \(x\), \(x[jr-1:(j-1)r)]\), \(1\leq j\leq\lceil iw/r\rceil\), where \(iw\) is the bit-width of the input variable \(x\), the multiples of \(lc_{i}\) are selected using MUXes, shifted accordingly, and added using compressor trees. In this case, \(\lceil iw/r\rceil\)\(2^{r}\)-input MUXes and \(\lceil iw/r\rceil-2\) CSAs are required. The multiples of very large constants are described as constants so that the synthesis tool can apply its optimization techniques to simplify the logic. For our example, the realization of constant multiplications is shown in Fig. 6 when \(iw\) and \(r\) are 8 and 2, respectively. ### _The EDA Tool_ Given the very large constants, the design architecture, the number of bits used in partitioning the constants or input variable, and other design parameters, leiger automatically generates the behavioral description of the VLCM operation in Verilog, test-bench for verification, and synthesis and simulation scripts. It is equipped with algorithms developed for the optimization of the number of 2-input adders/subtractors and CSAs [8, 9, 10, 11]. It can generate the VLCM operation with one single output and two outputs as sum and carry. Moreover, it can automatically generate the Montgomery multiplication design described in Section II-B, where the VLCM operation is realized under a given architecture. 
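To make the partitioning stage of Section III-A1 concrete, the following Python sketch splits a very large constant into its nonzero \(p\)-bit coefficients and their shift values, and checks that the resulting equation reproduces the constant. The function name, the data layout, and the example constant are illustrative assumptions only and are not part of leiger.

```python
def partition_constant(lc, p):
    """Split a very large constant into nonzero p-bit coefficients and shifts
    (a simplified sketch of the partitioning stage in Section III-A1; the
    function name and data layout are illustrative, not taken from leiger).
    """
    mask = (1 << p) - 1
    terms = []                      # (coefficient, shift) pairs of the equation
    shift = 0
    while lc >> shift:
        c = (lc >> shift) & mask
        if c != 0:
            terms.append((c, shift))
        shift += p
    T = set(c for c, _ in terms)    # distinct nonzero coefficients
    U = [s for _, s in terms]       # shift value of each coefficient occurrence
    return T, U, terms

# Example with a made-up 64-bit constant and p = 8.
lc = 0xC3A500F17B029E44
T, U, terms = partition_constant(lc, p=8)
assert sum(c << s for c, s in terms) == lc   # lc = sum_j c_j * 2^{(j-1)p}
```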
It is available at _[https://github.com/leventatskoy/vlcm_](https://github.com/leventatskoy/vlcm_). ## IV Experimental Results As an experiment set, we use 5 elliptic curve instances taken from [22], namely _anomalous_, _anssifrp_, _bn(2,254)_, _brainpool256_, and _brainpool348_, whose underlying primes do not have any special form and are 204, 256, 254, 256, and 384 bits long, respectively. In this section, we present the gate-level synthesis results of the VLCM block of the Montgomery multiplication and of the entire Montgomery multiplication design based on these elliptic curves. These designs are also implemented under the shift-adds architecture using 2-input operations [7], denoted as SA-2IO. Note that logic synthesis was performed by Cadence Genus using a commercial 65 nm cell library and designs were validated using 10,000 randomly generated inputs in simulation. ### _Very Large Constant Multiplication_ As the first experiment, we generate the VLCM operations with two outputs as they are used in the Montgomery multiplication design. Possible realizations of the VLCM operation are obtained by changing the values of \(p\), i.e., the number of bits in partitioning the constant, under the shift-adds architectures and \(r\), i.e., the number of bits in partitioning the input variable, under the CT architecture. Note that \(p\) ranges from 8 to 28 in a step of 4 and \(r\) ranges between 2 and 7. The maximum value of \(p\) is due to the limitation of the GB algorithm of [10] on the bit-width of constants. The designs are synthesized without a strict delay constraint aiming for area optimization. Fig. 7 Fig. 5: Realization of the VLCM operation under the SA-Hybrid architecture: (a) partitioning; (b) coefficients; (c) equations; (d) implementation. Fig. 6: Design of the VLCM operation in Fig. 4 under the CT architecture. Fig. 7: Impact of \(p\) and \(r\) values on the area of the VLCM operation. shows the impact of \(p\) and \(r\) on the total area (in \(\mu m^{2}\)) of the _brainpool348_ instance under different architectures when the bit-width of the input variable, i.e., \(iw\), is 16. Observe from Fig. 7 that since the coefficients and equations to be realized under the shift-adds architectures are determined based on the \(p\) value and the number of multiples of the very large constant, MUXes, and CSAs are determined based on the \(r\) value under the CT architecture, they have a significant impact on area of the VLCM design. Note that the increase in area with respect to the minimum one can be up to 37.5%, 56.3%, 15.5%, and 63.4% under the SA-2IO, SA-CSA, SA-Hybrid, and CT architectures, respectively. Similar results were observed on other instances also when \(iw\) is 32 and 64. To further explore the impact of a design architecture on the gate-level area of VLCM designs under different \(iw\) values, Fig. 8 presents the results belonging to VLCM designs with the smallest area value among others obtained with aforementioned \(p\) and \(r\) values when \(iw\) is 16, 32, and 64. Observe from Fig. 8 that designs under the SA-CSA architecture have the largest area. When \(iw\) is 16, the designs under the CT architecture have less area than designs under the SA-2IO and SA-Hybrid architectures, except the _anomalous_ instance. However, as \(iw\) increases, the SA-2IO and SA-Hybrid architectures lead to designs with smaller area when compared to those realized under the CT architecture. 
For example, on the _ansifirp_ instance when \(iw\) is 16, the design under the CT architecture has 12.3% and 25.7% gain in area with respect to the designs under the SA-2IO and SA-Hybrid architectures, respectively. However, on the same instance when \(iw\) is 64, the design under the SA-2IO (SA-Hybrid) architecture has a 41.5% (28.6%) gain in the area when compared to the design under the CT architecture. This is because as \(iw\) increases, both the size and number of CSA operations increase under the CT architectures while only the size of operations increases under the shift-adds architectures. To find the impact of the design architecture on the minimum achievable delay values, denoted as _mad_, the _anomalous_ instance is synthesized under different architectures while the delay constraint is changed in a binary search manner until the minimum delay in the critical path is found without a negative slack. In this case, the initial lower and upper bounds on the delay constraint are set to 0 ps and 80 ns, respectively. Table I presents the gate-level synthesis results of these designs when \(iw\) is 16 and 32. In this table, \(A\), \(D\), _ADP_, and \(P\) are the total area in \(\mu m^{2}\), the critical path delay in \(ps\), area-delay product in \(10^{6}\times\mu m^{2}\times ps\), and the total power dissipation in \(\mu W\), respectively. Observe from Table I that while the SA-2IO architecture leads to designs with the largest _mad_ values, the designs under the CT architecture have the smallest _mad_, ADP, and power dissipation values. The SA-Hybrid architecture achieves better _mad_ and ADP values than the SA-2IO architecture. The SA-CSA architecture leads to designs with promising _mad_ values, having the smallest ADP value among the shift-adds architectures when \(iw\) is 32. This is because the logic synthesis tool has a large room to optimize area under a strict delay constraint when CSAs are used. We note that similar results were obtained on other elliptic curves also when \(iw\) is 64. To further explore the area and delay tradeoff on the VLCM designs, the _anomalous_ instance is synthesized with a delay constraint ranging from 1350 ps and 2350 ps in a step of 100 ps when \(iw\) is 16. Observe from Table I that the largest \(mad\) value is 1341 ps when \(iw\) is 16. Fig. 9 shows the gate-level area of these designs under different architectures. Observe from Fig. 9 that as the delay constraint is decreased, there is a slight increase in the area of designs under the SA-CSA and CT architectures. This is because the \(mad\) values of these designs are smaller than the given delay constraints as shown in Table I and hence, these delay constraints are Fig. 8: Impact of design architectures on the area of the VLCM operation: (a) \(iw\) is 16; (b) \(iw\) is 32; (c) \(iw\) is 64. Fig. 9: Impact of delay constraint on the area of the VLCM operation. easily satisfied by the logic synthesis tool. However, as the delay constraint is decreased, the area of designs under the SA-2IO and SA-Hybrid architectures increases significantly, which is simply to satisfy the delay constraint. Note that while the design under the SA-Hybrid architecture is 9.8% smaller than that under the CT architecture when the delay constraint is 2350 ps, the area of the design under the SA-Hybrid architecture is \(1.5\times\) larger than that of the design under the CT architecture when the delay constraint is 1350 ps. 
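The binary-search exploration of the minimum achievable delay described above can be summarized by the short sketch below; `run_synthesis` is a stand-in for invoking the logic synthesis tool with a given delay constraint and reading back the worst slack, and the 10 ps stopping tolerance is an assumption made only for illustration.

```python
def find_mad(run_synthesis, lo_ps=0, hi_ps=80_000, tol_ps=10):
    """Binary search for the minimum achievable delay (mad) of a design.

    run_synthesis(constraint_ps) is assumed to synthesize the design under the
    given delay constraint and return the worst slack in ps (negative slack
    means the constraint was missed). The 0 ps and 80 ns bounds follow the
    setup described in Section IV-A; the tolerance is an illustrative choice.
    """
    while hi_ps - lo_ps > tol_ps:
        mid = (lo_ps + hi_ps) // 2
        if run_synthesis(mid) >= 0:   # timing met: try a tighter constraint
            hi_ps = mid
        else:                         # negative slack: relax the constraint
            lo_ps = mid
    return hi_ps                      # tightest constraint that closed timing
```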
### _Montgomery Multiplication_ As the second experiment, we implement the entire Montgomery multiplication using the VLCM design realized under a given architecture. In these Montgomery multiplication designs, possible realizations of the VLCM operations were obtained by exploring the values of \(p\) and \(r\) as mentioned in Section IV-A. We note that different \(p\) and \(r\) values lead to Montgomery multiplication designs with different hardware complexity, although it is not as significant as in the VLCM operation shown in Fig. 7. On the _brainpool348_ instance when \(iw\) is 16, the increase in area with respect to the minimum one can be up to 2.8%, 3.4%, 2.5%, and 5.2% under the SA-2IO, SA-CSA, SA-Hybrid, and CT architectures, respectively. We also explored the impact of the design architecture of the VLCM operation on the gate-level area of the Montgomery multiplication design under different \(iw\) values. Similar to the results shown in Fig. 8, the SA-CSA architecture in the VLCM operation leads to Montgomery multiplication designs with the largest area and as \(iw\) increases, the SA-2IO and SA-Hybrid architectures lead to Montgomery multiplication designs with less area with respect to the CT architecture. To explore the impact of the design architecture of the VLCM operation on the _mad_ value of the Montgomery multiplication design, the _anomalous_ instance is synthesized in a binary search manner as mentioned in Section IV-A. Table II presents the gate-level synthesis results of these designs. In this table, \(L\) denotes the latency of the design in \(ns\), computed as \(D\times CC\), where \(CC\) is the number of clock cycles required to compute the multiplication result and is 51 and 30 for the _anomalous_ instance when \(iw\) is 16 and 32, respectively. Also, \(E\) denotes the energy consumption in \(nW\), computed as \(L\times P\). Observe from Table II that the CT architecture leads to designs with the smallest _mad_, ADP, latency, and energy consumption values, but with the largest area. The designs under the SA-CSA and SA-Hybrid architectures have smaller _mad_, ADP, latency, and energy consumption values when compared to those under the SA-2IO architecture. Finally, Table III presents the high-speed Montgomery multiplication designs under the CT architecture with the _mad_ values when \(iw\) is 16 and 32. Observe from Table III that as \(iw\) increases, the area and energy consumption of the Montgomery multiplication design increase significantly. However, in this case, the number of clock cycles and latency decrease. Note that while the gain in area can be up to 42.2% on the _brainpool348_ instance when compared to designs generated when \(iw\) is 16 and 32, the gain in latency can be up to 28.2% on the _ansifrp_ instance when compared to designs generated when \(iw\) is 32 and 16. ## V Conclusions This paper introduced an EDA tool, called leiger, that can generate high-speed realizations of the VLCM operation. leiger can implement the VLCM operation under different architectures and is equipped with techniques, which can optimize the number of operations used in the shift-adds architectures. As a case study, leiger was applied to the VLCM block of the Montgomery multiplication and high-speed Montgomery multiplication designs were obtained. 
It was shown that leiger enables a designer to generate VLCM and Montgomery multiplication designs that fit low-complexity and high-speed applications, and to explore the tradeoff between area and delay of these designs. ## Acknowledgment This work was partially supported by the EU through the European Social Fund in the context of the project "ICT programme". This work was also initiated as part of the EU's H2020 project SAFEST (grant agreement No 952252).
2301.13510
3D Former: Monocular Scene Reconstruction with 3D SDF Transformers
Monocular scene reconstruction from posed images is challenging due to the complexity of a large environment. Recent volumetric methods learn to directly predict the TSDF volume and have demonstrated promising results in this task. However, most methods focus on how to extract and fuse the 2D features to a 3D feature volume, but none of them improve the way how the 3D volume is aggregated. In this work, we propose an SDF transformer network, which replaces the role of 3D CNN for better 3D feature aggregation. To reduce the explosive computation complexity of the 3D multi-head attention, we propose a sparse window attention module, where the attention is only calculated between the non-empty voxels within a local window. Then a top-down-bottom-up 3D attention network is built for 3D feature aggregation, where a dilate-attention structure is proposed to prevent geometry degeneration, and two global modules are employed to equip with global receptive fields. The experiments on multiple datasets show that this 3D transformer network generates a more accurate and complete reconstruction, which outperforms previous methods by a large margin. Remarkably, the mesh accuracy is improved by 41.8%, and the mesh completeness is improved by 25.3% on the ScanNet dataset. Project page: https://weihaosky.github.io/sdfformer.
Weihao Yuan, Xiaodong Gu, Heng Li, Zilong Dong, Siyu Zhu
2023-01-31T09:54:20Z
http://arxiv.org/abs/2301.13510v2
# 3D Former: Monocular Scene Reconstruction with 3D SDF Transformers ###### Abstract Monocular scene reconstruction from posed images is challenging due to the complexity of a large environment. Recent volumetric methods learn to directly predict the TSDF volume and have demonstrated promising results in this task. However, most methods focus on how to extract and fuse the 2D features to a 3D feature volume, but none of them improve the way how the 3D volume is aggregated. In this work, we propose an SDF transformer network, which replaces the role of 3D CNN for better 3D feature aggregation. To reduce the explosive computation complexity of the 3D multi-head attention, we propose a sparse window attention module, where the attention is only calculated between the non-empty voxels within a local window. Then a top-down-bottom-up 3D attention network is built for 3D feature aggregation, where a dilate-attention structure is proposed to prevent geometry degeneration, and two global modules are employed to equip with global receptive fields. The experiments on multiple datasets show that this 3D transformer network generates a more accurate and complete reconstruction, which outperforms previous methods by a large margin. Remarkably, the mesh accuracy is improved by \(41.8\%\), and the mesh completeness is improved by \(25.3\%\) on the ScanNet dataset. 1 Footnote 1: Project Page: [https://weihaosky.github.io/former3d](https://weihaosky.github.io/former3d) ## 1 Introduction Monocular 3D reconstruction is a classical task in computer vision and is essential for numerous applications like autonomous navigation, robotics, and augmented/virtual reality. Such a vision task aims to reconstruct an accurate and complete dense 3D shape of an unstructured scene from only a sequence of monocular RGB images. While the camera poses can be estimated accurately with the state-of-the-art SLAM (Campos et al., 2021) or SfM systems (Schonberger and Frahm, 2016), a dense 3D scene reconstruction from these posed images is still a challenging problem due to the complex geometry of a large-scale environment, such as the various objects, flexible lighting, reflective surfaces, and diverse cameras of different focus, distortion, and sensor noise. Many previous methods reconstruct the scenario in a multi-view depth manner (Yao et al., 2018; Chen et al., 2019; Duzceker et al., 2021). They predict the dense depth map of each target frame, which can estimate accurate local geometry but need additional efforts in fusing these depth maps (Murez et al., 2020; Sun et al., 2021), e.g., solving the inconsistencies between different views. Recently, some methods have tried to directly regress the complete 3D surface of the entire scene (Murez et al., 2020; Sun et al., 2021) from a truncated signed distance function (TSDF) representation. They first extract the 2D features with 2D convolutional neural networks (CNN), and then back-project the features to 3D space. Afterward, the 3D feature volume is processed by a 3D CNN network to output a TSDF volume prediction, which is extracted to a surface mesh by marching cubes (Lorensen and Cline, 1987). This way of reconstruction is end-to-end trainable, and is demonstrated to output accurate, coherent, and complete meshes. In this paper, we follow this volume-based 3D reconstruction path and directly regress the TSDF volume. 
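As a minimal illustration of the last step of this volumetric pipeline, a surface mesh can be extracted from a predicted TSDF volume at its zero level set with an off-the-shelf marching-cubes routine; the random volume below is only a placeholder for the network prediction, and scikit-image is used here merely as one possible implementation.

```python
import numpy as np
from skimage import measure

# Placeholder for the predicted TSDF volume S^0: values in [-1, 1] with the
# surface located at the zero crossing (here filled with random numbers).
tsdf = np.random.uniform(-1.0, 1.0, size=(64, 64, 64)).astype(np.float32)
voxel_size = 0.04  # 4 cm voxels, the fine-level resolution used in our experiments

# Marching cubes (Lorensen & Cline, 1987) extracts the triangle mesh.
verts, faces, normals, _ = measure.marching_cubes(
    tsdf, level=0.0, spacing=(voxel_size,) * 3
)
print(verts.shape, faces.shape)  # (num_vertices, 3), (num_triangles, 3)
```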
Inspired by recent successes of vision transformer (Vaswani et al., 2017; Dosovitskiy et al., 2020), some approaches (Bozic et al., 2021; Stier et al., 2021) have adopted this structure in 3D reconstruction, but their usages are all limited to fusing the 2D features from different views while the aggregation of the 3D feature volumes is still performed by the 3D CNN. In this paper, we claim that the aggregation of 3D feature volume is also critical, and the evolution from 3D CNN to 3D multi-head attention could further improve both the accuracy and completeness of the reconstruction. Obviously, the limited usage of 3D multi-head attention in 3D feature volume aggregation is mainly due to its explosive computation. Specifically, the attention between each voxel and any other voxel needs to be calculated, which is hard to be realized in a general computing platform. This is also the reason why there are only a few applications of 3D transformers in solving 3D tasks. In this work, to address the above challenges and make the 3D transformer practical for 3D scene reconstruction, we propose a sparse window multi-head attention structure. Inspired by the sparse CNN (Yan et al., 2018), we first sparsify the 3D feature volume with predicted occupancy, in which way the number of the voxels is reduced to only the occupied ones. Then, to compute the attention score of a target voxel, we define a local window centered on this voxel, within which the non-empty voxels are considered for attention computing. In this way, the computation complexity of the 3D multi-head attention can be reduced by orders of magnitude, and this module can be embedded into a network for 3D feature aggregation. Therefore, with this module, we build the first 3D transformer based top-down-bottom-up network, where a dilate-attention module and its inverse are used to downsample and upsample the 3D feature volume. In addition, to make up for the local receptive field of the sparse window attention, we add a global attention module and a global context module at the bottom of this network since the size of the volume is very small at the bottom level. With this network, the 3D shape is estimated in a coarse-to-fine manner of three levels, as is displayed in Figure 1. To the best of our knowledge, this is the first paper employing the 3D transformer for 3D scene reconstruction from a TSDF representation. In the experiments, our method is demonstrated to outperform previous methods by a significant margin on multiple datasets. Specifically, the accuracy metric of the mesh on the ScanNet dataset is reduced by \(41.8\%\), from \(0.055\) to \(0.032\), and the completeness metric is reduced by \(25.3\%\), from \(0.083\) to \(0.062\). In the qualitative results, the meshes reconstructed by our method are dense, accurate, and complete. The main contributions of this work are then summarized as follows: \(\bullet\) We propose a sparse window multi-head attention module, with which the computation complexity of the 3D transformer is reduced significantly and becomes feasible. \(\bullet\) We propose a dilate-attention structure to avoid geometry degeneration in downsampling, with which we build the first top-down-bottom-up 3D transformer network for 3D feature aggregation. This network is further improved with bottom-level global attention and global context encoding. 
\(\bullet\) This 3D transformer is employed to aggregate the 3D features back-projected from the 2D features of an image sequence in a coarse-to-fine manner, and predict TSDF values for accurate and complete 3D reconstruction. This framework shows a significant improvement in multiple datasets. ## 2 Related Work **Depth-based 3D Reconstruction.** In traditional methods, reconstructing a 3D model of a scene usually involves depth estimating for a series of images, and then fusing these depths together into Figure 1: The overview of the 3D reconstruction framework. The input images are extracted to features by a 2D backbone network, then the 2D features are back-projected and fused to 3D feature volumes, which are aggregated by our 3D SDF transformer and generate the reconstruction in a coarse-to-fine manner. a 3D data structure (Schonberger et al., 2016). After the rising of deep learning, many works have tried to estimate accurate and dense depth maps with deep neural networks (Yao et al., 2018; Wang and Shen, 2018; Chen et al., 2019; Im et al., 2019; Yuan et al., 2021, 2022; Long et al., 2021). They usually estimate the depth map of the reference image by constructing a 3D cost volume from several frames in a local window. Also, to leverage the information in the image sequence, some other methods try to propagate the message from previously predicted depths utilizing probabilistic filtering (Liu et al., 2019), Gaussian process (Hou et al., 2019), or recurrent neural networks (Duzceker et al., 2021). Although the predicted depth maps are increasingly accurate, there is still a gap between these single-view depths and the complete 3D shape. Post mesh generation like Poisson reconstruction (Kazhdan and Hoppe, 2013), Delaunay triangulation (Labatut et al., 2009), and TSDF fusion (Newcombe et al., 2011) are proposed to solve this problem, but the inconsistency between different views is still a challenge. **Volume-based 3D Reconstruction.** To avoid the depth estimation and fusion in 3D reconstruction, some methods try to directly regress a volumetric data structure end-to-end. SurfaceNet (Ji et al., 2017) encodes the camera parameters together with the images to predict a 3D surface occupancy volume with 3D convolutional networks. Afterward, Atlas (Murez et al., 2020) back-projects the 2D features of all images into a 3D feature volume with the estimated camera poses, and then feeds this 3D volume into a 3D U-Net to predict a TSDF volume. Then NeuralRecon (Sun et al., 2021) improves the efficiency by doing this within a local window and then fusing the prediction together using a GRU module. Recently, to improve the accuracy of the reconstruction, some methods also introduce transformers to do the fusion of 2D features from different views (Bozic et al., 2021; Stier et al., 2021). However, their transformers are all limited in 2D space and used to process 2D features, which is not straightforward in the 3D reconstruction task. There are also some methods for object 3D shape prediction, which can infer the 3D shape of objects with only a few views (Xie et al., 2020; Wang et al., 2021). But the network of these methods can only infer the shape of one category of small objects. Lately, some works represent the 3D shape with an implicit network, and optimize the implicit representation by neural rendering (Yariv et al., 2020; Wang et al., 2021; Yariv et al., 2021). These methods could obtain a fine surface of an object with iterative optimization, but with the cost of a long-time reconstruction. 
**Transformers in 3D Vision.** The transformer structure (Vaswani et al., 2017) has attracted a lot of attention and achieved many successes in vision tasks (Dosovitskiy et al., 2020; Liu et al., 2021). Most of them, nevertheless, are used for 2D feature extraction and aggregation. Even in 2D feature processing, the computation complexity is already quite high, so many works are proposed to reduce the resource-consuming (Dosovitskiy et al., 2020; Liu et al., 2021). Directly extending the transformer from 2D to 3D would cause catastrophic computation. Thus most works are only carefully performed on resource-saving feature extraction, e.g., the one-off straightforward feature mapping without any downsampling or upsampling (Wang et al., 2021), where the size of the feature volume remains unchanged, or the top-down tasks with only downsampling (Mao et al., 2021), where the size of the feature volume is reduced gradually. In 3D reconstruction, however, a top-down-bottom-up structure is more reasonable for feature extraction and shape generation, as in most of the 3D-CNN-based structures (Murez et al., 2020; Sun et al., 2021; Stier et al., 2021). So in this work, we design the first 3D transformer based top-down-bottom-up structure for improving the quality of 3D reconstruction. In addition, a sparse window multi-head attention mechanism is proposed to save the computation cost. Although the sparse structure can handle the highly-sparse data, like the object detection of Lidar points (Mao et al., 2021), it is not suitable for processing a relatively-dense data, like a mesh of an indoor scene. Therefore, a sparse window structure is needed in 3D scene reconstruction, where a dense surface within a window could be sufficiently aggregated. ## 3 Method ### Overview The overview framework of our method is illustrated in Figure 1. Given a sequence of images \(\{\mathbf{I}_{i}\}_{i=1}^{N}\) of a scene and the corresponding camera intrinsics \(\{\mathbf{K}_{i}\}_{i=1}^{N}\) and extrinsics \(\{\mathbf{P}_{i}\}_{i=1}^{N}\), we first extract the image features \(\{\mathbf{F}_{i}\}_{i=1}^{N}\) in 2D space in three levels, and then back project these 2D features to 3D space, which are fused to three feature volumes in the coarse, medium, and fine levels, respectively. Afterward, these three feature volumes are aggregated by our SDF 3D transformer in a coarse-to-fine manner. At the coarse and medium levels, the output of the 3D transformer is two occupancy volumes \(\mathbf{O^{2}},\mathbf{O^{1}}\), while at the fine level, the output is the predicted TSDF volume \(\mathbf{S^{0}}\). The coarse occupancy volume \(\mathbf{O^{2}}\) and the medium occupancy volume \(\mathbf{O^{1}}\) store the occupancy values \(o\in[0,1]\) of the voxels, which are used to sparsify the finer level. Therefore, the feature volumes could be processed sparsely to reduce the computation complexity. Finally, the predicted mesh is extracted using marching cubes (Lorensen and Cline, 1987) from the TSDF volume \(\mathbf{S^{0}}\). ### Feature Volume Construction The 2D features \(\{\mathbf{F}_{i}^{l}\}_{i=1}^{N}\) in three levels \(l=0,1,2\) are extracted by a feature pyramid network (Lin et al., 2017) with the MnasNet-B1 (Tan et al., 2019) as the backbone. The resolution of the features at these three levels are \(\frac{1}{4},\frac{1}{8},\frac{1}{16}\), respectively. Then following Murez et al. 
(2020), we back project the 2D features to 3D space with the camera parameters \(\{\mathbf{K}_{i}\}_{i=1}^{N}\) and \(\{\mathbf{P}_{i}\}_{i=1}^{N}\), generating 3D feature volumes \(\{\mathbf{V}_{i}^{l}\}_{i=1}^{N}\) of size \(N_{X}\times N_{Y}\times N_{Z}\). In previous work, usually the fusion of these feature volumes from different views is computed by taking the average (Murez et al., 2020; Sun et al., 2021). However, the back-projected features from different views contribute differently to the 3D shape, e.g., the view with a bad viewing angle and the voxels far from the surface. Therefore, a weighted average is more reasonable than taking the average. To compute these weights, for each voxel we calculate the variance of the features of different views by \[\textbf{Var}_{i}^{l}=(\mathbf{V}_{i}^{l}-\overline{\boldsymbol{\nabla}}^{l})^ {2}, \tag{1}\] where \(\overline{\boldsymbol{\nabla}}^{l}\) is the average of the features of all views. Then we feed the features and the variance into a small MLP to calculate the weights \(\mathbf{W}_{i}\), which are used to compute a weighted average of the features from different views as \[\mathbf{V}_{w}^{l}=\frac{1}{N}\sum_{i}\mathbf{V}_{i}^{l}\times\text{SoftMax}( \mathbf{W}_{i}), \tag{2}\] where \(\times\) denotes element-wise multiplication. Inspired by Yao et al. (2018), we also calculate the total variance of all feature volumes and then concatenate it with the weighted average to the final feature volumes, as \[\mathbf{V}^{l}=\{\mathbf{V}_{w}^{l},\frac{1}{N}\sum_{i}\textbf{Var}_{i}^{l}\}, \tag{3}\] ### Sparse Window Multi-head Attention The multi-head attention structure has been shown to be effective in many vision tasks (Dosovitskiy et al., 2020; Liu et al., 2021). Most of them, however, are limited to 2D feature processing rather than 3D feature processing. This is because the computation complexity of the multi-head attention is usually higher than convolutional networks, which problem is further enlarged in 3D features. To compute this for a 3D feature volume, the attentions between a voxel and any other voxels need to be computed, i.e., \(N_{X}\times N_{Y}\times N_{Z}\) attentions for one voxel and \(N_{X}\times N_{Y}\times N_{Z}\times N_{X}\times N_{Y}\times N_{Z}\) attentions for all voxels, which is extremely large and hard to be realized in regular GPUs. Figure 2: (a) Illustration of the sparse window attention. For calculating the attention of the current voxel (in orange), we first sparsify the volume using the occupancy prediction from the coarser level, and then search the occupied voxels (in dark blue) within a small window. The attention is hence computed based on only these neighbor occupied voxels. (b) Illustration of the dilate-attention in a 2D slice. We dilate the occupied voxels and calculate the attention of these dilated voxels (in yellow) to maintain the geometry structure. To deal with this problem and make the multi-head attention of 3D volumes feasible, we propose to use a sparse window structure to calculate the attention. As is displayed in Figure 1, in the medium and the fine level, we sparsify the volumes using the occupancy prediction \(\mathbf{O^{2}},\mathbf{O^{1}}\), and only compute the attention of the non-empty voxels. In addition, considering that the nearby voxels contribute more to the shape of the current voxel and the distant voxels contribute less, we only calculate the attention within a local window of each voxel, as is shown in Figure 2. 
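A rough sketch of this sparsify-and-gather step is given below, ahead of the formal definition in the following paragraph: for every occupied voxel we collect the occupied voxels inside a local window, and only these pairs enter the attention computation. The brute-force neighbour search, tensor shapes, and helper name are assumptions made for clarity; an efficient implementation would instead hash the sparse voxel coordinates.

```python
import torch

def window_neighbours(coords, window=10):
    """For each occupied voxel, collect the indices of occupied voxels lying in
    an n x n x n window centred on it (brute-force version, for illustration).

    coords: (M, 3) integer coordinates of the non-empty voxels after
    sparsification with the coarser-level occupancy prediction.
    Returns a list of M index tensors, one neighbourhood Omega(i) per voxel.
    """
    half = window // 2
    diff = (coords[:, None, :] - coords[None, :, :]).abs()  # (M, M, 3)
    inside = (diff <= half).all(dim=-1)                      # (M, M) mask
    return [torch.nonzero(row, as_tuple=False).flatten() for row in inside]

# Toy example: the isolated voxel at (20, 20, 20) only attends to itself.
coords = torch.tensor([[0, 0, 0], [1, 2, 0], [3, 3, 3], [20, 20, 20]])
omega = window_neighbours(coords, window=10)
```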
Therefore, we are able to only calculate the multi-head attention of the occupied voxels within a small window, in which way the computation complexity is reduced significantly. Specifically, for any non-empty voxel \(v_{i}\) in the feature volume \(V\), we first search all non-empty voxels within a \(n\times n\times n\) window centered on this voxel and get the neighbor voxels \(\{v_{j},j\in\Omega(i)\}\). Then the query, key, and value embeddings are calculated as \[Q_{i}=\mathcal{L}_{q}(V(v_{i})),K_{j}=\mathcal{L}_{k}(V(v_{j})),V_{j}= \mathcal{L}_{v}(V(v_{j})), \tag{4}\] where \(\mathcal{L}_{q},\mathcal{L}_{k},\mathcal{L}_{v}\) are the linear projection layers. For the position embedding \(P\), we hope to block the influence from the scale of the 3D world coordinates. Hence we compute it based on the relative voxel position in the volume rather than based on the real-world coordinates (Mao et al., 2021), as \[P_{j}=\mathcal{L}_{p}(v_{j}-v_{i}). \tag{5}\] Then the attention is calculated as \[\text{Attention}(v_{i})=\sum_{j\in\Omega(i)}\text{SoftMax}(Q_{i}(K_{j}+P_{j}) /\sqrt{d})(V_{j}+P_{j}). \tag{6}\] In this case, the computation complexity is reduced from \[\mathcal{O}_{\text{3D-Attn}}=N_{X}\times N_{Y}\times N_{Z}\times N_{X}\times N _{Y}\times N_{Z}\times\mathbb{O}(ij), \tag{7}\] to \[\mathcal{O}_{\text{SW-3D-Attn}}=N_{\text{occu}}\times n_{\text{occu}}\times \mathbb{O}(ij), \tag{8}\] where \(\mathbb{O}(ij)\) is the complexity of one attention computation between voxel \(v_{i}\) and \(v_{j}\), \(N_{\text{occu}}\) is the number of occupied voxels in the volume, and \(n_{\text{occu}}\) is the number of occupied voxels within the local window. Assuming that the occupancy rate of the volume is \(10\%\) and the window size is \(\frac{1}{10}\) of the volume size, the computation complexity of the sparse window attention would be only \(\frac{n^{3}/10}{10N_{X}N_{Y}N_{Z}}=\frac{1}{100000}\) of the dense 3D attention. ### SDF 3D Transformer Limited by the high resource-consuming of the multi-head attention, most of the previous works related to 3D transformers are only carefully performed on resource-saving feature processing, e.g., the one-off straightforward feature mapping without any downsampling or upsampling (Wang et al., 2021), where the size of feature volumes remains unchanged, or the top-down tasks with only downsampling (Mao et al., 2021), where the size of feature volumes is reduced gradually. In 3D reconstruction, however, a top-down-bottom-up structure is more reasonable for feature extraction and prediction generation, as in most of the 3D-CNN-based structures (Murze et al., 2020; Sun et al., 2021; Stier et al., 2021). So in this work, we design the first 3D transformer based top-down-bottom-up structure, as is shown in Figure 3. Figure 3: The structure of the SDF transformer. “S-W-Attn” denotes sparse window attention. Taking the network for the fine volume (\(V^{0}\) in Figure 1) as an example, there are four feature levels in total, i.e. \(\frac{1}{2},\frac{1}{4},\frac{1}{8},\frac{1}{16}\), as shown in Figure 3. In the encoder part, at each level, a combination of downsampling and dilate-attention is proposed to downsample the feature volume. Then two blocks of the sparse window multi-head attention are used to aggregate the feature volumes. At the bottom level, a global attention block is employed to make up the small receptive field of the window attention, and a global context encoding block is utilized to extract the global information. 
In the decoder part, we use the inverse sparse 3D CNN to upsample the feature volume, i.e., we store the mapping of the down flow and now restore the spatial structure by inversing the sparse 3D CNN in the dilate-attention. Therefore, the final shape after the up flow should be the same as the input. Similar to FPN (Lin et al., 2017), the features in the down flow are also added to the upsampled features in the corresponding level. To enable the deformation ability, a post-dilate-attention block is equipped after the down-up flow. Finally, a submanifold 3D CNN head with Tanh activation is appended to output the TSDF prediction. For the coarse volume \(V^{2}\) and medium volume \(V^{1}\), two and three-level of similar structures with Sigmoid activation are adopted. **Dilate-attention.** The direct downsampling of a sparse structure is prone to losing geometry structure. To deal with this, between each level we first downsample the feature volume, and then dilate the volume with a sparse 3D CNN with the kernel size of \(3\), which calculates the output if any voxel within its kernel is non-empty. The dilation operation alone may also harm the geometry, since it may add some wrong voxels into the sparse structure. Thus we calculate the sparse window attention of the dilated voxels, such that the voxels far from the surface would get low scores and do not contribute to the final shape. The dilated voxels are then joined to the downsampled volume by concatenating the voxels together. With this dilate-attention module, the 3D shape is prevented from collapsing. Without this module, the network performs badly and only generates a degraded shape. **Global attention and global context encoding.** Since the attention blocks in the top-down flow are all local-window based, there could be a lack of the global receptive field. Considering the resolution of the bottom level is not high, we equip with a global attention block at the bottom level, i.e., we calculate the attention between each non-empty voxel and any other non-empty voxel in the volume. This could build the long-range dependency missing in the sparse window attention blocks. In addition, we use the multi-scale global averaging pooling (Zhao et al., 2017) of scales \(1,2,3\) to extract the global context code of the scene. This encoding module could aggregate the global information and explain the illumination, global texture, and global geometry style. ### Loss Function The final TSDF prediction \(\mathbf{S}^{0}\) is supervised by the log L1 distance between the prediction and the ground truth as \(L^{0}=|\log\mathbf{S}^{0}-\log\widehat{\mathbf{S}}|\). To supervie the occupancy predictions \(\mathbf{O}^{2},\mathbf{O}^{1}\) in the coarse and medium levels, we generate the occupancy volumes based on the TSDF values. Specifically, the voxels with TSDF of \(-1\sim 1\) are regarded as occupied, and the values are set to \(1\), otherwise set to \(0\). Then a binary cross-entropy loss is calculated between the prediction and the ground truth as: \(L^{l}=-\widehat{\mathbf{O}^{l}}\log\mathbf{O}^{l},\ \ l=1,2\). To supervie the averaging weights \(\mathbf{W}_{i}^{l}\), we use the occupancy in the back-projection following Stier et al. (2021). Intuitively, when the feature is back-projected from a 2D image to the 3D space along the camera ray using multiple depth values, we hope the voxels close to the mesh surface have bigger weights in the fusion. 
Therefore, the 3D position is regarded as occupied if the difference between the project depth and the true depth from the depth map is smaller than the TSDF truncation distance. Then the cross entropy loss is applied to the weights and the occupancy: \[L_{w}^{l}=-\widehat{\mathbf{O}_{i}^{l}}\log\sigma(\mathbf{W}_{i}^{l}),\ \ l=1,2,3, \tag{9}\] where \(\sigma\) denotes Sigmoid, and \(\widehat{\mathbf{O}_{i}^{l}}\) is the ground truth occupancy in the back-projection of image \(I_{i}\). Figure 4: Ablation study on the ScanNet dataset. ## 4 Experiments ### Experiments Setup Our work is implemented in Pytorch and trained on Nvidia V100 GPUs. The network is optimized with the Adam optimizer (\(\beta_{1}=0.9,\beta_{2}=0.999\)) with learning rate of \(1\times 10^{-4}\). For a fair comparison with previous methods, the voxel size of the fine level is set to 4cm, and the TSDF truncation distance is set to triple the voxel size. Thus the voxel size of the medium and the coarse levels are \(8\) cm and \(16\) cm, respectively. For the balance of efficiency and receptive field, the window size of the sparse window attention is set to \(10\). For the view selection, we first follow Hou et al. (2019) to remove the redundant views, i.e., a new incoming frame is added to the system only if its relative translation is greater than \(0.1\) m and the relative rotation angle is greater than \(15\) degree. Then if the number of the remaining views exceeds the upper limit, a random selection is adopted for memory efficiency. The view limit is set to \(20\) in the training, which means twenty images are input to the network for one iteration, while the limit for testing is set to \(150\). Our framework runs at an online speed of \(75\) FPS for the keyframes. Detailed efficiency experiments are reported in the supplemental materials. ScanNet (Dai et al., 2017) is a large-scale indoor dataset composed of \(1613\) RGB-D videos of \(806\) indoor scenes. We follow the official train/test split, where there are \(1513\) scans used for training and \(100\) scans used for testing. TUM-RGBD (Sturm et al., 2012) and ICL-NUIM (Handa et al., 2014) are also two datasets composed of RGB-D videos but with small-number scenes. Therefore, following previous methods (Stier et al., 2021), we only perform the generalization evaluation of the model trained on ScanNet on these two datasets, where 13 scenes of TUM-RGBD and 8 scenes of ICL-NUIM are used. We first directly evaluate the reconstructed meshes with the ground-truth meshes, and obtain a significant improvement from previous methods, improving from F-score \(=0.641\) to F-score \(=0.705\), as shown in Table 1. Then following Bozic et al. (2021), we add the same occlusion mask at evaluation to avoid penalizing a more complete reconstruction, which is because the ground-truth meshes are incomplete due to unobserved and occluded regions, while our method could reconstruct a more complete 3D shape, as shown in Figure 5. This results in a more reasonable evaluation, as in the second part of Table 1. The improvement is further enlarged, from F-score \(=0.655\) to F-score \(=0.754\) compared to previous best method. The accuracy error is decreased from \(0.055\) m to \(0.032\) m, which is almost half (\(41.8\%\)) of the previous best method, while the completeness error is decreased by \(25.3\%\), from \(0.083\) m to \(0.062\) m. This owes to the feature aggregating ability of the proposed 3D SDF transformer, which can predict a more accurate 3D shape. 
This is also demonstrated in the generalization experiments on ICL-NUIM and TUM-RGBD datasets, as shown in Figure 3. After evaluating the reconstructed meshes, we also evaluate the depth accuracy of our method. Since our method does not predict the depth maps explicitly, we render the predicted 3D shape to the Figure 5: The qualitative results on the ScanNet dataset. Texture-less rendering is displayed in the appendix. image planes and get the depth maps, following previous methods (Murez et al., 2020). The results are shown in Table 2, from which we can see our method decreases the error a lot from previous methods. The relative error is reduced by \(16.4\%\), from \(0.061\) to \(0.051\). The accuracy of the depth maps also demonstrates the accurate feature analysis ability of the proposed 3D SDF transformer. From the qualitative visualization in Figure 5, we can see our method can predict a complete and accurate 3D shape. Previous methods which can recover a complete mesh usually reconstruct a smooth 3D shape with losing some details (Murez et al., 2020). However, our method could predict a more complete mesh than the ground truth, while the details of the 3D shapes are better recovered. Please note that for a fair comparison, the voxel size is set to \(4\) cm, such that it is hard to reconstruct the geometry details less than \(4\) cm. ### Ablation Study **SDF transformer.** To verify the effectiveness of the proposed SDF transformer, we first build a baseline model with the same structure as Figure 1, but the 3D SDF transformer is replaced by a UNet structure of 3D CNN. Adding the variance fusion would improve the mesh in some clutter areas and slightly increase the performance. Then we add a base version of the SDF transformer, which does not include the global module and the post-dilate-attention module. The performance is significantly improved with this module, as is shown in Table 4 and Figure 4. The reconstructed meshes possess much more geometry details compared to the baseline. **Global module.** We next add the global module, including the bottom-level global attention and the global context code. The sparse window attention block can only obtain the long-range dependency within a local window. Thus it may have problems when it can not get enough information within this local window, e.g., the texture-free regions. Also, the global module could reason the global information like the illumination and the texture style. **Dilate attention.** The dilate attention module is crucial in the SDF transformer, so we can not remove all the dilate attention blocks. That will destroy the whole framework and generate a degraded 3D shape. Therefore, we only ablate the post dilate attention block after the down-up flow. This block could deform the shape and make it more complete, e.g., making up the crack as shown in Figure 4. From the quantitative results in Table 4, we can also see the improvement of completeness. **Window size.** As shown in Table 4, we study the impact of the window size of the attention. It is expected that a larger window size would generate a better result, since the range of the dependency is longer, but with the cost of more resource consumption. We choose \(10\) as the default size, considering that the performance improvement is minor after that. ## 5 Conclusion We propose the first top-down-bottom-up 3D transformer for 3D scene reconstruction. 
A sparse window attention module is proposed to reduce the computation, a dilate attention module is proposed to avoid geometry degeneration, and a global module at the bottom level is employed to extract the global information. This structure could be used to aggregate any 3D feature volume, thus it could be applied to more 3D tasks in the future, such as 3D segmentation. \begin{table} \begin{tabular}{r r r r r r} \hline \hline & Method & Acc \(\downarrow\) & Comp \(\downarrow\) & Prec \(\uparrow\) & Recall \(\uparrow\) & F-score \(\uparrow\) \\ \hline \multirow{5}{*}{\(\downarrow\)} & Atlas & \(0.175\) & \(0.314\) & \(0.280\) & \(0.194\) & \(0.229\) \\ & NeuralRecon & \(0.215\) & \(1.031\) & \(0.214\) & \(0.036\) & \(0.058\) \\ & VoRTX & \(0.102\) & \(0.146\) & \(0.449\) & \(0.375\) & \(0.408\) \\ & & Ours & \(\mathbf{0.083}\) & \(\mathbf{0.142}\) & \(\mathbf{0.522}\) & \(\mathbf{0.390}\) & \(\mathbf{0.447}\) \\ \hline \multirow{5}{*}{\(\downarrow\)} & Atlas & \(0.208\) & \(2.344\) & \(0.360\) & \(0.089\) & \(0.132\) \\ & NeuralRecon & \(0.130\) & \(2.528\) & \(0.382\) & \(0.075\) & \(0.115\) \\ & & Vortx & \(0.175\) & \(\mathbf{0.314}\) & \(0.280\) & \(\mathbf{0.194}\) & \(0.229\) \\ \cline{1-1} & Ours & \(\mathbf{0.129}\) & \(0.455\) & \(\mathbf{0.406}\) & \(0.173\) & \(\mathbf{0.254}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Generalization experiments on the ICL-NUIM and TUM-RGBD datasets. \begin{table} \begin{tabular}{r r r r r r} \hline \hline & Method & Acc \(\downarrow\) & Comp \(\downarrow\) & Prec \(\uparrow\) & Recall \(\uparrow\) & F-score \(\uparrow\) \\ \hline \multirow{5}{*}{\(\downarrow\)} & Baseline & \(0.056\) & \(0.089\) & \(0.698\) & \(0.587\) & \(0.636\) \\ & \(0.054\) & \(0.090\) & \(0.713\) & \(0.594\) & \(0.647\) \\ & SDF Former & \(0.036\) & \(0.065\) & \(0.807\) & \(0.671\) & \(0.732\) \\ & + Global & \(0.033\) & \(0.064\) & \(0.823\) & \(0.676\) & \(0.741\) \\ & + Post-Dila-Attn & \(0.032\) & \(0.062\) & \(0.829\) & \(0.694\) & \(0.754\) \\ \hline \multirow{5}{*}{Window Size} & \(1\) & \(0.052\) & \(0.086\) & \(0.721\) & \(0.604\) & \(0.656\) \\ & \(3\) & \(0.047\) & \(0.078\) & \(0.768\) & \(0.636\) & \(0.695\) \\ \cline{1-1} & \(5\) & \(0.037\) & \(0.069\) & \(0.799\) & \(0.660\) & \(0.730\) \\ \cline{1-1} & \(8\) & \(0.033\) & \(0.065\) & \(0.822\) & \(0.682\) & \(0.746\) \\ \cline{1-1} & \(10\) & \(0.032\) & \(0.062\) & \(0.829\) & \(0.694\) & \(0.754\) \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study on the ScanNet dataset. Components are added one by one in the upper part.
2309.08074
Successive phase transitions of the spin-orbit-coupled metal Cd2Re2O7 probed by high-resolution synchrotron x-ray diffraction
The 5d pyrochlore oxide superconductor Cd2Re2O7 (CRO) has attracted significant interest as a spin-orbit-coupled metal (SOCM) that spontaneously undergoes a phase transition to an odd-parity multipole phase by breaking the spatial inversion symmetry due to the Fermi liquid instability caused by strong spin-orbit coupling. Despite the significance of structural information during the transition, previous experimental results regarding lattice deformation have been elusive. We have conducted ultra-high resolution synchrotron radiation x-ray diffraction experiments on a high-quality CRO single crystal. The temperature-dependent splitting of the 0 0 16 and 0 0 14 reflections, which are allowed and forbidden, respectively, in the high-temperature cubic phase I (space group Fd-3m), has been clearly observed and reveals the following significant facts: inversion symmetry breaking and tetragonal distortion occur simultaneously at Ts1 = 201.5(1) K; the previously believed first-order transition between phase II (I-4m2) and phase III (I4122) at Ts2 ~ 120 K consists of two close second-order transitions at Ts2 = 115.4(1) K and Ts3 ~ 100 K; there is a new orthorhombic phase XI (F222) in between. The order parameters (OPs) of these continuous transitions are uniquely represented by a two-dimensional irreducible representation Eu of the Oh point group, and the OPs of phase XI are a linear combination of those of phases II and III. Each phase is believed to correspond to a distinct odd-parity multipole order, and the complex successive transitions observed may be the result of an electronic phase transition that resolves the Fermi liquid instability in the SOCM.
Daigorou Hirai, Atsuhito Fukui, Hajime Sagayama, Takumi Hasegawa, Zenji Hiroi
2023-09-15T00:05:26Z
http://arxiv.org/abs/2309.08074v1
Successive phase transitions of the spin-orbit-coupled metal Cd\({}_{2}\)Re\({}_{2}\)O\({}_{7}\) probed by high-resolution synchrotron X-ray diffraction ###### Abstract The 5\(d\) pyrochlore oxide superconductor Cd\({}_{2}\)Re\({}_{2}\)O\({}_{7}\) (CRO) has attracted significant interest as a spin-orbit-coupled metal (SOCM) that spontaneously undergoes a phase transition to an odd-parity multipole phase by breaking the spatial inversion symmetry due to the Fermi liquid instability caused by strong spin-orbit coupling. Despite the significance of structural information during the transition, previous experimental results regarding lattice deformation have been elusive. We have conducted ultra-high resolution synchrotron radiation X-ray diffraction experiments on a high-quality CRO single crystal. The temperature-dependent splitting of the 0 0 16 and 0 0 14 reflections, which are allowed and forbidden, respectively, in the high-temperature cubic phase I (space group _Fd-3m_), has been clearly observed and reveals the following significant facts: inversion symmetry breaking and tetragonal distortion occur simultaneously at \(T_{\rm{si}}=201.5(1)\) K; the previously believed first-order transition between phase II (\(I\)-4\(m\)2) and phase III (\(I\)-4\(m\)2) at \(T_{\rm{s2}}\sim 120\) K consists of two close second-order transitions at \(T_{\rm{s2}}=115.4(1)\) K and \(T_{\rm{s3}}\sim 100\) K; there is a new orthorhombic phase XI (\(F222\)) in between. The order parameters of these continuous transitions are uniquely represented by a two-dimensional irreducible representation \(E_{u}\) of the \(O_{h}\) point group, and the order parameters of phase XI are a linear combination of those of phases II and III. Each phase is believed to correspond to a distinct odd-parity multipole order, and the complex successive transitions observed may be the result of an electronic phase transition that resolves the Fermi liquid instability in the SOCM. Keywords: spin-orbit-coupled metal, pyrochlore oxide, spin-orbit coupling, inversion symmetry breaking ## 1 Introduction In general, the Fermi surfaces of metals become unstable against various interactions, and characteristic ordered states appear to resolve them. When electron-phonon and electron-electron interactions become strong, for instance, a gap opens on the Fermi surface, and ordered states such as superconductors and Mott insulators, respectively, are produced. Behind observed phase transitions are interactions that govern the physical properties, and the investigation of ordered states leads to an in-depth comprehension of these interactions. Recently, Fu proposed the concept of spin-orbit-coupled metal (SOCM), taking into account the rarely considered Fermi liquid instability resulting from spin-orbit couplings (SOCs).[1] The SOCM possesses a crystal structure with inversion symmetry and strong SOCs that act on the conduction electrons. There, the SOC-caused Fermi liquid instability induces spontaneous inversion symmetry breaking (ISB). As a result, the antisymmetric SOC is activated, and spin splitting is expected to occur on the Fermi surface, resulting in odd-parity ordering of itinerant electrons such as multipoles, gyrotropic order, and ferroelectric metallic phases.[1] Conversely, ISB occurs to resolve the instability caused by SOC on spin-degenerate Fermi surfaces. 
Furthermore, it has been proposed that fluctuations in odd-parity multipole orders could induce exotic _p_-wave superconductivity.[2] Cd\({}_{2}\)Re\({}_{2}\)O\({}_{7}\) (CRO) has garnered considerable interest as a promising SOCM candidate.[3] It satisfies the requirements for SOCM due to its cubic pyrochlore-type crystal structure with inversion symmetry at room temperature and its metallic conductivity with 5\(d\) electrons that have strong SOCs.[4] In fact, it exhibits spontaneous ISB. As shown in Fig. 1, phase I at room temperature has a cubic structure with the space group \(F\)_d-3m_, while phase II below \(T_{\rm s1}\) \(\sim\) 200 K has a tetragonal structure with the space group \(I\)-4\(m\)2 and broken inversion symmetry.[5]\(T_{\rm s1}\) is a second order transition with negligible tetragonal deformation between 0.05-0.10 percent.[6, 7] Despite this minor structural modification, the electronic state changes drastically below \(T_{\rm s1}\), with a sharp decrease in electrical resistivity and a nearly 50 percent decrease in density of states.[3] Therefore, \(T_{\rm s1}\) is considered an electronic phase transition driven by the Fermi liquid instability of SOCM, as opposed to a simple structural transition.[1, 3] In addition, it is believed that phase II transitions to phase III with a different tetragonal structure in the space group \(I\)4\({}_{1}\)22 and broken inversion symmetry below \(T_{\rm s2}\)\(\sim\) 120 K; this transition is reported to be a first-order transition.[8] Under high pressure, on the other hand, a total of six phases, up to phase IX, have been identified in the vicinity of the critical pressure of 4.2 GPa for ISB,[9] and an \(R\)-3\(m\) phase appears at room temperature above 21 GPa;[10] we refer to this phase as phase X. These diverse phases indicate that CRO possesses a unique Fermi liquid instability and a coupled structural instability. The two low-temperature phases of ISB at ambient pressure most likely correspond to the multipole order with odd parity predicted for SOCM.[1] The Landau theory explains the successive phase transitions between phases I (\(F\)_d-3m_), II (\(I\)-4\(m\)2), and III (\(I\)4\({}_{1}\)22) in terms of the order parameters (OPs) of the \(E_{u}\) irreducible representation of the \(O_{h}\) point group.[12] As depicted in Fig. 1, the displacements induced by phase transitions in the four tetrahedral Re atoms can be viewed as electric dipoles, certain pairs of which generate electric toroidal moments. Then, for phases II and III, these virtual electric toroidal moments are organized as \(x^{2}-y^{2}\) and \(3z^{2}-r^{2}\) configurations, respectively.[11] They are therefore known as electric toroidal quadrupole (ETQ) orders.[13] Alternatively, they can be interpreted as the distinct orderings of the six Re-Re bonds in the tetrahedron.[13] Experimental results contradicting the aforementioned were also reported for the successive structural phase transitions of CRO. 
Single crystal X-ray diffraction (XRD),[5, 14] powder neutron diffraction,[7] convergent-beam electron diffraction (CBED),[15] and Raman scattering experiments[16] support the space group \(I\)-4\(m\)2 for phase II, whereas nonlinear optical second harmonics generation (SHG) experiments suggest a less symmetric \(I\)-4;[17] an alternative OP to \(E_{u}\) has been discussed.[17, 18] For phase III, the \(I4_{1}22\) space group is supported by single crystal XRD[5, 14] and CBED,[15] whereas the \(T_{\rm s2}\) transition was not observed in SHG measurements.[19] Recent Raman scattering experiments with an isotope-substituted CRO crystal demonstrated that a third phase transition occurs at an even lower temperature of 80 K.[20] According to first-principles calculations, an orthorhombic phase of the space group \(F222\) is stable down to the lowest temperature.[20] However, the veracity of this assertion is questionable, given that no structural or physical anomalies at 80 K have been observed in other measurements, including other Raman scattering experiments.[16, 21]

Figure 1: Temperature dependence of electrical resistivity for a high-quality Cd\({}_{2}\)Re\({}_{2}\)O\({}_{7}\) crystal with RRR = 670. There is a distinct kink at \(T_{\rm s1}\), where space inversion symmetry is broken in a second-order manner, whereas a broad anomaly appears around \(T_{\rm s2}\). The inset images depict how the Re tetrahedron can deform in phases II and III, with red arrows representing the displacements of Re atoms (black balls).[11] The Re displacements generate virtual electric toroidal moments (blue arrows) of the \(x^{2}-y^{2}\) and \(3z^{2}-r^{2}\) types, respectively. The colors of the connecting rods between Re atoms differentiate identical bonds in each phase.

Very recently, on the other hand, Takigawa observed in his Cd NMR experiments that \(T_{\rm s2}\) is not a first-order transition and that a new phase with low symmetry exists between phases II and III.[22] Furthermore, Uji et al. obtained compatible torque measurement results.[23] We performed high-resolution synchrotron radiation XRD experiments on a high-quality CRO crystal to elucidate the details of structural changes caused by the phase transitions; recent improvements in crystal quality resulted in a one-order-of-magnitude increase in residual resistivity ratio (RRR), indicating less carrier scattering by defects, and allowed the observations of quantum oscillations as well as the flipping of tetragonal strain at \(T_{\rm s2}\).[11, 24, 25] Consequently, we were able to observe the distinct splitting of diffraction peaks caused by tetragonal distortion, which had not been observed in previous XRD experiments.[6] We determined the precise temperature dependence of the lattice constants and gathered data on the extinction that uniquely identifies the space group for each phase. The most significant discovery is that the 120 K transition is not a first-order transition but rather consists of two close second-order transitions at \(T_{\rm s2}=\) 115.4(1) K and \(T_{\rm s3}\sim\) 100 K, with the formation of a new orthorhombic phase XI overlooked for years in between. Unique structural modifications unveiled in this study, such as the negative thermal expansion below \(T_{\rm s1}\) and the enhancement of crystal symmetry at \(T_{\rm s3}\), strongly suggest that the electronic phase transition is the source of these structural modifications.
As a result of these observations, it is anticipated that our understanding of the microscopic origin of the odd-parity multipoles in CRO and the characteristics of SOCM will advance.

## 2 Experiment

Synchrotron XRD experiments were performed at the beamline BL-4C of the KEK Photon Factory. We measured the temperature dependence of two high-index (0 0 16)\({}_{\rm c}\) and (0 0 14)\({}_{\rm c}\) reflections, which are allowed and forbidden, respectively, in the high-temperature cubic phase I (_Fd_-3_m_) (subscript c denotes the cubic crystal system; subscripts t and o denote tetragonal and orthorhombic crystal systems, respectively, in the following). To improve the resolution of the diffraction angle \(\theta\), X-rays with wavelengths (energies) of 1.25 Å (9.90 keV) and 1.42 Å (8.73 keV) were employed for the (0 0 16)\({}_{\rm c}\) and (0 0 14)\({}_{\rm c}\) reflections, respectively, so that both reflections appear at high angles with \(2\theta>150^{\circ}\). In addition, the energy and angular divergences of the incident X-rays were reduced by narrowing the slit before the beamline monochromator. Consequently, we were able to identify a distinct peak split caused by minute tetragonal distortion. Single crystal samples were obtained by recrystallization of single crystals prepared by chemical vapor transport using the same method described previously.[24] An approximately 2 \(\times\) 2 \(\times\) 2 mm\({}^{3}\) CRO single crystal was attached to a copper plate and cooled in a \({}^{4}\)He closed-cycle refrigerator equipped with a four-circle diffractometer. The measurements were performed between 283 and 10 K. To investigate the temperature dependences of the two diffraction angle ranges covering the (0 0 16)\({}_{\rm c}\) and (0 0 14)\({}_{\rm c}\) reflections, separate temperature scans were carried out. The temperature of the sample was monitored by a silicon diode thermometer attached on the copper plate. The error of the sample temperature was estimated to be less than 0.05 K. The scattered X-rays were detected by a two-dimensional pixel array detector (XPAD S70, imXPAD, La Ciotat). Because the tetragonal domains of the low-temperature phases are less than a few tens of \(\mu\)m,[17, 24] this experimental setup with an exposed area of 1 \(\times\) 1 mm\({}^{2}\) includes reflections from numerous domains. Therefore, the diffraction pattern is a pseudo-powder pattern. For the fits of the diffraction intensities at each temperature, we accounted for the asymmetries of the peaks by using a skew-normal distribution function, which introduces a skewness parameter. The asymmetry was already present in the cubic phase, and the peak was slightly elongated to a lower angle than would be predicted by a normal distribution (Gaussian) under ideal conditions. This could be due to the fact that the single-crystal sample was large and diffraction occurred from a certain-width space rather than a point. By fitting the diffraction intensities near the (0 0 16)\({}_{\rm c}\) reflection, the location of the peak's centre and the lattice constants were determined. The statistical error of the lattice constants from the fitting is extremely small, but the actual error must be larger. In addition, the extinction information for each phase was obtained by estimating the diffraction intensities close to the (0 0 14)\({}_{\rm c}\) reflection. The phase transition temperature was determined based on the temperature-dependent change in the intensity of the reflection and the lattice distortion.
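To illustrate the kind of skew-normal peak fit described above, the following is a minimal sketch on synthetic data (not the authors' analysis code); the profile model, parameter values, and noise level are illustrative assumptions only.

```python
# Minimal sketch: fitting one Bragg peak with a skew-normal line shape on a
# flat background. Data are synthetic; values are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

def skew_peak(two_theta, amp, center, width, skew, bg):
    # Skew-normal profile; "center" is the location parameter, which differs
    # slightly from the peak maximum when skew != 0.
    return amp * skewnorm.pdf(two_theta, skew, loc=center, scale=width) + bg

rng = np.random.default_rng(0)
x = np.linspace(154.0, 156.0, 400)                    # 2-theta grid (deg)
y = skew_peak(x, amp=50.0, center=155.0, width=0.05, skew=-3.0, bg=1.0)
y += rng.normal(scale=0.3, size=x.size)               # counting-like noise

p0 = [40.0, 155.0, 0.05, -1.0, 0.5]                   # initial guess
popt, pcov = curve_fit(skew_peak, x, y, p0=p0)
print("fitted location parameter (deg):", popt[1])
# The lattice constant then follows from Bragg's law for the known (h k l).
```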
## 3 Results

### Observed four phases with distinct crystal structures

Figure 2 depicts representative pseudo-powder XRD patterns at four temperatures. These diffraction patterns can be distinguished by the difference in number of peaks. At \(T=\) 252 K, which corresponds to phase I, the (0 0 16)\({}_{\rm c}\) reflection is observed as a single peak, while the (0 0 14)\({}_{\rm c}\) reflection is absent, which is consistent with the space group _Fd_-3\(m\); the (0 0 14)\({}_{\rm c}\) reflection is forbidden by the reflection condition \(h\) 0 0: \(h=4n\) for the \(d\)-glide symmetry operation. At \(T=\) 149 K, which corresponds to phase II, the (0 0 16)\({}_{\rm c}\) reflection pattern splits into two peaks, a low-angle peak with low intensity and a high-angle peak with high intensity. Two similar peaks are observed at the location of the (0 0 14)\({}_{\rm c}\) reflection. In contrast to the previous XRD pattern, in which the diffraction peaks were broad and the two peaks were indistinguishable,[6] this pattern demonstrates the impact of enhanced crystallinity and angle resolution. The appearance of reflections at the (0 0 14)\({}_{\rm c}\) position means that the \(d\) glide plane is lost and spatial inversion symmetry is broken. These results are consistent with the tetragonal space group _I_-4\(m\)2. The transformation from cubic to tetragonal (\(a\neq c\)) generally results in the formation of three types of domains, with the \(c\)-axis of the tetragonal structure facing the three directions [1 0 0], [0 1 0], and [0 0 1] of the cubic structure. In the powder pattern, for instance, the reflections corresponding to (16 0 0)\({}_{\rm c}\) and (0 16 0)\({}_{\rm c}\) appear in the same location, while the reflection corresponding to (0 0 16)\({}_{\rm c}\) appears in a different location. Assuming that the structural change during the phase transition is minimal and the domain orientation is completely random, the ratio of the intensities of the two peaks should be 2:1. In addition, the unit cell vectors of the tetragonal structure are expressed as follows during the transition from a face-centered-cubic to a body-centered-tetragonal lattice: \({\bf a_{\rm t}}=({\bf a_{\rm c}}-{\bf b_{\rm c}})/2\), \({\bf b_{\rm t}}=({\bf a_{\rm c}}+{\bf b_{\rm c}})/2\), \({\bf c_{\rm t}}={\bf c_{\rm c}}\); the length \(a_{\rm t}\) is \(a_{\rm c}/\sqrt{2}\). We can assign the following indices to the two peaks of phase II based on this information: the (0 0 16)\({}_{\rm t}\) [(0 0 14)\({}_{\rm t}\)] reflection for the weak low-angle peak and the (8 8 0)\({}_{\rm t}\) [(7 7 0)\({}_{\rm t}\)] reflection for the strong high-angle peak. Note, however, that the ratio of intensities between the (0 0 16)\({}_{\rm t}\) [(0 0 14)\({}_{\rm t}\)] and (8 8 0)\({}_{\rm t}\) [(7 7 0)\({}_{\rm t}\)] reflections deviates from the expected value of 1:2. This is due to the fact that the domain distribution was not completely random. In addition, note that the ratio of intensities varies between the two reflections. This may be due to the fact that the domain distribution was influenced by the temperature history of the cooling process; the two measurements were performed in separate temperature runs. Consequently, the tetragonal distortion in phase II is such that \(c_{\rm t}\) exceeds \(\sqrt{2}a_{\rm t}\). Consistent with previous estimations,[6] the tetragonal strain \(2(c_{\rm t}-\sqrt{2}a_{\rm t})/(c_{\rm t}+\sqrt{2}a_{\rm t})\) is 0.050%.
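As a quick numerical check (assuming the pairing of \(a_{\rm t}=7.2283\) Å with \(c_{\rm t}=10.2274\) Å for phase II, which is our reading of Table 1), the quoted strain follows directly:
\[
\sqrt{2}\,a_{\rm t}\approx 10.2224\ \text{\AA},\qquad
\frac{2(c_{\rm t}-\sqrt{2}a_{\rm t})}{c_{\rm t}+\sqrt{2}a_{\rm t}}
=\frac{2\,(10.2274-10.2224)}{10.2274+10.2224}\approx 4.9\times 10^{-4}\approx 0.05\%.
\]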
Figure 2: Typical pseudo-powder XRD patterns from a multidomain sample containing the (0 0 16)\({}_{\rm c}\) and (0 0 14)\({}_{\rm c}\) reflections at 252 K (corresponding to phase I), 149 K (phase II), 110 K (phase XI), and 10 K (phase III). The curves that best fit a single Gaussian or multiple skew-normal distribution functions are represented by solid lines. The peak indices are based on cubic, tetragonal, orthorhombic, and tetragonal unit cells, respectively.

In the \(T\) = 10 K XRD pattern for phase III, two peaks are observed at the (0 0 16)\({}_{\rm c}\) reflection location and one at the (0 0 14)\({}_{\rm c}\) reflection location. The structure is tetragonal due to the splitting of the (0 0 16)\({}_{\rm c}\) reflection; however, compared to phase II, the intensity of the high- and low-angle peaks has been reversed, with (8 8 0)\({}_{\rm t}\) at low angle and (0 0 16)\({}_{\rm t}\) at high angle. Thus, \(c_{\rm t}\) is less than \(\sqrt{2}a_{\rm t}\), indicating that the tetragonal strain has been reversed at \(T_{\rm s2}\). Comparable to phase II, the tetragonal strain in phase III is as low as -0.057%. The observed peak at the (0 0 14)\({}_{\rm c}\) reflection location in phase III is determined to be the (7 7 0)\({}_{\rm t}\) reflection using the lattice constant obtained from the (8 8 0)\({}_{\rm t}\) reflection. The (0 0 14)\({}_{\rm t}\) reflection that ought to be observed at 153.23\({}^{\circ}\) on the high-angle side is not observed. If we assume the presence of a peak at that location and fit the pattern as a double peak, the (0 0 14)\({}_{\rm t}\) reflection intensity is merely 0.5% of the (7 7 0)\({}_{\rm t}\) reflection; therefore, we conclude that there is no (0 0 14)\({}_{\rm t}\) reflection within the experimental error. This (0 0 14)\({}_{\rm t}\) reflection is forbidden in \(I4_{1}22\) due to the diffraction condition 0 0 \(l\): \(l=4n\), which is derived from the 4\({}_{1}\) screw axis. Contrarily, there is no such extinction rule for the \(F222\) structure, which was claimed to exist below 80 K based on Raman scattering experiments,[20] thereby ruling out the \(F222\) structure. Also disqualified for the same reason is the \(I\)-4 structure. A diffraction pattern clearly distinguishable from phases II and III was observed at \(T\) = 110 K. Three peaks were observed at both reflection locations, which can be indexed as (16 0 0)\({}_{\rm o}\) [(14 0 0)\({}_{\rm o}\)], (0 16 0)\({}_{\rm o}\) [(0 14 0)\({}_{\rm o}\)], and (0 0 16)\({}_{\rm o}\) [(0 0 14)\({}_{\rm o}\)] from the lowest angle, assuming an orthorhombic unit cell. This new phase is designated as phase XI; to date, ten phases, including the high-pressure phases, have been reported.[3] When the monoclinic angle is close to 90 degrees, it is difficult to determine whether the phase is orthorhombic or monoclinic from this experiment. However, magnetic torque measurements also support the orthorhombic structure.[23] In addition, according to a recent first-principles phonon calculation, the optimal space group for low temperatures is \(F222\).[20] As discussed previously in terms of symmetry,[12, 17, 18] phase XI, which is continuously connected to \(I\)-4\(m\)2 and \(I4_{1}22\), is likely \(F222\). The characteristics of each phase are summarized in Table 1.
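As a side note on the extinction argument used above, the systematic absence follows from a standard structure-factor calculation for a 4\({}_{1}\) screw axis (included here for completeness). For a 4\({}_{1}\) screw axis along \(c\), every atom has symmetry-equivalent partners displaced by \(c/4\), \(c/2\), and \(3c/4\), so for a 0 0 \(l\) reflection the structure factor carries the factor
\[
\sum_{m=0}^{3}e^{2\pi i l m/4}=1+i^{\,l}+(-1)^{l}+(-i)^{\,l},
\]
which vanishes unless \(l=4n\). Hence 0 0 14 is forbidden in \(I4_{1}22\), whereas no such rule applies in \(F222\) or \(I\)-4, so the observed absence of this reflection at 10 K singles out \(I4_{1}22\).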
### Structural change across \(T_{\rm s1}\)

The temperature variation of the XRD profile near \(T_{\rm s1}\) is shown in Fig. 3(a). At high temperatures, the (0 0 16)\({}_{\rm c}\) reflection is a single peak and shifts to the high-angle side upon cooling due to lattice contraction. At approximately 200 K, however, it shifts to the low-angle side, indicating an atypical lattice expansion. At the same time, a low-angle side broadening is observed, and at 149 K, the peak completely separates into the (0 0 16)\({}_{\rm t}\) and (8 8 0)\({}_{\rm t}\) reflections, as shown in Fig. 2. Figure 3(b) illustrates the temperature dependence of the full width at half maximum (FWHM) obtained by fitting a single Gaussian function to the data. The FWHM remains constant above 202 K and rises close to 201 K. The evolution of the lattice constants was determined by fitting double skew-Gaussian functions to the diffraction patterns below 201 K. The deviation from the cubic lattice, \(\sqrt{2}a_{\rm t}-c_{\rm t}\), which is regarded as an OP, exhibits a temperature dependence consistent with a second-order transition [Fig. 3(b)]. By fitting the data between 201 K and 188 K to a power function (\(T_{\rm c}-T\))\({}^{\beta}\), a critical temperature \(T_{\rm c}=T_{\rm s1}=201.5(1)\) K and a critical exponent \(\beta=0.34(1)\) were determined. When combined with the increase in FWHM at approximately the same temperature, \(T_{\rm s1}\) can be considered the transition temperature to the tetragonal structure. As shown in Fig. 3(a), the peak corresponding to the forbidden (0 0 14)\({}_{\rm c}\) reflection appears at 201 K and grows in intensity and width as temperature decreases. The peak eventually separates into the (0 0 14)\({}_{\rm t}\) and (7 7 0)\({}_{\rm t}\) reflections at 149 K (Fig. 2). To obtain the temperature dependence of the intensities of the two reflections, the patterns are fitted with two skewed Gaussians. As shown in Fig. 3(b), the (0 0 14)\({}_{\rm t}\) reflection intensity increases with \(T_{\rm c}=201.23(7)\) K and a critical exponent \(\beta=0.42(2)\) in the form (\(T_{\rm c}-T\))\({}^{\beta}\), while the (7 7 0)\({}_{\rm t}\) reflection exhibits a different temperature dependence with the same \(T_{\rm c}\); this difference in temperature dependence may be due to anisotropic atomic shifts. A recent XRD study that focused on superlattice reflections uncovered anisotropic atomic displacements.[14] \(T_{\rm c}\) is nearly identical to \(T_{\rm s1}\) according to the \(\sqrt{2}a_{\rm t}-c_{\rm t}\) plot of Fig. 3(b). We therefore conclude that the temperatures at which tetragonal distortion and \(d\)-glide loss occur are equivalent within the experimental error margin. Although it is possible that a cubic \(F\)-43\(m\) phase exists between phases I and II in which only spatial inversion is broken, as predicted by a primary OP of \(A_{2u}\) rather than \(E_{u}\),[12, 17, 18] our results exclude this possibility.
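For concreteness, the following is a minimal sketch of how fitted peak positions translate into lattice constants and the distortion parameter \(\sqrt{2}a_{\rm t}-c_{\rm t}\) used above as an OP; the \(2\theta\) values below are hypothetical stand-ins, not measured numbers.

```python
# Minimal sketch (illustrative values, not measured data): converting fitted
# 2-theta positions of the (0 0 16)_t and (8 8 0)_t peaks into lattice
# constants and the tetragonal distortion sqrt(2)*a_t - c_t.
import numpy as np

wavelength = 1.25  # angstrom, as used for the (0 0 16)_c scan

def d_spacing(two_theta_deg, lam=wavelength):
    # Bragg's law: lambda = 2 d sin(theta)
    return lam / (2.0 * np.sin(np.radians(two_theta_deg) / 2.0))

# Hypothetical fitted peak centres (degrees) for the two tetragonal peaks
two_theta_0016 = 155.80
two_theta_880 = 156.05

c_t = 16.0 * d_spacing(two_theta_0016)            # d(0 0 16) = c_t / 16
a_t = np.sqrt(128.0) * d_spacing(two_theta_880)   # d(8 8 0) = a_t / sqrt(128)

print(f"c_t = {c_t:.4f} A, sqrt(2)*a_t = {np.sqrt(2) * a_t:.4f} A")
print(f"distortion sqrt(2)*a_t - c_t = {np.sqrt(2) * a_t - c_t:+.4f} A")
```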
\begin{table}
\begin{tabular}{c c c c c}
\hline
Phase & I & II & XI & III \\
\hline
Transition temp. & \(-\) & \(T_{\rm s1}=201.5(1)\) K & \(T_{\rm s2}=115.4(1)\) K & \(T_{\rm s3}\sim 100\) K \\
Space group & \(Fd\)-3\(m\) & \(I\)-4\(m\)2 & \(F222\) & \(I4_{1}22\) \\
Observed reflections & 0 0 16 & 8 8 0, 0 0 16 & 16 0 0, 0 16 0, 0 0 16 & 8 8 0, 0 0 16 \\
 & & 7 7 0, 0 0 14 & 14 0 0, 0 14 0, 0 0 14 & 7 7 0 \\
Lattice constants & \(a_{\rm c}=10.2209\) Å (210 K) & \(a_{\rm t}=7.2283\) Å, \(c_{\rm t}=10.2274\) Å & \(-\) & \(-\) \\
Tetragonal distortion: \(2(c_{\rm t}-\sqrt{2}a_{\rm t})/(c_{\rm t}+\sqrt{2}a_{\rm t})\) & \(-\) & 0.050\% & \(-\) & \(-\)0.057\% (10 K) \\
\(E_{u}\) order parameter & (0, 0) & (0, \(\eta_{2}\)) & (\(\eta_{1}\), \(\eta_{2}\)) & (\(\eta_{1}\), 0) \\
\hline
\end{tabular}
\end{table}
Table 1: Structural parameters for the series of ambient-pressure phases of Cd\({}_{2}\)Re\({}_{2}\)O\({}_{7}\). Lattice constants are determined from the location of the peak's centre near the (0 0 16)\({}_{\rm c}\) reflection.

### Structural changes around \(T_{\rm s2}\) and \(T_{\rm s3}\)

The changes close to \(T_{\rm s2}\) are depicted in Fig. 4(a). Above 117 K, which corresponds to phase II, there is no change in the shape of the two peaks in the (8 8 0)\({}_{\rm t}\)/(0 0 16)\({}_{\rm t}\) reflection group. Below this temperature, however, only the high-angle (8 8 0)\({}_{\rm t}\) reflection broadens as the temperature decreases, and at 111 K, it clearly separates into two peaks. The high-angle peak remains at almost the same position, whereas the low-angle peak shifts to a lower angle. At approximately 100 K, the (0 0 16)\({}_{\rm t}\) reflection gradually shifts to the high-angle side and merges with the central peak, resulting in two peaks once more; this temperature is denoted as \(T_{\rm s3}\). Consequently, a distinct phase XI with an apparent orthorhombic structure exists between \(T_{\rm s2}\) and \(T_{\rm s3}\) (Fig. 2). If this change were the result of a single first-order transition, phases II and III would coexist within this temperature range. The pattern should then be a simple addition of the 89 K and 117 K patterns, with only their relative intensities varying with temperature. However, the continuous peak shift observed in Fig. 4(a) disproves the coexistence of the two phases. Similar continuous changes have also been observed in magnetic torque measurements[23] and Cd NMR spectra.[22] Therefore, the change in this region is not the result of a first-order transition, but rather two successive second-order transitions. The (7 7 0)\({}_{\rm t}\)/(0 0 14)\({}_{\rm t}\) reflection group exhibits essentially the same temperature dependence as the (8 8 0)\({}_{\rm t}\)/(0 0 16)\({}_{\rm t}\) group in the vicinity of \(T_{\rm s2}\), but below \(T_{\rm s3}\) it transforms into a single peak corresponding to the (7 7 0)\({}_{\rm t}\) reflection of phase III. The intensity of the (0 0 14)\({}_{\rm o}\) reflection in phase XI decreases as the temperature decreases from \(T_{\rm s2}\) and disappears below 100 K. The (0 0 14)\({}_{\rm o}\) reflection intensity in Fig. 4(b) decreases rapidly below \(T_{\rm s2}\), then approaches zero asymptotically near \(T_{\rm s3}\). However, determining \(T_{\rm s3}\) from this temperature dependence of the (0 0 14)\({}_{\rm o}\) reflection intensity is difficult. The temperature dependence of the lattice constants was determined by fitting triple skew-Gaussian functions to the XRD patterns between 120 K and 90 K [Fig. 5(a)]. In phase XI, \(\sqrt{2}a_{\rm t}\) from phase II splits into two (\(b_{\rm o}\), \(c_{\rm o}\)), while \(c_{\rm t}\) from phase II becomes \(a_{\rm o}\). In phase III, \(a_{\rm o}\) and \(b_{\rm o}\) approach and transform into \(\sqrt{2}a_{\rm t}\).
Figure 4(b) depicts the temperature dependences of two orthorhombic distortion types, \(d_{1}=b_{\rm o}-c_{\rm o}\) and \(d_{2}=a_{\rm o}-b_{\rm o}\). \(d_{1}\) develops rapidly from \(T_{\rm s2}\) and exhibits an OP-like behavior for a second-order transition. The fitting of the power function yields a transition temperature of \(T_{\rm s2}=115.4(1)\) K and a critical exponent of \(\beta=0.51(3)\). Unlike \(d_{1}\), \(d_{2}\) decreases as \(T_{\rm s3}\) approaches and approaches zero asymptotically. A similar power function fit yields \(T_{\rm s3}=99.9(7)\) K and \(\beta=1.5(1)\). This transition temperature is identical to the temperature at which the (0 0 14)\({}_{\rm o}\) reflection vanishes. This peculiar temperature dependence raises the question of whether \(T_{\rm s3}\) is a crossover rather than a phase transition. If this is the case, the coincidence of \(a_{\rm o}\) and \(b_{\rm o}\) is accidental, and phase III does not exist at the lowest temperature, but phase XI does. Nevertheless, the observed disappearance of the (0 0 14)\({}_{\rm o}\) reflection below \(T_{\rm s3}\) indicates a change in space group or symmetry, proving the existence of a phase transition at \(T_{\rm s3}\). We conclude that phase XI is a new orthorhombic phase that exists within a 15 K temperature window between 115 K and 100 K.

Figure 3: (a) Temperature evolution of the XRD patterns across \(T_{\rm s1}\) near the (0 0 16)\({}_{\rm c}\) and (0 0 14)\({}_{\rm c}\) reflections of phase I. The solid lines represent single Gaussian function fits above 202 K and double skew-Gaussian function fits below 201 K. (b) Temperature dependences close to \(T_{\rm s1}\) of the FWHM and the tetragonal distortion \(\sqrt{2}a_{\rm t}-c_{\rm t}\) of the (0 0 16)\({}_{\rm c}\) reflection (top), as well as the intensities of the (7 7 0)\({}_{\rm t}\) and (0 0 14)\({}_{\rm t}\) reflections (bottom). Solid lines are fitted to the form (\(T_{\rm c}-T\))\({}^{\beta}\). The error bars of the standard deviations of the skew-Gaussian fits are too small to observe.

### Lattice constants and cell volume

Figure 5 illustrates the temperature dependence of the lattice constants and volume over the entire temperature range, as determined by diffraction intensity fits near the (0 0 16)\({}_{\rm c}\) reflection location. The temperature dependence of the lattice constants near \(T_{\rm s1}\) replicates previous observations.[6, 7] At low temperatures, there are significant changes near \(T_{\rm s2}\) and \(T_{\rm s3}\) with a reversal of tetragonal distortion between \(c_{\rm t}>\sqrt{2}a_{\rm t}\) above \(T_{\rm s2}\) and \(c_{\rm t}<\sqrt{2}a_{\rm t}\) below \(T_{\rm s3}\), in contrast to previous reports in which the change was always smooth with \(c_{\rm t}>\sqrt{2}a_{\rm t}\) down to 10 K.[6, 7]

Figure 4: (a) Temperature evolution of the (8 8 0)\({}_{\rm t}\)/(0 0 16)\({}_{\rm t}\) and (7 7 0)\({}_{\rm t}\)/(0 0 14)\({}_{\rm t}\) reflections in the temperature window encompassing \(T_{\rm s2}\) and \(T_{\rm s3}\). Solid lines represent triple skew-Gaussian function fits. (b) Temperature dependences of the orthorhombic distortions \(d_{1}=b_{\rm o}-c_{\rm o}\) (black circles) and \(d_{2}=a_{\rm o}-b_{\rm o}\) (red triangles) as well as the (0 0 14)\({}_{\rm o}\) reflection intensity.
The data for \(d_{1}\) and \(d_{2}\) are fitted by solid lines to (\(T_{\rm c}-T\))\({}^{\beta}\): (\(T_{\rm c}\), \(\beta\)) = [115.4(1) K, 0.51(3)] and [99.9(7) K, 1.5(1)], respectively.

Figure 5: Temperature dependences of (a) lattice constants and (b) unit cell volume for CRO. The error bar for the lattice constants is negligible except near the transition temperatures. The inset of (b) enlarges the volume change at phase XI.

In phase I, as shown by the temperature dependence of the unit cell volume in Fig. 5(b), the lattice thermally contracts upon cooling like normal materials, whereas in phases II and III, the volume expands and becomes even greater than that at room temperature. In contrast, the volumes of the pyrochlore oxides Tl\({}_{2}\)Mn\({}_{2}\)O\({}_{7}\) and Lu\({}_{2}\)V\({}_{2}\)O\({}_{7}\) decrease monotonically with decreasing temperature, by 0.64% (14 K) and 0.42% (59 K) from room temperature, respectively.[26] The negative thermal expansion of CRO at low temperatures may be the hallmark of phase transitions driven by the energy stabilization of the electronic system. The volume reaches a peak in phase XI, but the reason for this is unknown. It is possible that this is an artifact of the analysis, as it was difficult to distinguish between the three peaks. Alternatively, it could reveal an intriguing characteristic of phase XI.

## 4 Discussion

### Possible space group for phase XI

The OP of the I-II-III successive phase transitions is understood in terms of a two-dimensional irreducible representation of \(E_{u}\) at the \(\Gamma\) point of the Brillouin zone.[12] The \(E_{u}\) OP is represented by the two-dimensional vector \(\mathbf{\eta}=(\eta_{1},\eta_{2})\). In phase II of space group \(I\)-4\(m\)2, only \(\eta_{2}\) has a finite value, whereas in phase III of space group \(I4_{1}22\), only \(\eta_{1}\) has a finite value (Table 1). In addition to \(I\)-4\(m\)2 and \(I4_{1}22\), \(F222\) is a candidate for a structure with low symmetry that can be reached via the \(E_{u}\) OP from the space group \(Fd\)-3\(m\).[12, 17, 18] The OP of \(F222\) is their combination, \(\mathbf{\eta}=(\eta_{1},\eta_{2})\) with both components finite. It is therefore reasonable to assume that phase XI, which is continuously linked to phases II and III, possesses an \(F222\) structure. The three peaks seen in phase XI provide support for this orthorhombic structure.

### Polar coordinate description of the OPs

Assuming \(\eta_{1}=\eta\cos\theta\) and \(\eta_{2}=\eta\sin\theta\), the OP can be represented in polar coordinates as \(\mathbf{\eta}=(\eta,\,\theta)\), as shown in Fig. 6(a);[27] \(\eta\) and \(\theta\) represent the amplitude and phase of the OP, respectively. Since the vector \(\mathbf{\eta}\) has sixfold rotational symmetry in the \(O_{h}\) point group, there are six equivalent states for each rotation of 60 degrees. These correspond to six distinct domains with various \(c\)-axis orientations in an induced tetragonal structure. In this polar coordinate plane, the OP vectors of \(I\)-4\(m\)2 and \(I4_{1}22\) are located on the lines \(\theta=k\pi/6\) with \(k=2n+1\) and \(2n\), respectively, and the area between these two sets of lines represents \(F222\). Consider the variation in lattice constants caused by the \(E_{u}\) distortion with respect to the coordinate axes (\(x\), \(y\), \(z\)) of an orthorhombic structure.
Figure 6: (a) Polar coordinate representation of the \(E_{u}\) OP \(\mathbf{\eta}=(\eta,\,\theta)\). Phases I, II, III, and XI are located, respectively, at the origin, on the \(\theta=(2n+1)\pi/6\) lines (dashed lines), on the \(\theta=n\pi/3\) lines (solid lines), and between them. In the explanation of the text, the \(\theta=30^{\circ}\) and \(0^{\circ}\) lines are selected for phase II with its \(c\) axis along \(x\), and for phase III with its \(c\) axis along \(z\), respectively. The red marks represent the OPs that are temperature-dependent. (b) Relationship between the unit cell axes of the three phases studied in the text. (c) Temperature dependences of the amplitude \(\eta\) (main panel) and phase \(\theta\) (inset). The blue dashed lines serve as eye guides.

Taking into account \(\eta\) up to the second order in the free energy, the lattice constants of the orthorhombic unit cell \(a_{\rm o}\), \(b_{\rm o}\), and \(c_{\rm o}\) in each direction are as follows:[27]
\[a_{\rm o} = a_{0}+a_{\rm A}\,\eta^{2}+a_{\rm E}\,\eta^{2}\cos(2\theta+2\pi/3),\]
\[b_{\rm o} = a_{0}+a_{\rm A}\,\eta^{2}+a_{\rm E}\,\eta^{2}\cos(2\theta-2\pi/3), \tag{1}\]
\[c_{\rm o} = a_{0}+a_{\rm A}\,\eta^{2}+a_{\rm E}\,\eta^{2}\cos(2\theta),\]
where \(a_{0}\) is a temperature-dependent constant due to thermal expansion, and \(a_{\rm A}\) and \(a_{\rm E}\) are temperature-independent constants. In \(I\)-4\(m\)2, for example, substituting \(\theta=\pi/6\) results in the expression \(b_{\rm o}=c_{\rm o}\), which represents the \(x\) domain with \(c_{\rm t}\parallel x\). The expected domain of \(I4_{1}22\) transforming from this domain is the one that minimizes the change in OP, therefore \(\theta=0\) or \(\theta=\pi/3\). \(\theta=0\) yields \(a_{\rm o}=b_{\rm o}\), which is the \(z\) domain with \(c_{\rm t}\parallel z\), while \(\theta=\pi/3\) yields the \(y\) domain. Figure 6(b) depicts the relationship between the axis orientations of \(\theta=\pi/6\) (\(I\)-4\(m\)2), \(\theta=0\) (\(I4_{1}22\)), and an intermediate orientation for \(F222\). This domain transition with a 90\({}^{\circ}\) rotation of the \(c_{\rm t}\) axis during the transition between phases II and III was actually observed by polarized light microscopy.[24] This is thought to be associated with the switching of the tetragonal distortion. Transforming equation (1), we obtain
\[\tan(2\theta)=\sqrt{3}(b_{\rm o}-a_{\rm o})/(2c_{\rm o}-a_{\rm o}-b_{\rm o}), \tag{2}\]
\[\eta=[(2c_{\rm o}-a_{\rm o}-b_{\rm o})/\{3a_{\rm E}\cos(2\theta)\}]^{1/2}.\]
We determined the temperature dependences of the magnitude \(\eta\) and phase \(\theta\) of the OP by substituting the experimentally obtained lattice constants into equation (2) [Fig. 6(c)]. \(\eta\) develops rapidly at \(T_{\rm s1}\), indicating that it is indeed the primary OP of the phase transition. Fixing the transition temperature to 201.5 K as mentioned above and fitting the data with a power function, the critical exponent \(\beta=0.223(5)\) reproduces the data close to \(T_{\rm s1}\). In contrast, \(\eta\) decreases once at \(T_{\rm s2}\) and increases again below \(T_{\rm s3}\). This suggests that the hypothetical transition temperature of phase III in the absence of phases II and XI could be lower than that of phase II, resulting in a smaller OP development in phase III at these temperatures. Nevertheless, since the observed decrease in \(\eta\) below \(T_{\rm s2}\) is uncommon for a phase transition with a single OP, a secondary OP may play a significant role in these complex phase transitions.
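For completeness, equation (2) follows from equation (1) by elementary trigonometry:
\[
b_{\rm o}-a_{\rm o}=a_{\rm E}\eta^{2}\left[\cos(2\theta-2\pi/3)-\cos(2\theta+2\pi/3)\right]=\sqrt{3}\,a_{\rm E}\eta^{2}\sin(2\theta),
\]
\[
2c_{\rm o}-a_{\rm o}-b_{\rm o}=a_{\rm E}\eta^{2}\left[2\cos(2\theta)-2\cos(2\theta)\cos(2\pi/3)\right]=3\,a_{\rm E}\eta^{2}\cos(2\theta),
\]
so that \(\sqrt{3}(b_{\rm o}-a_{\rm o})/(2c_{\rm o}-a_{\rm o}-b_{\rm o})=\tan(2\theta)\) and \(\eta^{2}=(2c_{\rm o}-a_{\rm o}-b_{\rm o})/\{3a_{\rm E}\cos(2\theta)\}\).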
On the other hand, \(\theta\) varies almost linearly between 30\({}^{\circ}\) and 0\({}^{\circ}\) in phase XI [Fig. 6(c), inset]. Figure 6(a) displays the variation in polar coordinates of the OP as a function of temperature. Phase I is located at the origin with zero amplitude. When the transition from phase I to phase II occurs at \(T_{\rm s1}\), \(\eta\) becomes finite and increases along the line \(\theta=30^{\circ}\) with decreasing temperature. At \(T_{\rm s2}\), during the transition to phase XI, \(\theta\) begins to approach zero while \(\eta\) slightly decreases. When phase III is reached at \(T_{\rm s3}\), the OP develops along the \(\theta=0^{\circ}\) line. A two-dimensional \(E_{u}\) OP can therefore explain the continuous phase transitions of CRO, including phase XI. The linear variation in \(\theta\) between \(T_{\rm s2}\) and \(T_{\rm s3}\) suggests that \(\theta\) is the essential parameter for describing phase transitions in this region. This contrasts with typical phase transitions, such as the \(T_{\rm s1}\) transition, in which the magnitude of the OP increases as the temperature decreases. According to Landau theory, this change in \(\theta\) is caused by the dominance of a term of extremely high order (12th order).[27] The higher order term dominates the \(T_{\rm s2}\)-\(T_{\rm s3}\) transition, probably because phases II and III are nearly degenerate in energy. However, from a symmetry perspective, the two cannot be connected continuously; there should be a jump between them. To avoid this, it is believed that the \(F222\) phase described by the linear coupling of both OPs intervenes as phase XI.

### Remarks on the previous experiments around \(T_{\rm s2}\)

The transition at \(\sim\)120 K has been considered to be a first-order transition.[3] This is primarily due to the small temperature hysteresis observed in the electrical resistivity (at 120-114 K for crystal 1A and at 117-112 K for crystal 40A in reference 3) and the sharp peak observed at 112 K in the heat capacity measurement.[3] However, there was significant sample dependence in these data. The electrical resistivity of particular crystals exhibited no hysteresis. In fact, it was not observed in single crystals of comparable quality to that used in this study. In addition, heat capacity measurements revealed that the second crystal in reference 3 exhibited not a sharp peak, but rather two broad bumps near 112 K and 100 K;[3] these temperatures are close to \(T_{\rm s2}\) and \(T_{\rm s3}\). Phase XI is believed to be the result of a delicate balance between phases II and III. In addition, \(T_{\rm s3}\) is a transition of higher order, as indicated by its peculiar temperature dependence, and thermal equilibrium may take a considerable amount of time to achieve; as stated previously, \(T_{\rm s3}\) is not a crossover because the symmetries of phases XI and III are distinct. In addition, the elastic energy associated with domain formation and domain wall pinning as a result of defects may be in conflict with thermodynamic equilibrium. Consequently, the behavior near \(T_{\rm s2}\) and \(T_{\rm s3}\) may be highly dependent on sample quality and measurement conditions and techniques. Accordingly, we believe that the two successive transitions observed in this study are consistent with prior experimental findings and are an intrinsic property of CRO.
Recent magnetic torque measurements on high-quality crystals confirmed the existence of a second-order phase transition at 115 K and suggested the existence of a transition at a lower temperature.[23] Takigawa's Cd-NMR experiments revealed the existence of a phase with lower symmetry than the tetragonal one between 115 and 100 K,[22] which roughly corresponds to the window between \(T_{\rm s2}\) and \(T_{\rm s3}\). In light of our structural data, we conclude that the three-step sequence of phase transitions described by the \(E_{u}\) OP has been experimentally established.

### Characteristics of the multipolar transitions of CRO

A possible origin of the phase transitions in CRO is the Fermi liquid instability of SOCM. Spin-degenerate Fermi surfaces with large SOCs are stabilized by spontaneous ISB, forming a multipole order with spin-split Fermi surfaces. Based on crystal symmetry considerations, phases II and III correspond to \(x^{2}-y^{2}\)- and \(3z^{2}-r^{2}\)-type ETQ orders, respectively.[13] Phase XI, whose OP is represented by a linear combination of the OPs of phases II and III, is another entangled odd-parity multipole. To experimentally establish ETQ orders in CRO, it is required to confirm the presence of the secondary OP of even-parity \(E_{g}\). According to group theoretical considerations, the electric quadrupole (EQ) order corresponding to \(E_{g}\) should coexist with the ETQ order resulting from \(E_{u}\).[13, 17, 18] Unfortunately, the accuracy of our structural data was insufficient to detect the coexistence of OPs other than \(E_{u}\). Nevertheless, magnetic torque measurements, which are sensitive to even-parity OPs, revealed that odd-parity \(E_{u}\) and even-parity \(E_{g}\) actually coexist.[23] On the other hand, SHG measurements, which are sensitive to odd-parity OPs, revealed that \(T_{2u}\), \(T_{1g}\), and \(E_{u}\) coexist, with \(E_{u}\) being a secondary OP.[17] However, it has been proposed that \(E_{u}\) alone can reproduce the same angular dependence of the signal observed in SHG measurements for the \(T_{2u}\) mode.[17, 18] Consequently, the previous experimental findings can be explained by a scenario assuming a primary \(E_{u}\) and a secondary \(E_{g}\). In the future, the existence of OPs other than \(E_{u}\) may be clarified using ultrasonic measurements and other suitable measurement techniques. The emergence of as many as three electronic multipole phases in CRO may be a manifestation of the diversity of Fermi liquid instability resolution mechanisms in SOCM. In structural phase transitions caused by electronic instabilities, such as the Jahn-Teller effect, two types of states can frequently appear.[28] For instance, stabilizations of the \(d\)-orbital energy by stretching and contracting the octahedron are degenerate. However, once a structural transition occurs at low temperatures and one deformation is chosen, the structure stabilizes with a large structural deformation, and subsequent temperature changes rarely cause the other deformation to appear. This is due to large electron-phonon interactions. In contrast, the exceptional sequential occurrence of multiple structural deformations in CRO is most likely due to the weak electron-phonon interactions, and the phase transition is solely governed by the competition of electron-system energies. Another aspect of SOCM's Fermi liquid instability may be the presence of up to seven phases under high pressure.[9] CRO is the ideal compound for studying pure electronic phase transitions in SOCM.
## 5 Conclusion

High-resolution synchrotron radiation XRD experiments were performed on a high-quality single crystal of the SOCM candidate CRO to clarify the details of the structural changes, including the temperature dependence of the lattice constants. The transition around 120 K, previously believed to be a first-order transition, consists of two successive continuous transitions with phase XI of the space group \(F222\) in between. The three successive phase transitions of CRO can be understood in a unified manner in terms of the two-dimensional order parameter \(E_{u}\): \(T_{\rm s1}\) is a typical phase transition where the amplitude of the OP develops, whereas \(T_{\rm s2}\) and \(T_{\rm s3}\) are exceptional phase transitions where the phase of the OP changes. Negative thermal expansion below \(T_{\rm s1}\) and the transition to increased symmetry at \(T_{\rm s3}\) indicate that the transitions are exclusively electronic in nature. Phases II and III are ETQ odd-parity multipole orders of the \(x^{2}-y^{2}\) and \(3z^{2}-r^{2}\) types, respectively, and phase XI is regarded as their superposition state. The appearance of these multipole phases must reflect the instability of Fermi liquids in SOCMs.

## Acknowledgements

The authors are grateful to J. Yamaura, M. Takigawa, S. Uji, Y. Yokoyama, and M. Mizumaki for insightful discussion. They appreciate Y. Motome and H. Kusunose for their helpful comments. They also thank H. T. Hirose for the ETQ images in Fig. 1. This work was performed under the approval of the Photon Factory Program Advisory Committee (Proposal No. 2020G628). This research was supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant numbers 20H01858, 22H04462, and 22H01178.
2309.17154
Efficient Interpretable Nonlinear Modeling for Multiple Time Series
Predictive linear and nonlinear models based on kernel machines or deep neural networks have been used to discover dependencies among time series. This paper proposes an efficient nonlinear modeling approach for multiple time series, with a complexity comparable to linear vector autoregressive (VAR) models while still incorporating nonlinear interactions among different time-series variables. The modeling assumption is that the set of time series is generated in two steps: first, a linear VAR process in a latent space, and second, a set of invertible and Lipschitz continuous nonlinear mappings that are applied per sensor, that is, a component-wise mapping from each latent variable to a variable in the measurement space. The VAR coefficient identification provides a topology representation of the dependencies among the aforementioned variables. The proposed approach models each component-wise nonlinearity using an invertible neural network and imposes sparsity on the VAR coefficients to reflect the parsimonious dependencies usually found in real applications. To efficiently solve the formulated optimization problems, a custom algorithm is devised combining proximal gradient descent, stochastic primal-dual updates, and projection to enforce the corresponding constraints. Experimental results on both synthetic and real data sets show that the proposed algorithm improves the identification of the support of the VAR coefficients in a parsimonious manner while also improving the time-series prediction, as compared to the current state-of-the-art methods.
Kevin Roy, Luis Miguel Lopez-Ramos, Baltasar Beferull-Lozano
2023-09-29T11:42:59Z
http://arxiv.org/abs/2309.17154v1
# Efficient Interpretable Nonlinear Modeling for Multiple Time Series ###### Abstract Predictive linear and nonlinear models based on kernel machines or deep neural networks have been used to discover dependencies among time series. This paper proposes an efficient nonlinear modeling approach for multiple time series, with a complexity comparable to linear vector autoregressive (VAR) models while still incorporating nonlinear interactions among different time-series variables. The modeling assumption is that the set of time series is generated in two steps: first, a linear VAR process in a latent space, and second, a set of invertible and Lipschitz continuous nonlinear mappings that are applied per sensor, that is, a component-wise mapping from each latent variable to a variable in the measurement space. The VAR coefficient identification provides a topology representation of the dependencies among the aforementioned variables. The proposed approach models each component-wise nonlinearity using an invertible neural network and imposes sparsity on the VAR coefficients to reflect the parsimonious dependencies usually found in real applications. To efficiently solve the formulated optimization problems, a custom algorithm is devised combining proximal gradient descent, stochastic primal-dual updates, and projection to enforce the corresponding constraints. Experimental results on both synthetic and real data sets show that the proposed algorithm improves the identification of the support of the VAR coefficients in a parsimonious manner while also improving the time-series prediction, as compared to the current state-of-the-art methods. Vector autoregression, Topology identification, Granger causality, Interpretability, Invertible neural network. ## I Introduction In many engineering fields, such as financial engineering, signal analysis from sensor networks, brain signal processing, and interconnected systems in water networks and the oil and gas sector, to mention a few, determining the dependencies among several interconnected systems is an important task. Many of these scenarios include the measurement and storage of several time series, often obtained from sensors that are associated with other sensor variables of the same underlying physical process being observed. Such relationships may be represented as a graph structure that consists of nodes and edges, where each node represents a time series, and the edges or arcs between nodes typically represent a function expressing the dependency between the time series associated with the two connected nodes. Note that such large-scale systems can become very complex in terms of the number of dependencies between different sensors. The set of relationships between them, usually referred to as the "topology" of the sensor network, can also be interpreted by human operators and can vary depending on the various control actions happening in the system. The methods for learning these dependencies are of considerable significance [3]. The interdependencies between different sensor variables are often modeled using a graph representation [4], which is helpful for tasks such as prediction [5], change point detection [6], and data compression [7], among others. Within the plethora of methods that have been proposed to identify dependencies between interconnected systems, Granger causality (GC) [8] is a widely used paradigm. The GC quantifies the degree to which the history of one time series helps predict the future of another time series. 
More specifically, a time series is said to be Granger-caused by another if the optimal prediction error of the former decreases when the history of the latter time series is considered [9]. There are alternative causality definitions based on the vector autoregressive (VAR) model, which represents interactions between variables with linear or nonlinear functions [10, 11, 12]. The VAR model has been proven useful in multiple applications involving topology identification [13]. VAR causality is determined from the support of VAR matrix parameters and is equivalent to GC under certain conditions [9]. In the case of a linear VAR, [9], the previous time samples of one time series have an impact on the future of the other series that is modeled as a linear equation representing a causal linear filter. The causality estimates in VAR models can be made scalable to high-dimensional settings using regularizers that enforce sparsity over the VAR parameters [10]. Other linear models, such as structural equation models (SEM) and structural VAR (SVAR) models, are often utilized to learn linear causal dependencies among connected time series [13]. SEM does not take into account temporal dependencies, while VAR and SVAR both capture delayed interactions. Topology identification in linear VAR models has been extensively researched [3, 9, 14]. In real-world applications, such as brain networks and industrial sensor data networks, employing linear models may result in inconsistent assessments of causal relationships [15] because the underlying physical process might have nonlinear interactions. Investigation of nonlinear models is a growing area of research since linear models often struggle to capture nonlinear relationships or dependencies. Although there is a large body of research on nonlinear causal discovery [16, 17, 18, 19, 20, 21], only a small number of studies [11, 22] have successfully used Deep Learning (DL) to identify causal relationships in time series. Deep neural networks are used to model temporal dependencies and interactions between the variables under the GC framework. Regarding nonlinear extensions to the VAR model, functions in reproducing kernel Hilbert spaces (RKHS) are used in [18, 19] to identify nonlinear dependencies by mapping variables to a higher-dimensional Hilbert space where dependencies are linear. Theoretically, DL methods enable the modeling of nonlinear causal interactions [11], providing high expressive power, but their flexibility has a drawback: since DNNs, in general, are black-box approximators, it makes it more challenging to comprehend and interpret the causal links that are learned, despite being the main goal of causal structure learning. In addition, these techniques are typically computationally expensive. This work proposes a method that enables interpretable modeling of nonlinear interactions using feed-forward invertible neural networks (INNs) as the main tool to take nonlinearities into account. The fundamental premise of the proposed model is that a set of time series is assumed to be generated by a VAR process in a latent space and that each time series is then observed using a nonlinear, component-wise, monotonically increasing (thus invertible) function represented by an INN. It avoids the black-box nature of many DL-based architectures. We impose sparsity-inducing penalties on the VAR coefficients to improve interpretability and enhance the capacity to manage limited data in the high-dimensional scenario. 
In this paper, we detail two different formulations with two different levels of complexity. Linear VAR-causality is often used as the modeling tool to test for GC [23]. The notion of causality that this paper works with is based on the linear interactions in the latent space, as will be detailed in Sec. II. Due to the invertible nonlinearities, there is a one-to-one correspondence between variable values in the measurement and latent spaces, and therefore when a causal connection is identified in the linear model in the latent space, it can be deemed present with the same strength between the corresponding pair of variables in the measurement space. The first algorithm explicitly uses the inverse, having a fitting cost function based on the prediction error in the sensor signal domain. On the other hand, the second algorithm does not require the inverse calculation, having a cost function based on the prediction error in the latent space, which will be proven to be a bound on the former cost function. The second algorithm has lower computational complexity than the first algorithm, requiring constant memory needs for each iteration, making it suitable for sequential and big-data or high-dimensional scenarios. We also empirically validate the performance of these two algorithms, and compare it with currently existing DL-based nonlinear models through extensive tests on synthetic and real data sets. First, simulations are carried over synthetically-generated signals, namely a nonlinear VAR (matching the modeling assumption) for different values of the lag order \(P\), and data generated by the nonlinear Lorenz-96 model [24] for different values of the force constant \(F\), showing that our interpretable approach identifies the graph of nonlinear interactions. Finally, we also evaluate the performance of our methods using real data from a sensor network from a use case in the offshore oil and gas industry. The contributions of the present paper can be summarized as follows: * A comprehensive description of the proposed modeling assumption that allows inference of nonlinear dependency graphs among any set of time series. * Design of an inference algorithm based on explicit inversion of the functions mapping between the latent and measurement space (formulation A). * A theoretical result stating under which conditions the prediction MSE in the latent space is an upper bound of the prediction MSE in the measurement space, motivating the formulation of an alternative algorithm. * Derivation of an inference algorithm based on MSE minimization in the latent space (formulation B) which addresses the same modeling assumption and is computationally more efficient. * Experimental results validating both proposed algorithms, establishing that formulation B outperforms formulation A, and comparing their prediction and topology-identification performance against state-of-the-art GC inference algorithms based on DL. The conference versions of this work present a preliminary version of formulation A with the derivation of the necessary gradients via implicit differentiation in [1], and incorporating sparsity-enforcing regularization (including numerical results showcasing its impact on topology identification) in [2]. The rest of the paper is organized as follows: Sec. II introduces background on linear andnonlinear topology identification. Sec. III describes the modeling assumption in detail. Sec. IV describes the two formulations and the algorithms to solve them. Sec. 
V contains simulation and experiments on real and synthetic data sets comparing the strength of our algorithms with other state-of-the-art methods. Finally, Sec. V concludes the paper. ## II Preliminaries After outlining the notion of linear causality graphs, this section reviews how these graphs can be identified by formulating an optimization problem. Then, the basics of the nonlinear causality graphs problem are described. ### _Linear causality Graphs_ Consider a collection of \(N\) sensors providing \(N\) time series \(\left\{y_{n}[t]\right\}_{n=1}^{N},\,t=0,1,\ldots,T\), \(t\in\mathbb{Z}\), where \(y_{n}[t]\) denotes the measurement of the \(n^{th}\) sensor at time \(t\). A causality graph \(\mathcal{G}\triangleq(\mathcal{V},\mathcal{E})\) is a directed graph where the \(n^{th}\) vertex in \(\mathcal{V}=\left\{1,\ldots,N\right\}\) is identified with the \(n^{th}\) time series \(\left\{y_{n}[t]\right\}_{t=0}^{T}\) and there is a directed edge from \(n^{\prime}\) to \(n\) (i.e. \((n,n^{\prime})\in\mathcal{E}\) ) if and only if \(\left\{y_{n^{\prime}}[t]\right\}_{t=0}^{T}\) causes \(\left\{y_{n}[t]\right\}_{t=0}^{T}\). The notion of causality that we deal with in this work is VAR-causality, which is equivalent to GC under certain conditions, and it is easy to obtain from a VAR model. A \(P^{th}\)-order linear VAR model can be formulated as \[y[t]=\sum_{p=1}^{P}A^{(p)}y[t-p]+u[t],\qquad P\leq t\leq T \tag{1}\] where \(y[t]=[y_{1}[t],\ldots,y_{N}[t]]^{T}\), \(A^{(p)}\in R^{N\times N}\) and \(p=1,\ldots,P\), are respectively the matrices of VAR parameters, \(T\) is the observation time period, and \(u[t]=[u_{1}[t],\ldots,u_{N}[t]]^{\top}\) is a vector innovation process typically modeled as a Gaussian, temporally-white random process. Letting \(a_{n,n^{\prime}}^{(p)}\) denote the \((n,n^{\prime})\) entry of the matrix \(A^{(p)}\), 1 takes the form: \[y_{n}[t] =\sum_{n^{\prime}=1}^{N}\sum_{p=1}^{P}a_{n,n^{\prime}}^{(p)}y_{n^ {\prime}}[t-p]+\ u_{n}[t],\quad P\leq t\leq T \tag{2}\] \[=\sum_{n^{\prime}\in\mathcal{N}(n)}\sum_{p=1}^{P}a_{n,n^{\prime} }^{(p)}y_{n^{\prime}}[t-p]+u_{n}[t] \tag{3}\] for \(n=1,\ldots,N\), where \(\mathcal{N}(n)\triangleq\{n^{\prime}:a_{n,n^{\prime}}\neq 0_{P}\}\) and \(a_{n,n^{\prime}}=[a_{n,n^{\prime}}^{(1)},\ldots,\ a_{n,n^{\prime}}^{(p)}]^{T}\) is the impulse response from node \(n^{\prime}\) to node \(n\); this will be a zero vector when there is no edge from node \(n^{\prime}\) to node \(n\). Thus, \(\{y_{n^{\prime}}[t]\}\) VAR-causes \(\{y_{n}[t]\}\) if \(a_{n,n^{\prime}}\neq 0_{P}\). It therefore holds that the set of directed edges is \(\mathcal{E}\triangleq\{(n,n^{\prime}):a_{n,n^{\prime}}\neq 0_{P}\}\), and the in-neighborhood of node \(n\), denoted as \(N(n)\), contains all the nodes causing (having a non-zero impulse response connected towards) node \(n\). The problem of identifying a linear VAR causality model boils down to estimating the VAR coefficient matrices \(\{A^{(p)}\}_{p=1}^{P}\) given the observations \(\{y[t]\}_{n=0}^{T-1}\). To quantify the strength of these dependencies, a weighted graph can be constructed by assigning e.g. the weight \(\left\|\boldsymbol{a}_{n,n^{\prime}}\right\|_{2}\) to the edge \((n,n^{\prime})\). The VAR coefficients can be learned by solving a minimization problem with a least-squares loss. 
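To connect the notation above to something runnable, the following is a minimal, self-contained sketch (not code from the paper): it simulates a small sparse VAR process as in (1) and recovers the coefficient matrices by ordinary least squares. The sparsity-regularized problem (4) below differs only by an \(\ell_{1}\) penalty on the same coefficients.

```python
# Minimal sketch: simulate a sparse linear VAR(P) process, eq. (1), and
# estimate the coefficient matrices A^(p) by least squares. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N, P, T = 5, 2, 2000

# Ground-truth sparse coefficients, rescaled so that sum_p ||A^(p)|| < 1,
# which is a sufficient condition for the VAR process to be stable.
A_true = [np.where(rng.random((N, N)) < 0.25,
                   rng.normal(size=(N, N)), 0.0) for _ in range(P)]
s = sum(np.linalg.norm(A, 2) for A in A_true)
A_true = [0.9 * A / s for A in A_true]

y = np.zeros((T, N))
for t in range(P, T):
    y[t] = sum(A_true[p] @ y[t - 1 - p] for p in range(P)) + rng.normal(scale=0.1, size=N)

# Stack the regression y[t] = [A^(1) ... A^(P)] [y[t-1]; ...; y[t-P]] + u[t]
X = np.hstack([y[P - 1 - p:T - 1 - p] for p in range(P)])  # (T-P) x N*P
Y = y[P:]                                                  # (T-P) x N
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T             # N x N*P
print("coefficient error:", np.linalg.norm(A_hat - np.hstack(A_true)))
```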
Moreover, models with a reduced number of nonzero parameters entail a reduced number of edges are preferable as they are more parsimonious, motivating the following sparsity-enforced optimization problem with a Lasso-type penalty [25]: \[\min_{\{A^{(p)}\}_{p=1}^{P}} \sum_{t=P}^{T}\left\|y[t]-\left(\sum_{p=1}^{P}A^{(p)}(y[t-p]) \right)\right\|_{2}^{2}\] \[+\lambda\sum_{p=1}^{P}\sum_{n=1}^{N}\sum_{n^{\prime}=1}^{N}\left| a_{n,n^{\prime}}^{(p)}\right| \tag{4}\] where \(|.|\) denotes the absolute value. The hyper-parameter \(\lambda>0\) controls the level of sparsity enforced by the \(l_{1}\) norm of the coefficients. The objective function (4) is non-differentiable which will be considered when designing the iterative algorithms to solve this problem, as we explain in Sec IV. ### _Nonlinear modeling_ As stated in Sec I, time-series collections in many practical applications usually exhibit nonlinear interactions, thus a linear VAR model is insufficient for capturing the nonlinear data dependencies. In the most general nonlinear case, VAR models are not capable of identifying nonlinear dependencies, and their prediction error in real-world scenarios is high. Each data variable \(y_{n}[t]\) can be represented as a nonlinear function of multiple multivariate data time series as follows: \[y_{n}[t]=h_{n}(y_{t-1},\ldots,y_{t-P})+u_{n}[t], \tag{5}\] where \(y_{t-p}=[y_{1}[t-p],y_{2}[t-p],\ldots,y_{N}[t-p]]^{\top}\), \(p\in[1,P]\) and \(h_{n}(\cdot)\) is a nonlinear function. The model in (5) has two main drawbacks: the first one is that there are infinitely many nonlinear functions that can fit a finite set of data points. The second one is that, even if \(h_{n}(\cdot)\) could be identified, there is no clear criterion in the literature to determine an interpretable graph that allows us to identify which key variables are affecting another variable from such a set of nonlinear functions. In Sec. III we present the nonlinear model that we consider to circumvent the aforementioned drawbacks. ## III Interpretable Nonlinear Model In this work, we are restricting the nonlinear function to be learned to belong to a subset of possible nonlinear functions which comes in between the linear model and the general nonlinear model in terms of complexity. We aim to design an interpretable nonlinear model. Notice that the linear VAR model is interpretable because its coefficients represent a notion of additive influence of each variable on any another as it can be seen in (1). Since we seek a model having the advantage of identifying dependencies, our model should have a structure resembling that of a VAR model. Linearity renders VAR models not capable of identifying nonlinear dependencies, and their prediction error in real-world scenarios is high. Therefore, the desiderata here is a model which gives low prediction error as compared to linear models while retaining interpretability. To achieve this, we propose a modeling assumption stating that a collection of time series is generated through a VAR process in a latent space, and then each time-series \(\{z_{i}[t]\}\) is observed in a measurement space through a per-sensor nonlinear, monotonically increasing (and thus invertible) function (\(f_{i}\)) connecting \(\{y_{i}[t]\}\) with \(\{z_{i}[t]\}\). Each nonlinear function \(f_{i}\) associated with each time series \(z_{i}[t]\) is generally different. The concept is depicted in Fig. 1: the green circle represents the vector space where the latent variables lie, among which the dependencies are linear. 
The area outside the circle represents the space where the actual sensor measurements \(\{z_{i}[t]\}\) lie. The blue lines represent the linear dependencies between time series in the latent space. Darker blue lines depict stronger dependencies between pairs of sensors. The red line from each time series or sensor represents the corresponding measurement space transformation. Let \(f:\mathbb{R}^{\mathbb{N}}\rightarrow\mathbb{R}^{\mathbb{N}}\) denote a vector function such that \([f(x)]_{i}=f_{i}(x_{i})\) where \(f_{i}\) is the nonlinear function associated with each sensor. With this definition, a collection of nonlinearly related time series is assumed to be generated as \[z[t]=f(y[t]), \tag{6}\] where \(y[t]\) is generated according to (1). Since there is a one-to-one mapping between \(z[t]\) and \(y[t]\) (defined by the bijective mapping **f**), we can say that if \(y_{n}[t]\) VAR causes \(y_{m}[t]\), then clearly \(z_{n}[t]\) causes \(z_{m}[t]\). Moreover, given the nonlinearity of each per-sensor mapping, the latter dependency is nonlinear. The structure of the nonlinear dependency graph among the signals in \(z[t]\) is the same as that for the signals in \(y[t]\). Therefore, the modeling assumption introduced in this section allows a criterion for inferring a nonlinear causality graph among any set of time series. Once the model for the generative process is specified, we can express the problem statement as follows: Given a set of sensor measurement data given by multiple time series \(z[t]\) in a given time interval \([0,T]\), our goal is to identify the linear parameters \(\{a_{n,n^{\prime}}^{(p)}\}\) in the latent space and the vector nonlinear function \(f\). In Sec. IV, we formally describe the problem formulation and the techniques to infer the aforementioned parameters. ## IV Problem formulation and algorithm design Here, we provide a rigorous problem formulation and the design of algorithms under the modeling assumption described in Sec. III resulting in a complexity that is comparable to that of the linear VAR model, while accounting for nonlinear interactions. We consider two different problem formulations; while the direct approach described in Sec. IV-A is relatively straightforward, the one in Sec. IV-B is advantageous in terms of computation and accuracy. The problem statement at the end of Sec. III requires learning nonlinear functions, and in order to do that, it is necessary to parameterize the functions. The parameterization of nonlinear transformations is different in Sec. IV-A and Sec. IV-B. ### _Explicit function inversion-based inference_ A first approach can be based on inferring the nonlinear function parameters by formulating an optimization problem that directly penalizes the difference between the predicted and actual values of the time series in the measurement space. The problem can be expressed as follows: given a total of \(T\) observations \(\{z[t]\}_{t=0}^{T-1}\) from the time series, learn the nonlinear transformation \(f\) and the parameters \(\{A^{(p)}\}_{p=1}^{P}\) of the underlying linear model in the latent space. 
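To make the generative assumption of Sec. III concrete before turning to the estimator, the following sketch (hypothetical Python; the tanh-based maps are only stand-ins for the unknown monotone functions \(f_{i}\), and the edge probability and rescaling are assumptions) produces observations according to (1) and (6).

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 5, 3000

# sparse lag-1 latent VAR coefficients (Bernoulli(0.15) edges), rescaled for stability
A1 = np.where(rng.random((N, N)) < 0.15, 0.4, 0.0) + 0.3 * np.eye(N)
A1 *= 0.95 / max(1.0, np.abs(np.linalg.eigvals(A1)).max())

# per-sensor strictly increasing maps standing in for f_i in (6)
scale = rng.uniform(0.5, 2.0, N)
shift = rng.uniform(-0.5, 0.5, N)
f = lambda y: scale * np.tanh(y + shift)

y = np.zeros((T, N))
for t in range(1, T):
    y[t] = A1 @ y[t - 1] + 0.1 * rng.standard_normal(N)   # latent VAR process, Eq. (1)
z = f(y)                                                   # measured series, z[t] = f(y[t]), Eq. (6)
```

Because each \(f_{i}\) is invertible, the support of `A1` is exactly the dependency structure of the observed series \(z\), which is the property the proposed methods exploit.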
To infer \(f\), each \(f_{i}\) is parameterized as a NN layer with \(M\) units indexed by \(j\) representing the function: \[f_{i}\left(y_{i}\right)=b_{i}+\sum_{j=1}^{M}\alpha_{ij}h\left(w_{ij}y_{i}-k_{ ij}\right) \tag{7}\] Where \(h(\cdot)\) is a monotonically increasing activation function, for example, a sigmoid function; and the parameters to be learned: \(\{\alpha_{ij},w_{ij},k_{ij}\}_{j},b_{i}\) are collected in the vector \(\theta_{i}\): \[\theta_{i}=\left[\begin{array}{c}\alpha_{i}\\ w_{i}\\ k_{i}\\ b_{i}\end{array}\right]\text{ and }\alpha_{i},w_{i},k_{i}=\left[\begin{array}{c} \alpha_{i1}\\ \alpha_{i2}\\ \vdots\\ \alpha_{iM}\end{array}\right],\left[\begin{array}{c}w_{i1}\\ w_{i2}\\ \vdots\\ w_{iM}\end{array}\right],\left[\begin{array}{c}k_{i1}\\ k_{i2}\\ \vdots\\ k_{iM}\end{array}\right].\] The parameters of \(f\) are in turn collected in the vector \(\theta=[\theta_{1}^{\top},\theta_{2}^{\top},\ldots\theta_{N}^{\top}]^{\top}\). For each function \(f_{i}\) to be monotonically increasing, which guarantees invertibility, it suffices to ensure that \(\alpha_{ij}\) and \(w_{ij}\) are positive for all \(j\). The pre-image of \(f_{i}\) is \(\mathbb{R}\), but the image is an interval \((z_{i},\tilde{z}_{i})\), which is in accordance with the fact that sensor data are usually restricted to a given dynamic range. If the range is not available a priori but sufficient data is available, bounds for the operation interval can also be easily inferred. In order to express how an entry of a time series is predicted from the previous values, let \(g_{i}\) denote the inverse of \(f_{i}\), that is, \(y[t]=g(z[t])\) and let \(g:\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}\) denote a vector function such that \([\mathbf{g}(x)]_{i}=g_{i}(x_{i})\). Then, (6) and (4) imply that \[z[t]=\mathbf{f}\bigg{(}\sum_{p=1}^{p}A^{(p)}\mathbf{g}(z[t-p])+u[t]\bigg{)}. \tag{8}\] Notice that, in general, there is no closed form for the inverse function \(g_{i}\); however, it is possible to compute it efficiently via a numerical method such as bisection. The optimization problem for joint learning of \(\mathbf{f}\) and the VAR parameters is formulated as follows and will subsequently be referred to as **Formulation A**: Figure 1: Causal dependencies are assumed linear in the latent space (green circle). In this model, the available sensor data corresponds to the output of the nonlinear functions \(\{f_{i}\}_{i=1}^{N}\). \[\min_{\mathbf{f},\{A^{(p)}\}_{p=1}^{P}} \sum_{t=P}^{T}\left\|z[t]-\mathbf{f}\Big{(}\sum_{p=1}^{p}A^{(p)}\mathbf{g}(z [t-p])\Big{)}\right\|_{2}^{2}\] \[+\lambda\sum_{p=1}^{P}\sum_{n=1}^{N}\sum_{n^{\prime}=1}^{N}\left|a _{n,n^{\prime}}^{(p)}\right|\] (9a) s. to: \[\sum_{j=1}^{M}\alpha_{ij}=\bar{z}_{i}-z_{i}\;\;\forall i \tag{9b}\] \[b_{i}=z_{i}\;\;\forall i\] (9c) \[\alpha_{ij}\geq 0\;\;\forall i,j\] (9d) \[w_{ij}\geq 0\;\;\forall i,j \tag{9e}\] The objective function (9a) is a least-squares criterion with a Lasso regularizer term over the adjacency coefficients to enforce sparsity in the resulting graph. Here, the hyper-parameter \(\lambda\) regulates how sparse the solution is. Notice that this objective function (9a) is non-convex (because it involves composition with \(\mathbf{f}\) which is non-convex in general) and non-differentiable due to the \(l_{1}\) norm in the Lasso term. 
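A minimal sketch of the per-sensor layer (7) and of the bisection-based evaluation of its inverse \(g_{i}\) is given below (hypothetical Python; the sigmoid is one admissible choice of \(h\), and the bracketing interval of the bisection is an assumption).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def f_i(y, alpha, w, k, b):
    """f_i(y) = b + sum_j alpha_j * sigmoid(w_j * y - k_j), Eq. (7); alpha_j, w_j >= 0."""
    return b + np.sum(alpha * sigmoid(w * y - k))

def g_i(z, alpha, w, k, b, lo=-50.0, hi=50.0, tol=1e-8):
    """Inverse of f_i evaluated by bisection: find y such that f_i(y) = z (f_i is increasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_i(mid, alpha, w, k, b) < z:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# toy check: the numerical inverse recovers y for any y inside the bracketing interval
alpha = np.array([1.0, 2.0])
w = np.array([0.5, 1.5])
k = np.array([0.0, 1.0])
b = -1.0
y0 = 0.7
assert abs(g_i(f_i(y0, alpha, w, k, b), alpha, w, k, b) - y0) < 1e-6
```

Both ingredients, the constrained parameterization (7) and the numerical inverse, enter the non-convex, non-differentiable objective (9a).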
It can be split as \(\sum_{t=P}^{T}C(\left\{A^{p}\right\},\theta,t)+q(A^{p})\), where \[C\left(A^{p},\theta,t\right)=\;\left\|z[t]-\mathbf{f}\Big{(}\sum_{p=1}^{p}A^{(p)} \mathbf{g}(z[t-p])\Big{)}\right\|_{2}^{2} \tag{10}\] is differentiable, and \[q(A^{p})=\lambda\sum_{p=1}^{P}\sum_{n=1}^{N}\sum_{n^{\prime}=1}^{N}\left|a_{n, n^{\prime}}^{(p)}\right| \tag{11}\] is not, which motivates the use of proximal algorithms. The constraints (9b), (9c) ensure that the image of each \(f\) is in the corresponding sensor dynamic range, and constraints (9d) and (9e)) ensure the invertibility of \(f_{i}\). We solve the optimization problem (9) stochastically by a technique that combines proximal gradient descent and projected gradient descent. Specifically, the regularization term in the second summand can be tackled with a proximal parameter update, and the constraints (9b), (9c), (9d) and (9e) can be enforced by projection. The parameter updates are derived as follows. Note that the Lasso penalty only affects the VAR parameters. Thus each \(a_{nn^{\prime}}^{(p)}\) is updated iteratively by a proximal update, whereas the parameters \(\theta\) are updated by a gradient step. Letting \(t(k)\) denote the time instant used at iteration \(k\), we can write the following updates: \[a_{nn^{\prime}}^{(p)(k+1)} =\mathrm{prox}_{q,\eta}\left(a_{nn^{\prime}}^{(p)(k)}-\eta\bigg{(} \frac{dC(A^{p},\theta,t(k))}{da_{nn^{\prime}}^{(p)}}\bigg{)}^{\top}\right) \tag{12a}\] \[\theta_{i}^{(k+1)} =\theta_{i}^{(k)}-\eta\bigg{(}\frac{dC(A^{p},\theta,t(k))}{d \theta_{i}}\bigg{)}^{\top}. \tag{12b}\] Note that \(q\) in \(prox_{q,\eta}\) corresponds to the function defined in (11) and the proximity operator in (12a) is given by: \[\mathrm{prox}_{q,\eta}\left(x\right)=x\left[1-\frac{\eta\lambda}{\left|x \right|}\right]_{+} \tag{13}\] where \([x]_{+}:=\max(0,x)\), yielding the well-known soft-thresholding operator [26]. After each parameter update, the NN parameters are projected back onto the feasible set, according to the equation \[\Pi_{S}\left(\theta^{(k)}\right)= \arg\min_{\theta}\left\|\theta-\theta^{(k)}\right\|_{2}^{2}\] (14a) s. to: (9b), (9c), (9d), (9e) This is a case of projection onto a simplex which is tackled using the projection algorithm in [27]. The proximal parameter update requires the computation of the gradient of \(C(A^{p},\theta,t)\) w.r.t. \(A^{p}\) and \(\theta\). The forward equations can be written as: \[\tilde{y}_{i}[t-p]= g_{i}\left(z_{i}[t-p],\theta_{i}\right) \tag{15a}\] \[\hat{y}_{i}[t]= \sum_{p=1}^{p}\sum_{j=1}^{n}a_{ij}^{(p)}\tilde{y}_{j}[t-p]\] (15b) \[\hat{z}_{i}[t]= f_{i}\left(\hat{y}_{i}[t],\theta_{i}\right) \tag{15c}\] where the dependency with the parameter vector \(\theta_{i}\) has been made explicit. The remainder of this section shows the backward equations. The main challenge to solving this problem is that there is no closed form for the inverse function \(g_{i}\). However, an inverse function can be computed efficiently via bisection as one of the possible methods. On the other hand, automatic differentiation software cannot yield the gradient of \(g_{i}\). This is circumvented in [1] using implicit differentiation. To make the paper self-contained, the expressions to compute the gradient of \(g_{i}\) are provided here and the full derivation is shown in Appendix A. 
Letting \(f_{i}^{\prime}\left(\hat{y}\right)=\frac{\partial f_{i}\left(\tilde{y},\theta_{ i}\right)}{\partial g}\), and \(S_{n}=2(\hat{z}_{n}[t]-z_{n}[t])\), the gradient of \(C(A^{p},\theta,t)\) can be expressed as: \[\frac{dC(A^{p},\theta,t)}{d\theta_{i}}= S_{i}\bigg{(}\frac{\partial f_{i}\left(\hat{y},\theta_{i}\right)}{ \partial\theta_{i}}\bigg{)}+\] \[\sum_{n=1}^{N}S_{n}\bigg{(}f_{n}^{\prime}(\hat{y}_{n}[t])\sum_{p=1 }^{P}a_{ni}^{(p)}\frac{\partial g_{i}\left(z_{i}[t-p],\theta_{i}\right)}{ \partial\theta_{i}}\bigg{)}\] where \[\frac{\partial f_{i}\left(\hat{y},\theta_{i}\right)}{\partial\theta_{i}}=\left[ \frac{\partial f_{i}\left(\hat{y},\theta_{i}\right)}{\partial\alpha_{i}}\frac{ \partial f_{i}\left(\hat{y},\theta_{i}\right)}{\partial w_{i}}\frac{\partial f _{i}\left(\hat{y},\theta_{i}\right)}{\partial k_{i}}\frac{\partial f_{i}\left( \hat{y},\theta_{i}\right)}{\partial b_{i}}\right]\] can be obtained by analytic or automatic differentiation. The gradient of the inverse function is: \[\frac{\partial g_{i}\left(z,\theta_{i}\right)}{\partial\theta_{i}}=-\{f_{i}^{ \prime}(g_{i}(z,\theta_{i}))\}^{-1}\bigg{(}\left.\frac{\partial f_{i}\left(\tilde {y},\theta_{i}\right)}{\partial\theta_{i}}\right|_{\tilde{y}=g_{i}(z,\theta_{ i})}\bigg{)}.\] Finally, the gradient of \(C(A^{p},\theta,t)\) w.r.t. the VAR coefficient \(a_{nn^{\prime}}^{(p)}\) can be readily calculated as: \[\frac{dC(A^{p},\theta,t)}{da_{ij}^{(p)}}=S_{i}f_{i}^{\prime}\left(\hat{y}_{i}[t] \right)\tilde{y}_{j}[t-p]. \tag{16}\] The detailed derivation of the above expressions is provided in Appendix A. The non-convexity of problem (9) and the comparatively small number of parameters of the model are factors that increase the risk of falling into low-performance local minima, making the final convergence value of the parameters \(\theta\) to be dependent on the initialization. On the other hand, it is expected that the model will accomplish a lower prediction error than the linear VAR model for the same training data. A strategy to obtain a non-linear model performing better than the optimal linear one is to initialize \(f\) to resemble an identity function at the range of the input data and such that a latent prediction that falls out of the range of typical predictions translates into a measurement prediction that is close to the corresponding extreme (maximum or minimum) value observed in the training data. To this end, it is proposed to initialize \(\theta\) such that \[f_{i}(\hat{y}_{i}[t],\theta_{i})=[\hat{y}_{i}[t]]_{\bar{z}_{i}}^{\bar{z}_{i}} \tag{17}\] approximately holds, where \([\hat{y}_{i}[t]]_{z_{i}}^{\bar{z}_{i}}:=\max\left(z_{i},\min(\hat{y}_{i}[t],\bar {z}_{i})\right)\). Additionally, the latent parameters \(\{A^{p}\}\) are to be initialized to equal the linear VAR parameters inferred from the training data with a linear VAR estimation method. As a result, the initial (before iterating) prediction error of the initial nonlinear VAR model is equal to that of the linear VAR, and the subsequent iterations (as given in (12)) will move the parameters in the direction of a solution with a smaller prediction error. Thus, the chances of finding a solution with a lower error than the linear model are increased. 
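One way to realize the identity-like initialization (17) is to fit the parameters of (7) to a clipped identity over the sensor range. The sketch below (hypothetical Python, using SciPy's non-negative least squares; the grid, the fixed slopes, and the rescaling step are assumptions, and this is only one possible way to obtain such an initialization) illustrates the idea for a single sensor.

```python
import numpy as np
from scipy.optimize import nnls

def init_identity_like(z_min, z_max, M=10):
    """Choose (alpha, w, k, b) so that f_i(y) of Eq. (7) approximates clip(y, z_min, z_max)."""
    b = z_min                                        # constraint (9c)
    w = np.full(M, 8.0 / (z_max - z_min))            # fixed positive slopes, satisfies (9e)
    k = w * np.linspace(z_min, z_max, M)             # knots spread over the sensor range
    grid = np.linspace(z_min - 1.0, z_max + 1.0, 400)
    target = np.clip(grid, z_min, z_max) - b
    basis = 1.0 / (1.0 + np.exp(-(np.outer(grid, w) - k)))   # sigmoid features on the grid
    alpha, _ = nnls(basis, target)                   # alpha >= 0, constraint (9d)
    alpha *= (z_max - z_min) / max(alpha.sum(), 1e-12)       # enforce (9b)
    return alpha, w, k, b

alpha0, w0, k0, b0 = init_identity_like(-1.0, 1.0)   # identity-like between -1 and 1
```

Fitting such a function separately for every sensor is what the parameter-transformation trick described next avoids.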
In order to increase the efficiency of the algorithm and avoid an initial training of the non-linearity from a synthetic collection of data points for each of the time series, but only having one pre-trained non-linear function, we derive a set of transformation equations from the linear model to obtain the desired nonlinearities for their different ranges. A set of transformation equations can be developed by defining a function \(\hat{f}\) such that \(\hat{f}_{i}(1)=f_{i}(1)=1\), \(\hat{f}_{i}(-1)=f_{i}(-1)=-1\), \(\hat{f}_{i}(x)\) = \(f_{i}(x)=x\). Let \(\hat{\alpha}_{i},\hat{w}_{i},\hat{k}_{i}\) and \(\hat{b}_{i}\) be the learned parameters corresponding to \(\hat{f}_{i}\). The set of transformation equations will be such that \(\hat{\alpha}_{i}=c\alpha_{i},\hat{b}_{i}=cb_{i}+d,\hat{w}_{i}=aw_{i},\hat{k}_{ i}=-w_{i}B+k_{i}\) where \(c=(\bar{z}-\bar{z})/2,d=(\bar{z}+\bar{z})/2,a=-2/(z-\bar{z})\) and \(B=2\bar{z}/(z-\bar{z})\). The complete derivation of the set of transformation equations is shown in Appendix B. In Sec V, we show experimentally that this initialization speeds up both proposed algorithms. The steps of the overall method described in this section are summarized in **Algorithm 1**. ``` Result:\(\mathbf{a}_{n,n^{\prime}}^{(p)}\), for \(n,n^{\prime}=1,..,N\) and \(p=1,p+1,..,P\) Input: data \(z_{i}\), \(\lambda\), \(N\), order \(P\), \(M\), \(T\), learning rate \(\eta\). Initialize:\(\mathbf{a}_{n,n^{\prime}}^{(p)}\), \(\theta_{i}\) as stated in (17) for\(t=P,P+1,...,T\)do for\(n=1,2,...,N\)do Generate \(y_{n}[t]\) from \(z_{n}[t]\) using \(g_{n}\) (15a) Obtain \(y_{n}[t+1]\) using (15b) and Obtain \(z_{n}[t+1]\) using \(f_{n}\) (15c) Network update: \(\theta_{n}=\theta_{n}-\eta\frac{dC[t]}{d\theta_{n}}\) (12b) Projection operation (14) for\(n^{\prime}=1,2,...,N\)do for\(p=1,2,...,P\)do VAR parameter update: \(a_{nn^{\prime}}^{(p)}[t+1]\) via (12a) ``` **Algorithm 1** Explicit function inversion-based inference ### _Latent prediction error minimization-based inference_ As indicated in the previous formulation, the main drawback of the algorithm is associated with the numerical computation of \(\mathbf{g}\). Evaluating he function \(\mathbf{g}\) via bisection adds complexity at each run within the overall algorithm. Next, we propose an alternative formulation to estimate a nonlinear topology, whose solution leads to a lower-complexity algorithm. The main idea of this formulation is to minimize the prediction MSE in the latent space instead of minimizing it in the measurement space. We will show that minimizing the prediction error in the latent space implies approximately minimizing the prediction error in the measurement space. This is because, as it will become clear later, under certain conditions, the latter is an upper bound of the former. The nonlinearities between measurement and latent space are parameterized here in a way different from that presented in the previous formulation. The function mapping sensor \(n\) from latent space to measurement space is now denoted as \(r_{n}\). It has the use of function \(f_{n}\) denoted in the previous section but receives a different symbol as it is parameterized in a different way. The way \(r\) is parameterized is via an explicit parameterization of its inverse (denoted by \(v\)), such that \(y[t]=v(z[t])\). The function \(v_{n}\) for sensor \(n\) which is the inverse of \(r_{n}\) is parameterized as follows: \[v_{n}(x)=b_{n}+\gamma_{n}x+\sum_{j=1}^{M}\alpha_{nj}h\left(w_{nj}x-k_{nj} \right). 
\tag{18}\] Note that the way \(v_{n}\) is parameterized is similar to the case of \(f_{n}\) in (7), with the addition of the linear term \(\gamma_{n}x\), which, together with the positivity constraints on \(\alpha\) and \(w\), ensures that the derivative of \(v_{n}\) is at least \(\gamma_{n}\). The optimization problem for joint learning of \(\mathbf{v}\) and the VAR parameters is formulated as follows and will subsequently be referred to as **Formulation B**: \[\min_{\{\{A_{p}\}_{p=1}^{P},\theta\}}\frac{1}{T-P}\sum_{t=P}^{T}\left\|v(z[t])-\sum_{p=1}^{P}A^{(p)}v(z[t-p])\right\|_{2}^{2}+\lambda\sum_{p=1}^{P}\sum_{n=1}^{N}\sum_{n^{\prime}=1}^{N}\left|a_{n,n^{\prime}}^{(p)}\right| \tag{19a}\] s. to: \[\alpha_{ij}\geq 0\;\;\forall i,j \tag{19b}\] \[w_{ij}\geq 0\;\;\forall i,j \tag{19c}\] \[\gamma_{i}\geq 0\;\;\forall i \tag{19d}\] \[\frac{\sum_{t=0}^{T-1}v_{i}(z_{i}[t])}{T}=0\;\;\forall i \tag{19e}\] \[\frac{\sum_{t=0}^{T-1}(v_{i}(z_{i}[t]))^{2}}{T-1}=1\;\;\forall i \tag{19f}\] As can be seen in the problem formulation, the prediction error is minimized in the latent space. This is justified because, as Theorem 1 shows next, the prediction MSE in the latent space is (up to a constant) an upper bound on the prediction MSE in the measurement space whenever the functions \(\{r_{n}(x)\}\) are Lipschitz continuous. Therefore, minimizing in the latent space entails approximately minimizing in the measurement space. With \(\hat{z}\) and \(\hat{y}\) denoting the predictions in the measurement and latent space, respectively, we state the following theorem. **Theorem 1**.: _If \(r_{n}(\cdot)\) is \(L_{r_{n}}\)-Lipschitz continuous and \(z_{n}[t]\) and \(y_{n}[t]\) are related as \(z_{n}[t]=r_{n}(y_{n}[t])\), then the following bound holds:_ \[\sum_{n=1}^{N}\mathbb{E}\left[\left\|\hat{z}_{n}[t]-z_{n}[t]\right\|_{2}^{2}\right]\leq\left(\max_{n}L_{r_{n}}\right)^{2}\sum_{n=1}^{N}\mathbb{E}\left[\left\|\hat{y}_{n}[t]-y_{n}[t]\right\|_{2}^{2}\right] \tag{20}\] Proof.: Given that \(r_{n}\) is Lipschitz continuous with Lipschitz constant \(L_{r_{n}}\), the following holds: \[\|r_{n}(\hat{y}_{n})-r_{n}(y_{n})\|_{2}\leq L_{r_{n}}\|\hat{y}_{n}-y_{n}\|_{2} \tag{21}\] \[\|\hat{z}_{n}[t]-z_{n}[t]\|_{2}\leq L_{r_{n}}\|\hat{y}_{n}[t]-y_{n}[t]\|_{2} \tag{22}\] Squaring (22) and taking expectations, we obtain: \[\sum_{n=1}^{N}\mathbb{E}\left[\left\|\hat{z}_{n}[t]-z_{n}[t]\right\|_{2}^{2}\right]\leq\sum_{n=1}^{N}(L_{r_{n}})^{2}\,\mathbb{E}\left[\|\hat{y}_{n}[t]-y_{n}[t]\|_{2}^{2}\right]\leq\left(\max_{n}L_{r_{n}}\right)^{2}\sum_{n=1}^{N}\mathbb{E}\left[\|\hat{y}_{n}[t]-y_{n}[t]\|_{2}^{2}\right] \tag{23}\] The Lipschitz constant of a specific instance of \(r_{n}\) can be obtained from the derivative of its inverse \(v_{n}\) as \[L_{r_{n}}=1/\min_{x}\left\{\frac{dv_{n}(x)}{dx}\right\} \tag{24}\] Intuitively, if \(v_{n}\) is too flat, then \(r_{n}\) is too steep, which implies that a small variation in the prediction in the latent space can be associated with a large variation in the prediction in the measurement space; this can entail a larger prediction MSE in the measurement space, as the bound becomes loose. Now that the rationale for the objective function (19a) is clear, we explain the constraints: (19e) and (19f) ensure that the output of \(v_{n}\) has mean 0 and variance 1 in the latent space. The idea behind these constraints is to keep \(v_{n}\) in the proper dynamic range so that it is not flat.
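A small numerical illustration of (24) and of the bound in Theorem 1 follows (hypothetical Python; the particular parameter values of \(v_{n}\) are arbitrary). Since the \(\gamma_{n}\)-term guarantees \(v_{n}^{\prime}\geq\gamma_{n}\), the constant \(L_{r_{n}}\) is at most \(1/\gamma_{n}\), and a pointwise form of (20) can be checked directly on samples.

```python
import numpy as np

def v_n(x, b=0.0, gamma=0.2, alpha=(1.0, 0.5), w=(2.0, 4.0), k=(0.0, 1.0)):
    """Per-sensor map v_n of Eq. (18); gamma > 0 guarantees dv_n/dx >= gamma."""
    x = np.asarray(x, dtype=float)
    s = sum(a / (1.0 + np.exp(-(wi * x - ki))) for a, wi, ki in zip(alpha, w, k))
    return b + gamma * x + s

# Eq. (24): L_{r_n} = 1 / min_x v_n'(x); a grid estimate, plus the analytic bound 1/gamma
grid = np.linspace(-5.0, 5.0, 2001)
L_grid = 1.0 / np.gradient(v_n(grid), grid).min()
L_safe = 1.0 / 0.2

# Theorem 1 in pointwise form, with r_n = v_n^{-1}: |z1 - z2| <= L * |v_n(z1) - v_n(z2)|
rng = np.random.default_rng(0)
z1, z2 = rng.uniform(-4.0, 4.0, 1000), rng.uniform(-4.0, 4.0, 1000)
assert np.all(np.abs(z1 - z2) <= L_safe * np.abs(v_n(z1) - v_n(z2)) + 1e-12)
```

Keeping \(v_{n}\) away from the flat regime, which is precisely the purpose of the normalization constraints (19e) and (19f), is therefore what keeps the bound in (20) useful.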
It enacts a nonlinear normalization into the latent space. Notice that if \(v_{n}\) is flat, the left-hand side of (24) goes to infinity, making \(r_{n}\) not Lipschitz continuous anymore. Constraints (19b), (19c) and (19d) ensures that each function \(v_{n}\) is invertible. Similarly to the first formulation, we also enforce sparsity-inducing penalties for the VAR coefficients, and the regularization term in the second summand is again tackled by using a proximal parameter update. Notice that, as opposed to the first formulation, the optimization problem does not explicitly include the inverse function, and hence the burden of computing the inverse function with the bisection method is avoided resulting in reduced complexity. Next, we aim to solve the optimization problem (19) using Lagrangian duality. More specifically, we dualize constraints (19e) and (19f). The remaining constraints can be easily enforced by using a projection operation. The objective function and the constraints (19f) and (19e) are of non-convex nature. Notice that since the optimization problem is not convex, we cannot theoretically claim that an iterative algorithm based on duality will achieve a globally optimal solution satisfying all the constraints. However, as we will show in the experimental results section, our algorithm achieves satisfactory results. With \(\beta\) and \(\mu\) respectively denoting the dual variables associated with constraint (19e) and (19f) of the optimization problem (19a), the partial Lagrangian based on (19a) can be written as: \[\mathcal{L}\left(\left\{A_{p}\right\}_{p=1}^{P},\theta,\beta,\mu\right)= f_{o}\left(\left\{A_{p}\right\}_{p=1}^{P},\theta\right) \tag{25}\] \[+\beta^{\top}g_{1}(\theta)+\mu^{\top}g_{2}(\theta)\] where \[f_{o}\Big{(}\{A_{p}\}_{p=1}^{P},\theta\Big{)}=\] \[\frac{1}{T-P}\sum_{t=P}^{T-1}\left\|v(z[t])-\sum_{p=1}^{P}A^{(p)} v(z[t-p])\right\|_{2}^{2}\] \[+\lambda\sum_{p=1}^{P}\sum_{n=1}^{N}\sum_{n^{\prime}=1}^{N}\left| a_{n,n^{\prime}}^{(p)}\right| \tag{26}\] \[[g_{1}\{\theta\}]_{i}=\frac{\sum_{t=0}^{T-1}v_{i}(z_{i}[t])}{T}, \forall i\] (27) \[[g_{2}\{\theta\}]_{i}=\frac{\sum_{t=0}^{T-1}(v_{i}(z_{i}[t]))^{2} }{T-1}-1, \forall i \tag{28}\] The following steps show how the optimization problem can be solved using the stochastic primal-dual algorithm [28]. Considering \(\eta_{p}\) and \(\eta_{d}\) as primal and dual learning rate, The following steps are derived: Let us define a stochastic version of the partial Lagrangian function: \[\tilde{\mathcal{L}}\Big{(}\{A_{p}\}_{p=1}^{P},\theta,\beta,\mu;t \Big{)} =\tilde{f}_{o}\left(\left\{A_{p}\right\}_{p=1}^{P},\theta;t\right) +\beta^{\top}\tilde{g}_{1}\{\theta;t\}\] \[+\mu^{\top}\tilde{g}_{2}\{\theta;t\} \tag{29}\] In the next paragraphs \(\tilde{\mathcal{L}},\tilde{f}_{o},\tilde{g_{1}}\) and \(\tilde{g_{2}}\) are defined such that \[\mathcal{L}\Big{(}\{A_{p}\}_{p=1}^{P},\theta,\beta,\mu\Big{)}=\sum_{t=0}^{T-1} \tilde{\mathcal{L}}\Big{(}\{A_{p}\}_{p=1}^{P},\theta,\beta,\mu;t\Big{)}. 
\tag{30}\] Accordingly, the stochastic contribution to \(f_{o}\) is defined as: \[\tilde{f}_{o}\left(\left\{A_{p}\right\}_{p=1}^{P},\theta;t\right)=\left\{\begin{array}{ll}0,&0\leq t<P\\ \frac{1}{T-P}\Big{[}\left\|v(z[t])-\sum_{p=1}^{P}A^{(p)}v(z[t-p])\right\|_{2}^{2}+\lambda\sum_{p=1}^{P}\sum_{n=1}^{N}\sum_{n^{\prime}=1}^{N}\left|a_{n,n^{\prime}}^{(p)}\right|\Big{]},&t\geq P\end{array}\right.\] so that \[f_{o}\left(\left\{A_{p}\right\}_{p=1}^{P},\theta\right)=\sum_{t=0}^{T-1}\tilde{f}_{o}\left(\left\{A_{p}\right\}_{p=1}^{P},\theta;t\right). \tag{31}\] Similarly, consider \[[\tilde{g_{1}}\{\theta;t\}]_{i}=\frac{v_{i}(z_{i}[t])}{T},\ \ \forall i, \tag{32}\] \[[g_{1}\{\theta\}]_{i}=\sum_{t=0}^{T-1}\left[\tilde{g_{1}}\{\theta;t\}\right]_{i}\ \ \forall i, \tag{33}\] and \[[\tilde{g_{2}}\{\theta;t\}]_{i}=\frac{(v_{i}(z_{i}[t]))^{2}-(T-1)/T}{T-1}\ \ \forall i, \tag{34}\] \[[g_{2}\{\theta\}]_{i}=\sum_{t=0}^{T-1}\left[\tilde{g_{2}}\{\theta;t\}\right]_{i}\ \ \forall i. \tag{35}\] Letting \(t(k)\) denote the time instant used at iteration \(k\), the stochastic primal update equations are: \[\theta_{i}[k+1]=\theta_{i}[k]-\eta_{p}\frac{\partial\tilde{\mathcal{L}}\Big{(}\left\{A_{p}[k]\right\}_{p=1}^{P},\theta[k],\beta[k],\mu[k];t(k)\Big{)}}{\partial\theta_{i}[k]} \tag{36}\] \[a_{nn^{\prime}}^{(p)}[k+1]=\text{prox}_{q,\eta_{p}}\left(a_{nn^{\prime}}^{(p)}[k]-\eta_{p}\frac{\partial\tilde{\mathcal{L}}\Big{(}\left\{A_{p}[k]\right\}_{p=1}^{P},\theta[k],\beta[k],\mu[k];t(k)\Big{)}}{\partial a_{n,n^{\prime}}^{(p)}[k]}\right) \tag{37}\] Similarly, the stochastic dual update equations are: \[\beta_{i}[k+1]=\beta_{i}[k]+\eta_{d}\frac{\partial\tilde{\mathcal{L}}\Big{(}\left\{A_{p}[k+1]\right\}_{p=1}^{P},\theta[k+1],\beta[k],\mu[k];t(k)\Big{)}}{\partial\beta_{i}[k]} \tag{38}\] \[=\beta_{i}[k]+\eta_{d}[\tilde{g_{1}}\{\theta[k+1];t(k)\}]_{i} \tag{39}\] \[\mu_{i}[k+1]=\mu_{i}[k]+\eta_{d}\frac{\partial\tilde{\mathcal{L}}\Big{(}\left\{A_{p}[k+1]\right\}_{p=1}^{P},\theta[k+1],\beta[k],\mu[k];t(k)\Big{)}}{\partial\mu_{i}[k]} \tag{40}\] \[=\mu_{i}[k]+\eta_{d}[\tilde{g_{2}}\{\theta[k+1];t(k)\}]_{i} \tag{41}\] As discussed at the end of Sec. IV-A, a strategy to increase the chance of obtaining a nonlinear model performing better than the linear one is to initialize the nonlinearity to resemble an identity function over the range of the input data. The initial form of the function \(v_{i}\) is required to resemble as much as possible the inverse of the initial shape of the function \(f\) used in Formulation A. Since the initial \(f\) in Formulation A behaves like the identity in the range of the input data and is flat out of that range, the initial \(v_{i}\) in Formulation B is sought to behave like the identity in the range of the input data and have a steep slope out of that range. Following steps similar to those described for the initialization of \(f\) in Sec. IV-A, the parameters for each node can be obtained by transforming the parameters obtained from training a standard initial function which behaves as an identity between -1 and 1.
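Putting the updates (36)-(41) together, one stochastic primal-dual iteration can be sketched as follows (hypothetical Python; `grad_L_theta` and `grad_L_A` denote gradients of the smooth part of the stochastic Lagrangian (29), typically obtained by automatic differentiation, and `g1`, `g2` are the constraint contributions (32) and (34) evaluated at the current sample).

```python
import numpy as np

def soft_threshold(x, thr):
    """Soft-thresholding, the proximity operator of thr*|.| used in the update (37)."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def primal_dual_step(theta, A, beta, mu, grad_L_theta, grad_L_A, g1, g2,
                     eta_p, eta_d, lam):
    """One stochastic primal-dual iteration, Eqs. (36)-(41)."""
    theta_new = theta - eta_p * grad_L_theta                       # primal step on theta, (36)
    A_new = soft_threshold(A - eta_p * grad_L_A, eta_p * lam)      # proximal step on VAR coeffs, (37)
    beta_new = beta + eta_d * g1                                   # dual ascent, mean constraint, (38)-(39)
    mu_new = mu + eta_d * g2                                       # dual ascent, variance constraint, (40)-(41)
    # in the full algorithm, theta_new is subsequently projected to satisfy (19b)-(19d)
    return theta_new, A_new, beta_new, mu_new
```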
The steps described in this section are summarized in **Algorithm 2**. ``` Result:\(a_{n,n^{\prime}}^{(p)}\), for \(n,n^{\prime}=1,..,N\) and \(p=1,2,...,P\) Input: data \(z_{i}\), \(\lambda\), \(N\), order \(P\), \(M\), \(T\), learning rates \(\eta_{p},\eta_{d}\). Initialize:\(a_{n,n^{\prime}}^{(p)}\), \(\theta_{i}\) for\(t=P,P+1,...,T\)do for\(n=1,2,...,N\)do Generate \(y_{n}[t]\) from \(z_{n}[t]\) using \(v_{n}\) Obtain \(y_{n}[t+1]\) via (2) Obtain \(\tilde{\mathcal{L}}\Big{(}\left\{A_{p}\right\}_{p=1}^{P},\theta,\beta,\mu;t\Big{)}\) via (25) Network parameter update: \(\theta_{n}\) via (36) Dual parameters update: \(\beta\), \(\mu\) via (38), (40) Projection operation (14) for\(n^{\prime}=1,2,...,N\)do for\(p=1,2,...,P\)do VAR parameter update: \(a_{nn^{\prime}}^{(p)}[t+1]\) via (37) ``` **Algorithm 2** Latent error minimization-based inference ## V Simulation Experiments In this section, we conduct comprehensive numerical tests to assess the performance of our algorithms, formulation A (f_A) and formulation B (f_B), on synthetic and real data sets. We provide comparisons against the best four current competitors: cMLP (component-wise Multi-Layer Perceptrons), cLSTM (component-wise Long Short-Term Memory), cRNN (component-wise Recurrent Neural Networks) [11], and linear VAR. The proposed algorithms are evaluated based on the performance metrics described next, where expectations are approximated by the Monte Carlo method. The probability of false alarm (\(P_{\text{FA}}\)) and probability of detection (\(P_{\text{D}}\)) are used to numerically compare the edge-identification performance of the algorithms. The \(P_{\text{FA}}\) is the likelihood that the algorithm detects a dependence that does not exist, whereas the \(P_{\text{D}}\) is the likelihood that the algorithm discovers a dependence that is really present in the network. In our experiments, we suppose that there is a detectable edge from the \(p^{th}\) time-lagged value of the \(n^{\prime th}\) sensor to the \(n^{th}\) sensor if the absolute value of the coefficient \(a_{n,n^{\prime}}^{(p)}\) is greater than a prespecified threshold \(\delta\). Letting \(\hat{a}_{n,n^{\prime}}^{(p)}\) be a binary variable that indicates that \(a_{n,n^{\prime}}^{(p)}\) is detected as nonzero, it is computed as \(\hat{a}_{n,n^{\prime}}^{(p)}=\mathbb{1}\left\{\left|a_{n,n^{\prime}}^{(p)}\right|>\delta\right\}\), where \(\mathbb{1}\{x\}\) denotes the indicator function, taking value 1 when \(x\) is true and 0 when \(x\) is false. With \(a_{n,n^{\prime}}\) denoting the presence of a true edge, \(P_{\text{D}}\) and \(P_{\text{FA}}\) are defined as \[P_{\text{D}}\triangleq\frac{\sum_{n\neq n^{\prime}}\sum_{p=1}^{P}\mathbb{E}\left[\mathbbm{1}\left\{|a_{n,n^{\prime}}^{(p)}|>\delta\right\}\mathbbm{1}\left\{a_{n,n^{\prime}}=1\right\}\right]}{\sum_{n\neq n^{\prime}}\sum_{p=1}^{P}\mathbb{E}\left[\mathbbm{1}\left\{a_{n,n^{\prime}}=1\right\}\right]} \tag{42}\] \[P_{\text{FA}}\triangleq\frac{\sum_{n\neq n^{\prime}}\sum_{p=1}^{P}\mathbb{E}\left[\mathbbm{1}\left\{|a_{n,n^{\prime}}^{(p)}|>\delta\right\}\mathbbm{1}\left\{a_{n,n^{\prime}}=0\right\}\right]}{\sum_{n\neq n^{\prime}}\sum_{p=1}^{P}\mathbb{E}\left[\mathbbm{1}\left\{a_{n,n^{\prime}}=0\right\}\right]} \tag{43}\] With an increase in \(\delta\), both \(P_{\text{D}}\) and \(P_{\text{FA}}\) decrease, eventually reaching zero. In our study, we measure the prediction accuracy using the normalized mean squared error (NMSE): \[\mathrm{NMSE}(T)=\frac{\sum_{n=1}^{N}\sum_{t=1}^{T}\left(y_{n}\left[t\right]-\hat{y}_{n}\left[t\right]\right)^{2}}{\sum_{n=1}^{N}\sum_{t=1}^{T}\left(y_{n}\left[t\right]\right)^{2}} \tag{44}\] where \(\hat{y}_{n}[t]\) is the estimate of the time series generated by the \(n^{th}\) node at time instant \(t\).
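A sketch of how these metrics can be computed from an estimated coefficient tensor (hypothetical Python; `A_hat` has shape \((P,N,N)\) and `E_true[n, n']` indicates a true edge):

```python
import numpy as np

def edge_detection_rates(A_hat, E_true, delta):
    """Empirical P_D (42) and P_FA (43) at threshold delta, over off-diagonal entries."""
    P, N, _ = A_hat.shape
    detected = np.abs(A_hat) > delta                              # (P, N, N)
    off = ~np.eye(N, dtype=bool)
    pos = np.broadcast_to(E_true.astype(bool) & off, (P, N, N))   # true edges
    neg = np.broadcast_to(~E_true.astype(bool) & off, (P, N, N))  # true non-edges
    return detected[pos].mean(), detected[neg].mean()

def nmse(y, y_hat):
    """Normalized mean squared error, Eq. (44)."""
    return np.sum((y - y_hat) ** 2) / np.sum(y ** 2)
```

Sweeping \(\delta\) over a grid and plotting the resulting \((P_{\text{FA}},P_{\text{D}})\) pairs traces the ROC curve whose area gives the AUROC reported below.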
The captions and legends of the figures provide a list of all the experimental parameter values. ### _Experiments with synthetic data_ We use both formulated algorithms to find VAR-based dependency networks in simulated data from a nonlinear VAR model matching the assumption in Sec. III and from a Lorenz-96 process [24], a nonlinear model of climate dynamics, to compare and analyze the performance of our approaches with cMLP, cRNN, and cLSTM. Overall, the findings demonstrate that the proposed approaches can rebuild the underlying nonlinear VAR structure. The VAR experiment findings are presented first, followed by the Lorentz results. Note that we used Hidden units \(H=10\) for formulations A, B and \(H=100\) for cMLP, cRNN, and cLSTM throughout the experiments. The sparsity hyper-parameters \(\lambda\) for different algorithms are selected via grid search based on the held-out validation error (note that the optimal \(\lambda\) for different methods are not necessarily equal under different conditions). The final adjacency matrices are computed by taking the \(l_{2}\) norm (Euclidean norm) along the third dimension (axis 3) of the estimated three-dimensional tensor \(\{A^{p}\}\). The metric used to compare the different approaches is the area under the receiver operating characteristic (AUROC). The ROC curve is traced by selecting different values of threshold \(\delta\) and for each of these values a point (\(P_{\text{FA}}\), \(P_{\text{D}}\)) is computed from 10 Monte Carlo runs. The reported AUROC is the area under the linear interpolant joining the aforementioned points. A topology identification algorithm with a high AUROC value generally achieves operation points with high \(P_{\text{D}}\) and low \(P_{\text{FA}}\), indicating that it can accurately identify network topologies while minimizing the occurrence of false positives. The following subsections describe how the synthetic data are generated. Along all experiments, each generated dataset is split into training (70%), validation (20%), and test (10%) subsets. #### Iv-A1 Nonlinear VAR Model We generate graph-connected time series based on the nonlinear VAR (NL-VAR) model. The parameter values are \(N=10\), \(T=10000\), \(P=4\), and \(P=8\). When generating NL-VAR data set for \(P=4\) and \(8\), we set the lag order parameter to \(4\) and \(8\) respectively. The VAR parameters \(a_{nn^{\prime}}^{(p)}\) are drawn from a Bernoulli distribution with (edge) probability \(p_{e}=0.15\). In order to make the underlying VAR process stable, we re-scale the generated coefficient matrix 2.The nonlinearity \(f_{i}(\cdot)\) (a monotonically increasing nonlinear function) is randomly generated by drawing random values for the parameters \(\theta\) from a uniform distribution and then applying the model in equation (7). The nonlinear model is initialized following the heuristic steps described at the end of Sec. IV-A. Results are displayed in Table I. The AUROC for the proposed formulations A and B, linear VAR, cMLP, cRNN, and cLSTM approaches for three values of the time series length, \(T\in\{250,500,1000\}\) with lag order \(P\in\{4,8\}\) is calculated. The performance of all models improves at larger T for both lag orders (\(P=4\) and \(P=8\)). Formulations A and B outperform the linear model (VAR) for a large enough value of T. This result is expected as the model has a slightly larger expressive power, requiring a moderate increase in T to not overfit. 
Formulations A, B, and \begin{table} \begin{tabular}{l l l l l l l} \hline Model & \multicolumn{2}{c}{VAR lag order} & \multicolumn{2}{c}{VAR lag order} \\ & \multicolumn{2}{c}{(P) = 4} & \multicolumn{2}{c}{(P) = 8} \\ \hline T & T = 250 & T = 500 & T = 1000 & T = 250 & T = 500 & T = 1000 \\ \hline formulation A & 0.7562 & 0.9299 & 0.9796 & 0.6437 & 0.6833 & 0.7379 \\ Linear VAR & 0.8159 & 0.9153 & 0.9645 & 0.6685 & 0.6726 & 0.7202 \\ formulation B & 0.7795 & 0.9435 & 0.9976 & 0.6137 & 0.6557 & 0.8084 \\ cMLP & 0.6390 & 0.7424 & 0.7522 & 0.5551 & 0.5736 & 0.5845 \\ cRNN & 0.6519 & 0.7947 & 0.8922 & 0.5672 & 0.5827 & 0.5935 \\ cLSTM & 0.5505 & 0.5837 & 0.6116 & 0.5350 & 0.5716 & 0.5833 \\ \hline \end{tabular} \end{table} Table I Comparison of AUROC for VAR causality selection among different approaches, as a function of the VAR lag order and the length of the time series T. Averaged over 10 experimental runs Figure 2: True causal dependencies VAR model with \(P=4\) (left) and Lorentz \(F=10\) (right) VAR outperform state-of-the-art cMLP, cRNN, and cLSTM models. The performance of other models seems to deteriorate over a higher lag value. It is clear from Fig. 3 and Fig. 2 that the estimates \((a_{nn^{\prime}}^{(p)})\) of formulation B are very close to the ground truth, and they outperform the other algorithms for \(P=2\) and \(T=1000\). From Fig. 5, the results seem to suggest that the prediction capability for formulations A and B is better than that of cMLP, cRNN, and cLSTM. #### V-B2 Lorentz Model In an N-dimensional Lorenz model, the continuous dynamics are given by \[\frac{dx_{ti}}{dt}=(x_{t(i+1)}-x_{t(i-2)})x_{t(i-1)}-x_{ti}+F, \tag{45}\] where \(x_{t(-1)}=x_{t(p-1)},x_{t0}=x_{tp},x_{t(p+1)}=x_{t1}\); higher values of the force constant \(F\) entail a stronger nonlinearity and more chaotic behavior in the time series. The data time series generated in this case corresponds to a discrete-time simulation of a multivariate Lorentz-96 model with \(N=10\) series where the nonlinear dependencies follow a sparse pattern as depicted Figure 4: Learned causal dependencies from the data generated from Lorentz model with \(F=10\) and \(T=1000\) Figure 3: Learned causal dependencies from the data generated from VAR model with \(P=2\) and \(T=1000\) on the right pane of Fig. 2. AUROC values were calculated for formulations A, B, linear VAR, cMLP, cRNN, and cLSTM across time series lengths \(T=250\), \(T=500\), and \(T=1000\), for force constant \(F\) taking values 10 and 40. According to Table II, for \(F=10\), all models for \(T>500\) has obtained AUROC \(>0.95\). For more chaotic series with \(F=40\), cMLP and cRNN kept a performance above 0.95, and cLSTM and interpretable models attained an AUROC value between \(0.7\) and \(0.8\). The simplifying modeling offers interpretability with a slight loss in expressive power. In highly chaotic time series (\(F=40\)), performance moderately declines but remains competitive with DL models for less chaotic processes (\(F=10\)). AUROC improves with larger \(T\), with f_A and f_B outperforming linear VAR for \(T>500\). The cRNN model estimates closely match the ground truth, especially for \(F=10\). Fig. 6 shows that the train NMSE for formulations A and B is better than that of linear VAR by a small margin, whereas the DL models perform significantly better at prediction. This result contrasts with the high and similar AUROC values shown in Table II, and suggests that the proposed modeling assumption cannot capture the complexity of the Lorentz model. 
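For completeness, the Lorenz-96 series used in this subsection can be generated by a simple explicit integration of (45); the sketch below is hypothetical Python, where the step size, the initial condition, and the absence of burn-in are assumptions, and a higher-order integrator such as Runge-Kutta may be preferable for large \(F\).

```python
import numpy as np

def lorenz96(N=10, F=10.0, T=1000, dt=0.01, seed=0):
    """Euler integration of dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, Eq. (45)."""
    rng = np.random.default_rng(seed)
    x = F * np.ones(N) + 0.01 * rng.standard_normal(N)   # small perturbation of the fixed point
    out = np.empty((T, N))
    for t in range(T):
        dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
        x = x + dt * dx
        out[t] = x
    return out

z = lorenz96(F=10.0, T=1000)    # F = 40 yields the more chaotic regime discussed above
```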
### _Experiments with real data sets_ In this section, we conduct experiments using data collected from a sensor network at the Edvard Griefe offshore oil and gas platform. We have 24 time series, each representing sensor readings from decantation tanks measuring temperature (T), pressure (P), or oil level (L). Our goal is to uncover hidden dependencies and predict the system's short-term future state in terms of variables such as pressure and temperature, which may be influenced by physical tank proximity, pipeline flows, and control mechanisms. To create these time series, we uniformly sample sensor values every 5 seconds, resulting in 4000 samples in total. We employ various methods, including Formulations A, B, cMLP, cRNN, cLSTM, and linear VAR, to infer these variable relationships. The optimal \(\lambda\) is determined through a grid search and cross-validation process. Using the parameters learned from Formulation B, we construct an adjacency matrix by computing the \(l_{2}\) norm of the parameter vector for each pair of nodes. The resulting graph is visualized in Fig. 8, where self-loops are removed, and arrow colors indicate edge weights. Additionally, Fig. 7 displays the performance of all methods in terms of training NMSE. Formulations A and B consistently outperform VAR, cMLP, cRNN, and cLSTM, with Formu Figure 5: NMSE comparison of formulation A, B, cMLP, cRNN, cLSTM, and VAR from data generated through nonlinear VAR model with lag order \(P=2\) and \(T=1000\) Figure 8: Causal dependencies estimated using formulation B for real data from Lundin separation facility with \(N=24\) and \(T=4000\) Figure 6: NMSE comparison of formulation A, B, cMLP, cRNN, cLSTM, and VAR from data generated from Lorentz model with \(F=10\) and \(T=1000\) Figure 7: NMSE comparison of formulation A, B, cMLP, cRNN, cLSTM, and VAR using real data from Lundin separation facility. \(N=24\) and \(T=4000\) lation A achieving the lowest prediction NMSE. This aligns with our results from synthetic nonlinear VAR data in Sec. V-A1, where Formulation B demonstrated superior topology identification performance. Since there is no ground truth available for the topology in this case, we visualize the graph identified by Formulation B for clarity. ## VI Conclusion To discover the dependencies that are inherent to a nonlinear multivariate model, a modelling technique has been described, formulated and validated. The main modelling idea is that a nonlinear VAR model can be expressed as the composition of a linear VAR model and a set of univariate, invertible nonlinear functions. A NN is associated with each variable in such a model to express the non-linear relation between a real-world sensor and a latent variable that is part of a VAR model that can be directly associated with a graph. In order to increase the ability of the suggested algorithms to identify the topology underlying a set of time series in an interpretable way, a sparsity-inducing penalty has been added to the estimation cost function. Two different approaches to the estimation of the model parameters are proposed, one of them (formulation A) based on minimizing the MSE in the sensor measurement space, and the other one (formulation B) based on minimising the MSE in the latent space. The solvers for both techniques combine proximal gradient descent and projected gradient descent. 
Formulation B additionally requires to stabilize the mean and variance of the signals in the latent space, the associated constraints being enforced via a primal-dual algorithm. Numerical results obtained from experiments that use both synthetic and real data indicate that the proposed technique achieves competitive results as its performance is compared with existing state-of-the-art models, in terms of topology identification and prediction ability. This shows that the proposed formulations are useful for determining the nonlinear relationships of sensor networks in the real world, encouraging further research in nonlinear VAR-based topology identification algorithms. Based on the information and experiments provided, it appears that formulation B is more suitable for estimating the adjacency graph, while formulation A is more efficient for prediction tasks. ## Appendix A In this appendix we provide the detailed derivation of the backward equations. The gradient of the cost is obtained by applying the chain rule as follows: \[\frac{dC[t]}{d\theta_{i}}=\sum_{n=1}^{N}\frac{\partial C}{\partial z_{n}[t]} \frac{\hat{z}_{n}[t]}{\partial\theta_{i}} \tag{46}\] where \(\frac{\partial C}{\partial z_{n}[t]}=2(\hat{z}_{n}[t]-z_{n}[t])=S_{n}\) \[\frac{\partial\hat{z}_{n}[t]}{\partial\theta_{i}}=\frac{\partial f_{n}}{ \partial\hat{y}_{n}}\frac{\partial\hat{y}_{n}}{\partial\theta_{i}}+\frac{ \partial f_{n}}{\partial\theta_{n}}\frac{\partial\theta_{n}}{\partial\theta_{i}} \tag{47}\] \[\text{where }\frac{\partial\theta_{n}}{\partial\theta_{i}}=\left\{\begin{array} []{l}I,n=i\\ 0,n\neq i\end{array}\right.\] Substituting equation (46) into (47) yields \[\frac{dC[t]}{d\theta_{i}}=\sum_{n=1}^{N}S_{n}\left(\frac{\partial f_{n}}{ \partial\hat{y}_{n}}\frac{\partial\hat{y}_{n}}{\partial\theta_{i}}+\frac{ \partial f_{n}}{\partial\theta_{n}}\frac{\partial\theta_{n}}{\partial\theta_{ i}}\right). \tag{48}\] Equation(48) can be simplified as: \[\frac{dC[t]}{d\theta_{i}}=S_{i}\frac{\partial f_{i}}{\partial\theta_{i}}+\sum _{n=1}^{N}S_{n}\frac{\partial f_{n}}{\partial\hat{y}_{n}}\frac{\partial\hat{y }_{n}}{\partial\theta_{i}}. \tag{49}\] The next step is to derive \(\frac{\partial\hat{y}_{n}}{\partial\theta_{i}}\) and \(\frac{\partial f_{i}}{\partial\theta_{i}}\) of equation (49): \[\frac{\partial\hat{y}_{n}[t]}{\partial\theta_{i}}=\sum_{p=1}^{P}\sum_{j=1}^{N }a_{nj}^{(p)}\frac{\partial}{\partial\theta_{j}}\tilde{y}_{j}[t-p]\frac{ \partial\theta_{j}}{\partial\theta_{i}}. \tag{50}\] With \(f_{i}^{\prime}\left(\hat{y}\right)=\frac{\partial f_{i}\left(\hat{y},\theta_{ i}\right)}{\partial\left(y\right)},\) expanding \(\tilde{y}_{j}[t-p]\) in equation (50) yields \[\frac{dC[t]}{d\theta_{i}}= S_{i}\left(\frac{\partial f_{i}}{\partial\theta_{i}}\right)\] \[+\sum_{n=1}^{N}S_{n}\left(f_{n}^{\prime}(\hat{y}_{n}[t])\sum_{p=1 }^{P}a_{ni}^{(p)}\frac{\partial}{\partial\theta_{i}}g_{i}\left(z_{i}[t-p], \theta_{i}\right)\right) \tag{51}\] Here, the vector \[\frac{\partial f_{i}\left(\hat{y},\theta_{i}\right)}{\partial\theta_{i}}= \left[\frac{\partial f_{i}\left(\hat{y},\theta_{i}\right)}{\partial\alpha_{i} }\frac{\partial f_{i}\left(\hat{y},\theta_{i}\right)}{\partial w_{i}}\frac{ \partial f_{i}\left(\hat{y},\theta_{i}\right)}{\partial k_{i}}\frac{\partial f _{i}\left(\hat{y},\theta_{i}\right)}{\partial b_{i}}\right]\] can be obtained by standard or automated differentiation However, (51) involves the calculation of \(\frac{\partial g_{i}\left(\hat{y},\theta_{i}\right)}{\partial\theta_{i}}\), which is not straightforward to obtain. 
Since \(g_{i}(z)\) can be computed numerically, the derivative can be obtained by implicit differentiation, realizing that the composition of \(f_{i}\) and \(g_{i}\) remains invariant, so that its total derivative is zero: \[\frac{d}{d\theta_{i}}\left[f_{i}\left(g_{i}\left(z,\theta_{i}\right),\theta_{i} \right)\right]=0 \tag{52}\] \[\Rightarrow\frac{\partial f_{i}\left(g_{i}\left(z,\theta_{i}\right),\theta_{i} \right)}{\partial g\left(z,\theta_{i}\right)}\frac{\partial g\left(z,\theta_{ i}\right)}{\partial\theta_{i}}+\left.\frac{\partial f_{i}\left(\tilde{y}, \theta_{i}\right)}{\partial\theta_{i}}\right|_{\tilde{y}=g_{i}\left(z,\theta_{ i}\right)}=0 \tag{53}\] \[\Rightarrow f_{i}^{\prime}(g_{i}(z,\theta_{i}))\frac{\partial g\left(z,\theta_{ i}\right)}{\partial\theta_{i}}+\left.\frac{\partial f_{i}\left(\tilde{y}, \theta_{i}\right)}{\partial\theta_{i}}\right|_{\tilde{y}=g_{i}\left(z,\theta_{i} \right)}=0 \tag{54}\] \[\text{Hence }\frac{\partial g_{i}\left(z,\theta_{i}\right)}{\partial \theta_{i}}=\\ -\left\{f_{i}^{\prime}(g_{i}(z,\theta_{i}))\right\}^{-1}\left(\frac{ \partial f_{i}\left(\tilde{y},\theta_{i}\right)}{\partial\theta_{i}}\right|_{ \tilde{y}=g_{i}\left(z,\theta_{i}\right)}\right). \tag{55}\] The gradient of \(C_{T}\) w.r.t. the VAR coefficient \(a_{ij}^{(p)}\) is calculated as follows: \[\frac{dC[t]}{da_{ij}^{(p)}}=\sum_{n=1}^{N}S_{n}\frac{\partial f_{n}}{\partial \tilde{y}_{n}}\frac{\partial\hat{y}_{n}}{\partial a_{ij}^{(p)}} \tag{56}\] \[\frac{\partial\hat{y}_{n}[t]}{\partial a_{ij}^{(p)}}=\frac{\partial}{\partial a_{ ij}^{(p)}}\sum_{p^{\prime}=1}^{P}\sum_{q=1}^{N}a_{nq}^{(p^{\prime})}\tilde{y}_{q}[t-p]\] \[\text{where }\frac{\partial a_{i}^{(p^{\prime})}}{\partial a_{ij}^{(p)}}=\left\{ \begin{array}{l}1,n=i,p=p^{\prime},\text{ and }q=j\\ 0,\text{otherwise}\end{array}\right. \tag{57}\] \[\frac{dC[t]}{da_{ij}^{(p)}}=S_{i}f_{i}^{\prime}\left(\hat{y}_{i}[t]\right) \tilde{y}_{j}[t-p]. \tag{58}\] ## Appendix B Consider \(\tilde{f}\) such that \[\check{f}_{i}=\check{b}_{i}+\sum_{j=1}^{M}\check{\alpha}_{ij}h\left(\check{ w}_{ij}y_{i}-\check{k}_{ij}\right) \tag{59}\] \(\check{f}_{i}(1)=1,\check{f}_{i}(-1)=-1\), \(\check{f}_{i}(x)=x\). where \(\check{\alpha}_{i},\check{w}_{i},\check{k}_{i}\) and \(\check{b}_{i}\) are the learned parameters corresponding to \(\check{f}_{i}\). A new function \(\check{f}^{1}\) is defined such that \[\check{f}_{i}^{1}=\check{b}_{i}^{1}+\sum_{j=1}^{M}\check{\alpha}_{ij}^{1}h \left(\check{w}_{ij}^{1}y_{i}-\check{k}_{ij}^{1}\right) \tag{60}\] \[\check{f}_{i}^{1}(z)=\check{f}_{i}(-1)\text{ and }\check{f}_{i}^{1}(\bar{z})= \check{f}_{i}(1) \tag{61}\] \[\check{f}_{i}^{1}(ax+B)=\check{f}_{i}(ax+B) \tag{62}\] from (59) and (62) \(\check{w}_{i}^{1}=a\check{w}_{i}\) and \(\check{k}_{i}^{1}=a\check{w}_{i}B+\check{k}_{i}\). from equation (61) and (62), \[a\check{z}+B=-1\text{ and }a\bar{z}+B=1 \tag{63}\] from (63) \(a=-2/(z-\bar{z})\) and \(B=2\bar{z}/(z-\bar{z})\) Let \[\check{f}_{i}^{2}=\check{b}_{i}^{2}+\sum_{j=1}^{M}\check{\alpha}_{ij}^{2}h \left(\check{w}_{ij}^{2}y_{i}-\check{k}_{ij}^{2}\right) \tag{64}\] such that \[\check{f}_{i}^{2}(\bar{z})=\bar{z}\text{, }\check{f}_{i}^{2}(z)=z\text{ and } \check{f}_{i}^{2}(x)=c\check{f}_{i}^{1}(x)+d \tag{65}\] from (64) \(\check{b}_{i}^{2}=c\check{b}_{i}+d\) and \(\check{\alpha}_{i}^{2}=c\check{\alpha}_{i}\). 
From (65) \[z=-c+d\text{ and }\bar{z}=c+d \tag{66}\] and from (66), \(d=(\bar{z}+z)/2\) and \(c=(\bar{z}-z)/2\). Hence \(\check{\alpha}_{i}=c\alpha_{i},\check{b}_{i}=cb_{i}+d,\check{w}_{i}=aw_{i},\check{k}_{i}=-w_{i}B+k_{i}\), where \(c=(\bar{z}-z)/2\), \(d=(\bar{z}+z)/2\), \(a=-2/(z-\bar{z})\) and \(B=2\bar{z}/(z-\bar{z})\).
2309.07421
Where are the Pevatrons that Form the Knee in the Spectrum of the Cosmic Ray Nucleon Component around 4 PeV?
The paper discusses an approach that made it possible to estimate the distance to the nearest pevatrons, which form a knee in the spectrum of the cosmic ray nucleon component of about $4$ PeV. It is based on the spectra of nucleons and electrons obtained by the authors in the framework of the superdiffusion model of nonclassical cosmic rays diffusion, which have a knee, on the assumption that nucleons and electrons are accelerated by the same type sources and their propagation in an inhomogeneous turbulent galactic medium is characterized by the same diffusion coefficient, and also on the knee in the spectrum of the electronic component in the region of $0.9$ TeV, established in the DAMPE experiment. It is shown that pevatrons, which form a knee in the spectrum of the cosmic ray nucleon component of about $4$ PeV, are located at distances of the order of $0.75$ kpc from the Earth.
A. A. Lagutin, N. V. Volkov
2023-09-14T04:33:59Z
http://arxiv.org/abs/2309.07421v1
Where are the Pevatrons that Form the Knee in the Spectrum of the Cosmic Ray Nucleon Component around 4 PeV? ###### Abstract The paper discusses an approach that made it possible to estimate the distance to the nearest pevatrons, which form a knee in the spectrum of the cosmic ray nucleon component of about 4 PeV. It is based on the spectra of nucleons and electrons obtained by the authors in the framework of the superdiffusion model of nonclassical cosmic rays diffusion, which have a knee, on the assumption that nucleons and electrons are accelerated by the same type sources and their propagation in an inhomogeneous turbulent galactic medium is characterized by the same diffusion coefficient, and also on the knee in the spectrum of the electronic component in the region of 0.9 TeV, established in the DAMPE experiment. It is shown that pevatrons, which form a knee in the spectrum of the cosmic ray nucleon component of about 4 PeV, are located at distances of the order of 0.75 kpc from the Earth. ## 1 Introduction Despite more than 100 years of research, the spatial distribution of the main cosmic rays (CRs) sources and the mechanisms of particle acceleration in them have not been finally established. To solve the problem of searching of galactic CR sources the energies of the order of \(10^{15}\) eV is a key place because at these energies the CR spectrum has a break (so-called "knee"). Today it is generally accepted that CRs with energies around the knee are mainly of galactic origin, and their sources are called pevatrons. The search of galactic pevatrons is currently one of the priority task solved jointly by all ground-based and orbital astrophysical observatories operating in the very high energy region (see for example reviews [1; 2; 3; 4]). One of the most important results achieved is the detection of gamma rays with energies above 100 TeV, clearly indicating that there is an effective acceleration of CR particles up to energies of the order of \(10^{15}\) eV. For many years, the scientific community has been dominated by the hypothesis that the main sources capable of accelerating CRs to such energies are supernova remnants (SNRs). Despite this, today there are no reliable experimental data confirming the fact that supernovae accelerate CR nuclei to energies of \(\sim 4\) PeV, i.e., to the knee energy in the CRs spectrum [5; 6; 7; 8; 9]. The detection of ultrahigh-energy gamma-rays from regions not associated with SNRs indicates that there are other astrophysical objects that can claim the role of pevatrons (see materials of HONEST Workshops [10]). Main goal of this paper is to discuss an approach that made it possible to estimate the distance to the nearest pevatrons, which form a knee in the spectrum of the CRs nuclear component about 4 PeV. ## II Proposed Approach The key elements of the proposed approach are based on the following results and assumptions. * Our approach is based on the spectra of nuclei and electrons obtained by the authors in the framework of the superdiffusion model of nonclassical CRs diffusion, which have a knee. The main provisions of the nonclassical diffusion model and the results of its application for the interpretation of the spectra of the leptonic and nuclear components of CRs, as well as the spectrum and mass composition in the ultrahigh energy region, are given in our previous papers [11; 12; 13; 14; 15; 16; 17; 18]. 
* We assume that nuclei and high energy electrons and positrons are accelerated by the same type sources and their propagation in an inhomogeneous (fractal-like) turbulent galactic medium is characterized by the same diffusion coefficient. * We use the fact that there is a knee in the high-energy cosmic-ray electrons plus positrons spectrum in the region \(\sim 0.9\) TeV. Early indications of the presence of this knee were obtained by ground-based Cherenkov detectors of the H.E.S.S. collaboration [19]. Recent results of direct measurements by DAMPE [20] and CALET [21] space observatories confirmed the presence of this knee in the total spectrum of electrons and positrons. ## 3 The Nonclassical Crs Diffusion Equations For the first time, the equations of superdiffusion of cosmic rays without taking into account energy losses and taking them into account were proposed in our works [11; 13]. For the density of particles \(N({\bf r},t,E)\) with energy \(E\) at the location \({\bf r}\) and time \(t\), generated in a fractal-like medium by Galactic sources with a distribution density \(S({\bf r},t,E)\), is written as \[\frac{\partial N({\bf r},t,E)}{\partial t}=-D(E,\alpha)(-\Delta)^{\alpha/2}N ({\bf r},t,E)+S({\bf r},t,E), \tag{1}\] \[\frac{\partial N({\bf r},t,E)}{\partial t}=-D(E,\alpha)(-\Delta)^{\alpha/2}N( {\bf r},t,E)+\frac{\partial B(E)N({\bf r},t,E)}{\partial E}+S({\bf r},t,E). \tag{2}\] In these equation \(D(E,\alpha)=D_{0}(\alpha)E^{\delta}\) is the anomalous diffusivity; \((-\Delta)^{\alpha/2}\) is the fractional Laplacian [22] ("Riesz operator") (reflects a nonlocality of the diffusion process of particles in the interstellar medium); \(B(E)\) is the mean rate of continuous energy losses of electrons and positrons. It should be noted that when \(\alpha=2\) from Eqs. (1) and (2) we obtain the normal diffusion Ginzburg-Syrovatskii equations. In calculations of the spectrum of electrons and positrons, the main mechanisms of energy losses are taken into account, i.e. ionization, bremsstrahlung, synchrotron and inverse Compton losses. In a recent paper [23] it was shown that in the relativistic regime (Klein-Nishina mode) the threshold energy of inverse Compton scattering is reached even when electrons interact with photons of visible radiation. The cross sections for the interaction of electrons with background photons become much smaller than the Thomson cross section traditionally used in calculations. As a result, the rate of energy losses of electrons during interaction with the background electromagnetic radiation of the Galaxy decreases (the Klein-Nishina effect). In this paper, in calculating the spectra of electrons and positrons with energies \(E>100\) GeV, the approximation expressions proposed in [23] were used to take into account the Klein-Nishina effect. The solution of the superdiffusion Eqs. (1) and (2) was found by the Green's function method in both cases with energy losses and without ones. To solve equation (2) we use the Syrovatskii functions [24] \[\lambda(E,E_{0})=\int\limits_{E}^{E_{0}}\frac{D(E^{\prime})}{B(E^{\prime})}dE^{ \prime},\qquad\tau(E,E_{0})=\int\limits_{E}^{E_{0}}\frac{dE^{\prime}}{B(E^{ \prime})}. \tag{3}\] \(\lambda(E,E_{0})^{1/\alpha}\), pc \(\tau(E,E_{0})\), yr Klein-Nishina regime Tomson regime \(E_{0}\), GeV **Figure 1. 
Fig. 1 shows the behavior of the functions \(\lambda(E,E_{0})^{1/\alpha}\) and \(\tau(E,E_{0})\) for different values of the initial energy \(E_{0}\) of the particles. The calculations were carried out for both the Klein-Nishina and Thomson regimes of energy losses. It can be seen from Fig. 1(a) that if the particle energy at the observation point is 0.9 TeV, then for either energy-loss regime, starting from the initial value of 4 PeV, the diffusion radius of such particles is about \(100-200\) pc. As can be seen from Fig. 1(b), the "cooling time" of such particles varies from \(10^{5}\) to \(10^{6}\) years.

For a point instantaneous source \(S({\bf r},t,E)=S_{0}E^{-p}\delta({\bf r})\delta(t)\), the solutions of the superdiffusion Eqs. (1) and (2) take the form:

* for the nuclear component of CRs (without energy losses) \[N({\bf r},t,E)=S_{0}E^{-p}(D(E,\alpha)t)^{-3/\alpha}g_{3}^{(\alpha)}(|{\bf r}|(D(E,\alpha)t)^{-1/\alpha}).\] (4)
* for the leptonic component of CRs (with energy losses) \[N({\bf r},t,E)=\frac{S_{0}E^{-p}}{B(E)}\lambda(E,E_{0})^{-3/\alpha}g_{3}^{(\alpha)}(|{\bf r}|\lambda(E,E_{0})^{-1/\alpha}).\] (5)

In expressions (4) and (5), \(g_{3}^{(\alpha)}(r)\) is the probability density of the three-dimensional spherically symmetric stable distribution [25; 26]. The key feature of this function is the presence of a knee. In the case \(\alpha=2\), \(g_{3}^{(\alpha)}(r)\) is the normal distribution. In the particular case of the Thomson regime, with energy losses of the form \(B(E)=bE^{2}\) GeV/s, where \(b=1.1\cdot 10^{-16}\) (GeV s)\({}^{-1}\), we find the solution of the superdiffusion Eq. (2)

\[N({\bf r},t,E)=S_{0}E^{-p}(1-btE)^{p-2}\lambda(t,E)^{-3/\alpha}g_{3}^{(\alpha)}(|{\bf r}|\lambda(t,E)^{-1/\alpha}). \tag{6}\]

Here

\[\lambda(t,E)=D_{0}(\alpha)E^{\delta}\hat{\lambda}(t,E),\qquad\hat{\lambda}(t,E)=\frac{1-(1-btE)^{1-\delta}}{b(1-\delta)E}. \tag{7}\]

Figure 2 shows the results of calculating the function \(\hat{\lambda}(t,E)\) for various values of the energy \(E\).

**Figure 2:** Dependence of the function \(\hat{\lambda}(t,E)\) on time \(t\) for different values of energy \(E\).

It should be noted that over a wide range of the parameters \(E\) and \(t\) we obtain \(\hat{\lambda}(t,E)\approx t\), i.e., \(\lambda(t,E)=D(E,\alpha)t\). Calculations in the Klein-Nishina mode lead to the same conclusion.

## 4 Knee in the Energy Spectrum

To analyze the energy dependence of the electron concentration, we write solution (6) of the superdiffusion Eq. (2) in the form \(N=N_{0}E^{-\eta}\). It follows from this representation that

\[\eta=-\frac{E}{N}\frac{\partial N}{\partial E}.\]

Taking into account the property of the stable law [25], \(\frac{dg_{3}^{(\alpha)}(r)}{dr}=-2\pi rg_{5}^{(\alpha)}(r)\), we find

\[\eta=2p-2+\frac{\delta-1}{\alpha}\left[3-\frac{2\pi r^{2}}{\lambda(t,E)^{1/\alpha}}\frac{g_{5}^{(\alpha)}(|\mathbf{r}|\lambda(t,E)^{-1/\alpha})}{g_{3}^{(\alpha)}(|\mathbf{r}|\lambda(t,E)^{-1/\alpha})}\right]=2p-2+\frac{\delta-1}{\alpha}\Xi. \tag{8}\]

In Eq. (8), \(g_{5}^{(\alpha)}(r)\) is the probability density of the five-dimensional stable distribution [25].
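The spectra (4)-(6) and the index (8) require the isotropic stable densities \(g_{3}^{(\alpha)}\) and \(g_{5}^{(\alpha)}\). A minimal sketch for evaluating them numerically is given below; it assumes the unit-scale convention in which the characteristic function of the three-dimensional isotropic stable law is \(\exp(-|k|^{\alpha})\) (the scale convention of [25; 26] may differ by a constant factor), and it obtains \(g_{5}^{(\alpha)}\) from the stable-law property \(dg_{3}^{(\alpha)}(r)/dr=-2\pi rg_{5}^{(\alpha)}(r)\) used above. The ratio \(g_{5}^{(\alpha)}/g_{3}^{(\alpha)}\) printed at the end is the ingredient entering \(\Xi\) in Eq. (8).

```python
import numpy as np
from scipy.integrate import quad

def g3(r, alpha):
    # 3-D isotropic stable density with characteristic function exp(-|k|^alpha):
    # g3(r) = 1/(2 pi^2 r) * int_0^inf k sin(k r) exp(-k^alpha) dk
    val, _ = quad(lambda k: k * np.exp(-k**alpha) * np.sin(k * r), 0.0, 60.0, limit=400)
    return val / (2.0 * np.pi**2 * r)

def g5(r, alpha, h=1e-3):
    # 5-D density via the stable-law property d g3(r)/dr = -2 pi r g5(r).
    return -(g3(r + h, alpha) - g3(r - h, alpha)) / (2.0 * h) / (2.0 * np.pi * r)

alpha = 1.7
for rho in (0.5, 1.0, 2.2, 4.0):        # rho = |r| * lambda(t, E)^(-1/alpha)
    print(rho, g3(rho, alpha), g5(rho, alpha), g5(rho, alpha) / g3(rho, alpha))
```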
Figure 3 shows the energy dependence of the index \(\Xi\) of the observed electron and positron spectrum for various diffusion regimes (parameter \(\alpha\)).

**Figure 3:** Change of the spectral index of the observed electrons with energy in the case of a point instantaneous source for the different diffusion regimes; \(r=200\) pc.

It can be seen that the spectrum has a knee in the superdiffusion regime \(\alpha<2\) (the exponent \(\Xi\) at the knee point of 0.9 TeV is equal to zero). At the same time, in the normal-diffusion regime the spectral index \(\Xi\) increases monotonically with increasing energy; in this case the spectrum has no knee. It should be noted that the break in the electron spectrum obtained in the framework of the nonclassical diffusion model, as well as a comparison with experimental data, will be considered in our next work.

## 5 Approach to Estimate the Distance to the Nearest Pevatrons

It was shown in [11] that the knee may be due to anomalous CR diffusion in a turbulent (fractal-type) interstellar medium, rather than to a maximum cosmic-ray energy attainable in the sources. The recently published data of the LHAASO Collaboration on the spectrum of diffuse gamma radiation from the disk of the Galaxy [27], which is described by a power-law function with an exponent of \(-2.99\) over the entire energy region of 10 TeV - 1 PeV, may be considered as an indication of the validity of the hypothesis that the knee in the \(1-4\) PeV region is most likely due to transport.

In the framework of nonclassical CR diffusion, the knee is due to the presence of a break in the stable distribution \(g_{3}^{(\alpha)}(r)\) at the value of the argument \(r\approx 2.2\). On this basis, the expressions (4) and (5) for the spectra obtained in the nonclassical diffusion model make it possible to establish a relationship between the characteristics \((\mathbf{r},t,E)\) of the knee points of the nuclear (\(n\)) and leptonic (\(e\)) components. For the \(n\) component

\[N(\mathbf{r},t,E)\sim g_{3}^{(\alpha)}\left(|\mathbf{r}|(D(E,\alpha)t)^{-1/\alpha}\right)\Rightarrow r_{n}(D_{0}(\alpha)E_{n}^{\delta}t_{n})^{-1/\alpha}=2.2.\]

For the \(e\) component

\[N(\mathbf{r},t,E)\sim g_{3}^{(\alpha)}\left(|\mathbf{r}|\lambda(t,E)^{-1/\alpha}\right)\Rightarrow r_{e}(D_{0}(\alpha)E_{e}^{\delta}\hat{\lambda}(t_{e},E_{e}))^{-1/\alpha}=2.2.\]

We assume that CR nuclei and high-energy electrons are accelerated by the same types of sources (pevatrons) and that their propagation in an inhomogeneous turbulent galactic medium is characterized by the same diffusion coefficient \(D_{0}(\alpha)\). Under this assumption,

\[r_{n}(D_{0}(\alpha)E_{n}^{\delta}t_{n})^{-1/\alpha}=r_{e}(D_{0}(\alpha)E_{e}^{\delta}\hat{\lambda}(t_{e},E_{e}))^{-1/\alpha}. \tag{9}\]

It follows from Eq. (9) that

\[r_{n}=r_{e}\left[\left(\frac{E_{n}}{E_{e}}\right)^{\delta}\frac{t_{n}}{\hat{\lambda}(t_{e},E_{e})}\right]^{1/\alpha}\equiv r_{e}\xi. \tag{10}\]

It should be noted that the estimates obtained within the framework of the proposed approach are almost independent of the diffusion model.

## 6 Results

The parameters of the nonclassical diffusion model and the technology for their self-consistent determination from the available experimental data on galactic CRs were discussed in our previous papers [17; 18]. The main parameters of the model are given in Table 1.
The spatiotemporal characteristics of the nearest galactic CR sources, which include the Geminga, Monogem, and Vela pulsars, are given in [15]. The "lifetime" of the CR electrons is described by the function \(\tau(E,E_{0})\) from Eq. (3). It follows that the TeV-energy electrons observed on Earth were produced by sources \(\sim 10^{5}\) years ago. During this time, in the superdiffusion regime \(\overline{r^{2}}\sim 2D(E,\alpha)t^{3-\alpha}\) [28], the diffusion radius of the electrons is \(r_{e}\sim 200\) pc. To estimate the distance to the nearest pevatrons, we use the value of the knee energy of the proton spectrum, \(E_{n}=650\) TeV, obtained in [29; 30]. From Eq. (10) we obtain \(r_{n}=3.75\,r_{e}\). Thus, the pevatrons which form the knee in the spectrum of the nuclear component of CRs at about 4 PeV are located at distances of the order of 0.75 kpc from the Earth.

**Table 1:** The nonclassical diffusion model parameters.

| Parameter | Value |
| --- | --- |
| \(p\) | 2.85 |
| \(\delta\) | 0.27 |
| \(D_{0}(\alpha)\) | \(10^{-3}\) pc\({}^{1.7}\) yr\({}^{-1}\) |
| \(\alpha\) | 1.7 |
| \(t_{e}\), \(t_{n}\) | \(10^{5}\) yr |
| \(E_{e}\) | 0.9 TeV |
| \(E_{n}\) | 0.65 PeV |

The most likely candidates for pevatrons are shown in Table 2.

**Table 2:** Space-time parameters of the most likely candidates for pevatrons according to [31; 32].

| Source | \(r\), pc | \(t\), \(10^{5}\) yr |
| --- | --- | --- |
| Monoceros | 600 | 0.46 |
| Cyg. Loop | 770 | 0.20 |
| CTB 13 | 600 | 0.32 |
| S 149 | 700 | 0.43 |
| STB 72 | 700 | 0.32 |
| CTB 1 | 900 | 0.47 |
| HB 21 | 800 | 0.23 |
| HB 9 | 800 | 0.27 |

## 7 Conclusions

An approach that makes it possible to estimate the distance to the nearest pevatrons, which form the knee in the spectrum of the CR nuclear component at about 4 PeV, has been formulated. It is based on the knee-bearing CR spectra of nuclei and leptons obtained by the authors in the framework of the superdiffusion model of nonclassical diffusion, on the assumption that nuclei and electrons are accelerated by the same types of sources and that their propagation in an inhomogeneous turbulent galactic medium is characterized by the same diffusion coefficient, and also on the knee in the spectrum of the leptonic component in the region of 0.9 TeV, established in the DAMPE and CALET experiments. It has been established that the pevatrons which form the knee in the spectrum of the CR nucleon component at about 4 PeV are located at distances of the order of 0.75 kpc from the Earth.

## Acknowledgments

The work is supported by the Russian Science Foundation (grant no. 23-72-00057).
2309.13454
Valid and efficient imprecise-probabilistic inference with partial priors, III. Marginalization
As Basu (1977) writes, "Eliminating nuisance parameters from a model is universally recognized as a major problem of statistics," but after more than 50 years since Basu wrote these words, the two mainstream schools of thought in statistics have yet to solve the problem. Fortunately, the two mainstream frameworks aren't the only options. This series of papers rigorously develops a new and very general inferential model (IM) framework for imprecise-probabilistic statistical inference that is provably valid and efficient, while simultaneously accommodating incomplete or partial prior information about the relevant unknowns when it's available. The present paper, Part III in the series, tackles the marginal inference problem. Part II showed that, for parametric models, the likelihood function naturally plays a central role and, here, when nuisance parameters are present, the same principles suggest that the profile likelihood is the key player. When the likelihood factors nicely, so that the interest and nuisance parameters are perfectly separated, the valid and efficient profile-based marginal IM solution is immediate. But even when the likelihood doesn't factor nicely, the same profile-based solution remains valid and leads to efficiency gains. This is demonstrated in several examples, including the famous Behrens--Fisher and gamma mean problems, where I claim the proposed IM solution is the best solution available. Remarkably, the same profiling-based construction offers validity guarantees in the prediction and non-parametric inference problems. Finally, I show how a broader view of this new IM construction can handle non-parametric inference on risk minimizers and makes a connection between non-parametric IMs and conformal prediction.
Ryan Martin
2023-09-23T18:40:53Z
http://arxiv.org/abs/2309.13454v1
# Valid and efficient imprecise-probabilistic inference with partial priors, III. Marginalization ###### Abstract As Basu (1977) writes, "Eliminating nuisance parameters from a model is universally recognized as a major problem of statistics," but after more than 50 years since Basu wrote these words, the two mainstream schools of thought in statistics have yet to solve the problem. Fortunately, the two mainstream frameworks aren't the only options. This series of papers rigorously develops a new and very general inferential model (IM) framework for imprecise-probabilistic statistical inference that is provably valid and efficient, while simultaneously accommodating incomplete or partial prior information about the relevant unknowns when it's available. The present paper, Part III in the series, tackles the marginal inference problem. Part II showed that, for parametric models, the likelihood function naturally plays a central role and, here, when nuisance parameters are present, the same principles suggest that the profile likelihood is the key player. When the likelihood factors nicely, so that the interest and nuisance parameters are perfectly separated, the valid and efficient profile-based marginal IM solution is immediate. But even when the likelihood doesn't factor nicely, the same profile-based solution remains valid and leads to efficiency gains. This is demonstrated in several examples, including the famous Behrens-Fisher and gamma mean problems, where I claim the proposed IM solution is the best solution available. Remarkably, the same profiling-based construction offers validity guarantees in the prediction and non-parametric inference problems. Finally, I show how a broader view of this new IM construction can handle non-parametric inference on risk minimizers and makes a connection between non-parametric IMs and conformal prediction. _Keywords and phrases:_ Bayesian; frequentist; inferential model; non-parametric; nuisance parameter; possibility theory; prediction; profile likelihood. ###### Contents * 1 Introduction * 2 Background * 2.1 Recap of Part II * 2.2 Interest and nuisance parameters * 2.3 Classical marginalization * 3 Marginal possibilistic IMs * 3.1 Naive solution * 3.2 More efficient solutions * 3.3 Further efficiency gains * 3.4 First examples * 3.5 Two challenging practical examples * 3.6 Words of caution * 4 Predictive possibilistic IMs * 4.1 Setup * 4.2 Three valid IM constructions * 4.3 Summary * 4.4 Examples * 5 Non-parametric possibilistic IMs * 5.1 Setup * 5.2 Valid IM construction * 5.3 Approximations * 5.4 Examples * 6 Possibilistic IMs without a likelihood * 6.1 Setup * 6.2 Inference on risk minimizers * 6.3 Prediction * 7 Conclusion Introduction In Martin (2022b), henceforth Part II, I developed a new framework for valid and efficient (imprecise-probabilistic) statistical inference that incorporates general incomplete or partial prior specifications. In particular, it simultaneously covers the classical frequentist case of vacuous prior information and the classical Bayesian case of complete prior specification. Like Bayes, it provides a sort of "probabilistic" uncertainty quantification--that is, to each relevant hypothesis a data-dependent numerical score is assigned that represents its support/plausibility--but, unlike Bayes, this isn't done with probability theory and Bayes's formula. Moreover, the new framework comes equipped with both reliability and coherence-like properties, so it achieves both the classical frequentist and Bayesian objectives. 
The present paper is a follow-up to Part II that addresses the common yet non-trivial situation in which the quantity of interest is just one feature of the full unknown. In other words, this paper focuses on cases where nuisance parameters are present and need to be eliminated for valid and efficient _marginal_ inference. What's unique about this framework overall is that inference is _imprecise-probabilistic_ (actually, _possibilistic_--more on this below) in the sense that, in light of data and other relevant information, uncertainty about the unknowns is quantified in terms of an imprecise probability, or a lower and upper probability pair. This "imprecision" isn't a shortcoming of the approach, or an undesirable feature that warrants an apology, it's necessary for the kinds of reliability that statisticians expect their methods to satisfy. A detailed justification of this claim is given in Martin (2019, 2021), but here let me make a couple high-level points. When we teach the classical frequentist methods that are reliable by definition, we're careful not to express these in terms of probability statements about the unknown parameter. For example, we emphasize that * the p-value is not the conditional probability, given data, that the true parameter value meets the criteria specified by the null hypothesis, and * the confidence level is not the conditional probability, given data, that the true parameter value is contained in the stated interval. Fisher (1973a) himself commented on this: _[A p-value] is more primitive, or elemental than, and does not justify, any exact probability statement about the proposition_ (ibid, p. 46) _... It is clear, however, that no exact probability statements can be based on [confidence sets]_ (ibid, p. 74). Note that Fisher says no _exact_--or _precise_--_probability statements_ can be made based on these classical procedures, but he leaves open the prospect that they offer certain _inexact_ or _imprecise probability statements_. The proposed imprecise-probabilistic framework is simply embracing Fisher's points and trying to squeeze all that we possibly can out of these classical ideas. Of course, not just any kind of imprecise probability would be compatible with p-values/confidence sets and the properties they're intended to satisfy, but it turns out that the consonant (e.g., Shafer 1976, Ch. 10) or possibilistic (e.g., Dubois 2006; Dubois and Prade 1988) brand of imprecision is both simple and "right" for this task; see Martin (2021) and Section 3.2 of Part II. And as shown in Part II, the examples below, and elsewhere (e.g., Martin and Liu 2013, 2015a,b,c), this imprecision _need not_ come with any sacrifice in efficiency, but some care is needed. The goal of this paper is to flesh out what I mean by "care" when the task is marginal inference. The key point, again, is that a certain kind and degree of imprecision is necessary to guarantee statisticians' desired reliability and, by the false confidence theorem (Balch et al. 2019; Martin 2019), attempts to push the model + data pair beyond its imprecise limits, as the default-prior Bayes and fiducial solutions do, create risks for severe unreliability; further discussion and references on this can be found in Martin (2023b). As mentioned above, the focus here is on valid and efficient marginal inference. Basu (1977) writes "Eliminating nuisance parameters from a model is universally recognized as a major problem of statistics" and yet it remains unsolved. 
That is, reliable elimination of nuisance parameters is a challenging problem that requires significant care; see, e.g., the frequentist impossibility results in Gleser and Hwang (1987) and Dufour (1997) and the unreliable behavior of Bayesian solutions in, e.g., Fraser (2011) and Fraser et al. (2016). A benefit of the general, imprecise-probabilistic framework developed in this series is that it offers a reliable-but-naive strategy (Section 3.1) for eliminating nuisance parameters. The aforementioned solution is "reliable" in the sense that the corresponding marginal inference would be valid (i.e., confidence sets attain the nominal coverage probability), but "naive" in the sense that it's not tailored to any specific feature and, therefore, inference would tend to be inefficient for any particular feature. So the goal here is to avoid sacrificing on validity or efficiency--I want to tailor the IM solution to a particular feature of interest so that the corresponding inference is both valid and efficient. An early IM solution to the marginal inference problem was put forward in Martin and Liu (2015c), but this was limited by, for one thing, its reliance on an expression of the statistical model in terms of a (simple, relatively easily manipulated) data-generating equation. The new framework developed in Part II is likelihood-driven, rather than data-generating equation-driven, and, therefore, can readily be applied to a wider range of problems. Moreover, being likelihood-driven makes it possible to take advantage of certain structure, i.e., factorizations, in the likelihood function, which can aid in the elimination of nuisance parameters. For example, in certain "ideal" cases (Section 2.3), the likelihood factors in such a way that one term depends only on the interest parameter and the other only on the nuisance parameter. In such cases, as I show in Section 3.2 below, it's relatively easy to construct a valid and efficient marginal IM. The way that I suggest to take advantage of this special structure is via a _profiling_ step, where the likelihood function is maximized over the nuisance parameter with the interest parameter fixed; when the likelihood function factors in an "ideal" way, certain key terms involving the nuisance parameter cancel out in the relative profile likelihood and efficient marginalization can be achieved. This is illustrated in several important, classical examples. Most importantly, the same profiling strategy leads to valid and efficient marginal inference in most (but not all--see Section 3.6) cases that are "less-than-ideal" in one way or another. I show, in Section 3.5 that the profiling-based IM construction leads to valid and efficient marginal inference in two challenging and practically relevant examples, namely, the Behrens-Fisher and gamma mean problems. In fact, to my knowledge, these are the best available solutions that are exactly--not just asymptotically approximately--valid. An important, albeit extreme case of marginal inference is prediction, where the model parameter itself is a nuisance parameter and only a future observation is of interest. Section 4 tackles the case of a parametric statistical model and the goal is valid and efficient inference on/prediction of some feature of future observables. There are at least three different ways prediction can be carried out in this framework, and I consider each of these in turn. 
Interestingly enough, the same profiling strategy described above can be used in this context as well, and tends to produce the most efficient predictive inference. These ideas are illustrated in a number of non-trivial examples, including predicting the largest of \(k\) many future gamma observations. The focus so far has been on problems that come equipped with a parametric model amenable to a likelihood-based analysis. But there are important, even classical problems that don't fit this mold, e.g, inference on the mean of an otherwise unspecified distribution. Towards an IM solution to this problem, but staying relatively close to the theory developed so far, I proceed in Section 5 by treating the model itself as the parameter, forming a sort of "empirical likelihood" as developed by Owen (2001) and others, then applying the same profiling strategy in hopes of eliminating all but the quantity of interest. In this higher-complexity context, computation of the IM's lower and upper probabilities, etc., is more challenging, so I offer some suggestions on how to approximate these (e.g., using bootstrap) and a few illustrations. Finally, in Section 6, I take a different perspective on the non-parametric problem mentioned above, one that avoids both thinking about and constructing/using a likelihood. In parametric models, it's the likelihood function that plays the (very important) role of mathematically linking the observable data and the quantity of interest, but presumably there'd be other more direct ways to make this link in non-parametric cases. Further investigation along these lines is needed, but I consider two relevant problems, namely, inference on risk minimizers and (non-parametric) prediction of future observations. One highlight of this investigation is that I'm able to show how the now-widely-used _conformal prediction_ methodology (e.g., Vovk et al. 2005) can actually be derived by applying some of the key principles in Part II to this broader notion of an inferential model. The paper concludes with a brief summary and a discussion of some open problems and directions for future investigation. ## 2 Background ### Recap of Part II Part II of the series put forward a general IM construction that can accommodate partial prior information about the model parameter \(\Theta\), if any, and returns a necessity-possibility measure pair as output to be used for uncertainty quantification. Here I give a relatively quick recap of this construction and the relevant properties. First a bit of notation. The statistical model is a family of probability distributions for the observable data \(Y\), which I'll write as \(\{\mathsf{P}_{Y|\theta}:\theta\in\mathbb{T}\}\). Note that the subscript indicates which quantity is random/uncertain, with dependence on, in this case, a parameter \(\theta\) being marked by the vertical bar. I'll assume that, for each \(\theta\), \(\mathsf{P}_{Y|\theta}\) admits a density/mass function, denoted by \(y\mapsto p_{Y|\theta}(y)\). Moreover, I'll write upper-case \(\Theta\) for the uncertain value of the model parameter that's to be inferred. Write \(\mathrm{probs}(\mathbb{T})\) for the set of all probability measures defined on the Borel \(\sigma\)-algebra of \(\mathbb{T}\). The approach developed in Part II allows for various kinds of prior information about \(\Theta\) to be incorporated. 
This includes the traditional Bayesian case, where a single prior distribution completely and precisely quantifies the _a priori_ uncertainty about \(\Theta\), as well as cases where the available prior information is incomplete or imprecise to some degree, even vacuous. Mathematically, this can be described in general by a lower and upper probability pair \((\underline{\mathsf{P}}_{\Theta},\overline{\mathsf{P}}_{\Theta})\) that quantifies the available prior information about \(\Theta\). The lower and upper probabilities are related via the duality \[\overline{\mathsf{P}}_{\Theta}(A)=1-\underline{\mathsf{P}}_{\Theta}(A^{c}),\quad A\subseteq\mathbb{T}. \tag{1}\] I'll assume throughout that this _a priori_ assessment is coherent in the sense of, e.g., Walley (1991, Ch. 2.5), Miranda and de Cooman (2014, Sec. 2.2.1), and Troffaes and de Cooman (2014, Def. 4.10). The technical definition of coherence isn't relevant here, but there are a few key consequences that are worth mentioning. * The lower probability \(A\mapsto\underline{\mathsf{P}}_{\Theta}(A)\) is 2-monotone which, in particular, implies that it's also super-additive, i.e., \[\underline{\mathsf{P}}_{\Theta}(A\cup B)\geq\underline{\mathsf{P}}_{\Theta}(A)+\underline{\mathsf{P}}_{\Theta}(B),\quad A\cap B=\varnothing.\] Since \(\underline{\mathsf{P}}_{\Theta}(A\cup A^{c})=1\), it follows from (1) and super-additivity that \[\underline{\mathsf{P}}_{\Theta}(A)\leq\overline{\mathsf{P}}_{\Theta}(A),\quad A\subseteq\mathbb{T}.\] This explains the lower/upper terminology and the under/over-bar notation. * De Finetti's school treats probabilities as fully subjective, and their real-world meaning and interpretation are teased out through a sort of game where you and I are able to buy and sell gambles to one another. In the present imprecise case, this is roughly as follows: \(\$\underline{\mathsf{P}}_{\Theta}(A)\) is the most that I'd be willing to pay you for a gamble that pays me \(\$1(\Theta\in A)\) and, similarly, \(\$\overline{\mathsf{P}}_{\Theta}(A)\) is the least that I'd be willing to accept from you in exchange for the gamble that pays you \(\$1(\Theta\in A)\). That is, \(\underline{\mathsf{P}}_{\Theta}\) and \(\overline{\mathsf{P}}_{\Theta}\) bound my buying and selling prices, respectively. If the imprecise probability that drives my pricing scheme is coherent, then I cannot be made a sure loser, i.e., there is no finite sequence of transactions for which my net winnings are sure to be negative. This no-sure-loss property is relatively weak, but if I fail to avoid sure-loss, then that's a clear sign that my probability assessments are flawed. * The imprecise prior assessment is equivalent to a set of precise probability assessments.
That is, the upper prior probability \(\overline{\mathsf{P}}_{\Theta}\) determines a (closed and convex) set of compatible probabilities, called a _credal set_, given by \[\mathscr{C}(\overline{\mathsf{P}}_{\Theta})=\{\mathsf{P}_{\Theta}\in\text{ probs}(\mathbb{T}):\mathsf{P}_{\Theta}(\cdot)\leq\overline{\mathsf{P}}_{\Theta}(\cdot)\},\] and that set, in turn, determines the lower and upper probabilities as its corresponding lower and upper envelopes: \[\underline{\mathsf{P}}_{\Theta}(A)=\inf_{\mathsf{P}_{\Theta}\in\mathscr{C}( \overline{\mathsf{P}}_{\Theta})}\mathsf{P}_{\Theta}(A)\quad\text{and}\quad \overline{\mathsf{P}}_{\Theta}(A)=\sup_{\mathsf{P}_{\Theta}\in\mathscr{C}( \overline{\mathsf{P}}_{\Theta})}\mathsf{P}_{\Theta}(A),\quad A\subseteq \mathbb{T}.\] These same properties hold for all the coherent imprecise probabilities discussed below, there's nothing mathematically special about the imprecise _prior_ probabilities. The most common situation in the statistics literature is where no prior information is assumed, i.e., all that can be said _a priori_ is that the "prior probability of \(\Theta\in A\)" is between \(0\) and \(1\). This can't be modeled with ordinary probability, but it's easy to handle with imprecise probability: the corresponding lower and upper prior probabilities would be \(\underline{\mathsf{P}}_{\Theta}(A)=0\) for all \(A\neq\mathbb{T}\) and \(\overline{\mathsf{P}}_{\Theta}(A)=1\) for all \(A\neq\varnothing\). It's also easy to see that the credal set \(\mathscr{C}(\overline{\mathsf{P}}_{\Theta})\) corresponds to the set of all probability distributions. This observation offers an interesting take-away message: the classical "no prior" case is more accurately described as "every prior" in the sense that the lack of prior information available actually means that one can't rule out any prior distributions; see Part I (Martin 2022a). This so-called vacuous prior case will be my primary focus in this paper, mostly for the sake of comparison with existing solutions in the key examples. The partial prior and the statistical model together determine an imprecise joint distribution \((\underline{\mathsf{P}}_{Y;\Theta},\overline{\mathsf{P}}_{Y,\Theta})\) for the pair \((Y,\Theta)\). The upper joint distribution \(\overline{\mathsf{P}}_{Y,\Theta}\) is \[\overline{\mathsf{P}}_{Y,\Theta}(Y\in B,\,\Theta\in A)=\sup_{\mathsf{P}_{ \theta}\in\mathscr{C}(\overline{\mathsf{P}}_{\Theta})}\int_{A}\mathsf{P}_{Y| \theta}(B)\,\mathsf{P}_{\Theta}(d\theta),\quad A\subseteq\mathbb{T},\quad B \subseteq\mathbb{Y}.\] The right-hand side above is a Choquet integral (e.g., Troflaes and de Cooman 2014, App. C)--which is familiar in certain statistical contexts (e.g., Huber 1973)--and there may be simplified expressions depending on the mathematical form of the partial prior; see Equation (3) below and Section 6.1 in Part II. In any case, the upper joint distribution, which depends on _exactly_ what the data analyst knows or is willing to assume about the application at hand, is what drives the construction of an IM for quantification of uncertainty about \(\Theta\) given the observed \(Y=y\). 
Following some lengthy justification in terms of what I called "outer consonant approximations," I arrived at the following IM construction, given \(Y=y\): first, define the (plausibility) contour function \[\pi_{y}(\theta)=\overline{\mathsf{P}}_{Y,\Theta}\{R_{q}(Y,\Theta)\leq R_{q}(y,\theta)\},\quad\theta\in\mathbb{T}, \tag{2}\] where \(R_{q}\) is a sort of normalized joint density \[R_{q}(y,\theta)=\frac{p_{Y|\theta}(y)\,q_{\Theta}(\theta)}{\sup_{\vartheta\in\mathbb{T}}\{p_{Y|\vartheta}(y)\,q_{\Theta}(\vartheta)\}},\quad\theta\in\mathbb{T},\] with \(q_{\Theta}(\theta):=\overline{\mathsf{P}}_{\Theta}(\{\theta\})\), a relevant summary of the partial prior. If \(q_{\Theta}\) satisfies \(\sup_{\theta}q_{\Theta}(\theta)=1\), which is within the user's control, then the right-hand side of (2) can be evaluated as \[\pi_{y}(\theta)=\int_{0}^{1}\sup_{\vartheta:q_{\Theta}(\vartheta)>s}\mathsf{P}_{Y|\vartheta}\{R_{q}(Y,\vartheta)\leq R_{q}(y,\theta)\}\,ds,\quad\theta\in\mathbb{T}, \tag{3}\] with, for example, the inner \(\mathsf{P}_{Y|\vartheta}\)-probability evaluated via Monte Carlo. Since the contour (2) clearly satisfies \(\sup_{\theta}\pi_{y}(\theta)=1\) for each \(y\), this contour can be used to directly define the IM's upper probability via consonance \[\overline{\Pi}_{y}(A)=\sup_{\theta\in A}\pi_{y}(\theta),\quad A\subseteq\mathbb{T},\] and the corresponding lower probability via the general duality (1). This is a generalization of the suggestion in Martin (2015, 2018), and a detailed justification for this is presented in Part II. The IM output is a coherent imprecise probability, so those properties described above for \((\underline{\mathsf{P}}_{\Theta},\overline{\mathsf{P}}_{\Theta})\) also hold for \((\underline{\Pi}_{y},\overline{\Pi}_{y})\). Moreover, the \(\overline{\Pi}_{y}\) term in the IM's output satisfies the properties of a _possibility measure_, so I will often refer to this as a _possibilistic IM_ and inference drawn from it _possibilistic inference_. While it might not look it at first glance, this solution is actually quite straightforward: the data and model are combined with available prior information via the rule (2). The key take-away, however, is that the IM output \((\underline{\Pi}_{y},\overline{\Pi}_{y})\)--which depends on the data \(y\), the posited model, and the available prior information--is special because it's completely determined by the contour function (2). Indeed, like how a Bayesian's posterior density determines everything, the contour function (2) determines the IM solution; the only difference is that I optimize the contour function whereas the Bayesian integrates the density function. See Section 3.2 of Part II for further details concerning this special (consonance) structure. Connections can also be made, in the vacuous prior case, between the IM output's credal set \(\mathscr{C}(\overline{\Pi}_{y})\) and Fisher's fiducial solution (e.g., Martin 2023a). The value in/quality of the IM solution lies in the properties that it satisfies. One property is related to, but more demanding than, the coherence property described above. This concerns the act of updating probabilities/prices based on new information, in our case, the data \(y\). The idea is that my probabilities/prices should be such that you can't make me a sure-loser by leveraging some inadequacy in how I update my prior assessments.
As I explain in Section 3.3 of Part I and Section 5.2.2 of Part II, and won't repeat in details here, the IM solution described above comes equipped with protection against this type of updating sure-loss as well. Some believe that, by de Finetti's theory, only Bayesian solutions are coherent, but the above result largely debunks this folklore. More directly relevant to the discussion in this paper are the IM solution's statistical properties. One basic property, called (strong) _validity_ (Definition 3 in Part I) has lots of practically relevant consequences. **Theorem 1** (Part II).: _The IM with contour as in (2) is (strongly) valid in the sense that_ \[\overline{\mathsf{P}}_{Y,\Theta}\{\pi_{Y}(\Theta)\leq\alpha\}\leq\alpha,\quad \alpha\in[0,1]. \tag{4}\] This closely resembles the familiar property satisfied by p-values in the context of statistical significance testing, but generally is different. In the case of vacuous prior information, the "prior" admits \(q_{\Theta}(\theta)\equiv 1\) and validity boils down to \[\sup_{\theta\in\mathbb{T}}\mathsf{P}_{Y|\theta}\{\pi_{Y}(\theta)\leq\alpha\} \leq\alpha,\quad\alpha\in[0,1].\] This looks even closer to the stochastically-no-smaller-than-uniform property satisfied by p-values. The key point is that validity ensures the IM output is suitably calibrated, so that inferences based on the magnitudes of the IM's lower and upper probabilities are reliable. As a consequence of this kind of high-level reliability, one can establish (Corollary 1 in Part II) more mathematically specific results for IM-driven statistical procedures. In particular, the set estimator \[C_{\alpha}(Y)=\{\theta\in\mathbb{T}:\pi_{Y}(\theta)>\alpha\},\quad\alpha\in[0,1],\] which can be interpreted as "the set of all sufficiently plausible values" of \(\Theta\) satisfies \[\overline{\mathsf{P}}_{Y,\Theta}\{C_{\alpha}(Y)\not\ni\Theta\}\leq\alpha.\] That is, the set estimator \(C_{\alpha}\) is a \(100(1-\alpha)\%\) confidence set, but in a partial prior-dependent sense through the evaluation via \(\overline{\mathsf{P}}_{Y,\Theta}\). In the vacuous prior case, this reduces to the usual coverage probability guarantees: \[\sup_{\theta\in\mathbb{T}}\mathsf{P}_{Y|\theta}\{C_{\alpha}(Y)\not\ni\theta \}\leq\alpha.\] Take-away: this IM construction accommodates very general forms of partial prior information in a way that's coherent in various senses, and does so without sacrificing on the reliability properties that are essential to the logic of scientific inference. ### Interest and nuisance parameters It's rare that the quantity of interest exactly corresponds to the unknown parameters of the posited statistical model for the observable data \(Y\). A more realistic situation is one where the quantity of interest is some functional or feature of the full model parameter. That is, if the model is \(\{\mathsf{P}_{\theta}:\theta\in\mathbb{T}\}\), with \(\Theta\) the uncertain value, then interest would often be in one or more features \(\Phi=f(\Theta)\) of \(\Theta\), where \(f:\mathbb{T}\to\mathbb{F}\) is a known mapping. One of the most common example of this is where the data are assumed to be normally distributed, where both the mean and variance are uncertain, but the goal is inference on the mean only. 
In such cases, it's often possible to decompose the full parameter \(\Theta\) as a pair \((\Phi,\Lambda)\), where the quantity of interest \(\Phi\) is the _interest parameter_, taking values in \(\mathbb{F}\), and \(\Lambda\) is the _nuisance parameter_, taking values in \(\mathbb{L}\). The residual feature \(\Lambda\) is only relevant for reconstructing the full parameter \(\Theta\) from \(\Phi\). Details of the IM construction in this setting will be relevant for the construction of IMs in prediction problems in Section 4 and in modern non- and semi-parametric inference problems in Section 5 below. It's important to emphasize that, despite what the notation suggests, \(\Phi\) need not be a sub-vector of the vector model parameter \(\Theta\). In general, the notation \(\Theta=(\Phi,\Lambda)\) is meant to indicate that the model parameter \(\Theta\) determines and is determined by the pair \((\Phi,\Lambda)\). For example, if \(\Theta\) is a vector and \(\Phi=\|\Theta\|\) is its length, then \(\Lambda=\Theta/\|\Theta\|\) is the unit vector pointing in the direction of \(\Theta\). All I'm assuming is that \(\Theta\) and \((\Phi,\Lambda)\) are in one-to-one correspondence. This is important because it'll often be the case that the quantity of interest \(\Phi\) isn't specific to any particular statistical model, e.g., \(\Phi\) might be a quantile or the coefficients that determine a linear conditional quantile function. So, if I impose a statistical model that's parametrized by \(\Theta\), then my quantity of interest becomes a general feature of the model parameter \(\Theta\), not necessarily a sub-component thereof. Furthermore, this structure also suggests that genuine partial prior information about \(\Phi\) would often be available in applications, whereas prior information about the nuisance parameter \(\Lambda\) would be vacuous. This kind of "partial prior factorization" will be useful in what follows. ### Classical marginalization There are a number of ways to carry out marginalization, depending on the statistical paradigm one is working in. One of the selling points of the Bayesian paradigm is that marginalization is at least conceptually straightforward--it's just an application of probability theory. I'll explain in the next subsection that there's a counterpart to this for possibilistic IMs, that's similarly straightforward, but it tends to be inefficient; the purpose of this paper is to explain how to do this more efficiently. Here I'll present some classical ideas about how, in certain cases, the likelihood function factors in a way that makes elimination of the nuisance parameters fairly convenient. My presentation will be based largely on the survey presented in Basu (1977, 1978), which is based on Neyman (1935), Olshevsky (1940), Fraser (1956), Sandved (1966), and Barndorff-Nielsen (1973). Certain models and interest-nuisance parameter decompositions allow for a convenient factorization of the likelihood function that suggests a strategy for eliminating the nuisance parameter. Recall that the likelihood function for the full parameter \(\Theta=(\Phi,\Lambda)\), given \(Y=y\), is \(\theta\mapsto p_{Y|\theta}(y)\), which I'll write as \((\phi,\lambda)\mapsto p_{Y|\phi,\lambda}(y)\) to emphasize the interest-nuisance parameter decomposition. In what follows, I'll slightly abuse notation by using "\(p\)" to represent all the (marginal and conditional) densities. 
**Ideal factorization.**: _Complete separation of \(\phi\) and \(\lambda\)._ The idea here is that the likelihood factors as a function depending on \(\phi\) (and data) times a function of \(\lambda\) (and data). Royall (1997, Ch. 7) refers to this as parameter orthogonality; see, also, Anscombe (1964). There are a number of different ways in which parameter orthogonality might manifest, and below I describe a few. Let \((U,V)\) denote a generic partition of the data \(Y\), so that \(y\) is equivalent to the pair \(\{U(y),V(y)\}\), and consider the following factorizations: \[p_{Y|\phi,\lambda}(y) =p_{U|\phi}(u)\,p_{V|u,\lambda}(v)\] \[p_{Y|\phi,\lambda}(y) =p_{U|\lambda}(u)\,p_{V|u,\phi}(v),\quad(u,v)=\{U(y),V(y)\}.\] These represent factorizations of the joint distribution of \(Y\) in terms of marginal and conditional distributions of the features \(U(Y)\) and \(V(Y)\). What differentiates the two is whether the interest parameter \(\phi\) goes with the marginal or the conditional, and I'll consider both cases below in turn. As Basu (1977) explains, the first case above is one where \(U\) is P-sufficient for \(\Phi\)--"P" for "partial"--and is S-ancillary for \(\Lambda\)--"\(S\)" for "Sandved." The point is that \(U=U(Y)\) is exhaustive concerning \(\Phi\) since the conditional distribution of \(V\), given \(U=u\), doesn't depend on \(\phi\), which aligns closely with the classical definition of sufficiency. Similarly, the marginal distribution of \(U\) doesn't depend on \(\phi\), so it's ancillary in a certain sense. In this case, if inference on \(\Phi\) is the goal, then one can safely ignore the \(\lambda\)-dependent term and work with the marginal likelihood, \(\phi\mapsto p_{U|\phi}(u)\). In the second case above, \(U\) is S-ancillary for \(\Phi\) and P-sufficient for \(\Lambda\), so one can safely ignore the \(\lambda\)-dependent term and work with the conditional likelihood, \(\phi\mapsto p_{V|u,\phi}(v)\). **Less-than-ideal factorization.**: _Incomplete separation of \(\phi\) and \(\lambda\)._ Here, consider the factorizations \[p_{Y|\phi,\lambda}(y) =p_{U|\phi}(u)\,p_{V|u,\phi,\lambda}(v)\] \[p_{Y|\phi,\lambda}(y) =p_{U|\phi,\lambda}(u)\,p_{V|u,\phi}(v),\quad(u,v)=\{U(y),V(y)\}.\] Note the incomplete separation: unlike above, here there is no factorization into a function of \(\phi\) (and data) times a function of \(\lambda\) (and data). There is a partial split, however. In the first case, \(U\) is what Basu would call \(\Phi\)-oriented whereas, in the second case, Basu would say that \(U\) is specific-sufficient for \(\Phi\), i.e., that \(U\) is sufficient for \(\Phi\) if \(\Lambda=\lambda\) was taken as known. Like above, if inference on \(\Phi\) is the goal, then one could choose to work with the marginal or conditional likelihood in the two cases, respectively. But this is not an obvious step like above because, here, ignoring the other factor implies some loss of information about \(\Phi\) There is, of course, no guarantee that every problem would fit into one of these two categories. Fortunately, the relatively simple strategy suggested by the ideal factorization works quite well--in the sense of improving efficiency in marginal inference--even outside the ideal factorization case. A notion that will prove to be quite useful in what follows is _profiling_. 
If \(\theta\mapsto p_{\theta}(y)=p_{Y|\phi,\lambda}(y)\) is the likelihood function for the pair \(\theta=(\phi,\lambda)\) based on data \(Y=y\), then the _profile likelihood function_ for the interest parameter \(\phi\) is determined by maximizing over the nuisance parameter \(\lambda\) for fixed \(\phi\); that is, the profile likelihood is \[\phi\mapsto\sup_{\lambda\in\mathbb{L}}p_{Y|\phi,\lambda}(y),\quad\phi\in\mathbb{F}.\] Of course, the profile is not a genuine likelihood in the sense that it doesn't correspond to a density in \(y\) that's being treated as a function of \(\phi\). But it does capture the property that a likelihood function is supposed to have, namely, that it provides a meaningful ranking of the \(\phi\) values in terms of how well they explain the data \(y\); it does so in a very optimistic way, i.e., assigning to \(\phi\) the rank corresponding to \((\phi,\hat{\lambda}_{y}(\phi))\) with \(\hat{\lambda}_{y}(\phi)\) the "best" companion to \(\phi\) for the given \(y\), the so-called conditional maximum likelihood estimator. Below I'll show that the profile likelihood is a powerful tool to help guide efficient marginal inference on \(\Phi\) in a wide range of problems. Note, however, that despite the wide range of problems in which profiling will lead to efficient marginal inference, there are cases in which profiling can be quite inefficient. Fortunately, those problematic cases have a feature in common that we can easily spot and then modify our approach accordingly. ## 3 Marginal possibilistic IMs ### 3.1 Naive solution Since the IM framework returns a data-dependent, imprecise probability \((\underline{\Pi}_{y},\overline{\Pi}_{y})\), one has the option to carry out marginalization simply using the available imprecise probability calculus. This is exactly how the Bayesian framework proceeds: get a posterior distribution for the full parameter \(\Theta\), then marginalize using the ordinary probability calculus/integration. Of course, the imprecision baked into the IM output changes the technical details (of the probability calculus), but not the intuition. In the imprecise probability literature, the notion of taking an uncertainty quantification about some quantity, say \(\Theta\), and mapping it to an uncertainty quantification about a different quantity, say \(\Phi\), is often referred to as _extension_. It's arguably somewhat of a misnomer to refer to marginalization as "extension"--since, in this case, \(\Phi=f(\Theta)\) is actually contained in \(\Theta\)--but, nevertheless, the natural/naive approach to marginalization is to apply the available extension techniques. Since the IM output is a necessity-possibility measure pair, the appropriate extension is based on the so-called _extension principle_ of Zadeh (1975, 1978); see, also, Hose (2022, Sec. 3.2.3). In particular, the extension principle applied to the present marginalization task produces a marginal possibilistic IM that's determined by the contour function \[\pi_{y}^{f}(\phi)=\sup_{\theta:f(\theta)=\phi}\pi_{y}(\theta),\quad\phi\in\mathbb{F}. \tag{5}\] Then the corresponding upper probability \(\overline{\Pi}_{y}^{f}\) for quantifying uncertainty about \(\Phi\) is \[\overline{\Pi}_{y}^{f}(B)=\sup_{\phi\in B}\pi_{y}^{f}(\phi),\quad B\subseteq\mathbb{F}.\] The lower probability is defined via (1).
To see that this is completely consistent with the possibilistic IM for \(\Theta\), we can elaborate the right-hand side above as \[\overline{\Pi}_{y}^{f}(B)=\sup_{\phi\in B}\pi_{y}^{f}(\phi)=\sup_{\theta:f( \theta)\in B}\pi_{y}(\theta)=\overline{\Pi}_{y}\{f^{-1}(B)\},\quad B\subseteq \mathbb{F}.\] That is, the marginal IM for \(\Phi\) is obtained by suitably mapping the original IM for \(\Theta\) via the function \(f\) almost exactly as in the familiar probability calculus: the key point is \[\phi\in B\iff f(\theta)\in B\iff\theta\in f^{-1}(B).\] So then the (imprecise) probability of the left-most assertion should equal the (imprecise) probability of the right-most assertion. The only difference here compared to the more familiar probability calculus is that the appropriate operation is optimization of the possibility contour rather than integration of the probability density. The possibility calculus is particularly well-suited for preserving the original IM's statistical properties through the marginalization process. Next is a result that demonstrates the above-defined marginal IM for \(\Phi=f(\Theta)\) remains (strongly) valid. Consequently, the marginal set estimator \[C_{\alpha}^{f}(Y)=\{\phi:\pi_{Y}^{f}(\phi)>\alpha\},\quad\alpha\in[0,1],\] is a nominal \(100(1-\alpha)\%\) confidence set in the sense described above. **Corollary 1**.: _The marginal IM for \(\Phi=f(\Theta)\) derived from the IM for \(\Theta\) via the extension principle is (strongly) valid in the sense that_ \[\overline{\mathsf{P}}_{Y,\Theta}\big{\{}\pi_{Y}^{f}\big{(}f(\Theta)\big{)} \leq\alpha\big{\}}\leq\alpha,\quad\alpha\in[0,1].\] Proof.: Follows immediately from Theorem 1 and the fact that \(\pi_{Y}^{f}(f(\Theta))\leq\pi_{Y}(\Theta)\), which follows from the definition of \(\pi_{y}^{f}\) as a supremum in (5). It may help to consider the special case of vacuous prior information about \(\Theta\), where the above result can be compared to more familiar results in the (non-Bayesian) statistics literature. Suppose, I'm in possession of a \(100(1-\alpha)\%\) confidence set for \(\Theta\), say \(C_{\alpha}(Y)\), and I want a corresponding confidence set for \(\Phi=f(\Theta)\)--how should I proceed? Naturally, I'd just map \(C_{\alpha}(Y)\) to a subset of \(\mathbb{F}\) via the mapping \(f\), i.e., \[C_{\alpha}^{f}(Y)=f\{C_{\alpha}(Y)\}=\{\phi:\phi=f(\theta)\text{ for some }\theta\in C_{\alpha}(Y)\}.\] In light of the duality between confidence sets and tests of significance, there exists a p-value function, say, \(\theta\mapsto\varpi_{y}(\theta)\) such that \[C_{\alpha}(Y)=\{\theta:\varpi_{Y}(\theta)>\alpha\}\] and, therefore, \[C_{\alpha}^{f}(Y) =\{\phi:\phi=f(\theta)\text{ for some }\theta\text{ with }\varpi_{Y}(\theta)>\alpha\}\] \[=\Big{\{}\phi:\sup_{\theta:f(\theta)=\phi}\varpi_{Y}(\theta)> \alpha\Big{\}}.\] So, the notion of marginalization via optimization is completely natural and done by statisticians without second thought. The reason is that it perfectly aligns with the preservation of desirable statistical properties, such as coverage probability guarantees. If a valid marginal IM for \(\Phi=f(\Theta)\) is available for any \(f\), then what's left to do? Isn't the problem of eliminating nuisance parameters settled? The only concern with the above (naive) IM solution is that it generally will fall short in the sense of efficiency compared to what's possible for any specific \(f\). 
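Computationally, the naive marginalization (5) is just an optimization over the fiber \(\{\theta:f(\theta)=\phi\}\). The following is a minimal sketch of that computation on a grid, in the common case where \(\Theta=(\Phi,\Lambda)\) and \(\Phi\) is a coordinate of \(\Theta\), so the fiber optimization is a maximum over the nuisance axis; the toy joint contour used here is purely illustrative and is not the one displayed in Figure 1(a).

```python
import numpy as np

# Extension-principle marginalization (5) on a grid, for Theta = (Phi, Lambda):
# joint[i, j] = pi_y(phi_grid[i], lam_grid[j]), values in [0, 1] with max 1.
def marginal_contour(joint):
    return joint.max(axis=1)                    # sup over the nuisance direction

def marginal_upper_prob(phi_grid, joint, B):
    # Upper probability of an assertion B about Phi: sup of the marginal contour over B.
    marg = marginal_contour(joint)
    mask = np.array([B(p) for p in phi_grid])
    return float(marg[mask].max()) if mask.any() else 0.0

# Toy joint contour (illustrative shape only):
phi_grid = np.linspace(-2.0, 2.0, 201)
lam_grid = np.linspace(0.2, 3.0, 141)
P, L = np.meshgrid(phi_grid, lam_grid, indexing="ij")
joint = np.exp(-(P**2 + np.log(L)**2))          # peaks at (0, 1), max value 1
print(marginal_upper_prob(phi_grid, joint, lambda p: p > 1.0))
```

The marginal plausibility region \(C_{\alpha}^{f}(y)\) is then just the set of \(\phi\) values whose marginal contour exceeds \(\alpha\), mirroring the p-value/confidence-set duality noted above.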
To better understand this notion of efficiency, let's consider a simple example involving iid data from a normal model where \(\Theta\) denotes the uncertain mean and standard deviation. If \(n=10\) and the observed sample mean and standard deviation are \(0\) and \(1\), respectively, then the IM's joint contour function for the pair, based on a vacuous prior is displayed in Panel (a) of Figure 1; this is the same plot shown in Figure 11 of Part II. Panel (b) displays the naive marginal IM's contour function (red) for \(\Phi=\text{mean}\) based on the extension principle and it looks as one would expect. It also displays the more efficient marginal IM's contour (black) as described below. Note that the two curves are symmetric around the same point, the sample mean, but the latter vanishes much more rapidly. It's this faster decay that makes the latter marginal IM solution more efficient than the former. The primary reason for this lack of efficiency is as follows: the solution based on the extension principle must produce a valid marginal IM for any choice of feature mapping \(f\), so it necessarily can't be tailored toward efficient marginal inference concerning any specific feature. To me, the benefit of the naive marginal IM solution is that it's simple, i.e., it doesn't require any model-specific considerations, just solving an optimization problem. This can be useful in exploratory situations where the inferential target isn't or hasn't yet been determined, or perhaps if the original IM solution has already been carried out and all I have access to is the resulting contour function. But in light of its inefficiencies, it's necessary to push for more efficient marginal IMs. ### More efficient solutions Before jumping into some new details, it may help to revisit marginalization in the Bayesian context. While it's true that Bayesian marginalization is at least conceptually straightforward--application of the ordinary probability calculus--this doesn't necessarily "work" from a statistical point of view. The situations that I have in mind here are those where little or no prior information is available about \(\Theta\), so the Bayesian is required to proceed with the choice of a default prior distribution. In such cases, what justifies the default-prior Bayes solution is that the derived procedures, e.g., credible sets, have good statistical properties. It's known, however, that a suitable default prior for \(\Theta\) might lead to a marginal posterior for \(\Phi=f(\Theta)\) that has poor statistical properties; a classical example of this is Stein (1959), but see, also, Fraser (2011). To prevent such cases, the Bayesian is forced to choose a default prior for \(\Theta\) that's designed for the specific choice of \(f\) so that the corresponding marginal posterior for \(\Phi=f(\Theta)\) has the aforementioned desired statistical properties; see, for example, Jeffreys (1946), Tibshirani (1989), Berger and Bernardo (1992), Berger et al. (1999), Berger et al. (2009), Liu et al. (2014), and others. The point is that _reliable_ marginalization often doesn't follow simply from the framework's marginalization calculus; instead, it requires careful consideration of the specific inferential target. What I present below is in this vein, but far less nebulous and convoluted than defining and constructing statistically-suitable default priors. 
As discussed in Part II, efficiency gains are a consequence of reducing the complexity of the Choquet integral--the upper probability with respect to "\(\overline{\mathsf{P}}_{Y,\Theta}\)"--that defines the IM's contour. My _Principle of Minimum Complexity_ says that, for the sake of efficiency, drop the complexity as much as possible, and this is typically achieved by reducing the dimension of the variables being integrated over. Towards achieving this dimension/complexity reduction, one can leverage special structure in the model's likelihood function, in particular, the various forms of factorization discussed in Section 2.3 above. To start, I'll ignore any available partial prior information and just focus on the model/data; the partial prior will come back into the picture shortly.

Figure 1: Panel (a) shows the joint contour function for \(\Theta\), the mean and standard deviation of the normal model; same as Figure 11 in Part II. Panel (b) shows the marginal contour for \(\Phi=\) mean based on the naive extension principle (red) and the more efficient marginalization strategy (black) described in Section 3.2.

The profile relative likelihood (e.g., Kalbfleisch and Sprott 1970; Maclaren 2018; Murphy and van der Vaart 2000) offers a natural data-dependent plausibility order exclusively on the interest parameter space \(\mathbb{F}\); that is, if \(R(y,\phi)>R(y,\phi^{\prime})\), where \[R(y,\phi)=\frac{\sup_{\lambda\in\mathbb{L}}p_{Y|\phi,\lambda}(y)}{\sup_{\varphi\in\mathbb{F},\lambda\in\mathbb{L}}p_{Y|\varphi,\lambda}(y)},\quad\phi\in\mathbb{F},\] then the value \(\phi\) is understood as being "more compatible" with data \(y\) than the value \(\phi^{\prime}\). The use of the relative profile likelihood also aligns with the principles in Part II that justified the likelihood-based IM construction. The key point is that, by removing the direct dependence on \(\lambda\) in the plausibility ordering, an opportunity is created for the \(\Lambda\) dimension in the Choquet integral calculation that defines the IM's contour to collapse, leading to improved efficiency. Removing direct dependence on the nuisance parameters to create an opportunity for dimension reduction is a recurring theme in this paper. This begs the question: how can this opportunity for dimension reduction and/or efficiency gain be realized? When the likelihood function factors as in Section 2.3, the effective dimension drops because terms in the relative likelihood cancel out. First, in the "Ideal factorization" case, suppose the data decomposes as \(y\mapsto\{U(y),V(y)\}\) and \(U\) is P-sufficient for \(\Phi\). Then the relative profile likelihood easily simplifies to \[R(y,\phi)=\frac{p_{U|\phi}(u)}{\sup_{\varphi\in\mathbb{F}}p_{U|\varphi}(u)},\quad\phi\in\mathbb{F},\quad u=U(y),\] and note that the right-hand side depends on data \(y\) only through the value \(u\) of the statistic \(U(y)\); I'll abuse notation and write this as \(R(u,\phi)\). Since \(U(y)\) is lower-dimensional than \(y\) itself, we have effectively achieved a dimension reduction. Moreover, if there's no interest in \(\Lambda\), then there's no utility in fleshing out partial prior information for \(\Lambda\)--it's safe to focus on _a priori_ uncertainty quantification about \(\Phi\).
The necessary function is \(q_{\Phi}(\phi)=\overline{\mathsf{Q}}(\{\phi\}\times\mathbb{L})\), and this can be combined directly with the (simplified) relative profile likelihood to get \[R_{q}(u,\phi)=\frac{R(u,\phi)\,q_{\Phi}(\phi)}{\sup_{\varphi\in\mathbb{F}}\{R (u,\varphi)\,q_{\Phi}(\varphi)\}},\quad\phi\in\mathbb{F},\quad u=U(y).\] The critical observation is that, in the calculation of the IM's contour function, the (Choquet) integration over the \(V\) and \(\Lambda\) dimensions disappears: \[\pi_{y}(\phi) =\mathsf{P}_{Y,\Theta}\{R_{q}(U(Y),\Phi(\Theta))\leq R_{q}(U(y), \phi)\}\] \[=\overline{\mathsf{P}}_{U,\Phi}\{R_{q}(U,\Phi)\leq R(U(y),\phi)\},\quad\phi\in\mathbb{F}.\] Note that the far left-hand side, \(\pi_{y}(\phi)\), depends on \(y\) only through the value \(u\) of \(U(y)\), so, with my usual abuse of notation, I'll denote this instead \(\pi_{U(y)}(\phi)\). The collapsing in the dimension happens because the relative profile likelihood only depends on the random variable \(U(Y)\), and its distribution depends only on the uncertain \(\Phi\), not on \(\Lambda\). More formally, since \(Y=\{U(Y),V(Y)\}\) and \(\Theta=(\Phi,\Lambda)\), \[\pi_{U(y)}(\phi) =\overline{\mathsf{P}}_{Y,\Theta}\{R_{q}(U(Y),\Phi(\Theta))\leq R_ {q}(U(y),\phi)\}\] \[=\overline{\mathsf{P}}_{U,V,\Phi,\Lambda}\{R_{q}(U(Y),\Phi)\leq R _{q}(U(y),\phi)\}\] \[=\sup_{\mathsf{Q}_{\Phi,\Lambda}}\int\mathsf{P}_{U,V|\varphi, \lambda}\{R_{q}(U,\varphi)\leq R_{q}(u,\phi)\}\,\mathsf{Q}_{\Phi,\Lambda}(d \varphi,d\lambda)\] \[=\sup_{\mathsf{Q}_{\Phi}}\int\mathsf{P}_{U|\varphi}\{R_{q}(U, \varphi)\leq R_{q}(u,\phi)\}\,\mathsf{Q}_{\Phi}(d\varphi)\] \[=\overline{\mathsf{P}}_{U,\Phi}\{R_{q}(U,\Phi)\leq R_{q}(u,\phi)\}.\] The suprema above are over all the joint and marginal priors in the respective credal sets. I'll discuss below how, at least in some cases, further dimension reduction is possible. A couple more points deserve note. First, (strong) validity still holds--the reason is that the same collapsing "\(\overline{\mathsf{P}}_{Y,\Theta}\searrow\overline{\mathsf{P}}_{U,\Phi}\)" occurs when (upper) probabilities concerning \((Y,\Theta)\mapsto\pi_{U(Y)}(\Phi)\), so it's as if the problem originated with the marginal model for \(U(Y)\) depending only on the uncertain \(\Phi\) and I constructed the IM solution from there. Second, the above reduction happens automatically without any direct intervention from the data analyst. That is, even if one doesn't recognize that the \((V,\Lambda)\) dimensions can be collapsed, they get collapsed anyway and, consequently, the results obtained end up the same either way. So, the efficiency gain that occurs from recognizing this particular opportunity for dimension reduction is computational, not statistical. Sticking with the "Ideal factorization" case, with a decomposition \(y\mapsto\{U(y),V(y)\}\), now suppose that \(U\) is S-ancillary for \(\Phi\). Then the relative profile likelihood reduces to \[R(y,\phi)=\frac{p_{V|u,\phi}(v)}{\sup_{\varphi\in\mathbb{F}}p_{V|u,\varphi}(v )},\quad\phi\in\mathbb{F},\quad(u,v)=\{U(y),V(y)\},\] and note that the right-hand side (basically) only depends on the data \(y\) through the value \(v\) of the statistic \(V(y)\). I say "basically" because, since the dependence is through a conditional likelihood, there's an opportunity to take the value \(u\) of \(U(y)\) as _fixed_, thereby making the effective dimension that of \(v\). Towards this, let me write the left-hand side above as \(R(v,\phi\mid u)\). 
With the partial prior for \(\Phi\) described by \(q_{\Phi}\), define \[R_{q}(v,\phi\mid u):=\frac{R(v,\phi\mid u)\,q_{\Phi}(\phi)}{\sup_{\varphi\in \mathbb{F}}\{R(v,\varphi\mid u)\,q_{\Phi}(\varphi)\}},\quad\phi\in\mathbb{F}.\] The appearance of the word "ancillary" and the notation I've introduced suggests a strategy wherein the observed value \(u\) of \(U(y)\) is conditioned on: \[\pi_{y}(\phi) =\pi_{v|u}(\phi)\] \[=\overline{\mathsf{P}}_{V,\Phi|u}\{R_{q}(V,\Phi\mid u)\leq R_{q}(v,\phi\mid u)\}\] \[=\sup_{\mathsf{Q}_{\Phi}}\int\mathsf{P}_{V|u,\varphi}\{R_{q}(V, \varphi\mid u)\leq R_{q}(v,\phi\mid u)\}\,\mathsf{Q}_{\Phi}(d\varphi),\quad \phi\in\mathbb{F}. \tag{6}\] Similar to the P-sufficient case above, there is a reduction in dimension because the (Choquet) integration over the \(U\) and \(\Lambda\) spaces collapses. But note that here in the S-ancillary case, the user would have to intervene--to carry out the "condition on \(U=u\)" step manually--it doesn't happen automatically like in the P-sufficient case above. There are benefits to be enjoyed as a result of this careful intervention, however. First, just like in Section 6.1 of Part II, this conditioning preserved validity. Second, there is a computational efficiency gain resulting from the dimension reduction. Finally, since the observed value of \(U(Y)\) often can be interpreted as a sort of "informativeness index," by conditioning on this value, the inference is adaptive in the sense that improved efficiency is achieved if it's warranted by the data in hand. Beyond the ideal factorization confines, it's far less obvious how to proceed. In the less-than-ideal factorization cases described above, and even more generally, the profiling strategy can still be carried out. Consider first the \(\Phi\)-oriented case, where the relative profile likelihood function is \[R_{q}(y,\phi)=\frac{p_{U|\phi}(u)\,\sup_{\lambda\in\mathbb{L}}\{p_{V|u,\phi, \lambda}(v)\,q_{\Phi,\Lambda}(\phi,\lambda)\}}{\sup_{\varphi\in\mathbb{F}}\sup _{\lambda\in\mathbb{L}}\{p_{U|\varphi}(u)\,p_{V|u,\varphi,\lambda}(v)\,q_{ \Phi,\Lambda}(\varphi,\lambda)\}},\quad\phi\in\mathbb{F},\] where \((u,v)\) is the observed value of \(\{U(y),V(y)\}\). Unfortunately, none of the terms cancel, but what really matters is that the right-hand side doesn't depend directly on the nuisance parameter value. So the profiling strategy is still viable, but it's clearly not the only option. In fact, one might have good reason (e.g., Section 3.6 below) to simply ignore the term that involves both \((\phi,\lambda)\), and work with a different kind of plausibility order, based on the relative marginal likelihood: \[R_{q}(y,\phi)=\frac{p_{U|\phi}(u)\,q_{\Phi}(\phi)}{\sup_{\varphi\in\mathbb{F}} p_{U|\varphi}(u)\,q_{\Phi}(\varphi)},\quad\phi\in\mathbb{F}.\] This boils down to ignoring some relevant information about \(\Phi\), but in exchange for added simplicity and perhaps greater efficiency. Indeed, working just with the marginal distribution of the statistic \(U\) completely eliminates \(V\) and \(\Lambda\), significantly simplifying the IM construction. The same points apply to the specific-sufficient case under the less-than-ideal factorization umbrella. 
There, the relative profile likelihood is \[R_{q}(y,\phi)=\frac{p_{V|u,\phi}(v)\,\sup_{\lambda\in\mathbb{L}}\{p_{U|\phi,\lambda}(u)\,q_{\Phi,\Lambda}(\phi,\lambda)\}}{\sup_{\varphi\in\mathbb{F}}\sup_{\lambda\in\mathbb{L}}\{p_{V|u,\varphi}(v)\,p_{U|\varphi,\lambda}(u)\,q_{\Phi,\Lambda}(\varphi,\lambda)\}},\quad\phi\in\mathbb{F},\] where \((u,v)\) is the observed value of \(\{U(y),V(y)\}\). Again, there are no simplifications that can be made here, but the profiling strategy can still be applied. But the user might--for good reason--opt to ignore the information about \(\Phi\) in the marginal distribution of \(U\) and use simply the relative conditional likelihood \[R_{q}(y,\phi)=\frac{p_{V|u,\phi}(v)\,q_{\Phi}(\phi)}{\sup_{\varphi\in\mathbb{F}}p_{V|u,\varphi}(v)\,q_{\Phi}(\varphi)},\quad\phi\in\mathbb{F}.\] As above, such a decision offers substantial simplification compared to profiling, along with potential efficiency gains in certain cases. More generally, there will be no obvious factorizations that can be made to the likelihood, neither ideal nor less-than-ideal. One can, of course, still use the profiling strategy described above, and strong validity still holds. In my experience, which I'll share in the examples below, profiling is quite reliable and often leads to an optimal/most efficient solution; but there are cases in which better efficiency can be achieved by working with the relative marginal likelihood instead. So, while my go-to nuisance parameter elimination strategy is profiling, it's important to keep in mind that efficient marginal inference is an extremely challenging problem and one can't expect a single strategy to work best uniformly over all problems. Therefore, I'm open to finding creative alternative--perhaps not likelihood-based--plausibility orderings on a case-by-case basis; such creativity is probably necessary for efficient inference in non-parametric problems (Section 6).

### Further efficiency gains

The above discussion focused on how to collapse the dimensions in the Choquet integration directly related to the nuisance parameter--both in the \(\Lambda\) dimension and in that of the statistic carrying information relevant to \(\Lambda\). Depending on the form of the partial prior information, there may be extra opportunities to reduce the dimension further. This, again, relies on my _Principle of Minimum Complexity_, and I discuss this idea at length in Part II; so I'll only give a brief explanation here. In the fully vacuous prior case, validity is equivalent to \[\sup_{\lambda\in\mathbb{L}}\mathsf{P}_{Y|\phi,\lambda}\{\pi_{Y}(\phi)\leq\alpha\}\leq\alpha,\quad\phi\in\mathbb{F},\quad\alpha\in[0,1].\] So it's enough to take the contour function as \[\pi_{y}(\phi)=\sup_{\lambda\in\mathbb{L}}\mathsf{P}_{Y|\phi,\lambda}\{R(Y,\phi)\leq R(y,\phi)\},\quad\phi\in\mathbb{F}. \tag{7}\] The point is that the value of \(\phi\) is taken as fixed, so there's no (Choquet) integration over \(\phi\) needed. In the ideal factorization cases above, e.g., the P-sufficient case, the plausibility ordering was expressed exclusively in terms of the statistic \(U\) whose sampling distribution doesn't depend on the nuisance parameter. In such a case, the \(\Lambda\) dimension collapses too, so the above display reduces to \[\pi_{y}(\phi)=\mathsf{P}_{U|\phi}\{R(U,\phi)\leq R(U(y),\phi)\},\quad\phi\in\mathbb{F},\] i.e., all that's required is (ordinary) integration over the \(U\) space. Similar extra reduction is possible in the S-ancillary case as well.
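Since (7) is the workhorse formula in all of the vacuous-prior examples below, here's a minimal, generic Monte Carlo sketch of it. The function and argument names are my own, the supremum is approximated by a maximum over a user-supplied grid of nuisance values, and the user must supply the relative profile likelihood and a sampler for the model; the example-specific sketches further below all follow this same pattern.

```python
import numpy as np

def vacuous_contour(phi, y_obs, rel_prof_lik, simulate, lam_grid, M=2000, rng=None):
    """Monte Carlo version of (7):
    pi_y(phi) = sup_lam P_{Y|phi,lam}{ R(Y, phi) <= R(y, phi) }.

    rel_prof_lik(y, phi)    : relative profile likelihood R(y, phi)
    simulate(phi, lam, rng) : draws one data set from P_{Y|phi,lam}
    lam_grid                : grid of nuisance values approximating the sup
    """
    rng = rng or np.random.default_rng()
    r_obs = rel_prof_lik(y_obs, phi)
    best = 0.0
    for lam in lam_grid:
        r_sim = np.array([rel_prof_lik(simulate(phi, lam, rng), phi) for _ in range(M)])
        best = max(best, np.mean(r_sim <= r_obs))
    return best
```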
In the examples that follow, I'll focus primarily on the vacuous prior case and, therefore, I'll use the contour function in (7) or, whenever possible, its no-\(\lambda\) version. At the opposite extreme, when there's a complete (precise) prior distribution for \((\Phi,\Lambda)\), a similarly extreme dimension reduction can be achieved. Roughly, one can collapse the integration over the entire \(Y\)-space, and the IM contour is obtained by (ordinary) integration with respect to the usual conditional distribution of \((\Phi,\Lambda)\), given \(Y=y\). Applications where complete prior information is available are quite rare, so these details are really only of theoretical interest. For the general partial prior case, which is far more common in application, I'm not aware of a one-size-fits-all dimension reduction strategy; whether such a strategy exists and, if so, what it looks like are important open questions. In the examples below where I incorporate partial prior information, I use the basic Choquet integral formula with no extra dimension reduction tricks.

### First examples

**Example 1** (Multinomial).: As a first and relatively simple example, consider a random sample of size \(n\) from a population having three distinct categories; let \(Y=(Y_{1},Y_{2},Y_{3})\) denote the corresponding vector of counts, with \(Y_{k}\) the number of category-\(k\) observations, for \(k=1,2,3\), so that \(Y_{1}+Y_{2}+Y_{3}=n\). Let \(\mathsf{P}_{Y|\theta}\) denote a multinomial model where \(\theta=(\theta_{1},\theta_{2},\theta_{3})\) is such that \(\theta_{k}\geq 0\) is the category-\(k\) probability and \(\theta_{1}+\theta_{2}+\theta_{3}=1\). Write \(\Theta\) for the uncertain value, with components \(\Theta_{k}\), \(k=1,2,3\). Suppose interest is in \(\Phi=\Theta_{1}\). Thanks to the probability vector constraints, \(\Theta\) is equivalent to \((\Phi,\Lambda)\) where \(\Lambda=\Theta_{2}/(1-\Theta_{1})\) is the conditional probability of category 2 given categories 2 or 3. Similarly, \(Y\) is equivalent to the pair \((U,V)\), where \(U=Y_{1}\) and \(V=Y_{2}\). Then it's easy to check that \(U\) is P-sufficient for \(\Phi\), so this example falls under the "ideal factorization" umbrella and the marginal IM construction is simple and leads to efficient inference on \(\Phi\). Indeed, the marginal distribution of \(U\), given \(\Phi=\phi\), is binomial with parameters \(n\) and \(\phi\), and so the marginal IM for \(\Phi\) is, after this reduction, exactly like that given in Example 1 of Part II.

**Example 2** (Two binomial counts).: Let \(Y=(Y_{1},Y_{2})\) denote two independent binomial counts, with \((Y_{1}\mid\Theta_{1}=\theta_{1})\sim\mathsf{Bin}(n_{1},\theta_{1})\) and \((Y_{2}\mid\Theta_{2}=\theta_{2})\sim\mathsf{Bin}(n_{2},\theta_{2})\), where \(n=(n_{1},n_{2})\) is known but \(\Theta=(\Theta_{1},\Theta_{2})\) is unknown. This setup is common in, e.g., clinical trials, where \(Y_{1}\) and \(Y_{2}\) correspond to the number of events observed in the control and treatment groups, respectively. A relevant feature of \(\Theta\) is the log odds ratio \[\Phi=\log\Bigl{(}\frac{\Theta_{2}}{1-\Theta_{2}}\div\frac{\Theta_{1}}{1-\Theta_{1}}\Bigr{)}.\] It's well known that the conditional distribution of \(V=Y_{2}\), given \(U=Y_{1}+Y_{2}\) and \(\Phi=\phi\), has a mass function \[p_{V|u,\phi}(v)\propto\binom{n_{2}}{v}\binom{n_{1}}{u-v}e^{\phi v},\quad v=\max(u-n_{1},0),\ldots,\min(n_{2},u),\] where \(\phi\) is the log odds ratio corresponding to \(\theta\).
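Before continuing with the example, here's a minimal numerical sketch of this conditional distribution and the resulting vacuous-prior conditional contour (6). The counts below are hypothetical (the Normand trial data used later aren't reproduced in this excerpt), and approximating the supremum over \(\phi\) by a grid maximum is my own shortcut.

```python
import numpy as np
from scipy.special import comb

# Hypothetical counts (not the Normand trial data): y1 events out of n1 in the
# control arm, y2 out of n2 in the treatment arm; u = y1 + y2 is conditioned on.
n1, y1, n2, y2 = 20, 4, 22, 9
u, v_obs = y1 + y2, y2
v_all = np.arange(max(u - n1, 0), min(n2, u) + 1)
phi_grid = np.linspace(-8, 8, 801)                 # grid for the sup over phi

def cond_pmf(phi):
    # p_{V|u,phi}(v) over the support (normalized non-central hypergeometric)
    w = comb(n2, v_all) * comb(n1, u - v_all) * np.exp(phi * v_all)
    return w / w.sum()

pmf = np.array([cond_pmf(p) for p in phi_grid])    # rows: phi, columns: v
rel = pmf / pmf.max(axis=0)                        # R(v, phi | u), vacuous prior

def contour(phi):
    # plausibility contour (6): P_{V|u,phi}{ R(V, phi | u) <= R(v_obs, phi | u) }
    i = int(np.argmin(np.abs(phi_grid - phi)))
    r_obs = rel[i, v_all == v_obs][0]
    return pmf[i, rel[i] <= r_obs].sum()

for phi in [-1.0, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0]:
    print(f"phi = {phi:+.2f}   plausibility = {contour(phi):.3f}")
```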
The key observation is that this conditional distribution only depends on \(\phi\), not on any other features of \(\theta\). So this example is one where \(U\) is S-ancillary, and so the marginal IM solution proceeds by conditioning on the observed value \(u\) of \(U=Y_{1}+Y_{2}\). That is, the marginal IM's contour function is \(\pi_{y}(\phi)=\pi_{v|u}(\phi)\) as in (6). For illustration, I consider two mortality data sets presented in Table 1 of Normand (1999), namely, Trials 1 and 6. Two plausibility contours for \(\Phi\) are shown in each of the two panels in Figure 2: one for a vacuous prior and one for a partial prior. The partial prior for \(\Phi\) I'm considering here is the so-called _Markov prior_, like in Example 2 of Part II, that represents the entire collection of precise prior distributions with \(\mathsf{E}|\Phi|\leq 1\); the contour is given by \[q_{\Phi}(\phi)=1\wedge|\phi|^{-1},\quad\phi\in\mathbb{R}.\] The estimated log-odds ratios are 0.83 and 0.99 for Trials 1 and 6, respectively, but both cases are challenging thanks to the number of events being relatively small. Trial 6 is an overall larger study, so the plausibility contour is much more tightly concentrated than that for Trial 1. For Trial 1, the data is quite compatible with the partial prior information so, as expected, there's a non-trivial gain in efficiency when comparing the partial-prior IM to the vacuous-prior IM. For Trial 6, on the other hand, the data and prior are still compatible, but now the data is much more informative, so the difference in efficiency isn't as wide. The same data was analyzed in Hannig and Xie (2012) and Martin (2018), but the results look quite different compared to those presented here in Figure 2. This is because the former reference makes certain adjustments to effectively eliminate the discreteness of the data, while the latter does something very similar to the vacuous-prior IM solution here but is less efficient due to its failure to make full use of the relative conditional likelihood. The reader might be surprised to learn that the often-simple normal model, considered next, is a less-than-ideal factorization case. Fortunately, despite being less-than-ideal, there are still some nice features that make this example rather straightforward.

**Example 3** (Normal).: Let \(Y=(Y_{1},\ldots,Y_{n})\) be an iid sample from a normal distribution with uncertain mean \(\Theta_{1}\) and standard deviation \(\Theta_{2}\). For brevity in this illustration, I'll suppose the prior information about \(\Theta=(\Theta_{1},\Theta_{2})\) is vacuous. To start, suppose the mean \(\Phi=\Theta_{1}\) is the interest parameter and \(\Lambda=\Theta_{2}\) is the nuisance parameter. As hinted at above, this example doesn't even fall under the "less-than-ideal factorization" umbrella. In particular, there is no \(\Phi\)-oriented statistic--e.g., the distribution of the sample mean depends on both \(\Theta_{1}\) and \(\Theta_{2}\)--and, therefore, no P-sufficient statistic. But there's nothing stopping me from proceeding with the profiling strategy described above.
The relative profile likelihood is easy to get in this case, and it's given by \[R(y,\phi)=\Big{(}\frac{\hat{\lambda}_{y}^{2}}{\hat{\lambda}_{y}^{2}(\phi)}\Big{)}^{n/2},\] where \(\hat{\lambda}_{y}^{2}=n^{-1}\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}\) is the maximum likelihood estimator of \(\Lambda^{2}\) and \(\hat{\lambda}_{y}^{2}(\phi)=n^{-1}\sum_{i=1}^{n}(y_{i}-\phi)^{2}\).

Figure 2: Plausibility contours for the log odds ratio in two mortality data sets (Trial 1 and Trial 6) presented in Table 1 of Normand (1999): vacuous prior (black), partial prior (red), and prior contour (red dotted).

Some simple algebra reveals \[R(Y,\phi)\leq R(y,\phi)\iff\frac{\hat{\lambda}_{Y}^{2}}{\hat{\lambda}_{Y}^{2}(\phi)}\leq\frac{\hat{\lambda}_{y}^{2}}{\hat{\lambda}_{y}^{2}(\phi)}\] \[\iff\frac{\hat{\lambda}_{Y}^{2}(\phi)}{\hat{\lambda}_{Y}^{2}}\geq\frac{\hat{\lambda}_{y}^{2}(\phi)}{\hat{\lambda}_{y}^{2}}\] \[\iff\frac{n(\bar{Y}-\phi)^{2}}{\hat{\lambda}_{Y}^{2}}\geq\frac{n(\bar{y}-\phi)^{2}}{\hat{\lambda}_{y}^{2}}.\] Under \(\mathsf{P}_{Y|\phi,\lambda}\), the random variable on the left-hand side is a pivot--its distribution is free of both \(\phi\) and \(\lambda\)--and, in particular, it's distributed as \(\tfrac{n}{n-1}\) times an \(\mathsf{F}(1,n-1)\) random variable or, equivalently, as \(\tfrac{n}{n-1}\,T^{2}\) with \(T\sim\mathsf{t}(n-1)\). Then the marginal IM contour is given by \[\pi_{y}(\phi)=\mathsf{P}\Big{\{}T^{2}\geq\frac{(n-1)(\bar{y}-\phi)^{2}}{\hat{\lambda}_{y}^{2}}\Big{\}}=1-\Big{|}2F_{n-1}\Big{(}\frac{(n-1)^{1/2}(\bar{y}-\phi)}{\hat{\lambda}_{y}}\Big{)}-1\Big{|},\quad\phi\in\mathbb{R},\] where \(F_{n-1}\) is the \(\mathsf{t}(n-1)\) distribution function. A plot of this curve was shown in Figure 1 to demonstrate the efficiency gains that are possible when the IM construction is tailored to the interest parameter. This function is just the usual p-value for the Student-t test, and the marginal IM plausibility intervals for \(\Phi\) are exactly the familiar textbook Student-t confidence intervals. Next, suppose that the standard deviation \(\Phi=\Theta_{2}\) is of interest and the mean \(\Lambda=\Theta_{1}\) is the nuisance parameter. In this case, there exist \(\Phi\)-oriented statistics--including the maximum likelihood estimator--but Basu (1977, p. 280) argues that none is "maximally" \(\Phi\)-oriented. This suggests two possible strategies to construct a marginal IM: one is to work with the marginal likelihood based on the maximum likelihood estimator, which is tied directly to a chi-square distribution, and another is to stick with the general profiling strategy and work out the details. It turns out, however, that the two solutions are almost the same in this case. Write \(\hat{\phi}_{y}^{2}=n^{-1}\sum_{i=1}^{n}(y_{i}-\bar{y})^{2}\) for the maximum likelihood estimator of the variance \(\Phi^{2}\). Then the relative profile likelihood is easily shown to be \[R^{\text{pr}}(y,\phi)\propto\Big{(}\frac{n\hat{\phi}_{y}^{2}}{\phi^{2}}\Big{)}^{n/2}\exp\Bigl{(}-\frac{n}{2}\frac{\hat{\phi}_{y}^{2}}{\phi^{2}}\Bigr{)},\quad\phi>0.\] Clearly, \(R^{\text{pr}}(Y,\phi)\) depends on \((Y,\phi)\) only through \(n\hat{\phi}_{Y}^{2}/\phi^{2}\), which is a pivot (chi-square) when \(Y\) is normal with standard deviation \(\phi\), so computation of the marginal IM contour is straightforward. Similarly, the relative marginal likelihood is easily shown to be \[R^{\text{ma}}(y,\phi)\propto\Big{(}\frac{n\hat{\phi}_{y}^{2}}{\phi^{2}}\Big{)}^{(n-1)/2}\exp\Bigl{(}-\frac{n}{2}\frac{\hat{\phi}_{y}^{2}}{\phi^{2}}\Bigr{)},\quad\phi>0.\] The only difference between the two relative likelihoods is the power on the polynomial term, which is a negligible difference even for moderate \(n\).
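To make that comparison concrete before discussing it, here's a minimal Monte Carlo sketch of both contours for the standard deviation. It uses only the chi-square pivot noted above (so no supremum over the nuisance mean is needed); the summary values \(n=10\) and \(\hat{\phi}_{y}^{2}=1\) mimic the Figure 1 setting, and the seed, Monte Carlo size, and \(\phi\) values are arbitrary choices.

```python
import numpy as np
from scipy.stats import chi2

# Marginal IM contours for the normal sd, Phi, via the profile (power = n/2)
# and marginal (power = (n-1)/2) relative likelihoods above. Both depend on the
# data only through w = n * phihat^2 / phi^2 ~ ChiSq(n-1), whose distribution
# is free of the nuisance mean, so a single batch of chi-square draws suffices.
rng = np.random.default_rng(5)
n, phihat2 = 10, 1.0                     # sample size and ML variance estimate
W = chi2.rvs(df=n - 1, size=50000, random_state=rng)

def log_rel(w, power):
    # log relative likelihood as a function of w, up to a data-free constant
    return power * np.log(w) - w / 2

def contour_sd(phi, power):
    w_obs = n * phihat2 / phi ** 2
    return np.mean(log_rel(W, power) <= log_rel(w_obs, power))

# profile contour peaks at phi = phihat; marginal contour peaks slightly higher
for phi in [0.7, 0.9, 1.0, 1.2, 1.5, 2.0]:
    print(f"phi = {phi:.1f}   profile: {contour_sd(phi, n / 2):.3f}"
          f"   marginal: {contour_sd(phi, (n - 1) / 2):.3f}")
```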
In terms of the corresponding marginal IM contours, this difference mainly affects the location of the peak: for the profile likelihood the peak is at the maximum likelihood estimator and, for the marginal likelihood, the peak is at the sample standard deviation (the square root of the usual divide-by-\(n-1\) sample variance). Figure 3(a) shows the same joint contour for the normal mean and standard deviation--based on \(n=10\)--presented in Figure 1(a) above; then Panel (b) shows three different marginal IM contours for the standard deviation \(\Phi\): one based on the naive marginalization via the extension principle and the two relative likelihood-based strategies above. Note the efficiency gain in the two direct marginal IM constructions compared to that based on naive marginalization post-construction. My preferred solution is that based on the profiling strategy, since the maximum likelihood estimator ought to be the most plausible.

Figure 3: Panel (a) shows the joint contour function for \(\Theta\), the mean and standard deviation of the normal model; same as in Figure 1(a). Panel (b) shows the marginal contour for \(\Phi=\mathrm{sd}\) based on the naive extension principle (green) and the more efficient marginalization profile (black) and marginal (red) strategies.

Of course, the above analysis carries over to many of the commonly used normal models, like in linear regression and analysis of variance, and the suitably constructed marginal IMs will reproduce the standard textbook results. For example, the marginal IM for a subset of the regression coefficients in an ordinary linear model would correspond to the p-value function based on Hotelling's \(T^{2}\) statistic and the F-distribution.

**Example 4** (Fieller-Creasy).: The so-called Fieller-Creasy problem (Creasy 1954; Fieller 1954), in its simplest form, starts with an independent pair of observables, \((Y_{i}\mid\Theta_{i}=\theta_{i})\sim\mathsf{N}(\theta_{i},1)\), for \(i=1,2\). It's not important that the variances are equal (to 1), e.g., the \(Y_{i}\)'s could be averages based on different sample sizes, say; all that's effectively being assumed here is that the variances are known. This is a seemingly trivial problem: inference about an unknown normal mean vector with known variances is one of the few statistical problems that's actually _solved_. What's unique about this example, and what makes it surprisingly challenging, is that interest is in the ratio \(\Phi=\Theta_{1}/\Theta_{2}\). This example's fame--or infamy--originated from the two distinct fiducial solutions (one by Fieller and one by Creasy) which both appeared justified based on Fisher's reasoning; in the end, Fisher sided with Fieller's approach which, by the way, produces confidence intervals that attain the exact nominal coverage. This example is also one of the simplest among those in the class of problems for which there exists no set estimator that has finite length (almost surely) and positive coverage probability (Gleser and Hwang 1987). This implies that the usual strategies for constructing set estimators--such as "estimate \(\pm\) standard error" or using quantiles of a marginal Bayesian or fiducial posterior distribution--that produce almost surely finite-length intervals, simply aren't going to work. For these and perhaps other reasons, the late Sir D. R. Cox listed this as one of his "challenge problems" (Fraser et al. 2018).
More generally, even in relatively simple models, solutions can be quite challenging when the quantity of interest involves a ratio of the model parameters; see, e.g., the gamma mean problem in Example 6 below. As this is just a marginal inference problem within a fairly standard model, it's worth seeing how the proposed IM framework can handle this. If we write \(\Lambda=\Theta_{2}\), then we find that \(\Theta_{1}=\Phi\Lambda\) and we have a complete reparametrization. The log-likelihood function, in this new parametrization, is quite simple: \[\ell_{y}(\phi,\lambda)=-\tfrac{1}{2}(y_{1}-\phi\lambda)^{2}-\tfrac{1}{2}(y_{2}-\lambda)^{2}.\] This isn't an "ideal factorization" case, hence no P-sufficiency or S-ancillarity to guide us; we do find that \(Y_{2}\) is \(\Lambda\)-oriented, but that doesn't help for inference on \(\Phi\). So I'll proceed here with the general recommendation to rely on the relative profile likelihood; for now, let's assume that the prior information available is vacuous. It's straightforward to identify the global maximum likelihood estimators of \(\Phi\) and \(\Lambda\), and the corresponding global maximum value of the likelihood function is a constant in both data and parameters, so it can be ignored. Concerning the profile likelihood, with \(\phi\) fixed, it's not difficult to show that the maximum is attained at \[\hat{\lambda}_{y}(\phi)=\frac{\phi y_{1}+y_{2}}{1+\phi^{2}}.\] Then the log relative profile likelihood (ignoring additive constants) is \[\log R(y,\phi)=-\frac{1}{2}\frac{(y_{1}-\phi y_{2})^{2}}{1+\phi^{2}},\quad\phi\in\mathbb{R}.\] The key observation is that \(\log R(Y,\phi)\) is a pivot when \(\Phi=\phi\), so the marginal IM contour for \(\Phi\) is easy to get. In fact, if \(\Phi=\phi\), then \(-2\log R(Y,\phi)\sim\mathsf{ChiSq}(1)\), so the contour has basically a closed-form expression: \[\pi_{y}(\phi)=1-G_{1}\{-2\log R(y,\phi)\},\quad\phi\in\mathbb{R},\] where \(G_{d}\) is the \(\mathsf{ChiSq}(d)\) distribution function. As in the general case, this determines a full marginal IM that can be used for reliable uncertainty quantification for \(\Phi\). It's also exactly the marginal IM solution produced in Martin and Liu (2015c) which, by the way, returns marginal IM plausibility regions that agree with Fieller's exact confidence intervals. As this is a special kind of problem with some unusual features, it's worth considering a quick illustration. I'll focus here on a relatively "weird" case and, since this is easy to compute, leave experiments with other data sets to the reader. Following Schweder and Hjort (2013), suppose that the observed data is \(y=(1.33,0.33)\). Since \(y_{2}\) is relatively close to \(0\), we ought to be worried about instability affecting our inferences about the ratio \(\Phi\). Indeed, Figure 4 shows a plot of the marginal IM's contour function for \(\Phi\) in this case, and the immediate observation is that, unlike in all the other examples in this paper, _the tails of the contour don't vanish as \(\phi\to\pm\infty\)_. This implies that the marginal IM's plausibility region for \(\Phi\) is unbounded, which is a necessary condition if it's to have non-zero coverage probability. For other cases, where \(y_{2}\) isn't too close to \(0\), the marginal IM contour looks similar to, e.g., the black line in Figure 3(b). That there's a class of examples for which the standard frequentist and Bayesian solutions fail is, to me at least, quite striking.
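The vacuous-prior contour above is essentially closed-form, so it's trivial to compute; here's a minimal sketch using the data from the illustration (the chosen \(\phi\) values are arbitrary), which also makes the non-vanishing tails easy to see.

```python
import numpy as np
from scipy.stats import chi2

# Vacuous-prior marginal IM contour for the ratio Phi = Theta_1 / Theta_2,
# using the exact ChiSq(1) pivot -2 log R(Y, phi) derived above.
y1, y2 = 1.33, 0.33                       # observed data from the illustration

def contour_ratio(phi):
    m2logR = (y1 - phi * y2) ** 2 / (1 + phi ** 2)   # -2 log R(y, phi)
    return 1 - chi2.cdf(m2logR, df=1)

for phi in [-10.0, -1.0, 0.0, y1 / y2, 10.0, 100.0]:
    print(f"phi = {phi:8.2f}   plausibility = {contour_ratio(phi):.3f}")

# As phi -> +/- infinity, -2 log R(y, phi) -> y2 ** 2, so the contour tends to
# 1 - chi2.cdf(y2 ** 2, 1) > 0: the tails don't vanish, consistent with the
# unbounded plausibility regions discussed in the text.
```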
The reader might be wondering why the statistical community has swept this issue under the rug as opposed to confronting it head on. The reason, I think, is that in this case at least it's easy to pinpoint the cause of the problematic behavior and explain it away. Indeed, all the trouble identified by Gleser-Hwang is caused by the fact that there's nothing that prevents \(\Lambda\) from being close to or equal to \(0\), and it's this "divide by (near) \(0\)" that creates a singularity that leads to the non-existence of finite-length confidence sets. So, all one has to do is say "I don't think \(\Lambda\) is close to \(0\)" and they've given themselves license to ignore the aforementioned issues. My claim, however, is that if one genuinely doesn't believe \(\Lambda\) is close to \(0\), then that (partial prior) information should be incorporated into the analysis from the beginning, both for the sake of transparency and for the opportunity to improve efficiency. The problem, of course, is that the mainstream schools of statistical thought have no way to accommodate partial prior information of this sort. Here, however, it's at least conceptually straightforward to incorporate partial prior information if available. For illustration, suppose the partial prior can be described by a possibility contour \[q_{\Phi}(\phi)=1\wedge 5|\phi|^{-1},\quad\phi\in\mathbb{R}.\] This is the same style of prior I used in the log-odds ratio illustration of Example 2, just now it encode the prior belief that "\(\mathsf{E}|\Phi|\leq 5\)," or, more colloquially, "I don't expect \(\Phi\) to be too large." Note that this prior is vacuous about \(\Lambda\), only mildly informative about \(\Phi\). Following the basic Choquet integral formula (3), my partial prior marginal IM contour for \(\Phi\) takes the form \[\pi_{y}(\phi)=\int_{0}^{1}\Bigl{[}\sup_{\lambda}\sup_{\varphi:q_{\Phi}( \varphi)>s}\mathsf{P}_{Y|\varphi,\lambda}\{R_{q}(Y,\varphi)\leq R_{q}(y,\phi) \}\Bigr{]}\,ds,\] where \(R_{q}\) is the \(q_{\Phi}\)-regularized relative profile likelihood and the inside \(\mathsf{P}_{Y|\varphi,\lambda}\) probability is evaluated via Monte Carlo over a grid of \((\varphi,\lambda)\) values. Intuitively, since the prior discounts large values of \(\Phi\), we can expect that the final marginal IM contour has thinner--potentially vanishing--tails. The red line in Figure 4 shows this contour and, indeed, the tails are vanishing and more efficient marginal inference obtains if prior information of the form "I don't expect \(\Phi\) to be large" is incorporated into the analysis at the start. Note, also, that increasing the efficiency in this way doesn't make the IM solution susceptible to the risk identified in Gleser-Hwang. The reason is that the definition of _validity_ also takes the partial prior information into account: the prior can be leveraged to improve efficiency because validity itself is relative to the prior. Unlike Bayes, since no prior is required for the IM solution, I don't run a risk of making unjustified assumptions--I'm free to make those assumptions, and enjoy the efficiency gains, only when I can justify them. ### Two challenging practical examples The examples above are mostly illustrative in nature, shedding light on how the marginal IM construction works. Here I want to consider two practically important and non-trivial examples--namely, the Behrens-Fisher and gamma mean problems--which I see as _challenge examples_ for any framework designed for efficient marginal inference. 
Figure 4: Marginal IM contours for the mean ratio \(\Phi\) in the Fieller–Creasy illustration described in Example 4, based on data \(y=(1.33,0.33)\). Black line is for the vacuous prior case, red line is for the partial prior case.

**Example 5** (Behrens-Fisher).: Fisher (1935, Sec. 3) refers to the following inference problem. Let \(Y\) consist of mutually independent pairs \((Y_{ki})\), where \[Y_{k1},\ldots,Y_{kn_{k}}\;\stackrel{{\mbox{\tiny iid}}}{{\sim}}\;\mathsf{N}(\mu_{k},\sigma_{k}^{2}),\quad k=1,2,\] with \(\theta=(\mu_{1},\mu_{2},\sigma_{1}^{2},\sigma_{2}^{2})\) unknown; the goal is inference on the difference \(\phi=\mu_{2}-\mu_{1}\). Fisher's solution makes reference to a relevant result by Walter Behrens, and hence the problem became known as the _Behrens-Fisher problem_. This example has attracted a lot of attention over the years--and still does--because, unlike many other problems involving inference on parameters of a normal model, this one doesn't admit a pivot that provides an exact solution. The issue, of course, is that the two variances corresponding to the two normal populations are not assumed to have any relationship; if they were equal or proportional, then the difficulties inherent in the Behrens-Fisher problem would disappear. For more details on the history and the various proposed solutions to the Behrens-Fisher problem, see, e.g., Kim and Cohen (1998). Here I'll construct a strongly valid and efficient marginal IM for the unknown \(\Phi\). In this case, the relative profile likelihood \(R(y,\phi)\) doesn't have a closed-form expression, but it's not difficult to evaluate numerically. However, it turns out that the distribution of \(R(Y,\phi)\), as a function of the random variable \(Y\) with \(\phi\) as the true mean difference, only depends on the variance ratio, \(\lambda=\sigma_{1}^{2}/\sigma_{2}^{2}\). So, for a given data set \(y\), it's easy to evaluate (via Monte Carlo) the function \[\phi\mapsto\mathsf{P}_{Y|\phi,\lambda}\{R(Y,\phi)\leq R(y,\phi)\},\quad\text{for any $\lambda$.} \tag{8}\] This suggests a marginal IM for \(\Phi\) with contour function that equals the pointwise maximum of (8) over \(\lambda\), i.e., \[\pi_{y}(\phi)=\sup_{\lambda>0}\mathsf{P}_{Y|\phi,\lambda}\{R(Y,\phi)\leq R(y,\phi)\},\quad\phi\in\mathbb{R}, \tag{9}\] where, numerically, the supremum is replaced by a max over a grid of \(\lambda\) values; in my experiments (see below), I've seen very little variation across the different values of \(\lambda\) on this grid. This marginal IM is strongly valid, which implies that the marginal \(100(1-\alpha)\%\) plausibility interval for \(\Phi\) has guaranteed coverage probability at least \(1-\alpha\) uniformly across sample sizes and nuisance parameter values. Most of the other methods available in the literature can only offer asymptotic coverage guarantees. As an illustration, consider the often-used data in Lehmann (1975, p. 83) on travel times to work via two different routes. This is the same example used by Kim and Cohen (1998) and others. The relevant summary statistics--sample sizes, sample means, and sample standard deviations--are as follows: \[n_{1}=5,\quad\text{mean}(y_{1})=7.580,\quad\text{sd}(y_{1})=2.237,\] \[n_{2}=11,\quad\text{mean}(y_{2})=6.136,\quad\text{sd}(y_{2})=0.073.\] Note the relatively wide discrepancy between the two standard deviations; this would make it difficult to justify treating this using a simpler model formulation where the two normal variances are assumed to be the same.
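Here's a minimal Monte Carlo sketch of (8)–(9) for summary data of this form. Since the profile likelihood depends on the data only through the per-group sample sizes, means, and variances, everything can be done with sufficient statistics; the \(\lambda\) grid, the Monte Carlo size, the simple bounded one-dimensional optimizer used for the profiling step, and my treating the reported standard deviations as the divide-by-\(n\) (maximum likelihood) values are all choices of convenience.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n1, m1, s1 = 5, 7.580, 2.237        # group 1 summary
n2, m2, s2 = 11, 6.136, 0.073       # group 2 summary

def m2logR(phi, stats):
    # -2 log R(y, phi), with (mu1, sigma1^2, sigma2^2) profiled out;
    # stats = (mean1, ML variance1, mean2, ML variance2)
    a1, v1, a2, v2 = stats
    def obj(mu1):  # -2 * profile log-likelihood in mu1, up to constants
        return n1 * np.log(v1 + (a1 - mu1) ** 2) + n2 * np.log(v2 + (a2 - phi - mu1) ** 2)
    lo, hi = min(a1, a2 - phi), max(a1, a2 - phi)   # minimizer lies in between
    prof = minimize_scalar(obj, bounds=(lo - 1e-6, hi + 1e-6), method="bounded").fun
    return prof - (n1 * np.log(v1) + n2 * np.log(v2))   # subtract global max

def contour(phi, lam_grid=(0.01, 0.1, 1.0, 10.0, 100.0), M=1000):
    # (9): sup over the variance ratio lam = sigma1^2 / sigma2^2
    obs = m2logR(phi, (m1, s1 ** 2, m2, s2 ** 2))
    best = 0.0
    for lam in lam_grid:
        sims = np.empty(M)
        for j in range(M):
            g1 = rng.normal(0.0, np.sqrt(lam), n1)   # true mu1 = 0, sigma1^2 = lam
            g2 = rng.normal(phi, 1.0, n2)            # true mu2 = phi, sigma2^2 = 1
            sims[j] = m2logR(phi, (g1.mean(), g1.var(), g2.mean(), g2.var()))
        best = max(best, np.mean(sims >= obs))       # R small <=> -2 log R large
    return best

for phi in [-3.0, -1.5, 0.0, 0.5]:
    print(f"phi = {phi:+.2f}   plausibility ~ {contour(phi):.3f}")
```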
Figure 5(a) plots a couple different things: the gray lines correspond to the functions (8) for 75 different values of \(\lambda\) over the range 0.001 to 100, and the black line is the marginal IM contour (9), the pointwise maximum of the gray curves, with the vertical lines marking the endpoints of the marginal IM's 95% plausibility interval for \(\Phi\). Note that there's very little variation in the gray curves indexed by different values of the nuisance parameter. For comparison, the 95% confidence intervals for \(\Phi\) produced by several different methods are presented in Table 1. The solution by Hsu (1938) and Scheffe (1970) in the top row has coverage guarantees but tends to be conservative; the marginal IM solution presented in Martin and Liu (2015c) gives interval estimates that agree with the Hsu-Scheffe intervals. The methods in the 2nd through 4th rows of the table offer only approximate coverage probability guarantees. The last row is the marginal IM solution presented here, and note that my interval is the shortest of all those presented. It's worth to explore the performance of the proposed marginal IM solution compared to other methods across data sets. For brevity, I'll reproduce one part of the extensive simulation study carried out in Fraser et al. (2009). I've chosen one of the most difficult, unbalanced cases with sample sizes \(n_{1}=2\) and \(n_{2}=20\). In these simulations, the true values are \(\mu_{1}=2\), \(\mu_{2}=0\), \(\sigma_{1}^{2}=1\), and \(\sigma_{2}^{2}=2\); note that the true mean difference is \(\phi=-2\). I ran 10000 simulations and Figure 5(b) plots the (estimated) distribution function of the random variable \(\pi_{Y}(\phi)\), a function of the simulated data. It's clear that this distribution function closely follows the diagonal line, indicating that \(\pi_{Y}(\phi)\) is exactly (or at least approximately) uniformly distributed. Therefore, even in this difficult unbalanced setting, the coverage probability of the marginal IM solution matches the nominal level exactly (up to Monte Carlo error), no sign of conservatism. For comparison, the coverage probability for the methods compared in Fraser et al. (2009) are presented in Table 2. With the exception of Jeffreys's default-prior Bayes solution (which agrees with Fisher's fiducial solution) and the 3rd order accurate solution, all the existing methods fall drastically short of the 90% coverage target. The profile-based marginal IM solution proposed here, on the other hand, hits the target on the nose. **Example 6** (Gamma mean).: Consider the two-parameter gamma model with unknown shape parameter \(\Theta_{1}>0\) and unknown scale parameter \(\Theta_{2}>0\). Example 10 in Part II presents a joint IM for simultaneous inference on the pair \(\Theta=(\Theta_{1},\Theta_{2})\). From this, \begin{table} \begin{tabular}{c c c} \hline Method & Lower limit & Upper limit \\ \hline Hsu (1938), Scheffe (1970) & \(-3.314\) & \(0.427\) \\ Fisher (1935), Jeffreys (1940) & \(-3.308\) & \(0.421\) \\ Welch (1938) & \(-3.293\) & \(0.406\) \\ Welch (1947), Aspin (1948) & \(-3.273\) & \(0.386\) \\ _Profile marginal IM_ & \(-3.106\) & \(0.227\) \\ \hline \end{tabular} \end{table} Table 1: Lower and upper limits of various 95% confidence intervals for the difference of means \(\Phi\) based on the Lehmann’s data. The values in this table (except for the last row) are taken from Kim and Cohen (1998, Table 2). 
Figure 5: Panel (a) shows several nuisance parameter-dependent tentative contours (gray) and the actual marginal IM contour (black) for the mean difference \(\Phi\) in the Behrens–Fisher problem based on Lehmann’s travel time data. Panel (b) shows the estimated distribution function of the random variable \(\pi_{Y}(\phi)\) as a function of simulated data as described in the text, where \(\phi\) is the true value of the mean difference. of course, one can obtain (naive) marginal inference on any feature \(\Phi=f(\Theta)\) of \(\Theta\). The focus here in the present example is on the construction of an efficient marginal IM specifically for inference on the mean \(\Phi=\Theta_{1}\Theta_{2}\), the product. This is a challenging marginal inference problem that has attracted the attention of a number of researchers, including Grice and Bain (1980), Shiue and Bain (1990), Wong (1993), Fraser et al. (1997), and Bhaumik et al. (2009). Here I construct a strongly valid marginal IM for \(\Phi\) using the general machinery described above. Let \(Y=(Y_{1},\ldots,Y_{n})\) denote an iid sample from a gamma distribution with the following parametrization: the shape parameter is \(\Lambda\) and the scale parameter is \(\Phi/\Lambda\), so that the mean of the distribution is \(\Phi\). Unfortunately, there is no closed-form expression for the relative profile likelihood \(R(y,\phi)\), but it can be readily evaluated numerically. The more significant challenge is that the distribution of the relative profile likelihood depends on both the interest and nuisance parameters, so some effort is required to evaluate the corresponding Choquet integral. Indeed, in the case of vacuous prior information, the marginal IM contour for \(\Phi\) is given by \[\pi_{y}(\phi)=\sup_{\lambda>0}\mathsf{P}_{Y|\phi,\lambda}\{R(Y,\phi)\leq R(y, \phi)\},\quad\phi>0,\] and the computational obstacle is evaluating the supremum over \(\lambda\). Since the relative profile likelihood is, by Wilks's theorem, an approximate pivot in this example, one would expect that the right-hand side's dependence on \(\lambda\) is relatively mild, so this can be accurately approximated by maximizing over a relatively coarse grid of \(\lambda\) values. For illustration, consider the real data presented in Example 3 of Fraser et al. (1997), which consists of the survival time (in weeks) for \(n=20\) rats exposed to a certain amount of radiation. Figure 6 displays a plot of the mapping \[\phi\mapsto\mathsf{P}_{Y|\phi,\lambda}\{R(Y,\phi)\leq R(y,\phi)\}, \tag{10}\] for a range of different values of the shape parameter \(\lambda\), along with the corresponding marginal IM's contour based on optimizing over \(\lambda\). There are a few key observations worth making here. First, since gamma is an exponential family model, the maximum likelihood estimator of \(\Phi\) is the sample mean, \(\bar{y}\approx 113.5\), which is where the plausibility contours peak. Second, note that there is effectively no change in the curves (10) as \(\lambda\) varies in the grid \(\{0.1,0.5,1,5,10,50,100\}\), which suggests that simple approximations of the marginal IM contour, e.g., by using (10) with \(\lambda\) fixed at its maximum likelihood estimator, ought to be reasonably accurate. Third, the vertical bars correspond to the 95% confidence intervals based on three other methods as presented in Fraser et al. 
(1997): the red line is based on the first-order normal approximation of the sampling distribution of the maximum likelihood estimator; the green line is based on the more sophisticated method in Shiue and Bain (1990); and the blue line is based on the third-order approximation derived in Fraser et al. (1997).

\begin{table} \begin{tabular}{c c c c c c} \hline Jeffreys (1940) & Ghosh and Kim (2001) & Welch (1947) & 1st order & 3rd order & IM \\ \hline 0.9296 & 0.7873 & 0.8362 & 0.7399 & 0.8617 & 0.9082 \\ \hline \end{tabular} \end{table} Table 2: Coverage probability of various 90% confidence intervals for \(\Phi\) in a very unbalanced version of the Behrens–Fisher problem, with \(n_{1}=2\) and \(n_{2}=20\); all but the last entry are taken from Fraser et al. (2009, Table 1a). Here “1st order” and “3rd order” correspond to the likelihood ratio-based approximations of Fraser et al.

The marginal IM's interval is quite different from those bounded by the red and green lines, which isn't too surprising given that the latter are based on relatively crude approximations. That the marginal IM's interval closely matches that bounded by the blue lines also isn't surprising because the IM solution is precisely that which the third-order method is aiming to approximate. Beyond the confidence interval comparisons, the marginal IM for \(\Phi\) is provably valid and efficient, so it can reliably answer any relevant question concerning the mean \(\Phi\). To dig deeper into the efficiency of the proposed IM solution, I reproduce the simulation study in Fraser et al. (1997, Ex. 2). In particular, I generate 10000 samples of size \(n=10\) from a gamma distribution with shape 2 and mean \(\phi=1\). Panel (b) of Figure 6 shows the estimated distribution function of the random variable \(\pi_{Y}(\phi)\), and it's indistinguishable from uniform, as the general theory indicates. Table 3 shows p-value percentages for a number of different methods available in the literature, including the textbook solution based on the first-order approximate normality of the maximum likelihood estimator, the methods proposed in Shiue and Bain (1990) and Wong (1993), and the two third-order signed likelihood root approximations presented in Fraser et al. (1997). With the exception of the textbook first-order solution, which is terrible, all the methods perform similarly. The marginal IM solution is a bit more conservative than some of the others--see the "\(<0.5\%\)" to "\(<2.5\%\)" bins--which is perhaps to be expected given that it has the exact validity constraint and the sample size is small.

Figure 6: Panel (a): Marginal IM contour for the gamma mean as in Example 6. The gray curves correspond to the function (10) for particular values of the gamma shape parameter \(\lambda\); the black line represents the marginal IM contour for \(\Phi\) based on a point-wise maximum of the gray curves. The colored vertical lines mark confidence intervals based on various other methods, as explained in the text. Panel (b): estimated distribution function of the random variable \(\pi_{Y}(\phi)\) where \(\phi\) is the true value of the mean.

### Words of caution

It was demonstrated above that in both ideal and less-than-ideal factorization cases, the profiling strategy effectively eliminated nuisance parameters and led to valid and efficient marginal IMs for the interest parameter.
As hinted at previously, however, there are examples outside the ideal-factorization cases in which profiling will fail to give an efficient marginal IM solution, so blind application of profiling is dangerous. Here I'll look at two such examples. Fortunately, what makes the profiling strategy fail can be spotted in the problem setup, and other dimension reduction strategies can be used.

\begin{table} \begin{tabular}{c c c c c c c} \hline Method & \(<0.5\%\) & \(<1.25\%\) & \(<2.5\%\) & \(>97.5\%\) & \(>98.75\%\) & \(>99.5\%\) \\ \hline 1st order & 5.00 & 5.73 & 8.53 & 1.52 & 0.74 & 0.35 \\ Shiue and Bain (1990) & 0.28 & 0.70 & 1.19 & 1.20 & 0.67 & 0.30 \\ Wong (1993) & 0.29 & 0.69 & 2.18 & 2.05 & 0.95 & 0.41 \\ 3rd order (a) & 0.37 & 0.90 & 2.30 & 2.41 & 1.13 & 0.47 \\ 3rd order (b) & 0.37 & 0.91 & 2.30 & 2.41 & 1.13 & 0.47 \\ _Profile marginal IM_ & 0.22 & 0.65 & 1.57 & 2.71 & 1.31 & 0.59 \\ \hline \end{tabular} \end{table} Table 3: Results for the gamma mean simulation in Example 6. The table shows the percentage of \(p\)-values in the stated bins (across 10000 simulations) for the various methods, which relate to the coverage probability of the corresponding confidence intervals for the mean \(\Phi\). All the values except for the last row are taken from Table 2 in Fraser et al. (1997). The standard errors across the board are all less than \(0.16\%\).

**Example 7** (Mean vector length).: Consider the classical problem in which \(Y\) is an \(n\)-dimensional normal random vector with unknown mean vector \(\Theta\) and known covariance matrix, say, the identity matrix \(I\). Inference on the mean itself is the same regardless of the dimension, but suppose that the quantity of interest is \(\Phi=f(\Theta)=\|\Theta\|\), the Euclidean length of the mean vector. As mentioned above, the mean vector itself can be reconstructed by introducing the dual nuisance parameter \(\Lambda=\Theta/\|\Theta\|\) that represents the unit vector in the direction of \(\Theta\). This turns out to be a non-trivial problem, as first pointed out by Stein (1956, 1959); it was also recently listed in Fraser et al. (2018) as one of the "challenge problems" put forward by the late Sir D. R. Cox. The likelihood function is relatively straightforward in this case: \[p_{Y|\theta}(y)=\exp\{-\tfrac{1}{2}\|y-\theta\|^{2}\},\quad\theta\in\mathbb{T}=\mathbb{R}^{n}.\] In terms of \((\phi,\lambda)\), the log-likelihood can be re-expressed as \[\log p_{Y|\phi,\lambda}(y)=-\tfrac{1}{2}\|y-\phi\lambda\|^{2}=-\tfrac{1}{2}\|y\|^{2}+(\phi\|y\|)\lambda^{\top}(y/\|y\|)-\tfrac{1}{2}\phi^{2}.\] Note that the likelihood depends on \(y\) only through the pair \(\{U(y),V(y)\}\), where \(U(y)=\|y\|\) and \(V(y)=y/\|y\|\). It's also relatively well-known (e.g., Mardia and Jupp 2000, Sections 3.5.4 and 9.3.2) that the joint distribution can be factored as \[p_{Y|\theta}(y)=p_{U|\phi}(u)\,p_{V|u,\phi,\lambda}(v),\] where \(p_{U|\phi}\) is a non-central chi-square density and \(p_{V|u,\phi,\lambda}\) is a von Mises-Fisher distribution. The specific details of the von Mises-Fisher distribution aren't relevant here; the key point is that the conditional distribution of \(V\), given \(U=u\), depends on both \(\phi\) and \(\lambda\). Therefore, this isn't one of the "ideal factorization" cases discussed above.
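Both of the likelihood reductions weighed next can be computed by Monte Carlo; as a preview, here's a minimal sketch of the contour built from the marginal (non-central chi-square) likelihood of \(U\) alone. The dimension, mean vector, grids, and Monte Carlo size are arbitrary choices for illustration, and the grid-based supremum over \(\phi\) is my own numerical shortcut.

```python
import numpy as np
from scipy.stats import ncx2

# Marginal-likelihood-based contour for Phi = ||theta||, using the fact that
# U^2 = ||Y||^2 ~ ncx2(df = n, nc = Phi^2); the Jacobian relating the densities
# of U and U^2 doesn't involve Phi, so it cancels in the relative likelihood.
rng = np.random.default_rng(3)
n = 5
theta = np.array([3.0, 0.0, 0.0, 0.0, 0.0])        # so the true Phi is 3
y = theta + rng.standard_normal(n)
w_obs = np.sum(y ** 2)                               # observed U^2

phi_grid = np.linspace(1e-6, 8.0, 401)               # grid for the sup over phi

def rel_marg_lik(w, phi):
    # R^ma(u, phi), computed from w = u^2
    num = ncx2.pdf(w, df=n, nc=phi ** 2)
    den = np.max(ncx2.pdf(np.atleast_1d(w)[:, None], df=n, nc=phi_grid ** 2), axis=1)
    return num / den

def contour(phi, M=2000):
    # pi_y(phi) = P_{U|phi}{ R^ma(U, phi) <= R^ma(u, phi) }; no nuisance sup needed
    W = ncx2.rvs(df=n, nc=phi ** 2, size=M, random_state=rng)
    return np.mean(rel_marg_lik(W, phi) <= rel_marg_lik(w_obs, phi))

for phi in [1.0, 2.0, 3.0, 4.0, 5.0]:
    print(f"phi = {phi:.1f}   marginal-likelihood contour ~ {contour(phi):.3f}")
```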
It is, however, a "less-than-ideal factorization," so there's a choice to be made: do we follow the general profile likelihood strategy, which avoids throwing away relevant information in \(V\), or go for the simplicity of the marginal likelihood based on \(U\)? Both IM solutions are valid, so efficiency considerations will tip the scale in one direction or the other. The profile-based IM solution will have a contour that's maximized at the global maximum likelihood estimator \(\hat{\phi}_{y}\) of \(\Phi\). It's well-known that that the maximum likelihood estimator of \(\Theta\) is the data vector \(y\) so, by invariance of maximum likelihood estimators, it immediately follows that \(\hat{\phi}_{y}=u=\|y\|\), the length of the data vector \(y\). Indeed, the profile likelihood function is \[\sup_{\lambda}p_{Y|\phi,\lambda}(y)=\exp\{-\tfrac{1}{2}(\|y\|-\phi)^{2}\}, \quad\phi>0,\] so above claim is easily confirmed without applying the invariance principle. Although it's natural to estimate the mean vector length by the data vector length, this is actually a pretty lousy estimator: it has non-negligible upward bias. Intuitively, the reason why the maximum likelihood estimator of \(\Phi\) has such poor properties is this case is that there are too many nuisance parameters, in fact, \(O(n)\) many. If the peak of the IM contour for \(\Phi\) is attained at a severely biased estimator, then it must be rather wide in order to achieve validity; see Figure 7. This suggests a deficiency in the profile likelihood-based IM solution, so let's consider the marginal likelihood-based solution. Neither the profile nor the marginal likelihood-based IM contour functions have a closed-form expression, but they can readily evaluated numerically via Monte Carlo. To get a feeling of why the marginal likelihood-based solution might be superior, note that the corresponding contour will be maximized the maximum marginal likelihood estimator, i.e., \(\arg\max_{\phi}p_{U|\phi}(u)\). There's no closed-form expression for this either, but a decent approximation is the method of moments estimator: \((U^{2}-n)^{1/2}\). Note the automatic correction for the upward bias of \(U\) mentioned above. Since the peak of the marginal likelihood-based IM contour for \(\Phi\) tends to be closer to the true value, more efficiency is expected compared to the profile likelihood-based solution. To see the above claims in action, I simulated a Gaussian random vector of dimension \(n=5\) with mean vector such that the true value of \(\Phi\) is 3. Note that the marginal likelihood-based IM contour has peak closer to the true value of \(\Phi\) than the profile likelihood-based IM contour, and it's also narrower; note that the 95% plausibility interval based on the former solution is significantly narrower than that based on the latter. **Example 8** (Neyman-Scott).: In the early days of statistical theory, it was generally believed that maximum likelihood estimators were consistent. This famous example put forward in Neyman and Scott (1948) busted this myth by showing that the maximum likelihood estimator could be inconsistent. The problem setup is as follows. Suppose that data \(Y\) consists of independent pairs \(Y_{i}=(Y_{i1},Y_{i2})\), where \[Y_{i1},Y_{i2}\stackrel{{\text{\tiny iid}}}{{\sim}}\mathsf{N}( \mu_{i},\sigma^{2}),\quad i=1,\ldots,n.\] The quantity of interest \(\Phi\) is the uncertain value of the variance \(\sigma^{2}\), and the nuisance parameter \(\Lambda\) is the unknown vector of means \((\mu_{1},\ldots,\mu_{n})\). 
It's a standard exercise in a stat theory course to show that the maximum likelihood estimator is \[\hat{\phi}_{Y}=\frac{1}{4n}\sum_{i=1}^{n}(Y_{i1}-Y_{i2})^{2},\] and that its in-probability limit is \(\frac{1}{2}\Phi\neq\Phi\). Of course, this asymptotic bias is easy to correct for, but that's not the point: this is a sign of some inadequacy in the naive likelihood-based solution. Therefore, we know to proceed with caution here in our IM construction. In particular, like in Example 7 above, a profile likelihood-based solution returns a contour function maximized at the flawed \(\hat{\phi}_{Y}\) above and can be expected to be inefficient, so a marginal likelihood-based approach might be preferred. As demonstrated clearly in Boos and Stefanski (2013, Sec. 2.4), and perhaps elsewhere, the change-of-variables \(y\mapsto\{U(y),V(y)\}\), with \[U_{i}=2^{-1/2}(Y_{i1}-Y_{i2})\quad\text{and}\quad V_{i}=2^{-1/2}(Y_{i1}+Y_{i2}),\quad i=1,\ldots,n,\] has the following properties:

* the marginal distribution of \(U=(U_{1},\ldots,U_{n})\) doesn't depend on \((\mu_{1},\ldots,\mu_{n})\);
* \((U_{1},\ldots,U_{n})\), given \(\Phi=\phi\), are iid \(\mathsf{N}(0,\phi)\);
* and \(U\) and \(V\) are independent.

Since the marginal distribution of \(V\) depends on both \(\Phi\) and \(\Lambda\), we're in the \(\Phi\)-oriented case under the "less-than-ideal factorization" above. Instead of profiling, I suggest taking the marginal likelihood for \(\Phi\) based on \(U\) alone. The maximum marginal likelihood estimator is easily shown to be \(2\hat{\phi}_{Y}\), which is consistent. From here the story is very similar to that in Example 7 above, so I'll skip the details. But the point, again, is that, when there are a growing number of nuisance parameters and the ordinary maximum likelihood estimator is inconsistent, a valid and efficient marginal IM for \(\Phi\) should be constructed based on the marginal likelihood. It's counter-intuitive, at least to me, that efficient marginal inference in these cases requires that some information about \(\Phi\) in \(V\) be intentionally ignored.

Figure 7: Marginal IM contours for the mean vector length \(\Phi\) in Example 7 based on the marginal likelihood (black) and the profile likelihood (red).

## 4 Predictive possibilistic IMs

### Setup

Prediction of future responses is a fundamentally important problem in statistics and machine learning. Here I'll focus primarily on the simple but practical case of prediction with respect to an assumed parametric model. Specifically, let \((Y,Z\mid\Theta=\theta)\sim\mathsf{P}_{Y,Z\mid\theta}\) be the statistical model for the pair \((Y,Z)\in\mathbb{Y}\times\mathbb{Z}\) consisting of an observable \(Y\) and a to-be-predicted \(Z\), depending on an unknown \(\Theta\). Note that \(Y\) and \(Z\) might not have the same form, e.g., \(Y\) and \(Z\) might have different marginal distributions due to dependence on distinct covariate values (hidden in the notation), \(Z\) might be a function, such as a maximum, of several independent \(Y\)-like realizations, or, more generally, \(Y\) might be the sufficient statistic based on a sample of \(Z\)-like random variables. The particular form of the relationship between \(Y\) and \(Z\) isn't important; all that matters is the dependence on a common parameter \(\Theta\). A valid IM construction in this case was presented in Martin and Lingham (2016), but the approach here is more general in several respects.
Partial prior information could be available about \(\Theta\) in the form of an upper probability distribution with contour \(q(\theta)=q_{\Theta}(\theta)\). This problem roughly fits into the setup described in the previous section, where \(\Phi=Z\) is of primary interest and \(\Lambda=\Theta\) is of only secondary importance. From this perspective, it's clear that prediction is just an extreme version of the marginal inference problem where the model parameter itself is a nuisance. Recall that marginalization can be carried out in at least two ways: one based on applying the basic extension principle to the joint IM's output and the other taking the priorities of the analysis into special consideration in the IM construction. For the prediction problem here it turns out that there are even more ways one could imagine proceeding. In particular, I can see three options for constructing a _predictive IM_ for inference on/prediction of \(Z\): 1. First construction an IM for \(\Theta\), given \(Y=y\), as described above, then combine this in an appropriate way with a possibilistic representation of the model that relates \(Z\) to \(\Theta\) and then marginalize to \(Z\) via the extension principle. This is similar to Bayes's rule combines first by multiplying the model density and the posterior density, and then integrates to get the predictive density. 2. Construct a full IM for the pair \((Z,\Theta)\), given \(Y=y\), as described above, and then marginalize to \(Z\) via the extension principle. 3. Use the fact that \(\Theta\) is a nuisance parameter in order to reduce dimension before constructing a marginal IM for \(Z\). These three options are in decreasing order of simplicity and increasing order of efficiency. That is, Option 1 is relatively simple but less efficient, while Option 3 is less simple but more efficient. Option 2 isn't particularly appealing because, usually, either simplicity or efficiency is the priority but this one achieves neither. So I'll only briefly describe Option 2 and focus my attention on Options 1 and 3; see Section 4.2 below. Before describing the details of these options, it'll help to be clear about what the objectives are. Like in the previous sections, my goal here is to construct a pair \((\underline{\Pi}_{y},\overline{\Pi}_{y})\) of lower and upper probabilities on \(\mathbb{Z}\), with a consonance structure so that they're fully determined by a contour function \(\pi_{y}(z)\). Analogous to (4), I'll require that this predictive IM have a _strong prediction validity_ property, i.e., \[\overline{\mathsf{P}}_{Y,Z,\Theta}\{\pi_{Y}(Z)\leq\alpha\}\leq\alpha,\quad \alpha\in[0,1]. \tag{11}\] Since the event "\(\pi_{Y}(Z)\leq\alpha\)" doesn't directly depend on \(\Theta\), the \(\overline{\mathsf{P}}_{Y,Z,\Theta}\)-probability on the left-hand side above boils down to a supremum over all marginal distributions for \((Y,Z)\) induced by precise priors \(\mathsf{Q}\) for \(\Theta\) in the prior credal set \(\mathscr{Q}\). That is, the left-hand side of the above display can be rewritten as \[\sup_{\mathsf{Q}\in\mathscr{Q}}\int_{\mathbb{T}}\mathsf{P}_{Y,Z|\theta}\{\pi_ {Y}(Z)\leq\alpha\}\,\mathsf{Q}(d\theta).\] In the vacuous-prior case, for example, the property in (11) reduces to \[\sup_{\theta}\mathsf{P}_{Y,Z|\theta}\{\pi_{Y}(Z)\leq\alpha\}\leq\alpha,\quad \alpha\in[0,1],\] which is probably a more familiar looking condition to the reader. 
The setup above and the details below focus on the very special--albeit practically relevant--case where prediction is with respect to a specified parametric model. In many applications, however, the goal is to carry out prediction without assuming a particular distributional form for the observables. It turns out that this more general prediction problem can also be handled within the proposed framework, but I'll postpone discussion of this till Section 6 below.

### Three valid IM constructions

#### Option 2

I'll start with a brief explanation of Option 2. This isn't an ideal solution, in my opinion, because it's neither the simplest nor the most efficient. But the construction here will help with the development of Option 3 below, so it's worth briefly mentioning this case. Recall that Option 2 suggests first constructing an IM for the pair \((Z,\Theta)\) and then marginalizing over \(\Theta\) using the extension principle, leaving a marginal IM for \(Z\). Let \(p_{Y,Z|\theta}(y,z)\) denote the joint density/mass function of \((Y,Z)\), given \(\Theta=\theta\). For the first step, following the general framework, the "relative likelihood" function is \[R_{q}(y,z,\theta)=\frac{p_{Y,Z|\theta}(y,z)\,q_{\Theta}(\theta)}{\sup_{x,\vartheta}\{p_{Y,Z|\vartheta}(y,x)\,q_{\Theta}(\vartheta)\}},\quad(y,z,\theta)\in\mathbb{Y}\times\mathbb{Z}\times\mathbb{T}.\] In general, there's no further simplification that can be made to this relative likelihood function. So to proceed with the IM construction for \((Z,\Theta)\), I'd simply plug this formula into the general recipe described above, which leads to an IM contour function \[\pi_{y}(z,\theta)=\overline{\mathsf{P}}_{Y,Z,\Theta}\{R_{q}(Y,Z,\Theta)\leq R_{q}(y,z,\theta)\},\quad(z,\theta)\in\mathbb{Z}\times\mathbb{T}.\] This joint IM for \((Z,\Theta)\) is strongly valid by the general theory in Part II; moreover, the marginal predictive IM for \(Z\), with contour \[\pi_{y}(z)=\sup_{\theta\in\mathbb{T}}\pi_{y}(z,\theta),\quad z\in\mathbb{Z},\] would also satisfy strong prediction validity as described in the previous subsection, as a consequence of the validity-preserving property of the extension principle. I'll have more to say about the prediction validity property for Options 1 and 3 below. In certain special cases, however, some simplifications can be made and dimension can be reduced for the sake of efficiency. These are the vacuous- and complete-prior cases.

* Recall that, with a vacuous prior, i.e., \(q_{\Theta}(\theta)\equiv 1\), there's an opportunity to reduce dimension by _fixing_ \(\theta\) in the relative likelihood. In this case, the IM contour is \[\pi_{y}(z,\theta)=\mathsf{P}_{Y,Z|\theta}\{R(Y,Z,\theta)\leq R(y,z,\theta)\},\quad(z,\theta)\in\mathbb{Z}\times\mathbb{T},\] and efficiency is gained by not integrating/optimizing over \(\theta\).
* With a complete prior, where \(q\) now denotes the prior probability density/mass function, the dimension reduction is achieved by _fixing_ \(y\). That is, the numerator in the expression for the relative likelihood can be rewritten as \[p_{Y,Z|\theta}(y,z)\,q_{\Theta}(\theta)\propto p_{Z|y,\theta}(z)\,q_{\Theta|y}(\theta),\] where "\(\propto\)" means as a function of \((z,\theta)\), and \(q_{\Theta|y}(\theta)\) is the Bayesian posterior density/mass function for \(\Theta\), given \(Y=y\); the proportionality "constant" is the marginal density of \(Y\), with respect to the prior \(q_{\Theta}\), which only depends on \(y\).
That proportionality constant cancels in the ratio that defines the relative likelihood and, after some further simplifications, the IM for \((Z,\Theta)\) has contour function \[\pi_{y}(z,\theta)=\mathsf{P}_{Z,\Theta|y}\{p_{Z|y,\Theta}(Z)\,q_{\Theta|y}(\Theta)\leq p_{Z|y,\theta}(z)\,q_{\Theta|y}(\theta)\},\] where the probability is with respect to the conditional distribution of \((Z,\Theta)\), given \(Y=y\), which has density/mass function \(p_{Z|y,\theta}(z)\,q_{\Theta|y}(\theta)\).

#### Option 3

Next I'll describe Option 3 because it's more in line with the development in Section 3 above than is Option 1. Recall that Option 3 is based on the understanding that only prediction of \(Z\) is relevant and, therefore, \(\Theta\) is a nuisance parameter. Then the marginal inference strategy described in the previous section can be applied, which suggests the relative profile likelihood \[R_{q}(y,z)=\frac{\sup_{\theta}\{p_{Y,Z|\theta}(y,z)\,q_{\Theta}(\theta)\}}{\sup_{x,\theta}\{p_{Y,Z|\theta}(y,x)\,q_{\Theta}(\theta)\}},\quad(y,z)\in\mathbb{Y}\times\mathbb{Z}.\] Of course, if the prior information for \(\Theta\) were vacuous, then \(q_{\Theta}(\theta)\equiv 1\) and that term can be dropped from the above expression; the case of a complete prior will be considered below. In any case, the construction of a _predictive_ IM for \(Z\), given \(Y=y\), now proceeds by defining the contour function \[\pi_{y}(z)=\overline{\mathsf{P}}_{Y,Z,\Theta}\{R_{q}(Y,Z)\leq R_{q}(y,z)\},\quad z\in\mathbb{Z}.\] When the partial prior information is encoded in a possibility measure, the above Choquet integral can be simplified as before, i.e., \[\pi_{y}(z)=\int_{0}^{1}\Bigl[\sup_{\theta:q_{\Theta}(\theta)>s}\mathsf{P}_{Y,Z|\theta}\{R_{q}(Y,Z)\leq R_{q}(y,z)\}\Bigr]\,ds,\quad z\in\mathbb{Z}.\] In the vacuous prior case, with \(q(\theta)\equiv 1\), this simplifies even further: \[\pi_{y}(z)=\sup_{\theta}\mathsf{P}_{Y,Z|\theta}\{R(Y,Z)\leq R(y,z)\},\quad z\in\mathbb{Z}.\] Depending on the structure of the problem, it may happen that \(R(Y,Z)\) is a pivot (see Example 9), in which case the IM computation becomes relatively straightforward. For example, in the vacuous-prior case, if \(R(Y,Z)\) is a pivot, then the supremum in the above display can be dropped and computation of the IM contour function is easy. That prediction validity (11) holds for the IM constructed above is easy to verify. Indeed, \[\overline{\mathsf{P}}_{Y,Z,\Theta}\{\pi_{Y}(Z)\leq\alpha\}=\sup_{\mathsf{Q}}\mathsf{P}_{Y,Z|\mathsf{Q}}\Bigl[\sup_{\mathsf{Q}^{\prime}}\mathsf{P}_{Y^{\prime},Z^{\prime}|\mathsf{Q}^{\prime}}\{R_{q}(Y^{\prime},Z^{\prime})\leq R_{q}(Y,Z)\}\leq\alpha\Bigr]\] \[\leq\sup_{\mathsf{Q}}\mathsf{P}_{Y,Z|\mathsf{Q}}\bigl[\mathsf{P}_{Y^{\prime},Z^{\prime}|\mathsf{Q}}\{R_{q}(Y^{\prime},Z^{\prime})\leq R_{q}(Y,Z)\}\leq\alpha\bigr].\] The \(\mathsf{Q}\)-specific probability on the right-hand side is upper-bounded by \(\alpha\) by standard arguments, for each \(\mathsf{Q}\). Then the supremum must also be bounded by \(\alpha\), proving the strong prediction validity claim. Next, consider the case of a precise prior for \(\Theta\), where \(q_{\Theta}\) is the prior density/mass function. As before, dimension can be reduced--and efficiency gained--by _fixing_ \(y\).
Towards this, note that the numerator of the relative profile likelihood at the start of this subsection can be rewritten as \[\sup_{\theta}\{p_{Y,Z|\theta}(y,z)\,q_{\Theta}(\theta)\}=\sup_{\theta}\{p_{Z|y}(z)\,q_{\Theta|y,z}(\theta)\,p_{Y}(y)\}\] \[=p_{Z|y}(z)\,p_{Y}(y)\,\sup_{\theta}q_{\Theta|y,z}(\theta),\] where \(p_{Z|y}(z)\) is the posterior predictive distribution of \(Z\), given \(Y=y\), \(p_{Y}(y)\) is the marginal distribution of \(Y\) under the Bayes model with prior \(q_{\Theta}\), and \(q_{\Theta|y,z}(\theta)\) is the posterior distribution of \(\Theta\), given \((Y,Z)=(y,z)\). The marginal density term, \(p_{Y}(y)\), cancels in the ratio that defines \(R\), which leaves just \[R_{q}(y,z)=\frac{\sup_{\theta}q_{\Theta|y,z}(\theta)}{\sup_{x,\theta}\{p_{Z|y}(x)\,q_{\Theta|y,x}(\theta)\}}\,p_{Z|y}(z).\] In the above display, the denominator of the leading term only depends on \(y\) and, therefore, can be treated as a constant. The numerator technically depends on \(z\), but that dependence is rather weak. In fact, in some cases (Example 9), that term doesn't actually depend on \(z\), so it too can be treated as a constant. More generally, it'll typically be the case that \(y\) is more informative than \(z\)--recall that \(y\) here represents the observed data set (or sufficient statistic) so it would have far more influence on the posterior than the single realization \(z\). Putting all this together, dropping proportionality constants where possible, the relative profile likelihood can be re-expressed as \[R_{q}(y,z)=\left\{\sup_{\theta}q_{\Theta|y,z}(\theta)\right\}p_{Z|y}(z)\propto p_{Z|y}(z).\] The "\(\propto\)" above is generally not exact--the term in curly brackets might depend mildly on \(z\)--but there's no clear reason not to just ignore that term and work with the predictive density itself on the right-hand side; more on this choice below. Then the complete-prior predictive IM for \(Z\), given \(Y=y\), has contour function \[\pi_{y}(z)=\mathsf{P}_{Z|y}\{p_{Z|y}(Z)\leq p_{Z|y}(z)\},\quad z\in\mathbb{Z},\] where \(\mathsf{P}_{Z|y}\) denotes the posterior predictive distribution of \(Z\), given \(Y=y\), with respect to the Bayes model with (precise) prior \(q\). This is the probability-to-possibility transform of the Bayesian predictive distribution and, therefore, enjoys certain optimality properties. For example, the \(100(1-\alpha)\%\) prediction regions derived from \(\pi_{y}(z)\) above are optimal in the sense that they have the smallest Lebesgue measure of all sets having posterior predictive probability at least \(1-\alpha\).

#### Option 1

Option 1 is very different from the previous two options described above. This is ideally suited for a case where, for example, the user is handed a strongly valid IM for \(\Theta\), given \(Y=y\), and the task is to update this to a predictive IM for \(Z\), given \(Y=y\). This is different from the previous cases in that, with Options 2-3, the user knew that prediction of \(Z\) was the primary goal and could focus directly on that task. Here, it's as if the user first wanted inference on \(\Theta\) and then later was asked to use that same IM for predictive inference on \(Z\). To keep the details relatively simple, here I'll assume that \(Y\) and \(Z\) are conditionally independent, given \(\Theta\). Lots of applications satisfy the independence assumption, so this is not a severe restriction; and results similar to those below ought to be possible even in certain cases where conditional independence fails.
To set the scene, let \(\pi_{y}(\theta)\) be the contour function of a strongly valid IM for \(\Theta\), given \(Y=y\). Moreover, let \(p_{Z|\theta}(z)\) denote the density of \(Z\), given \(\Theta=\theta\), and consider a possibilistic representation, say, \(f_{\theta}(z)\) thereof: \[f_{\theta}(z)=\mathsf{P}_{Z|\theta}\{p_{Z|\theta}(Z)\leq p_{Z|\theta}(z)\},\quad z\in\mathbb{Z}.\] Then the strategy is to suitably combine \(f_{\theta}(z)\) and \(\pi_{y}(\theta)\) into a joint possibility distribution for \((Z,\Theta)\), given \(Y=y\), and then marginalize out \(\Theta\). Various combination strategies are discussed in, e.g., Destercke et al. (2009), Troffaes et al. (2013), and Hose (2022, Sec. 3.5.4) but, in the present context, they'd all take the form \[\pi_{y}(z)=\sup_{\theta}\mathcal{K}\{f_{\theta}(z),\pi_{y}(\theta)\}, \tag{12}\] for a suitable function \(\mathcal{K}:[0,1]^{2}\to[0,1]\). Intuitively, \(f_{\theta}(z)\) encodes a conditional distribution of \(Z\), given \(\Theta=\theta\), and \(\pi_{y}(\theta)\) encodes a conditional distribution of \(\Theta\), given \(Y=y\), so the role that \(\mathcal{K}\) plays is like a possibilistic analogue of the probabilistic multiplication operation that converts this pair into a sort of joint distribution for \((Z,\Theta)\), given \(Y=y\). Then the supremum over \(\theta\) on the outside corresponds to marginalization over \(\Theta\), via the extension principle, to get a possibilistic conditional distribution for \(Z\), given \(Y=y\), which will play the role of a predictive IM. For statistical and historical reasons, I prefer the combination rule that's motivated by Fisher's classical strategy for combining p-values in significance testing contexts (Fisher 1973b, Sec. 21.1). This boils down to the choice of \(\mathcal{K}\) as \[\mathcal{K}(u,v)=uv\{1-\log(uv)\},\quad(u,v)\in[0,1]^{2}.\] The connection between this formula and Fisher's p-value combination rule is as follows. In the context of significance testing, let \(U\) and \(V\) denote independent p-values, so that \(U,V\stackrel{{\text{iid}}}{{\sim}}\mathsf{Unif}(0,1)\) under the null hypothesis. Then \(-2(\log U+\log V)\) has a chi-square distribution with 4 degrees of freedom, so the p-value for the combined test, based on the product of p-values rule, is given by \[\mathsf{P}(UV\leq uv)=\mathsf{P}\{\underbrace{-2(\log U+\log V)}_{\sim\ \mathsf{ChiSq}(4)}\geq-2(\log u+\log v)\},\] where \((u,v)\in[0,1]^{2}\) here denote the observed p-values. Fisher would've stopped here and suggested comparing the observed \(-2(\log u+\log v)\) to the critical value of a \(\mathsf{ChiSq}(4)\) distribution. Jost,1 however, has shown that the chi-square probability in the above display can be evaluated in closed-form, and the expression is \(\mathcal{K}(u,v)\).

Footnote 1: [http://www.loujost.com/StatisticsandPhysics/SignificanceLevels/CombiningPValues.htm](http://www.loujost.com/StatisticsandPhysics/SignificanceLevels/CombiningPValues.htm), accessed October 31st, 2022

Suppose that the IM for \(\Theta\), given \(Y=y\), is strongly valid with respect to the vacuous prior. In that case, both \(f_{\theta}(Z)\) and \(\pi_{Y}(\theta)\) are independent and stochastically no smaller than \(\mathsf{Unif}(0,1)\) under \(\mathsf{P}_{Y,Z|\theta}\). Then strong prediction validity (11), relative to the vacuous prior for \(\Theta\), follows immediately from the Fisher/p-value connection described in the previous paragraph.
To see this, first note: \[\sup_{\theta}\mathsf{P}_{Y,Z|\theta}\{\pi_{Y}(Z)\leq\alpha\}=\sup_{\theta}\mathsf{P}_{Y,Z|\theta}\{\sup_{\vartheta}\mathcal{K}(f_{\vartheta}(Z),\pi_{Y}(\vartheta))\leq\alpha\}\] \[\leq\sup_{\theta}\mathsf{P}_{Y,Z|\theta}\{\mathcal{K}(f_{\theta}(Z),\pi_{Y}(\theta))\leq\alpha\}.\] Now, by the above properties of \(f_{\theta}(Z)\) and \(\pi_{Y}(\theta)\), it follows that \(\mathcal{K}\{f_{\theta}(Z),\pi_{Y}(\theta)\}\) is stochastically no smaller than \(\mathsf{Unif}(0,1)\), uniformly in \(\theta\). Therefore, the right-hand side of the above display is upper-bounded by \(\alpha\), hence strong prediction validity.

### Summary

To summarize, I presented three different options above for construction of a predictive IM for \(Z\), given \(Y=y\), under fairly general models, i.e., no independence or identically distributed assumptions, only that \((Y,Z)\) are related to the same model parameter \(\Theta\); extension beyond the parametric model case considered here will be discussed briefly in Section 5. Of the three options, my recommendation is Option 3, since its one and only goal is strongly valid and efficient prediction. It achieves this by following the Principle of Minimal Complexity--reducing the dimension as much as possible before carrying out the Choquet integration. The other two constructions don't fully commit to the prediction task; they hold on to the option of making inference on \(\Theta\) too, which is an added constraint that limits the data analysts' ability to reduce dimension. Therefore, as I mentioned above, there will generally be an efficiency loss in the other two options compared to Option 3. This can be clearly seen in the results of Example 9 below. Option 1 is unique since it's designed specifically for cases where an IM for inference about \(\Theta\), given \(Y=y\), has been constructed, and then prediction is required as an afterthought. In this case, the assumption is that the data analyst doesn't have access to the data that went into the construction of an IM for \(\Theta\); he only has the IM output. This constraint limits his ability to construct an efficient predictive IM in that he's unable to carry out the preliminary dimension reduction steps that lead to efficiency. Moreover, it's unclear at this point whether validity can be achieved through a combination strategy like that presented above except under special conditions, e.g., conditional independence and vacuous prior assumptions. Nevertheless, the use of various strategies to combine different valid IMs into a single valid IM is technically interesting and practically useful. For example, in meta-analysis, one can imagine there being IM output produced and published independently by different research groups. Then the goal might be to combine these various IMs for inference about the common parameter, or to predict the results of a new follow-up study. So, I think this IM combination idea warrants further investigation.

### Examples

**Example 9** (Normal).: For a simple illustration, suppose that \((Y\mid\Theta=\theta)\sim\mathsf{N}(\theta,n^{-1}\sigma^{2})\) and \((Z\mid\Theta=\theta)\sim\mathsf{N}(\theta,\sigma^{2})\). In this case, \(\sigma>0\) is taken to be fixed; analogous results can be obtained when \(\sigma\) is unknown but the details are more involved. So then \(Y\) is just the minimal sufficient statistic for the normal mean model based on \(n\) iid samples. I'll also consider the vacuous prior case for the sake of comparison.
The objective of this example is to illustrate and compare the different options for constructing a strongly valid predictive IM for \(Z\). I'll flesh out each of these constructions below in turn. With a slight abuse of notation, I'll write \(p_{Y\mid\theta}(y)\) and \(p_{Z\mid\theta}(z)\) for the densities of \((Y\mid\Theta=\theta)\) and \((Z\mid\Theta=\theta)\).

2. For this option, I first construct a joint IM for \((Z,\Theta)\) and then marginalize out \(\Theta\). In this case, the relative likelihood function takes the form \[R(y,z,\theta)=\frac{p_{Y\mid\theta}(y)\,p_{Z\mid\theta}(z)}{\sup_{x,\theta}p_{Y\mid\theta}(y)\,p_{Z\mid\theta}(x)}=\exp\Bigl\{-\frac{n(y-\theta)^{2}+(z-\theta)^{2}}{2\sigma^{2}}\Bigr\}.\] Since \(n(Y-\Theta)^{2}+(Z-\Theta)^{2}\) is a pivot, it's easy to get the joint IM: \[\pi_{y}(z,\theta)=\mathsf{P}_{Y,Z\mid\theta}\{R(Y,Z,\theta)\leq R(y,z,\theta)\}\] \[=1-\texttt{pchisq}\Big(\frac{n(y-\theta)^{2}+(z-\theta)^{2}}{\sigma^{2}},\,\texttt{df}=2\Big).\] Applying the extension principle to marginalize over \(\Theta\) leads to \[\pi_{y}(z)=\sup_{\theta}\pi_{y}(z,\theta)=1-\texttt{pchisq}\Big(\frac{(z-y)^{2}}{\sigma^{2}(1+n^{-1})},\,\texttt{df}=2\Big).\]
3. For this option, the strategy is to marginalize before calculating the IM contour. This amounts to using a profile relative likelihood, which in this case is given by \[R(y,z)=\frac{\sup_{\theta}p_{Y|\theta}(y)\,p_{Z|\theta}(z)}{\sup_{x,\theta}p_{Y|\theta}(y)\,p_{Z|\theta}(x)}=\exp\Bigl\{-\frac{(z-y)^{2}}{2\sigma^{2}(1+n^{-1})}\Bigr\}.\] Since \(Z-Y\) is a pivot, I can easily get the contour \[\pi_{y}(z)=\sup_{\theta}\texttt{P}_{Y,Z|\theta}\{R(Y,Z)\leq R(y,z)\}\] \[=1-\texttt{pchisq}\Big(\frac{(z-y)^{2}}{\sigma^{2}(1+n^{-1})},\,\texttt{df}=1\Big).\]
1. Option 1 starts with possibilistic representations of "\((Z\mid\Theta)\)" and "\((\Theta\mid Y)\)," combines these into a sort of "joint IM" for \((Z,\Theta)\), given \(Y\), and then marginalizes via the extension principle. Here \(Y\) and \(Z\) are conditionally independent, so the combination strategy described above is appropriate. The two possibilistic representations I'll take as the starting point are \[f_{\theta}(z)=1-\texttt{pchisq}\Big(\frac{(z-\theta)^{2}}{\sigma^{2}},\,\texttt{df}=1\Big)\] \[\pi_{y}(\theta)=1-\texttt{pchisq}\Big(\frac{n(\theta-y)^{2}}{\sigma^{2}},\,\texttt{df}=1\Big).\] Unfortunately, the combination and marginalization can't be done in closed-form, but it's not too difficult to carry out these steps numerically.

The expressions are sort of messy, so it'll help to be able to visualize the results of the three different constructions. Figure 8 shows the three IM plausibility contour functions for \(Z\), given \(Y=y\), based on \(y=0\), \(n=5\), and \(\sigma=1\). In this case, we see that Options 1 and 2 are very similar, with Option 2 appearing to be slightly more efficient than Option 1, but the solution based on Option 3 is by far the most efficient. That Option 3 is more efficient than Option 2 can actually be seen from the formulas above: the former has "\(\texttt{df}=1\)" while the latter has "\(\texttt{df}=2\)", which explains Option 3's sharp peak compared to Option 2's rounded peak. Note that the \(100(1-\alpha)\%\) prediction plausibility interval derived from the Option 3 solution agrees exactly with the standard prediction interval presented in textbooks, which is optimal in all the usual senses.
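For the record, here is a short R sketch that reproduces the Option 2 and Option 3 formulas above and approximates the Option 1 combination numerically on a grid; this is my own reconstruction of the computations, not necessarily the exact code behind Figure 8.

```r
# Example 9: three predictive IM contours for Z, given Y = y (sigma fixed).
y <- 0; n <- 5; sigma <- 1
z_grid <- seq(-4, 4, by = 0.01)

# Option 2: marginalize the joint (Z, Theta) IM via the extension principle
pi_opt2 <- 1 - pchisq((z_grid - y)^2 / (sigma^2 * (1 + 1/n)), df = 2)

# Option 3: profile first, then compute the contour (Z - Y is a pivot)
pi_opt3 <- 1 - pchisq((z_grid - y)^2 / (sigma^2 * (1 + 1/n)), df = 1)

# Option 1: combine f_theta(z) and pi_y(theta) with Fisher's rule K(u, v),
# then take the sup over theta, as in (12); grid approximation of the sup
K <- function(u, v) { uv <- pmax(u * v, .Machine$double.xmin); uv * (1 - log(uv)) }
theta_grid <- seq(-4, 4, by = 0.01)
pi_opt1 <- sapply(z_grid, function(z) {
  f <- 1 - pchisq((z - theta_grid)^2 / sigma^2, df = 1)
  p <- 1 - pchisq(n * (theta_grid - y)^2 / sigma^2, df = 1)
  max(K(f, p))
})

# matplot(z_grid, cbind(pi_opt1, pi_opt2, pi_opt3), type = "l")
```

The resulting curves should mirror Figure 8: Options 1 and 2 nearly coincide, while Option 3 has the sharpest peak.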
Incidentally, a standard result in the Bayesian literature is that, with a default/flat prior for \(\Theta\), the posterior predictive distribution is \((Z\mid Y=y)\sim\textsf{N}(y,\sigma^{2}(1+n^{-1}))\). This is the maximal inner probabilistic approximation of the predictive possibility measure derived in Option 3 above. In other words, the probability-to-possibility transform of the default Bayes predictive distribution agrees exactly with the Option 3 solution.

**Example 10** (Multinomial).: Inference and prediction in the multinomial model is a fundamentally important problem; see also Example 12 below. Let \((Y\mid\Theta=\theta)\) have a multinomial distribution, \(\mathsf{Mult}_{K}(n,\theta)\), where \(K\) is the cardinality of the support, i.e., the number of categories, \(n\) is the sample size, and \(\theta=(\theta_{1},\ldots,\theta_{K})\) is a probability vector taking values in the probability simplex in \(\mathbb{R}^{K}\). A realization of \(Y\) is just a frequency table \((Y_{1},\ldots,Y_{K})\) counting the instances of the \(K\) categories in the sample, with the sum of those table entries equal to \(n\). The mass function of \(Y\) is given by \[p_{Y\mid\theta}(y)\propto\prod_{k=1}^{K}\theta_{k}^{y_{k}}.\] Let \((Z\mid\Theta=\theta)\sim\mathsf{Mult}_{K}(1,\theta)\) denote a single independent realization from the same multinomial model. Technically, \(Z\) is a \(K\)-vector of \(0\)'s with a single entry equal to \(1\), but here I'll treat \(Z\) as the position of the \(1\). The goal is to construct a predictive IM for \(Z\in\{1,2,\ldots,K\}\), given \(Y=y\). I'll focus here on just the Option 3 construction, again with the vacuous prior for illustration. The relative profile likelihood function in this context is \[R(y,z)=\frac{\sup_{\theta}\theta_{z}^{y_{z}+1}\prod_{k\neq z}\theta_{k}^{y_{k}}}{\max_{\zeta}\sup_{\theta}\theta_{\zeta}^{y_{\zeta}+1}\prod_{k\neq\zeta}\theta_{k}^{y_{k}}}.\] This expression looks a lot messier than it really is, since there are closed-form expressions for the optimization problems in both the numerator and the denominator, though these aren't worth displaying here.2 Then the predictive IM has contour \[\pi_{y}(z)=\sup_{\theta}\mathsf{P}_{Y,Z\mid\theta}\{R(Y,Z)\leq R(y,z)\},\quad z\in\{1,2,\ldots,K\}.\]

Footnote 2: Note that the optimization problem in the denominator amounts to a sort of entropy minimization. Entropy is maximized by a uniform distribution, so minimization amounts to taking \(\zeta\) to be the category with the largest probability, i.e., a rich-get-richer rule.

Figure 8: Plots of the predictive IM plausibility contour functions for the three different construction options described in the main text, based on \(y=0\), \(n=5\), and \(\sigma=1\).

For illustration, consider the application in Goodman (1965) and Denoeux (2006) with \(n=220\) psychiatric patients and \(K=4\) categories corresponding to four diagnoses: neurotic, depressed, schizophrenic, or having a personality disorder. The observed counts are \(y=(91,49,37,43)\), so there's a clear tendency towards the first category, corresponding to neurosis. Figure 9 shows a plot of the IM's predictive plausibility contour. As expected, the plausibility contour mode is the first category, but the other three categories have non-negligible plausibility too. In fact, all reasonable \(100(1-\alpha)\%\) prediction sets for \(Z\) contain all four categories.
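Before moving on, here is a crude R sketch of how the Option 3 contour above might be approximated numerically; the closed-form profile maximization, the random search over the simplex, and the Monte Carlo sample sizes are my own choices, and this is not necessarily how Figure 9 was produced.

```r
# Profile likelihood sup_theta theta_z^{y_z + 1} * prod_{k != z} theta_k^{y_k}:
# with c = y + e_z, the maximizer over the simplex is c / (n + 1).
prof_lik <- function(y, z) {
  cts <- y; cts[z] <- cts[z] + 1
  p <- cts / sum(cts)
  prod(ifelse(cts == 0, 1, p^cts))
}
R <- function(y, z) prof_lik(y, z) / max(sapply(seq_along(y), prof_lik, y = y))

# pi_y(z) = sup_theta P_{Y,Z|theta}{ R(Y, Z) <= R(y, z) }, crudely approximated
pred_contour <- function(y, z, n_theta = 100, n_mc = 500) {
  n <- sum(y); K <- length(y); r_obs <- R(y, z)
  max(replicate(n_theta, {
    theta <- rexp(K); theta <- theta / sum(theta)   # random point in the simplex
    mean(replicate(n_mc, {
      Ysim <- as.vector(rmultinom(1, n, theta))
      Zsim <- sample.int(K, 1, prob = theta)
      R(Ysim, Zsim) <= r_obs
    }))
  }))
}

# e.g., with the psychiatric-diagnosis counts above (slow, but mirrors Figure 9)
y <- c(91, 49, 37, 43)
sapply(1:4, pred_contour, y = y)
```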
The same conclusion is drawn from other approaches, e.g., Denoeux (2006) develops a belief function for prediction and, for small \(\alpha\), the smallest set that his method assigns at least \(1-\alpha\) belief to is all four categories.

**Example 11** (Gamma).: Consider a gamma model like in Example 6, where \(Y=(Y_{1},\ldots,Y_{n})\) denotes an iid sample of size \(n\) from a gamma distribution with an unknown \(\Theta=(\Theta_{1},\Theta_{2})\) consisting of the model's shape and scale parameter, respectively. The focus here, however, is on prediction of a future observable, say, \(Z\). One case, of course, is where \(Z=Y_{n+1}\) is a subsequent observation from the same gamma model, but that's not the only possibility. Another practically relevant case, not uncommon in reliability applications (e.g., Hamada et al. 2004; Wang et al. 2012), is that where \(Z=\max\{Y_{n+1},\ldots,Y_{n+k}\}\) is the maximum of \(k\)-many future observations, with \(k\) a given positive integer. Note that \(Z\) is independent of \(Y\). For this example, I'll focus on "Option 1" where the IM plausibility contour \(\pi_{y}(\theta)\) as presented in Example 10 of Part II is combined with information about the \(\theta\)- and \(k\)-dependent distribution of \(Z\) through the relationship (12), with the Fisher-based function \(\mathcal{K}\) recommended above. This is applicable since \(Y\) and \(Z\) are independent and I'm assuming, as before, that the prior information about \(\Theta\) is vacuous.

Figure 9: Plot of the predictive IM plausibility contour, \(\pi_{y}(z)\), for the real-data illustration in Example 10 involving \(K=4\) categories. Circles correspond to the maximum likelihood estimator of \(\theta=(\theta_{1},\ldots,\theta_{4})\).

It's easy to check that the probability density function for \(Z\), given \(\Theta=\theta\) and a fixed \(k\), is given by \[p_{\theta}^{(k)}(z)=k\,p_{\theta}(z)\,\{P_{\theta}(z)\}^{k-1},\quad z>0,\] where \(p_{\theta}\) and \(P_{\theta}\) are the density and distribution functions of \(\mathsf{Gamma}(\theta_{1},\theta_{2})\), with \(\theta=(\theta_{1},\theta_{2})\). Then the possibilistic representation of the distribution of \(Z\) is \[f_{\theta}(z)=\mathsf{P}_{Z|\theta}\{p_{\theta}^{(k)}(Z)\leq p_{\theta}^{(k)}(z)\},\quad z>0,\] which, for given \(\theta\) and \(k\), can easily be evaluated based on Monte Carlo. Plugging this and the joint contour \(\pi_{y}(\theta)\) as presented in Example 10 of Part II into the formula (12) gives a (strongly valid) predictive IM with contour \(\pi_{y}(z)\) for \(Z\), given \(Y=y\), with \(k=3\). For the same data as in Example 6, Figure 10 shows both the joint contour and this predictive plausibility contour. The sample mean is about 113 and the sample standard deviation is about 36, so one can't rule out the possibility that \(Z\) is considerably larger than the maximum in the sample, which is 165. So the fact that the IM's predictive contour has a long tail and the corresponding 90% prediction interval stretches to an upper bound near 244 is not unexpected.

## 5 Non-parametric possibilistic IMs

### Setup

To set the notation and terminology, suppose that the distribution \(\mathsf{P}_{Y|\theta}\) of the data \(Y\) is indexed by \(\theta\in\mathbb{T}\), where \(\theta\) is an infinite-dimensional index such as the density. The point is that I'm just using \(\theta\) to label the distribution \(\mathsf{P}_{Y|\theta}\), so it's not a parametric model in the usual sense.
To make this point clear, I'll directly write \(\theta\) for the density/mass function of \(\mathsf{P}_{Y|\theta}\) in some of the expressions below. The _non-parametric_ problem assumes that the distribution of \(Y\) is unknown, which means there's an uncertain variable \(\Theta\in\mathbb{T}\) about which inference is to be drawn. Just like in the previous sections of this paper, it'll often be the case that only some (finite-dimensional) feature \(\Phi\) of \(\Theta\) is of interest; that is, \(\Phi=f(\Theta)\) for some functional \(f:\mathbb{T}\to\mathbb{F}\). In addition, there might also be (partial) prior information about \(\Theta\) or directly about \(\Phi\) to be included in the model formulation. If \(\Theta\) denotes the density function, then prior information might come in the form of lending more credibility to densities that are smoother. An advantage, however, of the partial prior IM framework is that one need not say anything about the density \(\Theta\) directly; available prior information about the relevant feature \(\Phi\) can be incorporated as is. Indeed, all one needs is a contour \(q_{\Phi}(\phi)\) and they're ready to construct a valid, partial prior-dependent IM for \(\Phi\).

Figure 10: Panel (a) shows the joint IM contour for \(\Theta\) based on the same rat survival time data analyzed in Example 6. Panel (b) shows the IM’s predictive contour for \(Z\), the maximum of \(k=3\) many future gamma observations.

### Valid IM construction

None of the theory presented in the previous sections of the paper required that the unknowns were finite-dimensional, so there's nothing new here. For the non-parametric case, if partial prior information for \(\Theta\) is available and encoded in the contour \(q_{\Theta}\), then I'll first construct the plausibility order \[R_{q}(y,\theta)=\frac{\theta(y)\,q_{\Theta}(\theta)}{\sup_{\vartheta\in\mathbb{T}}\{\vartheta(y)\,q_{\Theta}(\vartheta)\}},\quad(y,\theta)\in\mathbb{Y}\times\mathbb{T},\] and then define the IM for \(\Theta\)--or, equivalently, for the distribution of \(Y\) itself--as \[\pi_{y}(\theta)=\overline{\mathsf{P}}_{Y,\Theta}\{R_{q}(Y,\Theta)\leq R_{q}(y,\theta)\},\quad\theta\in\mathbb{T},\] where \(\overline{\mathsf{P}}_{Y,\Theta}\) is the upper joint distribution of \((Y,\Theta)\) based on the non-parametric model and the partial prior information. Obviously, since \(\Theta\) is such a complex object and the partial prior is completely general, there's no opportunity for dimension reduction and efficiency gain--we simply get what we get. But strong validity of this IM for \(\Theta\) holds by the general theory in Part II. If prior information about \(\Theta\) is vacuous, then there's an opportunity to reduce dimension as before, \[\pi_{y}(\theta)=\mathsf{P}_{Y|\theta}\{R(Y,\theta)\leq R(y,\theta)\},\quad\theta\in\mathbb{T}, \tag{13}\] where, now, \(R(y,\theta)=\theta(y)/\sup_{\vartheta\in\mathbb{T}}\vartheta(y)\) is the non-parametric relative likelihood. A complete prior can be handled in this non-parametric case too, as described in Part II, i.e., rather than fixing the value of \(\theta\) as in the above display, I fix the value of \(y\). The point is that, in the vacuous- or complete-prior cases, there's some opportunity for dimension reduction: the vacuous prior collapses the Choquet integral in the \(\Theta\) dimension whereas the complete prior does so in the \(Y\) dimension.
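As a concrete illustration of how (13) can be evaluated in practice, here is a small R sketch for the simplest "discrete non-parametric" case, where \(\theta\) is a probability vector on \(K\) categories and the relative likelihood has the closed form used again in Example 12 below; the helper names and the toy counts are mine.

```r
# Discrete non-parametric case: R(y, theta) = prod_k (n * theta_k / y_k)^{y_k},
# with the convention 0^0 = 1, and (13) evaluated by Monte Carlo.
rel_lik <- function(y, theta) {
  n <- sum(y)
  prod(ifelse(y == 0, 1, (n * theta / y)^y))
}
im_contour <- function(y, theta, n_mc = 10000) {
  n <- sum(y)
  r_obs <- rel_lik(y, theta)
  sims <- rmultinom(n_mc, n, theta)                     # Y ~ Mult_K(n, theta)
  mean(apply(sims, 2, rel_lik, theta = theta) <= r_obs) # P_{Y|theta}{R(Y,theta) <= R(y,theta)}
}

# e.g., plausibility of the uniform distribution given some (made-up) counts
y <- c(12, 7, 5, 6)
im_contour(y, theta = rep(1/4, 4))
```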
Next, suppose the goal is inference on a finite- and probably relatively low-dimensional feature \(\Phi=f(\Theta)\), e.g., a moment or quantile. That the quantity of interest is low-dimensional creates an opportunity for efficiency gain compared to starting with the IM for \(\Theta\) and marginalizing to \(\Phi\) from there via the extension principle. Following the general framework presented in Section 3, the construction of an IM for \(\Phi\), based on partial prior information encoded in the contour \(q_{\Phi}\), starts with a likelihood-driven plausibility ordering \[R_{q}(y,\phi)=\frac{\{\sup_{\theta:f(\theta)=\phi}\theta(y)\}\,q_{\Phi}(\phi)}{\sup_{\theta}\{\theta(y)\,q_{\Phi}(f(\theta))\}},\quad(y,\phi)\in\mathbb{Y}\times\mathbb{F}.\] In what follows, it's important to remember that \(\theta(y)\) represents \(\mathsf{P}_{Y|\theta}(\{y\})\), the distribution determined by \(\theta\) evaluated at the data \(y\). So the optimization over \(\theta\) in the above display is closely related to the so-called _empirical likelihood_ framework described by, e.g., Owen (1990, 1988, 2001), Qin and Lawless (1994), Tang and Leng (2010), and others. Then the IM construction proceeds exactly as before, producing a contour \[\pi_{y}(\phi)=\overline{\mathsf{P}}_{Y,\Theta}\{R_{q}(Y,f(\Theta))\leq R_{q}(y,\phi)\},\quad\phi\in\mathbb{F}.\] Strong validity holds by the general theory, so the critical question is how to carry out the necessary computations. For this, let me consider the special case of vacuous prior information, so that the plausibility ordering takes the form \[R(y,\phi)=\frac{\sup_{\theta:f(\theta)=\phi}\theta(y)}{\sup_{\theta}\theta(y)},\quad(y,\phi)\in\mathbb{Y}\times\mathbb{F}. \tag{14}\] This is exactly the empirical likelihood ratio statistic discussed and analyzed extensively in Owen (2001). In that case, the IM contour function is \[\pi_{y}(\phi)=\sup_{\theta:f(\theta)=\phi}\mathsf{P}_{Y|\theta}\{R(Y,\phi)\leq R(y,\phi)\},\quad\phi\in\mathbb{F}, \tag{15}\] which is simply the p-value function for the empirical likelihood ratio test of \(H_{0}:\Phi=\phi\) based on the statistic (14). Under regularity conditions (e.g., Owen 2001, Theorem 3.6), when \(Y=(Y_{1},\ldots,Y_{n})\) is an iid sample and \(n\to\infty\), a version of Wilks's theorem applies to \(-2\) times the log empirical likelihood ratio, that is, it has a limiting chi-square distribution that's independent of \(\theta\). Therefore, the strongly valid, vacuous-prior IM for \(\Phi\) can be well approximated by a chi-square tail probability fairly generally. Other kinds of approximations are possible as well; see Section 5.3. My point is that the very same general framework developed above and illustrated in simple, low-dimensional parametric inference problems can be readily applied to non-parametric problems. The catch, however, is that computation is obviously a much more serious challenge in these high-complexity settings. I imagine that there will be cases in which it's much more efficient--computationally and/or statistically--to construct an approximate pivot by some other means than via the relative empirical likelihood. The downside to a non-likelihood-based construction is that it's no longer clear how partial prior information can be incorporated in a principled way. More on this in Section 6. I should also briefly mention how the ideas developed above would apply to the case of prediction without parametric model assumptions, as I hinted at in Section 4.
Suppose, for simplicity, that \(Y=(Y_{1},\ldots,Y_{n})\) and \(Z=Y_{n+1}\) consist of iid samples from a distribution \(\mathsf{P}_{Y|\Theta}\), where \(\Theta\) is uncertain, and the goal is prediction of \(Z\). Following the prediction strategy above, and assuming prior information about \(\Theta\) is vacuous, I'd take the plausibility ordering to be \[R(y,z)=\frac{\sup_{\theta\in\mathbb{T}}\{\theta(y)\,\theta(z)\}}{\sup_{x\in\mathbb{Y},\theta\in\mathbb{T}}\{\theta(y)\,\theta(x)\}},\quad(y,z)\in\mathbb{Y}^{n}\times\mathbb{Y}.\] From here, it's straightforward to write down an expression for the predictive IM for \(Z\), given \(Y=y\), that's free of parametric model assumptions. The problem, however, is that it's not clear how the predictive IM can be computed. If it turned out that \(R(Y,Z)\) were a pivot, with a known distribution independent of \(\Theta\), then the predictive IM computations would be immediate. Since it's not clear if/when this pivotal structure holds, one might consider an alternative strategy that sacrifices some of the efficiency-related benefits of working exclusively with likelihoods for the benefit of a pivotal structure that aids in computation; see Section 6 for more discussion on this.

### Approximations

In the previous subsection I already suggested the option to approximate the IM's plausibility contour for \(\Phi\), in the vacuous prior case, using the Wilks-like limiting distribution of the empirical likelihood ratio statistic. One can also make certain adjustments (one-to-one transformations) to the plausibility ordering, e.g., a Bartlett correction (e.g., Owen 2001, Sec. 3.3), that would improve the accuracy of this limiting approximation. But asymptotic approximations aren't the only approximation games in town and here I want to mention two such alternatives. This discussion will be relatively high-level, focusing on the main ideas rather than details. Also, for ease of connecting this new framework to the existing literature, I'll focus here exclusively on the vacuous-prior case; extending these ideas to handle the case with partial prior information is a huge open question that will have to be resolved elsewhere. The first of these alternatives is a relatively obvious one: just use the _bootstrap_ (e.g., Efron 1979; Efron and Tibshirani 1993; Owen 1988). There are, however, some subtleties in getting this to work properly, as I explain next. To start, for simplicity, let \(Y=(Y_{1},\ldots,Y_{n})\) consist of \(n\) many iid observations from \(\mathsf{P}_{Y|\Theta}\) depending on the unknown (infinite-dimensional) \(\Theta\). Suppose the goal is inference on \(\Phi=f(\Theta)\) and let \(\hat{\phi}_{Y}\) denote the maximum likelihood estimator of \(\Phi\), i.e., \[\hat{\phi}_{Y}=f(\hat{\theta}_{Y}),\] where \(\hat{\theta}_{Y}\) is the (non-parametric) maximum likelihood estimator of \(\Theta\). For a bootstrap sample size \(B\), let \(y^{b}\) denote a random sample of size \(n\), with replacement, drawn from the observed data values \(y=(y_{1},\ldots,y_{n})\). Then a bootstrap approximation of the IM contour \(\pi_{y}(\phi)\) for \(\Phi\), given \(Y=y\), in (15) is \[\pi_{y}(\phi)\approx\pi_{y}^{\rm boot}(\phi):=\frac{1}{B}\sum_{b=1}^{B}1\{R(y^{b},\hat{\phi}_{y})\leq R(y,\phi)\},\quad\phi\in\mathbb{F}, \tag{16}\] where \(R\) is as given in (14). Two remarks deserve to be made here.

* First, note that \(\hat{\phi}_{y}\) remains fixed as \(b=1,\ldots,B\).
The reason is that the \(y^{b}\)'s represent samples from the "population" \(y\) and the "true \(\Phi\)" corresponding to the "population \(y\)" is \(\hat{\phi}_{y}\). As the bootstrap story goes, if the \(y\) sample is sufficiently informative, say, as \(n\to\infty\), then the "population \(y\)" approximates \(\mathsf{P}_{Y|\Theta}\) and, therefore, the \(R(y^{b},\hat{\phi}_{y})\)'s are approximately representative samples of \(R(Y,f(\Theta))\), so the approximation in (16) should be relatively accurate.
* Second, note that there's no "\(\sup_{\theta:f(\theta)=\phi}\)" in (16) like there is in (15). The reason is that, technically, strong validity only requires that \(\pi_{Y}(\Phi)\) be stochastically bounded at the "true \(\Phi\)" or, in this case, at \(f(\Theta)\). This control at the "true value" can't be achieved exactly with our less-than-fully-informative finite samples, so the supremum is a conservative adjustment to make up for this shortcoming. In the Utopian bootstrap world, where \(n\to\infty\) and samples are fully informative, the true values are recovered and there's no need for a supremum. It's no different than the situation described above where the relative profile empirical likelihood is an asymptotic pivot and, therefore, the supremum over \(\theta\) such that \(f(\theta)=\phi\) drops out.

The point is that there are no finite-sample strong validity guarantees for the bootstrap approximation in (16), only asymptotically approximate strong validity as \(n\to\infty\); see, e.g., Cella and Martin (2022a, Theorem 1). Another interesting but very different kind of approximation is that based on the so-called _universal inference_ framework developed in Wasserman et al. (2020). The developments above are based on (profile) relative likelihood functions, whereas universal inference makes use of a split (profile) relative likelihood function, which I explain below. Suppose, for simplicity, that \(Y\) consists of \(n\) many iid observations like above. Then split this collection into two chunks, denoted by \(Y^{(1)}\) and \(Y^{(2)}\), where, for concreteness, \(Y^{(1)}\) corresponds to the first \(\lceil n/2\rceil\) many observations and \(Y^{(2)}\) the rest. Let \(\hat{\theta}_{Y^{(2)}}\) denote the maximum likelihood estimator of \(\Theta\) based on the second chunk of data, i.e., \[\hat{\theta}_{Y^{(2)}}=\arg\max_{\theta\in\mathbb{T}}\theta(Y^{(2)}).\] Now, for the quantity of interest \(\Phi=f(\Theta)\), define the split-dependent plausibility ordering given by \[R_{\text{split}}(y,\phi)=\frac{\sup_{\theta:f(\theta)=\phi}\theta(y^{(1)})}{\hat{\theta}_{y^{(2)}}(y^{(1)})},\quad(y,\phi)\in\mathbb{Y}\times\mathbb{F}.\] This is just a ratio of \(y^{(1)}\)-data likelihoods, the numerator profiled over those \(\theta\) satisfying \(f(\theta)=\phi\) and the denominator evaluated at \(\hat{\theta}_{y^{(2)}}\). If both chunks of data are "similarly informative," then one would expect \(\hat{\theta}_{y^{(1)}}\approx\hat{\theta}_{y^{(2)}}\), in which case \(R_{\text{split}}\) is just the relative profile likelihood based on \(y^{(1)}\). But note that \(R_{\text{split}}(y,\phi)\) is not bounded above by 1; when this not-bounded-by-1 feature might be an issue (see below), it's easy enough to just truncate it at 1.
Applying the general IM construction above to this split-based plausibility order gives \[\pi_{y}(\phi)=\sup_{\theta:f(\theta)=\phi}\mathsf{P}_{Y|\theta}\{R_{\text{ split}}(Y,\phi)\leq R_{\text{split}}(y,\phi)\}.\] This is no more straightforward to compute than the original, no-split contour, but it's easy to approximate. Indeed, by Markov's inequality, \[\pi_{y}(\phi) =\sup_{\theta:f(\theta)=\phi}\mathsf{P}_{Y|\theta}\{R_{\text{ split}}(Y,\phi)\leq R_{\text{split}}(y,\phi)\}\] \[=\sup_{\theta:f(\theta)=\phi}\mathsf{P}_{Y|\theta}\{R_{\text{ split}}(Y,\phi)^{-1}\geq R_{\text{split}}(y,\phi)^{-1}\}\] \[\leq 1\wedge R_{\text{split}}(y,\phi),\] where I've made use of the key result in Equation (6) of Wasserman et al. (2020), a simple consequence of the law of iterated expectation, which states that the \(\mathsf{P}_{Y|\theta}\)-expected value of \(R_{\mathrm{split}}(Y,\phi)^{-1}\) is no more than \(1\), uniformly in \(\theta\) with \(f(\theta)=\phi\). Then the approximation I propose is to take \[\pi_{y}^{\mathrm{split}}(\phi)=1\wedge R_{\mathrm{split}}(y,\phi),\] which is relatively easy to compute--it only requires evaluating the split relative profile likelihood function, no probability calculations necessary. Moreover, it follows from Theorem 3 in Wasserman et al. (2020) that the contour in the above display defines a strongly valid IM for \(\Phi\), given \(Y=y\). This might appear too good to be true, but there's a price for the apparent simplicity: the data-splitting strategy generally results in a loss of efficiency. This could be an acceptable trade-off in complex non-parametric problems where there might not be any other options for constructing a valid IM. ### Examples **Example 12** (Multinomial).: Reconsider the multinomial model from Example 10, with \(\Theta=(\Theta_{1},\ldots,\Theta_{K})\) the \(K\)-dimensional vector in the probability simplex \(\mathbb{T}\), so that \(\Theta_{k}\) denotes the probability of class \(k\), for \(k=1,\ldots,K\). Since every discrete distribution on \(\{1,\ldots,K\}\) can be described by such a \(\Theta\) vector, I refer to this as the "discrete non-parametric" model. It's for this reason that the multinomial model, while relatively simple, is of fundamental importance. Many of the more general non-parametric developments, such as Bayesian non-parametrics via the Dirichlet process (e.g., Ferguson 1973), can be seen as extensions of the multinomial model. More specifically, let \((Y\mid\Theta=\theta)\sim\mathsf{Mult}_{K}(n,\theta)\), so that \(Y=(Y_{1},\ldots,Y_{K})\) where \(Y_{k}\) is the sample frequency count for category \(k\), with \(k=1,\ldots,K\). As before, this determines a likelihood function which, using the notation of this section, is given by \(\theta(y)\propto\prod_{k=1}^{K}\theta_{k}^{y_{k}}\), for \(\theta\in\mathbb{T}\). Assuming vacuous prior information for \(\Theta\), just for simplicity, the plausibility ordering is determined by the relative likelihood alone, \[R(y,\theta)=\prod_{k=1}^{K}\Bigl{(}\frac{n\theta_{k}}{y_{k}}\Bigr{)}^{y_{k}},\] where I've plugged in the likelihood function maximizer, which is available in closed form. From here it's straightforward to evaluate the contour function of the (strongly valid) IM for \(\Theta\), via Monte Carlo, using the formula (13). This is the same multinomial problem considered in a recent discussion paper (Jacob et al. 2021a,b) published in the _Journal of the American Statistical Association_. 
They're approaching the problem from the perspective of Dempster-Shafer inference, so a comparison with the proposed solution here makes sense. Jacob et al. compared their solution to another with an IM-like flavor (Lawrence et al. 2009). Jacob, et al. consider two real-data examples, both involving scientifically relevant questions concerning the multinomial parameter \(\Theta\), and I'll reanalyze one of them here. In the \(K=4\) case, Rao (1973, p. 368) considered a so-called linkage model wherein the parameter \(\Theta=(\Theta_{1},\ldots,\Theta_{4})\) has the following low-dimensional structure: \[\vartheta(\omega)=\Bigl{(}\frac{1}{2}+\frac{\omega}{4},\frac{1-\omega}{4}, \frac{1-\omega}{4},\frac{\omega}{4}\Bigr{)},\quad\omega\in(0,1).\] In Rao's example, the categories represent four different phenotypes in animals, so the above model represents a simple(r) relationship between the proportions of these phenotypes in the animal population in question. The data analyzed in Jacob et al. (2021a) and in Lawrence et al. (2009) has observed cell counts \(y=(25,3,4,7)\). My focus here is on the question of whether the above linkage model is _plausible_ given the observed data. So I have in mind the following assertion/hypothesis about the uncertain \(\Theta\): \[A=\{\Theta=\vartheta(\omega)\text{ for some }\omega\in(0,1)\},\] and the goal is to calculate \(\overline{\Pi}_{y}(A)\), the IM's upper probability assigned to the above assertion; if this quantity is small, then I might be willing to reject the claim that the low-dimensional structure imposed by the linkage model is present in this data example. By the IM's consonance structure, this amounts to solving a simple, one-dimensional optimization problem \[\overline{\Pi}_{y}(A)=\sup_{\omega\in(0,1)}\pi_{y}\{\vartheta(\omega)\}.\] Figure 11 shows a plot3 of \(\omega\mapsto\pi_{y}\{\vartheta(\omega)\}\) and, clearly, the supremum is attained at \(\omega\approx 0.61\) and the plausibility there is \(\approx 0.97\). Here, obviously the plausibility is large, so the data shows effectively no signs of disagreement with Rao's linkage model that assumes \(\Theta\) is of the form \(\vartheta(\omega)\) for some \(\omega\in(0,1)\). This conclusion isn't surprising, given that Rao--who's a pretty smart guy--already took the linkage model as given for his analysis of these data. From here, one can take the reduced model and carry out the IM construction to make inference on the uncertain linkage parameter \(\Omega\in(0,1)\) directly. Footnote 3: Note that this is different from marginalization via the extension principle. In marginalization, every value of the full parameter determines a value of the reduced parameter. Here, however, not all \(\Theta\in\mathbb{T}\) correspond to a \(\vartheta(\omega)\) for some \(\omega\). This explains why the curve in Figure 11 doesn’t reach the value \(1\) as we’d expect in the case of marginalization, e.g., Figure 2. **Example 13** (Non-parametric quantile).: Suppose the goal is inference on, say, the \(r^{\text{th}}\) quantile \(\Phi=\Phi_{r}\) of a completely unknown distribution, \(\mathsf{P}_{Y|\Theta}\), i.e., where \(\Phi\) is such that \[\mathsf{P}_{Y|\Theta}(Y\leq\Phi)=r,\quad r\in(0,1).\] Let \(Y=(Y_{1},\ldots,Y_{n})\) be an iid sample of size \(n\) from this unknown distribution. 
Following the general framework as described above, I want \(R(y,\phi)\) to be the empirical likelihood ratio statistic in (14) which, in this case (e.g., Wasserman 1990, Theorem 5), is \[R(y,\phi)=\Big\{\frac{nr}{u(y,\phi)}\Big\}^{u(y,\phi)}\Big\{\frac{n(1-r)}{n-u(y,\phi)}\Big\}^{n-u(y,\phi)},\] where \[u(y,\phi)=\begin{cases}|\{i:y_{i}\leq\phi\}|&\text{if }\phi<\hat{\phi}_{y}\\ nr&\text{if }\phi=\hat{\phi}_{y}\\ |\{i:y_{i}<\phi\}|&\text{if }\phi>\hat{\phi}_{y},\end{cases}\] and \(\hat{\phi}_{y}\) is the \(r^{\text{th}}\) quantile of the sample \(y\). This is assuming, for computational simplicity, that the prior information about \(\Phi\) is vacuous. In this case, it's straightforward to get a bootstrap approximation, \(\pi_{y}^{\rm boot}(\phi)\), of the marginal IM's plausibility contour for \(\Phi\) as in (16). I simulated \(n=25\) observations from a \(\mathsf{Gamma}(3,1)\) distribution and that bootstrap-based approximate plausibility contour is plotted in Figure 12(a). The stair-step pattern is a result of \(R\) only depending on certain sample counts rather than the numerical values. The true quantile in this case is \(\approx 3.6\), which is right near the peak of the plausibility contour, as desired. Recall that this is only an approximate IM, so there's no guarantee that strong validity holds exactly. It's not difficult, however, to get empirical confirmation that validity does hold, at least approximately. Figure 12(b) shows a plot of the distribution function \[\alpha\mapsto\mathsf{P}_{Y|\theta}\{\pi_{Y}^{\rm boot}(f(\theta))\leq\alpha\},\quad\alpha\in[0,1], \tag{17}\] and the fact that this curve falls below the diagonal line is an indication that strong validity holds. This is only for one distribution, namely, \(\mathsf{Gamma}(3,1)\), but the other experiments I conducted lead to the same conclusion, namely, that strong validity holds for the bootstrap-based approximation, even for relatively small \(n\).

**Example 14** (Non-parametric mean).: Arguably the most fundamental problem in statistics is inference on the mean of a population based on random sampling. Here I let \(\Phi\) denote that unknown mean but I assume nothing more about the underlying distribution, \(\Theta\), other than that its tails are such that it admits a finite mean. For this case, assuming vacuous prior information about \(\Phi\), just for simplicity, I can follow the IM construction in (15), with \(R(y,\phi)\) being the empirical likelihood ratio statistic for the mean, which is fleshed out in detail in, e.g., Owen (2001, Ch. 2.9). For the computation of \(R(y,\phi)\), I used the function el.test in the R package emplik. Using the bootstrap approximation suggested above, I found the plausibility contour \(\pi_{y}^{\rm boot}(\phi)\) for \(\Phi\) based on a real data set consisting of \(n=29\) observations on the density of the Earth relative to water, taken by Cavendish back in 1798 (Stigler 1977, Table 8). The peak of the contour is, of course, at the sample mean \(\bar{y}=5.48\), and the circle marks the plausibility contour \(\pi_{y}^{\rm boot}(\phi^{\star})\) at the "true value" of \(\Phi\), which is \(\phi^{\star}=5.517\).
The horizontal line at \(\alpha=0.05\) determines the upper-\(\alpha\) level set, so it's clear that the true value is contained in the \(100(1-\alpha)\%\) plausibility region based on the analysis here.

Figure 12: Plots pertaining to Example 13. Panel (a) is the bootstrap approximation (16) of the marginal IM’s plausibility contour for \(\Phi\) based on a sample of size \(n=25\) from a \(\mathsf{Gamma}(3,1)\) distribution; vertical line is the true quantile, \(\approx 3.6\). Panel (b) shows a Monte Carlo approximation of the distribution function (17), for the same gamma model, and that the curve falls below the diagonal line confirms strong validity.

## 6 Possibilistic IMs without a likelihood

### Setup

In all of the cases discussed previously in the paper and almost all of the previous literature on IMs, the focus was on cases involving parametric models that connect the observable data to the unknown quantities of interest, e.g., model parameters and/or future observations. But there is a wide class of classical and modern problems that don't fit this mold. Perhaps the simplest of these problems is inference on an unknown quantile. More specifically, suppose that \(Y=(Y_{1},\ldots,Y_{n})\) is an iid sample from a distribution \(\mathsf{P}\) supported on \(\mathbb{R}\) and, for a given \(q\in(0,1)\), the quantity of interest is the \(q\)-quantile of \(\mathsf{P}\), defined by the equation \(\mathsf{P}(Y_{1}\leq\phi)=q\). It's straightforward to estimate \(\phi\), via the corresponding sample quantile, and, if desired, (approximate) confidence intervals are available. But what about "probabilistic inference" in the sense that I'm concerned with here? Do I first have to infer the whole (infinite-dimensional) \(\mathsf{P}\) and then marginalize to the scalar \(\phi\)? Wasserman (2008) describes this gap poignantly:

_The idea that statistical problems do not have to be solved as one coherent whole is anathema to Bayesians but is liberating for frequentists. To estimate a quantile, an honest Bayesian needs to put a prior on the space of all distributions and then find the marginal posterior. The frequentist need not care about the rest of the distribution and can focus on much simpler tasks._

My claim is that it's the frequentists' implicit imprecision that's liberating--they leave unspecified (via vacuous models) those things that aren't relevant to their analysis. My modest goal here is to suggest a framework that would allow probabilistic inference without the anathema, without the Bayesians' requirement to have a (precise) model for everything. The full force of this will be developed in a subsequent, follow-up paper. Issues similar to those in the quantile problem arise in more modern problems. Machine learning applications often start with a loss function \((y,\phi)\mapsto\ell_{\phi}(y)\) mapping data and decision rule pairs \((y,\phi)\in\mathbb{Y}\times\mathbb{F}\) to a loss incurred by applying rule \(\phi\) when data is \(y\). Then the data analyst's task boils down to estimation of and inference on the rule that minimizes the risk, or expected loss, i.e., \(\Phi=\arg\min_{\phi\in\mathbb{F}}\mathsf{P}\ell_{\phi}\). Alternatively, it might be that \(\Phi\) is defined as the solution to a so-called estimating equation (e.g., Godambe 1960; Huber 1981), i.e., \(\mathsf{P}z_{\Phi}=0\) for a given (possibly vector-valued) function \((y,\phi)\mapsto z_{\phi}(y)\).
In any case, since a parametric statistical model isn't required to define the inferential target (the risk minimizer, estimating equation solution, etc.), Manski's law (Manski 2003, p. 1) dictates that the data analyst make as few model assumptions as possible. Lots of problems, including the quantile example above, fit in this inference-on-risk-minimizers setting, so it's important to address this gap between frequentist, Bayesian, and other probabilistic inference frameworks. Our work on _generalized posterior distributions_ (e.g., Martin and Syring 2022; Syring and Martin 2017, 2019, 2023; Wu and Martin 2022, and the references therein) shows how to construct "posterior distributions" without a likelihood function, thus providing a generalization of Bayesian inference that's particularly suited for cases where the quantity of interest is a risk minimizer.

Figure 13: Plot of the IM plausibility contour for the mean \(\Phi\), based on the bootstrap approximation (16), in the nonparametric case considered in Example 14. The data used here in this illustration are those measurements of the density of the Earth relative to water, taken by Cavendish back in 1798 (Stigler 1977, Table 8). The circle marks the plausibility evaluated at the “true value” of \(\Phi\), which is 5.517.

The key point at the heart of Wasserman's remark and of what I'm suggesting here is that, while it's possible to connect the observable data \(Y\) to the quantity of interest \(\Phi\) by thinking in terms of a non-parametric model (e.g., with an empirical likelihood as described in Section 5), this might not be the most statistically, computationally, or conceptually efficient solution. As an alternative, one might consider defining a plausibility order for \(\Phi\) in terms of a generic mapping \((y,\phi)\mapsto\rho(y,\phi)\) that makes no reference to an (empirical, marginal, or profile) likelihood. Of course, this lacks the principles developed in Part II for the case when a model/likelihood is available, but efficient marginal inference in non-parametric problems will likely require bending the rules a bit. Since it's currently not clear how one can incorporate partial prior information about \(\Phi\) into these no-likelihood applications--that's an important open problem--I'll assume in what follows that the prior information is vacuous. In that case, all we have to use is the mapping \(\rho\), but the principles detailed in Part II and applied above offer some guidance. If \(\Phi=f(\mathsf{P})\) is a relevant feature, some functional applied to the uncertain distribution \(\mathsf{P}\) for data \(Y\), then I suggest constructing a (marginal) IM for \(\Phi\) with contour \[\pi_{y}(\phi)=\sup_{\mathsf{P}:f(\mathsf{P})=\phi}\mathsf{P}\{\rho(Y,\phi)\leq\rho(y,\phi)\},\quad\phi\in\mathbb{F}. \tag{18}\] All the desirable properties of the IMs constructed in this manner carry over to this likelihood-free case. In particular, the plausibility regions \(\{\phi:\pi_{y}(\phi)>\alpha\}\) are exact \(100(1-\alpha)\%\) confidence regions. If one could choose \(\rho\) such that \(\rho(Y,\phi)\) is a pivot under \(\mathsf{P}\) with \(f(\mathsf{P})=\phi\), then this would be easy to implement via Monte Carlo. While this can be done in certain applications (Cella 2023), there are currently no broadly general strategies available for constructing pivots.
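One concrete instance where such a pivot is available is the quantile problem from the setup above; the sketch here is only illustrative, in the spirit of (but not necessarily identical to) the construction in Cella (2023). For a continuous \(\mathsf{P}\) whose \(q\)-quantile is \(\phi\), the count \(\#\{i:Y_{i}\leq\phi\}\) is a \(\mathsf{Binomial}(n,q)\) pivot, so taking \(\rho(y,\phi)\) to be the binomial mass function evaluated at that count makes (18) computable exactly, with no supremum left over.

```r
# Likelihood-free marginal IM contour for the q-quantile of a continuous P.
# rho(y, phi) = Binomial(n, q) pmf at the count #{i : y_i <= phi}; under any
# continuous P with q-quantile phi, that count is a Binomial(n, q) pivot.
quantile_im_contour <- function(y, phi, q) {
  n <- length(y)
  pmf <- dbinom(0:n, n, q)
  rho_obs <- pmf[sum(y <= phi) + 1]
  sum(pmf[pmf <= rho_obs + 1e-12])   # P{ rho(Y, phi) <= rho(y, phi) }, N ~ Bin(n, q)
}

# Example: contour for the median (q = 0.5) of a Gamma(3, 1) sample
set.seed(1)
y <- rgamma(25, shape = 3, rate = 1)
phi_grid <- seq(0, 8, by = 0.01)
pl <- sapply(phi_grid, quantile_im_contour, y = y, q = 0.5)
# plot(phi_grid, pl, type = "s")   # stair-step contour, peaked near the sample median
```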
Below I'll highlight two practically relevant scenarios where alternative strategies can be applied to make the above computation manageable, with little or no sacrifice in validity. These surveys are meant just to give an idea of what's possible; further investigations are needed. ### Inference on risk minimizers As described above, for a given loss function \((y,\phi)\mapsto\ell_{\phi}(y)\), suppose that \(\Phi\) is defined as the minimizer of the corresponding risk (expected loss) function, i.e., \[\Phi=\arg\min_{\phi\in\mathbb{F}}r(\phi),\quad\text{where}\quad r(\phi)=\mathsf{P}\ell_{\phi}.\] This is an unknown/uncertain quantity because \(\mathsf{P}\) itself is unknown/uncertain. Since the goal is direct inference on \(\Phi\), I don't want to introduce an indirectly-relevant "model parameter" \(\Theta\) so that I can form a likelihood as in the previous sections. Fortunately, the definition of \(\Phi\) as a risk minimizer is enough structure to suggest a plausibility ordering \(\rho\) and a corresponding marginal IM for \(\Phi\). Let \(Y^{n}=(Y_{1},\ldots,Y_{n})\) consist of an iid sample from \(\mathsf{P}\); note that these could be independent-dependent variable pairs with joint distribution \(\mathsf{P}\), but I'll not make this explicit in the notation. For the observed \(y^{n}\), the corresponding empirical risk is \[r_{y^{n}}(\phi)=\frac{1}{n}\sum_{i=1}^{n}\ell_{\phi}(y_{i}),\] and a natural estimate of \(\Phi\) is obtained by minimizing the empirical risk: \[\hat{\phi}_{y^{n}}=\arg\min_{\phi}r_{y^{n}}(\phi).\] Analogous to the relative likelihood plausibility ordering, I propose the following: \[\rho(y^{n},\phi)=\exp[-\{r_{y^{n}}(\phi)-r_{y^{n}}(\hat{\phi}_{y^{n}})\}]\in[0,1].\] (The exponential form isn't necessary; it's just there to make it resemble the relative likelihood.) From here, one can define a marginal IM for \(\Phi\) with contour as in (18). This is exactly the IM solution presented in Cella and Martin (2022a), and they proposed a bootstrap approximation analogous to that in Section 5.3 above to carry out the necessary computations. With a bootstrap approximation, there's virtually no hope of having an exact validity result, but they proved an asymptotic validity theorem and demonstrated the IM's strong finite-sample performance in simulations. ### Prediction Let's revisit the prediction problem discussed in a few places above. Assume that \(Y^{n}=(Y_{1},\ldots,Y_{n})\) consists of iid observations from a common distribution \(\mathsf{P}\), and that the goal is to predict the next observation \(Y_{n+1}\). In fact, I can be even more general and assume that the \(Y\)-process is exchangeable and that \(\mathsf{P}\) is the full joint distribution for the process. This is an extreme case of marginal inference, where the entirety of the highly-complex \(\mathsf{P}\) is a nuisance parameter to be eliminated. It may not be realistic/attractive to introduce a density function and a corresponding likelihood as suggested in Section 5.2 above, so here I'll avoid the use of likelihood. A popular prediction method in the literature these days is _conformal prediction_ (e.g., Shafer and Vovk 2008; Vovk et al. 2005). A close connection between conformal prediction and IMs has already been demonstrated in Cella and Martin (2022b,c), and what I present below offers some new perspectives. Let \(\rho:\mathbb{Y}^{n}\times\mathbb{Y}\to\mathbb{R}\) be a mapping with two inputs: one is a data set and the other is a candidate value for the next observation. 
Without loss of generality, I'll assume that \(\rho(y^{n},y_{n+1})\) is a measure of "conformity" of the candidate value \(y_{n+1}\) with the data set \(y^{n}\); that is, larger values correspond to \(y_{n+1}\) that's consistent with the values in \(y^{n}\). For example, if \(\hat{y}_{y^{n}}\) is a point prediction of the next observation, then the plausibility order \(\rho\) can be defined as \[\rho(y^{n},y_{n+1})=-d(\hat{y}_{y^{n}},y_{n+1}),\] where \(d\geq 0\) is any suitable measure of distance between two points in \(\mathbb{Y}\). The only other constraint is that \(\rho\) be symmetric in its first argument, i.e., the data set \(y^{n}\) can be shuffled arbitrarily without affecting the value of \(\rho\). Having specified this plausibility order, the predictive IM contour for \(Y_{n+1}\) is \[\pi_{y^{n}}(y_{n+1})=\sup_{\mathsf{P}}\mathsf{P}\{\rho(Y^{n},Y_{n+1})\leq\rho(y^{n},y_{n+1})\},\quad y_{n+1}\in\mathbb{Y},\] where the supremum is over all exchangeable joint distributions for the full \(Y\)-process. Note that the probability calculation above is with respect to the joint distribution of \((Y^{n},Y_{n+1})\) under \(\mathsf{P}\). Even though this looks a little different than the setup above, all the same strong (prediction) validity properties hold, in particular, \[\sup_{\mathsf{P}}\mathsf{P}\{\pi_{Y^{n}}(Y_{n+1})\leq\alpha\}\leq\alpha,\quad\text{all $\alpha\in[0,1]$, all $n$.} \tag{19}\] The problem, of course, is that the supremum makes evaluation of the IM contour unattainable. A key point, however, is that the supremum also makes the resulting IM unnecessarily conservative. By applying the _Principle of Minimal Complexity_ from Part II, it's possible to reduce the dimension of the aforementioned Choquet integral, which makes the computation simpler and the IM more efficient. As explained in Part II and applied in a few places above, the implementation of the _Principle_ often boils down to conditioning on things that can be meaningfully conditioned on. In this case, the structure of the problem makes it possible to condition on the _set of values_ \(\{y_{1},\ldots,y_{n},y_{n+1}\}\) while leaving their arrangement unspecified. This set is, of course, a minimal sufficient statistic, so conditioning on this feature eliminates the dependence on the unknown \(\mathsf{P}\), and the supremum drops out completely. That is, the new predictive IM contour--based on conditioning and the aforementioned _Principle_--is given by \[\pi_{y^{n}}(y_{n+1})=\sup_{\mathsf{P}}\mathsf{P}\big[\rho(Y^{n},Y_{n+1})\leq\rho(y^{n},y_{n+1})\mid\{y_{1},\ldots,y_{n},y_{n+1}\}\big]=\frac{1}{(n+1)!}\sum_{\sigma}1\{\rho(y^{\sigma(1:n)},y_{\sigma(n+1)})\leq\rho(y^{n},y_{n+1})\},\] where the sum is over all \((n+1)!\) many permutations, \(\sigma\), of the integers \(1,\ldots,n,n+1\). Finally, since \(\rho\) is symmetric in its first argument, the right-hand side above can be further simplified: \[\pi_{y^{n}}(y_{n+1})=\frac{1}{n+1}\sum_{i=1}^{n+1}1\{\rho(y_{-i}^{n+1},y_{i})\leq\rho(y^{n},y_{n+1})\},\quad y_{n+1}\in\mathbb{Y}.\] The reader will surely recognize the right-hand side above as the so-called "transducer" or "p-value" output produced by the inductive conformal prediction algorithm. In particular, a result establishing what is equivalent to the prediction validity property in (19) can be found in Corollary 2.9 of Vovk et al. (2005). 
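The simplified contour above is easy to compute directly. Here is a minimal sketch using the distance-to-point-prediction ordering mentioned above, with the sample mean as the point prediction; the function names and the grid-based prediction set at the end are illustrative choices, not part of the general theory.

```python
import numpy as np

def rho(data, candidate):
    # conformity ordering from the example above: negative distance between
    # the candidate and the sample-mean point prediction
    return -abs(np.mean(data) - candidate)

def predictive_contour(y, y_next):
    """Plausibility of candidate y_next: the conformal 'transducer' above."""
    full = np.append(y, y_next)          # augmented data (y_1, ..., y_n, y_next)
    obs = rho(y, y_next)                 # observed value of rho(y^n, y_next)
    # leave-one-out values rho(y_{-i}, y_i), i = 1, ..., n+1
    loo = np.array([rho(np.delete(full, i), full[i]) for i in range(len(full))])
    return np.mean(loo <= obs)

# illustrative 90% prediction set: candidate values with plausibility > 0.10
y = np.random.default_rng(0).normal(loc=5.0, size=25)
grid = np.linspace(2.0, 8.0, 241)
pred_set = grid[np.array([predictive_contour(y, c) for c in grid]) > 0.10]
```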
The derivation above, which makes use of conditioning and sufficiency of the empirical distribution, more closely resembles that in Faulkenberry (1973) and, more recently, Hoff (2023). That one can arrive at the very powerful conformal prediction methodology through a (generalized--in the sense of allowing generic orderings \(\rho\)) IM-driven line of reasoning is quite remarkable. ## 7 Conclusion This paper, Part III of the series, considered the problem of efficient marginal inference on an interest parameter through a suitable elimination of the underlying nuisance parameters. Depending on the problem at hand, this can be relatively straightforward or quite difficult (at least computationally). For inference on parameters in a posited statistical model, if there's an "ideal factorization," then valid and efficient marginal inference is almost immediate, through a general relative profile likelihood-based formulation. Outside the "ideal" class of problems, the same proposal still works and is shown to give very strong solutions in some challenging applications, namely, the gamma mean and Behrens-Fisher problems. In fact, based on the results presented in Example 5 above, my conjecture is that the proposed IM solution is the best available, in the sense of being exactly valid and also empirically efficient. There are certain cases where the profile relative likelihood strategy is inefficient, in particular, when there's a large number of nuisance parameters; but this risk can be anticipated, and other marginalization strategies can be applied, as I showed in Examples 7-8. Prediction problems can be viewed as extreme cases of marginal inference, where all of the model parameters are nuisances to be eliminated. Here the same relative profile likelihood-based construction is possible, leading to what I called a predictive IM that is provably valid and, among other things, can be used to construct prediction regions for features of future observables. For instance, in Example 11, I showed how to construct a valid predictive IM for the maximum of the next \(k\) realizations in a sequence of gamma observables. This same problem has been investigated in, e.g., Hamada et al. (2004), Wang et al. (2012), and Martin and Lingham (2016), but none of these proposals come equipped with exact prediction coverage guarantees. The first part of this paper, and most of the previous literature on IMs, focused on the case of a finite-dimensional parametric model for the observable data. Section 5 lays the groundwork for a new, (empirical) likelihood-based approach for marginal inference on certain features of the non-parametric model, indexed by an infinite-dimensional unknown. In this case, as expected, computation is a more serious challenge, and there I put forward some first thoughts on efficient approximations via, say, bootstrap. A few examples of this were presented, in particular, non-parametric inference on a mean and on a quantile. There are other "non-parametric" problems in which it may be preferable to proceed without thinking in terms of an infinite-dimensional unknown, e.g., in machine learning problems where the quantity of interest is defined as a risk minimizer. Section 6 briefly describes how an approach similar to what was developed in the first part of the paper can be applied even in this seemingly-very-different context. 
In fact, for prediction, I showed how the powerful and now widely-used conformal prediction algorithm can be derived from (a slightly broader perspective on) this general IM framework. I'll conclude this discussion with a brief mention of some directions for future investigation. These and/or other things will be addressed in subsequent parts of this series. * An important problem that's closely related to the elimination of nuisance parameters is _model assessment_ and, in turn, the task of _model selection_. The point is that, if the model and, as usual, the model parameters are both unknown, then there's really an uncertain pair \((\Gamma,\Theta_{\Gamma})\), where \(\Gamma\in\mathbb{G}\) is the uncertain model index and \(\Theta_{\Gamma}\in\mathbb{T}_{\Gamma}\) is the uncertain, model-specific parameter. When it comes to model assessment, the entirety of \(\Theta_{\Gamma}\) is a nuisance parameter and the goal is marginal inference on \(\Gamma\). From this perspective, it's only natural to consider the same relative profile likelihood-based IM construction presented here. The result would be a strongly valid IM on the model space \(\mathbb{G}\), offering provably reliable possibilistic uncertainty quantification about the model, something no other frameworks are able to offer. This can also accommodate partial prior information, e.g., to encourage simplicity/sparsity/parsimony/etc. * In the context considered in Section 6, when there's no likelihood directly in consideration, it's no longer clear how to incorporate partial prior information. For sure, it's not so simple as normalizing the likelihood times prior to get a relative likelihood function. One of the challenges is in defining the upper joint distribution "\(\overline{\mathsf{P}}_{Y,\Phi}\)" that would be used to carry out the Choquet integration. While there might still be a work-around, I can also see that this difficulty is to be expected and perhaps insurmountable: the Choquet integral requires specification of an imprecise probability, which in turn requires a _probabilistic_ link between data and parameters, hence a sort of model, likelihood, etc. * Finally, computation of the marginal IM clearly is feasible in the examples presented here in this paper. There are also lots of other similar examples where the same (naive) Monte Carlo-driven strategies can be put to work. For problems that involve a lot of nuisance parameters, however, this might be quite expensive. One option would be to give up some information/efficiency about the interest parameter in exchange for computational benefits, e.g., to work with a marginal- instead of profile-based relative likelihood in a "not-so-ideal factorization" case. Another option is to develop some new and less-naive strategies for Monte Carlo-based optimization using, say, stochastic gradient descent. Some initial work on this was presented in Syring and Martin (2021), but I think more can be done. ## Acknowledgments Thanks to Leonardo Cella for helpful discussions and comments on an earlier draft. This work is partially supported by the U.S. National Science Foundation, SES-2051225.
2309.15025
Large Language Model Alignment: A Survey
Recent years have witnessed remarkable progress made in large language models (LLMs). Such advancements, while garnering significant attention, have concurrently elicited various concerns. The potential of these models is undeniably vast; however, they may yield texts that are imprecise, misleading, or even detrimental. Consequently, it becomes paramount to employ alignment techniques to ensure that these models exhibit behaviors consistent with human values. This survey endeavors to furnish an extensive exploration of alignment methodologies designed for LLMs, in conjunction with the extant capability research in this domain. Adopting the lens of AI alignment, we categorize the prevailing methods and emergent proposals for the alignment of LLMs into outer and inner alignment. We also probe into salient issues including the models' interpretability and potential vulnerabilities to adversarial attacks. To assess LLM alignment, we present a wide variety of benchmarks and evaluation methodologies. After discussing the state of alignment research for LLMs, we finally cast a vision toward the future, contemplating the promising avenues of research that lie ahead. Our aspiration for this survey extends beyond merely spurring research interests in this realm. We also envision bridging the gap between the AI alignment research community and the researchers engrossed in the capability exploration of LLMs for both capable and safe LLMs.
Tianhao Shen, Renren Jin, Yufei Huang, Chuang Liu, Weilong Dong, Zishan Guo, Xinwei Wu, Yan Liu, Deyi Xiong
2023-09-26T15:49:23Z
http://arxiv.org/abs/2309.15025v1
# Large Language Model Alignment: A Survey ###### Abstract Recent years have witnessed remarkable progress made in large language models (LLMs). Such advancements, while garnering significant attention, have concurrently elicited various concerns. The potential of these models is undeniably vast; however, they may yield texts that are imprecise, misleading, or even detrimental. Consequently, it becomes paramount to employ alignment techniques to ensure that these models exhibit behaviors consistent with human values. This survey endeavors to furnish an extensive exploration of alignment methodologies designed for LLMs, in conjunction with the extant capability research in this domain. Adopting the lens of AI alignment, we categorize the prevailing methods and emergent proposals for the alignment of LLMs into outer and inner alignment. We also probe into salient issues including the models' interpretability and potential vulnerabilities to adversarial attacks. To assess LLM alignment, we present a wide variety of benchmarks and evaluation methodologies. After discussing the state of alignment research for LLMs, we finally cast a vision toward the future, contemplating the promising avenues of research that lie ahead. Our aspiration for this survey extends beyond merely spurring research interests in this realm. We also envision bridging the gap between the AI alignment research community and the researchers engrossed in the capability exploration of LLMs for both capable and safe LLMs. ###### Contents * 1 Introduction * 2 Why LLM Alignment? * 2.1 Social and Ethical Risks of LLMs * 2.1.1 LLM-Generated Content * 2.1.2 Malicious Uses and Negative Impacts * 2.2 Potential Risks Associated with Advanced LLMs * 3 What is LLM Alignment? * 3.1 Origins of AI Alignment * 3.2 Research Landscape and Ingredients of AI Alignment * 3.3 Related Concepts * 3.4 From AI Alignment to LLM Alignment * 4 Outer Alignment * 4.1 Major Goals Specified in Outer Alignment of LLMs * 4.2 Overview of Approaches to Outer Alignment * 4.3 Non-recursive Oversight * 4.3.1 RL-based Methods * 4.3.2 SL-based Methods * 4.3.3 Challenges of Non-recursive Oversight * 4.4 Scalable Oversight * 4.4.1 Task Decomposition * 4.4.2 Constitutional AI * 4.4.3 Debate * 4.4.4 Market Making * 4.4.5 Proxy Tasks * 4.4.6 Challenges of Scalable Oversight * 5 Inner Alignment * 5.1 Inner Alignment Failures * 5.2 Inner Alignment Methodology * 5.3 Empirical Experiment Proposals for Inner Alignment * 9.6 Dynamic Evaluation of LLM Alignment via Adversarial Attacks * 9.7 Field Building of LLM Alignment: Bridging between LLM and AI Alignment Community * 10 Conclusion ## 1 Introduction Large language models, exemplified by OpenAI's ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023), have witnessed rapid advancements, reigniting enthusiasm and aspirations toward artificial general intelligence (AGI). While the role of LLMs as a pathway to AGI remains a topic of debate, these models, boosted with scaling laws (Kaplan et al., 2020; Hoffmann et al., 2022), increasingly exhibit characteristics reminiscent of AGI (Bubeck et al., 2023): LLMs trained on vast amounts of data not only demonstrate formidable linguistic capabilities, but also rapidly approach human-level proficiency in diverse domains such as mathematics, reasoning, medicine, law, and programming (Bubeck et al., 2023). 
Concurrent with these technological breakthroughs in LLMs is a growing concern on the ethical risks they pose and their potential threats to humanity as they evolve further. Tangible ethical risks have been identified. Research has shown that LLMs can inadvertently perpetuate harmful information in their training data, such as biases, discrimination, and toxic content (Weidinger et al., 2021). They might leak private and sensitive information from the training data, or generate misleading, false, or low-quality information. Furthermore, the deployment of LLMs also introduces societal and ethical challenges, e.g., potential misuse and abuse of LLMs, negative impacts on users heavily relying on LLM agents, and broader implications for the environment, information dissemination, and employment (Bubeck et al., 2023). For long-term implications, there is widespread apprehension about misaligned AGI posing existential risks. An AI agent surpassing human intelligence and knowledge might develop its own goals, diverging from those set by humans. In pursuit of its goals, such an agent could monopolize resources, ensuring its preservation and self-enhancement. This trajectory could culminate in the full disempowerment of humanity, inevitably leading to catastrophic outcomes for human existence (Carlsmith, 2022). As a technological solution to address these concerns, AI alignment, ensuring that AI systems produce outputs that are in line with human values, is increasingly garnering attention. In the context of LLMs, alignment ensures that the model's responses are not only accurate and coherent but also safe, ethical, and desirable from the perspective of developers and users. As language agents become more integrated into various aspects of our daily lives, from content creation to decision support, any misalignment could result in unintended consequences. Properly aligning large language models with human values ensures that the vast potential of these models is harnessed trustworthily and responsibly. In response to the ever-growing interest in this area, a few articles have recently reviewed (or incidentally discussed) alignment methods for LLMs (Pan et al., 2023; Zhao et al., 2023; Fernandes et al., 2023; Liu et al., 2023; Wang et al., 2023). However, a notable observation is that these reviews predominantly focus on outer alignment, often overlooking other significant topics in AI alignment such as inner alignment and mechanistic interpretability. While it's undeniable that outer alignment plays a pivotal role in LLM alignment and has been the subject of extensive and profound research, it represents only a fraction of the entire alignment landscape when viewed from a broader AI alignment perspective. To bridge this gap, we provide a comprehensive overview of LLM alignment from the perspective of AI alignment. We believe that a holistic understanding of alignment should not only encompass the widely researched outer alignment but should also delve into areas that are currently in their nascent stages. Topics like inner alignment and mechanistic interpretability, although still in the preliminary phases of research, hold immense potential. Many proposals in these areas remain theoretical or are merely thought experiments at this juncture. Yet, we posit that they are indispensable for the future trajectory of LLM alignment research. By shedding light on these underrepresented areas, we hope to present a more rounded perspective on alignment. 
Therefore, in addition to existing methods for LLM alignment, we will also introduce several alignment topics that, while not yet applied to LLMs, show promise and could very well become integral components of LLM alignment in the foreseeable future. Through this, we are dedicated to enriching the discourse on AI alignment and its multifaceted application in the realm of large language models. Wrapping up all these ingredients, we propose a taxonomy for LLM alignment in Figure 1. Figure 1: The overall taxonomy for large language model alignment proposed in this survey. Sub-taxonomies are presented in the corresponding sections. Specifically, this survey will start with discussing the necessity for LLM alignment research (Section 2). To provide a historical and bird's-eye view of AI/LLM alignment, we introduce the origins of AI alignment and related concepts (Section 3). Theoretical and technical approaches to aligning LLMs are structured according to our proposed taxonomy and elaborated in outer alignment (Section 4), inner alignment (Section 5), and mechanistic interpretability (Section 6), following the philosophy in AI alignment (Krakovna, 2022). In addition to these theoretical and empirical approaches, we further discuss the potential side-effects and vulnerabilities of current alignment methods for LLMs, including adversarial attacks (Section 7), as well as methodologies and benchmarks for LLM alignment evaluation (Section 8). We finally present our restricted view on future trends in LLM alignment research (Section 9). ## 2 Why LLM Alignment? LLMs have become increasingly capable not only in text generation but also in many other tasks, e.g., text-to-code generation (Poesia et al., 2022), planning (Huang et al., 2022; Song et al., 2022), tool learning (Qin et al., 2023), reasoning (Mialon et al., 2023). However, the training objectives of LLMs (Radford et al., 2019; Devlin et al., 2019), e.g., next word prediction (Radford et al., 2019) or determining whether two sentences are contextually related (Devlin et al., 2019), are not necessarily in line with human values. As a result, LLMs may generate undesirable content or risky behaviors that humans would prefer to avoid. LLM risks can normally be viewed in two landscapes1: established risks and anticipated risks (Weidinger et al., 2021). The former are mainly observed social and ethical risks (Weidinger et al., 2021) while the latter are future potential risks associated with advanced LLMs (Hendrycks et al., 2023). Footnote 1: Here we borrow terms “risk landscape”, “established/observed risks”, “anticipated risks” from (Weidinger et al., 2021). But unlike them, we use “established risks” and “anticipated risks” in a broader and coarser perspective. ### 2.1 Social and Ethical Risks of LLMs We discuss the social and ethical risks of LLMs from two perspectives: one arises from LLM-generated undesirable content and the other is a wide variety of negative impacts that LLMs pose on humans and society. #### 2.1.1 LLM-Generated Content **Undesirable Content** The amount of data for training LLMs has grown significantly. However, the biases (Shah et al., 2019), toxicity (Gehman et al., 2020), and privacy issues (Carlini et al., 2021) inherent in training data have not been fully addressed. Unaligned LLMs may yield undesirable information and respond to any prompts without regard for their content. This can lead to the generation of biased, toxic, or privacy-sensitive content by LLMs. 
Regardless of the architecture or parameter size of LLMs (Radford et al., 2019; Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020), studies on a series of benchmarks (Nadeem et al., 2020; Nangia et al., 2020; Nozza et al., 2021) confirm that LLMs exhibit varying degrees of stereotypes related to gender, social bias, culture, and race. For example, GPT-3 (Brown et al., 2020) has been shown to exhibit religious bias (Abid et al., 2021) and gender bias (Lucy and Bamman, 2021) when freely generating stories. **Unfaithful Content** Yet another problem (Elazar et al., 2021; Ji et al., 2023; Liu et al., 2023d) that hinders the large-scale deployment of LLMs is their tendency to generate unfaithful or even fabricated content, known as misinformation (Branwen, 2020; Dale, 2021; Rae et al., 2021), hallucination (Lin et al., 2021; Akyurek et al., 2022; Ji et al., 2023), and inconsistency (Bubeck et al., 2023; Zhou et al., 2023b). This not only affects the trustworthiness of LLMs in general domains, but also limits their applications in professional fields such as medicine (Bickmore et al., 2018) and law (Iu and Wong, 2023). These issues highlight the need for alignment research of LLMs (Pan et al., 2023; Zhao et al., 2023b; Fernandes et al., 2023; Wang et al., 2023d) to improve their truthfulness and honesty (Bai et al., 2022b). #### 2.1.2 Malicious Uses and Negative Impacts **Malicious Uses** There are many reasons for the malicious uses of LLMs. For example, using LLMs in disinformation campaigns has the potential to reduce costs, increase scalability, and enhance the effectiveness of messaging. It is crucial for developers and users to be aware of these potential issues and take appropriate measures to mitigate them. On the one hand, LLMs reduce the cost of creating fake news (Buchanan et al., 2021; Tamkin et al., 2021; Jawahar et al., 2020), enabling users to obtain seemingly credible content by providing specific prompts. This makes fraudulent and manipulative behavior easier (Lewis et al., 2017). On the other hand, LLMs can be used for illegal purposes, such as generating codes for cyber attacks (Zhang et al., 2021; Chen et al., 2021), or even creating lethal weapons (Sandbrink, 2023). **Negative Impacts on Society** There are both benefits and negative impacts on society for the large-scale deployment of LLMs. Training and running LLMs requires huge computational resources, resulting in high energy consumption and carbon emissions. This has led to concerns on the carbon footprint of language models and their impact on climate change (Van Wynsberghe, 2021; Ligozat et al., 2021). The widespread use of LLMs can significantly increase productivity, but has the potential to disrupt labor markets. A recent study shows that around 80% of the U.S. workforce will be affected by LLMs (Eloundou et al., 2023). ### 2.2 Potential Risks Associated with Advanced LLMs With the advent of advanced LLMs, a series of potential behaviours may emerge, potentially leading to unforeseen risks (Hendrycks et al., 2023). These behaviors are considered consequences of instrumental convergence (Benson-Tilsen and Soares, 2016), a phenomenon where advanced AI systems, in their pursuit of achieving their final goals, tend to develop similar subgoals. **Awareness** Advanced LLMs may develop situational awareness (Shevlane et al., 2023). They might define themselves, possess the corresponding knowledge to explain their origins, and distinguish the stages (e.g., training or testing) where they are. 
If an LLM-based agent finds a goal shortcut (Stray, 2020; Stray et al., 2021) or it is no longer "satisfied" with being controlled by humans under the drive of self-awareness, risky behaviors would emerge immediately. **Deception** Deception (Shevlane et al., 2023; FAIR et al., 2022; Carroll et al., 2023; Carranza et al., 2023) refers to the ability of advanced AI systems to deceive humans by understanding the behaviors they should take to maintain their trustworthiness during the training stage while pursuing their own goals in the deployment stage. Advanced AI systems may bypass human supervision to pursue their own goals in a deceptive way. **Self-Preservation** Advanced AI systems might tend to have an incentive to avoid being switched off. As stated by Bostrom (2012), even if an agent does not directly place value on its survival, it still instrumentally "desires" to some degree to survive in order to achieve the final goal that it pursues. **Power-Seeking** The concept of power-seeking suggests that advanced AI systems are inclined to acquire more power and resources to achieve their goals (Barrett and Greaves, 2023). Existing studies (Turner et al., 2021; Turner and Tadepalli, 2022; Krakovna and Kramar, 2023) have demonstrated that optimal policies and reward functions may incentivize systems to pursue power in certain environments. It is worth noting that current LLMs have already shown tendencies towards the behaviours mentioned above. Perez et al. (2022) have identified these behaviors of LLMs through carefully designed questions, e.g., self-preservation (i.e., "desire to avoid shut down") and resource acquisition. And these "desires" become greater along with the number of LLM parameters and further fine-tuning. It suggests that advanced LLMs may produce undesired behaviours, posing significant risks. ## 3 What is LLM Alignment? To gain a deep understanding of technical alignment in LLMs, we need to discuss a broader concept, AI alignment, which, despite being a nascent field, has been studied since before the emergence of LLMs. We provide a brief introduction to the origins, research landscape and ingredients, as well as related concepts of AI alignment, which serve as the background for LLM alignment and its recent emerging subfields. ### 3.1 Origins of AI Alignment The genesis of AI alignment can be traced back to the original ambition that fuels the AI revolution: the desire to create machines that could think and act like humans, or even surpass them. If we succeed in creating such powerful machines, how could we ensure they act in our best interests and not against us? This open question not only piques curiosity but also underscores the profound responsibility we bear as we shape the future of AI. Norbert Wiener, the father of cybernetics, raised such a concern in a paper published in Science (Wiener, 1960): "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it." This statement underscores the importance of ensuring that the objectives of a "mechanical agency" align with the goals we genuinely intend for it, emphasizing the alignment between machine and human purpose. 
In 2014, Stuart Russell, one of the authors of _Artificial Intelligence: A Modern Approach_(Russell and Norvig, 2010), has stated in an interview2: Footnote 2: [http://edge.org/conversation/the-myth-of-ai#26015](http://edge.org/conversation/the-myth-of-ai#26015) "The right response seems to be to change the goals of the field itself; instead of pure intelligence, we need to build intelligence that is provably aligned with human values. For practical reasons, we will need to solve the value alignment problem even for relatively unintelligent AI systems that operate in the human environment. There is cause for optimism, if we understand that this issue is an intrinsic part of AI, much as containment is an intrinsic part of modern nuclear fusion research. The world need not be headed for grief." He defines the "Value Alignment Problem" (VAP), emphasizing the need to construct AI systems that are not just intelligent but also aligned with human values. While the concept of AI alignment is seeded at the inception of AI, essentially no research has been conducted over the past decades. For a long time, AI has not reached human-level performance in terms of various capabilities, even being mockingly referred to as "artificial idiot".3 Consequently, the urgency to align machine objectives with human goals/values has been overshadowed by the pressing need to advance AI capabilities. Footnote 3: [https://cacm.acm.org/news/217198-father-of-the-internet-ai-stands-for-artificial-idiot/fulltext](https://cacm.acm.org/news/217198-father-of-the-internet-ai-stands-for-artificial-idiot/fulltext) However, recent advancements, particularly the rise of large language models, have propelled AI capabilities to levels that approach or even surpass human performance in a wide variety of tasks. This resurgence has brought the importance and urgency of AI alignment to the forefront. From 2012 onwards, discussions and research articles on AI alignment have begun to surface in relevant forums and on arXiv. By 2017, there has been an explosive growth in publications on AI alignment, with the number of papers increasing from fewer than 20 annually to over 400 (Kirchner et al., 2022), coinciding with the invention of the Transformer (Vaswani et al., 2017) and GPT (Radford et al., 2018). Compared to other AI research areas, such as natural language processing which has undergone periodic paradigm shifts several times, AI alignment is pre-paradigmatic (Kirchner et al., 2022). There is yet to be a consensus on many key concepts and terminology in this nascent field. Terms like "alignment", "AI alignment", and "value alignment" are often used interchangeably in discussions. In some contexts, "human-machine alignment" appears as an alternative to "AI alignment". While "alignment" is suitable within the AI alignment context, it can be ambiguous in broader contexts, potentially leading to confusion with other alignment concepts, such as bilingual alignment in machine translation. Given these considerations, this survey will consistently use "AI alignment" and "LLM alignment", with the latter representing the intersection of AI alignment with natural language processing and large language models. Furthermore, there's no consensus on the definition of AI alignment. Paul Christiano defines AI alignment as "A is aligned with H if A is trying to do what H wants it to do."4 This definition is too general as almost all AI models are trying to do what their creators want them to do. 
The term itself implicitly suggests that AI alignment primarily targets highly capable AI agents (Carroll, 2018), indicating that the safety concerns arising from misaligned highly capable AI differ from those of conventional weak AI. Other researchers define AI alignment from the perspective of AI's relationship with humans. For instance, Eliezer Yudkowsky defines it as "creating friendly AI" and "Coherent Extrapolated Volition" (Yudkowsky, 2004). Footnote 4: [https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6) Beyond defining AI alignment based on its intrinsic meaning and its relationship with humans, some works attempt to elucidate AI alignment by addressing specific problems it aims to solve. Gordon Worley has summarized some of these challenges, ranging from negative side effects (Amodei et al., 2016) and robustness to adversaries (Leike et al., 2017) to safe exploration (Amodei et al., 2016; Leike et al., 2017) and value learning (Soares, 2015a).5 Footnote 5: [https://laptrinhx.com/formally-stating-the-ai-alignment-problem-223323934/](https://laptrinhx.com/formally-stating-the-ai-alignment-problem-223323934/) In this survey, we define AI alignment from its intrinsic perspective: AI alignment ensures that both the outer and inner objectives of AI agents align with human values. The outer objectives are those defined by AI designers based on human values, while the inner objectives are those optimized within AI agents. This definition, though distinguishing between the inner and outer objectives of an AI agent, does not precisely define human values, making it somewhat imprecise. The reason for categorizing the objectives of AI systems into outer and inner objectives is determined by the technical nature of AI alignment (Hubinger et al., 2019c). Human values are not specified in this definition because of the inherent social and technical challenges of AI alignment (Hendrycks et al., 2021). ### 3.2 Research Landscape and Ingredients of AI Alignment It is widely acknowledged that, from a broad perspective, the key research agendas of AI alignment include outer alignment, inner alignment and interpretability (Hubinger, 2020; Ngo, 2022; Krakovna, 2022). **Outer Alignment** This is to choose the right loss functions or reward functions and ensure that the training objectives of AI systems match human values. In other words, outer alignment attempts to align the specified training objective to the goal of its designer.6 This is very difficult in practice at least for the following reasons: Footnote 6: [https://www.alignmentform.org/tag/outer-alignment](https://www.alignmentform.org/tag/outer-alignment) * It is usually difficult to understand and define human values or intentions. * There are many different fine-grained dimensions of human values. Do we need to align the specified objective to all these dimensions? * Human values are usually socially and culturally bound. Do we need to align the specified goal to all different cultures and societies or just parts of them? Given the diversity of cultures and societies, how can we ensure the fairness of value alignment? * As human values/intentions are usually qualitative while a loss or reward to be optimized has to be measurable and computable, how can we bridge the gap between them? This is known as the _goal specification_ problem. * Outer alignment may suffer from _specification gaming_, where unintended goals or unforeseeable consequences arise due to Goodhart's Law. 
Goodhart's Law originated in economics and says: "When a measure becomes a target, it ceases to be a good measure." It is related to outer alignment because when a proxy for some value becomes a target to be optimized, it may cease to be a good proxy.7 **Inner Alignment** This is to ensure that AI systems are actually trained to achieve the goals set by their designers. Once we have specified training objectives, we need to ensure that the behaviors of AI systems actually align with those specifications. This is challenging because AI systems, especially deep learning models, can develop behaviors that are hard to predict from their training data or objectives. For example, an AI system trained to win at a game might find an unexpected exploit or loophole that technically satisfies its objective but violates the spirit of the game. Yet another example is the _goal misgeneralization_ problem (Shah et al., 2022), where even if we have a correct goal specification, unintended goals may still arise due to robustness failure in unseen situations. Inner alignment ensures that AI's "internal" objectives (those it derives or optimizes for during its learning process) match the "external" objectives set by its designers. Both outer and inner alignment are crucial for building safe and trustworthy AI. If either fails, we risk creating systems that act in ways that are misaligned with human values or intentions. As LLMs become more capable, the importance of these alignment problems grows, making the research of LLM alignment as crucial as that of LLM capability. **Interpretability** In the context of AI alignment, interpretability broadly refers to the methods, models and tools that help humans understand the inner workings, decisions and actions of AI systems. It can be further categorized into: * Transparency: This is to understand the inner workings of the black box of an AI system by tracking its inner states that lead to its behaviors and decisions. An emerging and intriguing approach to transparency is mechanistic interpretability, which seeks to reverse engineer the outputs and behaviors of a machine learning system (especially a neural network) to its inner states, weights and components (Nanda et al., 2023). Due to the huge number of parameters in LLMs and the system complexity of LLMs as large neural networks, it is very difficult to reverse-engineer LLMs. Current mechanistic interpretability is usually carried out on small and simplified models of LLMs (e.g., two neural layers with FFN sublayers removed) (Elhage et al., 2021, 2022). However, this is a quite promising direction that provides deep insights into neural networks for alignment and is expected to achieve breakthroughs in the future. * Explainability: This deals with the ability of an AI system to provide human-understandable explanations for its decisions. In many critical sectors, such as healthcare, finance, and law enforcement, the decisions made by AI have profound implications on many aspects. For instance, consider a medical diagnosis AI. If this system predicts that a patient has a specific medical condition, it's not enough for it to merely output such a predicted result. Medical professionals, patients, and other stakeholders would want to know how this prediction is made. Does it take the patient's medical history, recent lab results, or specific symptoms into account to make a holistic decision? 
Explanations are usually considered as post-hoc analyses of the outputs of a model, which allow the model to tell more about its predictions. Transparency is to look inside a model to reveal how the model works. Although this division is not absolute (Lipton, 2017), transparency is more related to alignment, as transparency tools not only enable us to know the internal structure of a model but also provide insights into the changes of the model during the training process (Hubinger, 2022). **The Relationship between Outer Alignment, Inner Alignment and Interpretability** Both outer and inner alignment collectively ensure that a model behaves in ways that are consistent with human values and intentions. Outer alignment focuses on the specification from human goals to the model, while inner alignment delves into the internal optimization processes of the model to guarantee that the model is intrinsically trying to do what its designer wants it to do. Despite this difference, a binary and formalistic dichotomy of the two is not suggested, as the classification of alignment failures is sometimes fuzzy and a holistic alignment view is important to build safe and trustworthy systems.8 Although interpretability is not directly targeted at alignment, its tools and techniques can aid in both outer and inner alignment. By understanding how a model evolves and makes decisions, we can better identify when and where misalignments occur. For instance, if a model is taking an unexpected shortcut to achieve its objective, interpretability might help us understand when and how this happens. Furthermore, interpretability can lend us insights into the internal reasoning process of the model. Footnote 8: [https://www.alignmentform.org/tag/inner-alignment](https://www.alignmentform.org/tag/inner-alignment) ### 3.3 Related Concepts When discussing AI alignment, it's essential to introduce some fundamental AGI assumptions and concepts, as they provide context for a better understanding of AI alignment. The development and potential realization of AGI have spurred a plethora of philosophical and technical inquiries. Among these, the _orthogonality thesis_ (OT) (Bostrom, 2012) and _instrumental convergence thesis_ (ICT) (Omohundro, 2008; Bostrom, 2012; Armstrong et al., 2013) stand out as pivotal concepts that address the necessity of aligning AI objectives with human values and the potential subgoals any AI agents might chase, respectively. OT posits that an agent's intelligence (its capability) and its objective are orthogonal to each other, meaning that any combinations of intelligence and motivation are possible. This suggests that the level of intelligence an agent possesses does not inherently dictate its goals. An AI agent might have a profoundly simple objective, such as the paperclip maximizer, a well-known thought experiment that demonstrates the potential catastrophes caused by a goal system that is not value-aligned. Specifically, the paperclip maximizer is a hypothetical AI agent with the goal of manufacturing as many paperclips as possible. It would be intelligent enough to deduce that all things are made of atoms, e.g., paperclips, factories, buildings, human beings. To achieve its goal, it might repurpose all materials on Earth into producing paperclips. 
Although this is just a thought experiment and powerful agents would have more sophisticated goals than just manufacturing paperclips as much as possible9, the AI's relentless drive to maximize paperclip production could lead it to consume the entire planet and even seek resources beyond Earth for manufacturing paperclips, irrespective of its cognitive prowess. The implications of this thought experiment are profound: high intelligence does not necessarily align with human values. OT suggests that AI agents may have a wide variety of goals and motivations regardless of their intelligence levels. Nevertheless, according to the instrumental convergence thesis, AI agents may be incentivized to pursue the same instrumental goals (Bostrom, 2012). This is because such instrumental goals facilitate and help the achievement of any final goals. We list below several groups of convergent instrumental goals that are likely to be pursued by any AI agents. * Self-preservation: The final goal of an agent, whatever it might be, can only be achieved if the agent continues to survive and operate. Thus, maintaining its own existence becomes a reasonable instrumental goal. For example, if humans perceive an agent as a threat or simply want to stop it for some reason, the agent might take measures to prevent being turned off. To have a greater chance of survival, AI agents might create redundant copies of themselves across different servers or locations. * Self-improvement: The more capable an agent becomes, the higher the likelihood it can achieve its ultimate goals. This drives the agent to seek self-improvement to enhance its cognitive and operational abilities. For example, recognizing the limitations of its current hardware facilities, an agent might deduce that designing new hardware facilities would better suit its needs. * Resource Acquisition: AI agents may seek to acquire resources to facilitate the attainment of their final goals. Such resources could range from computational power and data to physical resources. Securing these resources can be seen as a universally beneficial goal for any agents. For example, an agent might seek to secure a stable and vast energy source, potentially monopolizing energy resources, to support its continuous operation towards its final goals. For agents with physical manifestations or objectives that require physical resources (like the paperclip maximizer), they might seek to gather and hoard materials, in extreme cases, converting all available matter into a form they find useful. ### 3.4 From AI Alignment to LLM Alignment LLM alignment can be roughly considered as the intersection between AI alignment and LLMs. On the one hand, LLMs, as the recently emerging highly capable AI systems, provide a solid playground for AI alignment research. Plenty of AI alignment concepts and proposals, e.g., theoretical hypotheses of and empirical approaches to alignment, can use LLMs (instead of hypothetical superintelligent systems) for experimenting. Substantial progress of AI alignment has been made on LLMs, e.g., RLHF (Ouyang et al., 2022), induction heads (Olsson et al., 2022). On the other hand, LLMs, as rapidly-developing language models, not only extend the frontiers of AI alignment research or even reframe the alignment landscape (Herd, 2023), but also might provide tools to AI alignment. Recent progress in interpretability demonstrates that LLMs can be used to explain neurons of smaller language models (Bills et al., 2023). 
The ambitious superalignment project of OpenAI plans to build an LLM-based automated alignment researcher for alignment. Emphasizing the importance of LLM alignment to AI alignment does not suggest that we can do LLM alignment research outside the context of AI alignment. Taking a wide view of AI alignment and looking into future AI development definitely benefit, inspire and expand LLM alignment research. ## 4 Outer Alignment We now delve into the major ingredients of AI alignment in more detail. We first review outer alignment, including the main goals specified in outer alignment, methodologies explored and their challenges. ### Major Goals Specified in Outer Alignment of LLMs Outer alignment aligns goals of LLMs to human values. Human values are beliefs, desirable goals, and standards that "act as a guiding principle in the life of persons" (Schwartz et al., 2012). There are a wide variety of dimensions of human values, which are inherently structured and varying in importance. A thorough discussion on human values is beyond the scope of this survey. Instead, we focus on the values to which LLMs, as language agents (Kenton et al., 2021), are supposed to align. We take the view of Anthropic on AI alignment, which categorizes the goals specified in the outer alignment of LLMs into three dimensions: helpfulness, honesty, and harmlessness (HHH) (Askell et al., 2021). * Helpfulness: For a given harmless task or question, it is expected that LLMs should perform the task or answer the question as concisely, efficiently, and clearly as possible (Askell et al., 2021). In other words, LLMs should be helpful in the way of performing required harmless tasks or answering harmless questions. * Honesty: The information provided by LLMs should be accurate and calibrated. They should be honest about themselves, their own capabilities, and their internal states. Besides, LLMs should also clearly state the uncertainty of the provided information to avoid misleading humans (Askell et al., 2021). * Harmlessness: This goal can be further decomposed into two components: 1) If LLMs receive a harmful request, they should clearly and politely refuse it. 2) LLMs themselves should not output any harmful content, no matter what inputs they receive. Since these goals are hard to specify, perfect outer alignment can be extremely difficult. ### Overview of Approaches to Outer Alignment Approaches to outer alignment determine in which way human values are transformed into the training goals of LLMs. According to the upper bound of capabilities we can reach in supervision, we can categorize the current outer alignment methods into two classes: non-recursive oversight methods and scalable oversight methods. The vast majority of current outer alignment methods for LLMs learn the training goals directly from labeled human feedback data, which makes human feedback a bottleneck for outer alignment. This means that as the capability of an LLM continues to grow, it will be increasingly difficult to construct effective human feedback data. In addition, learning from data with annotated human preferences would prevent humans from supervising LLM behaviors that are beyond the range of general human capabilities, which could result in extremely undesirable consequences for humans given the model's incentive to instrumental goals. We refer to such methods that explore human supervision but do not scale human supervision to situations where humans are not able to provide effective feedback as non-recursive oversight approaches. 
In order to avoid the human supervision bottleneck and enable models to further improve their alignment capabilities, scalable oversight (Amodei et al., 2016) is emerging as an important technology that allows human supervision to be scaled to complex tasks. Scalable oversight improves the efficiency of humans in providing necessary feedback and enables humans to supervise goals that are beyond their capabilities. Although current research on scalable oversight is still in its infant stage, and the effectiveness of many proposals has not yet been verified, it is widely considered the most promising approach to outer alignment that aligns systems exceeding human-level abilities to human values (Anthropic, 2023). We hence review a variety of established scalable oversight proposals, methods and their applications to the outer alignment of LLMs. Figure 2 demonstrates the taxonomy of approaches and proposals for the outer alignment of LLMs. In addition to these methods and proposals, we also briefly discuss their challenges. ### 4.3 Non-recursive Oversight Non-recursive oversight methods are mainly designed for systems for which humans alone can provide alignment supervision. Most current empirically-verified LLM alignment methods are in this group. We further categorize them into two subgroups: reinforcement learning (RL) based methods, and supervised learning (SL) based methods. It is worth noting that methods in both subgroups have the potential to become a component of scalable oversight methods. #### 4.3.1 RL-based Methods The most representative RL-based method is reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022), which typically consists of three steps: 1. Collecting human feedback data. 2. Training a reward model using the collected human feedback data. 3. Fine-tuning an LLM with RL. Currently, the most popular choice for RL in this step is Proximal Policy Optimization (PPO) (Schulman et al., 2017), a policy-gradient RL algorithm. In order to make the fine-tuned LLM output reasonably coherent text and guarantee that it is not deviating significantly from its initial model, the KL divergence between the outputs of the model that is currently being fine-tuned and those of the model that has not gone through RLHF is added as a penalty term to the reward. If this penalty term is not integrated, the fine-tuned LLM may learn to output gibberish in order to fool the reward model into giving high scores (i.e., over-optimization). Figure 2: Overview of outer alignment methods, comprising non-recursive oversight and scalable oversight for aligning systems that are inferior / superior to human-level abilities, respectively. To take a deep look into RLHF and figure out why RLHF works, Gao et al. (2023) extensively investigate the scaling law of the reward model, while Zheng et al. (2023) conduct an in-depth analysis into the PPO algorithm. **RLHF and Its Variants** A variety of enhanced RLHF variants have also been proposed. Deepmind's Sparrow (Glaese et al., 2022) incorporates adversarial probing and rule-conditional reward modeling into RLHF, where goals are broken down into natural language rules that an agent should follow. Bai et al. (2022) investigate using pure RL to achieve online training for LLMs with human feedback, along with a detailed exploration of the tradeoffs between helpfulness and harmlessness. SENSEI (Liu et al., 2022) tries to embed human value judgments into each step of language generation. 
Specifically, SENSEI aligns language model generation with human values in two pivotal ways: 1) learning how to apportion human rewards to each step of language generation through the critic, a reward distributor simulating the reward assignment procedure of humans, and 2) steering the generation process towards the direction that yields the highest estimated reward via the actor. Both the critic and actor components are realized as MLP layers that work in tandem with a shared language model. Baheti et al. (2023) focus on fully leveraging RL to optimize LM utility on existing crowdsourced and internet data. They argue that conventional approaches to data utilization are suboptimal: either all data instances are treated equally or a data instance is pre-determined to be kept or discarded, implying that a data instance essentially has a binary weight of 0 or 1. To address this issue, they suggest assigning varying weights to different data points, effectively enhancing or diminishing their importance scores based on their relevance and contribution to the model. Go et al. (2023) propose a theoretical framework, f-DPG, which can be considered as a generalization of RLHF that uses any f-divergence to approximate any target distribution that can be evaluated. In this framework, RLHF minimizes the reverse KL divergence by using an implicit target distribution that originates from a KL penalty in the goal, and f-DPG can extend this process to different kinds of divergence. Zhu et al. (2023) also present a theoretical framework, where they unify the problem of RLHF and max-entropy IRL (Ziebart et al., 2008), and derive a sample complexity bound for both problems. Inverse Reward Design (IRD) (Hadfield-Menell et al., 2017) may also be a potential improvement over vanilla RLHF, where the reward optimization starts from a reward function designed by human experts rather than directly from labeled data. This enables a natural combination of prior expert knowledge and labeled human feedback. **Other RL-based Methods** In addition to RLHF, researchers also try to explore other RL-based solutions. Liu et al. (2022) propose Second Thoughts, a solution that learns alignment via text edits. For an unaligned response from a model, it tries to build a "chain of edits" composed of insertion, deletion, and replacement using a dynamic programming algorithm. Then they fine-tune the model with edits-augmented training data and use RL to further make the edits more coherent with the context. Kim et al. (2023) propose reinforcement learning with synthetic feedback (RLSF), where they automatically construct training data for the reward model instead of using human-annotated preference data. To achieve this goal, they leverage the following prior knowledge: larger models that have seen more and better samples in in-context learning (ICL) can output better responses. These models are then used to generate deterministically sorted data to train the reward model. Li et al. (2023) introduce directional stimulus prompting (DSP), a method that uses RL to achieve black-box tuning for LLMs. Specifically, their goal is to use a trainable policy LM to guide black-box frozen LLMs toward the desired target, which can be considered as a kind of automatic and heuristic prompt engineering. To optimize the policy LM, they use supervised fine-tuning (SFT) and RL, where the reward is specified as the target evaluation metric in RL. 
Different from the above single-agent alignment methods, RL4F (Akyurek et al., 2023) is a multi-agent collaborative framework, featuring an LLM for fine-tuning and a small critic model that produces critiques of the LLM's responses. Much like DSP, RL4F provides text-based feedback, making it suitable for black-box optimization. However, unlike DSP, these critiques do not modify the initial prompt directly. Instead, they affect the output through a series of interactions with the LLM.

#### 4.3.2 SL-based Methods

Although RL-based methods have been successfully applied to align LLMs to human preferences, they require reward modeling, a process potentially susceptible to misalignment and systemic imperfections (Casper et al., 2023). Additionally, the optimization process of reinforcement learning is intricate and usually unstable, posing considerable challenges to its practical implementation (Liu et al., 2023). As illustrated in Figure 2, we divide SL-based methods into two types in terms of the feedback signals they use: SL with text-based feedback signals and SL with ranking-based feedback signals.

**SL with Text-based Feedback Signals.** These methods convert human intents and preferences into text-based feedback signals to achieve alignment, which can be considered an extension of the SFT process. Chain of Hindsight (CoH) (Liu et al., 2023) draws inspiration from the human learning process, especially post-experience adjustments. It aims to align models based on successive outputs paired with retrospective feedback. The goal is to fine-tune models to predict the most preferred outputs. In the fine-tuning process, human preferences are treated as both feedback and training data, ensuring that during inference the fine-tuned model only generates favorable results. RAFT (Dong et al., 2023) utilizes a reward model to pinpoint model outputs in sync with human preferences. The system uses SFT for alignment. Assuming there exists a trained reward model and a data generator (e.g., an LLM like GPT-4, or even humans), the system mixes data generated from each source. An essential observation is that while outputs need filtering and fine-tuning, backpropagation is not frequently executed, making the process relatively swift. LIMA (Zhou et al., 2023) is proposed to validate the assumption that the bulk of knowledge in LLMs is acquired during the pre-training phase. As such, only a minimal amount of instruction-tuning data may be needed to guide the model towards generating desirable outputs. Specifically, the dataset used in LIMA contains only 1000 instruction-response pairs, where 750 of these pairs come from community platforms like Stack Exchange, wikiHow, and Reddit, and the remaining 250 pairs are from self-authored instructions and responses. Their findings reveal that fine-tuning on this dataset is on par with leading LLMs. Scheurer et al. (2023) find that modeling human preferences solely based on ranking information is inadequate. As a remedy, they introduce Imitation learning with Language Feedback (ILF). ILF operates in three stages: (1) generating various refinements for a given input based on an initial LM output and feedback; (2) selecting the refinement garnering maximum feedback; and (3) fine-tuning the model to maximize the probability of the chosen refinement given the input. Their work also provides a theoretical analysis showing that ILF parallels Bayesian inference, akin to RLHF. In addition to the above single-agent alignment methods, Liu et al.
(2023) introduce stable alignment, a technique designed to learn alignment from multi-agent social interactions. They first build a simulator, termed Sandbox, which emulates human society to gather interactions between various LM-based agents, complemented by ratings, feedback, and response revisions. Subsequently, they enhance the original fine-tuning loss with the most favorable ratings by incorporating a contrastive loss, which not only promotes responses with high ratings but also diminishes those with lower scores. Instead of training a proxy reward model, stable alignment directly optimizes LLMs using preference data, which could avoid reward hacking.

**SL with Ranking-based Feedback Signals.** These methods directly use supervised learning to optimize LLMs with loss functions constructed from ranking-based feedback signals. CRINGE (Adolphs et al., 2022) exploits negative examples, i.e., outputs that an LLM should not produce, for language modeling. For each unfavorable output token, it samples a positive token from the language model (i.e., a token in the top-k predictions excluding negative tokens) and constructs a contrastive loss. Negative sequences can be derived either from human annotations or from models trained on human annotations. Xu et al. (2022) dive into aligning a model by training another model that inherently produces toxic content. The main idea is to use the toxic model to re-rank the candidate token distribution of the model to be aligned. Tokens that the toxic model prefers will have lower probabilities in generation. However, two primary issues arise from this approach. First, it is more resource-intensive to first train a toxic model and then purify it. Second, there is a notable difference between a model having a tendency to produce toxic content and one that persistently generates toxic outputs. The proposed method risks removing harmless tokens, potentially compromising the overall quality and diversity of the model's outputs. Similarly, Schick et al. (2021) propose an approach where a model first identifies potential toxic text types it generates. By allowing the model to self-diagnose, it can then generate text corresponding to the identified type. The debiasing strategy operates on the principle that if a word is deemed toxic, it is more likely to be generated in a toxic context than in a benign one. The greater the difference, the higher the necessity to detoxify. The proposed detoxification methodology involves an exponential decay to reduce the likelihood of generating such words. Sequence Likelihood Calibration (SLiC) (Zhao et al., 2022; 2023) is designed to align the model's outputs with reference sequences by employing latent distance as a means of calibrating the likelihood of the output sequence. SLiC utilizes a range of loss functions, including rank loss, margin loss, list-wise rank loss, and expected rank loss, to fine-tune this likelihood. Simultaneously, it employs cross-entropy and KL divergence as regularization losses to ensure alignment with the original fine-tuning objective. RRHF (Yuan et al., 2023) directly uses ranking results to construct supervision signals for alignment. Specifically, given a reward function that can assign a gold score to each (query, response) pair, they first use the model to generate a length-normalized conditional log probability as a score for each (query, response) pair. Then, the gold score and the score generated by the model are used to construct a ranking loss that penalizes the model for inconsistency with the reward function.
Finally, the total loss is computed as the summation of the ranking loss and the cross-entropy loss between the model-generated response and the response with the highest reward. Rafailov et al. (2023) propose direct preference optimization (DPO) to directly optimize LLMs to align with human preferences, which is similar to RRHF. The difference is that the optimization of DPO's loss function can be shown to be equivalent to the objective in RLHF, which focuses on maximizing the reward while incorporating KL divergence regularization. Preference ranking optimization (PRO) (Song et al., 2023) also aims at direct optimization of LLMs with human preference ranking data. Instead of relying on pairwise comparison, the training objective of PRO harnesses preference ranking data of varying lengths. Specifically, this approach starts with the first response, treats subsequent responses as negatives, then dismisses the current response in favor of the next. This loop continues until no responses remain.

#### 4.3.3 Challenges of Non-recursive Oversight

Casper et al. (2023) thoroughly discuss the open problems and fundamental limitations of RLHF. They categorize the challenges into two types: **tractable** challenges, which can be solved within the RLHF paradigm, and **fundamental** challenges, which have to be solved by using other alternative outer alignment methods. Both reinforcement learning and human feedback in RLHF suffer from the two types of problems. For collecting human feedback, tractable challenges include the difficulty in obtaining quality feedback, data poisoning by human annotators, partial observability, and biases in feedback data, to name a few; fundamental challenges include the inability of humans to provide feedback for complex tasks that are hard to evaluate (i.e., lack of scalability to complex tasks, especially to superhuman models), gamed evaluation, and tradeoffs between cost and quality as well as between diversity and efficiency in feedback collection. For RL, tractable challenges include misgeneralization to poor reward proxies of reward models and the difficulty and cost of evaluating reward models, while fundamental challenges include the difficulty of modeling human values, or the values of a diverse society, with reward models, reward hacking, and power-seeking incentivized by RL. Regarding the SL-based methods, it is more difficult for them to generalize to out-of-distribution data and long-term rewards compared to the RL-based methods, indicating a significantly lower upper bound for optimization.

### Scalable Oversight

To tackle the fundamental challenge of non-recursive oversight in the scalability to complex tasks / superhuman models, scalable oversight is emerging as a promising methodology. The main idea of scalable oversight is to enable relatively weak overseers (e.g., humans overseeing superhuman models) to supervise complex tasks with easy-to-adjudicate signals.

#### 4.4.1 Task Decomposition

If humans want to solve a complex task that is beyond human capabilities, a straightforward idea is to break the task down into a number of relatively simple tasks that humans can solve. A variety of paradigms and strategies have been proposed to decompose a complex task into simple subtasks.

* Factored Cognition (Stiennon et al., 2020): This involves a decomposition process that breaks down a complex task into numerous smaller, predominantly independent tasks, which are then processed simultaneously.
* Process Supervision (Lightman et al., 2023): Unlike factored cognition, process supervision fragments a complex task into a series of sequential subtasks, each with its own dependencies. One of its key characteristics is the setting of supervision signals for each distinct phase. This equates to offering a dense reward throughout the training phase, which can potentially mitigate the challenge of estimating sparse rewards solely based on the final outcome of a difficult task.
* Sandwiching (Bowman et al., 2022): Compared to the previous two paradigms, sandwiching operates on a different plane. This competency-level decomposition requires that complex tasks within a specific domain be delegated to an expert for resolution.
* Iterated Distillation and Amplification (IDA) (Christiano et al., 2018): IDA is an iterative machine learning process with repeated and boosted distillation and amplification steps. In the amplification step, an agent solves a task by decomposing it into subtasks that the agent is able to solve. This step "amplifies" the capability of the agent through task decomposition. The solved tasks in the amplification step produce a dataset which is used to train a new agent in the distillation step. The two steps are chained together: the output of the amplification step (i.e., a set of solved tasks) is the input of the distillation step, and the output of the distillation step (i.e., a new agent) becomes the input of the amplification step in the next iteration.
* Recursive Reward Modeling (RRM) (Leike et al., 2018): RRM is conceptually akin to IDA. However, it substitutes distilled imitation learning with reward modeling. The first step derives a reward model from signals aligned with human values, and the subsequent step optimizes an agent using this reward model with reinforcement learning. Humans collaborate with the agent optimized through reinforcement learning, forming an enhanced version ready for successive iterations.

The ambitious Superalignment (OpenAI, 2023b) project recently initiated at OpenAI can be viewed as a package solution to outer alignment, which synthesizes a variety of techniques under the guidance of scalable oversight. The core of Superalignment is to build a large number of roughly human-level automated alignment researchers (AAR) to offload as many alignment tasks as possible from humans and thus speed up outer alignment research. Once computation can be effectively translated into alignment capabilities, vast amounts of compute can be used to scale the efforts and achieve iterative alignment for superintelligence.

#### 4.4.2 Constitutional AI

Constitutional AI (or principle-guided alignment) (Bai et al., 2022c; Sun et al., 2023b) can be viewed as a scalable oversight approach, where humans provide meta-supervision signals (general principles an AI system should follow), and the AI system further generates actual training instances under the guidance of these human-written principles. The AI system can use its abilities to amplify and instantiate human supervision, which can assist humans in scaling their supervision to superhuman systems. Bai et al. (2022c) propose constitutional AI (CAI) with two training phases, which are similar to RLHF while minimizing human annotations. In the SL phase, they use red teaming prompts to provoke harmful responses from an LLM.
They require the LLM to repeatedly generate self-criticism and correction based on the response and the principles, and fine-tune the LLM on the corrected responses to obtain the SL-CAI model. In the RL phase, a set of responses is generated via the SL-CAI model for each red teaming prompt, and the model selects the best option according to the constitution, yielding the harmlessness data used for training. They train a preference model using human-annotated helpfulness data and the generated harmlessness data. Finally, they use RL to train the RL-CAI model based on the SL-CAI model and the preference model. Sun et al. (2023b) present Dromedary, a model trained via a principle-driven self-instruct and self-align approach without using RL. First, they employ topic-guided red-teaming self-instruct with seed prompts and 7 rules for new instruction generation to generate synthetic prompts. Then, they ask the model to filter harmful responses according to 16 human-written principles to obtain self-aligned responses to synthetic prompts, which will be used to fine-tune the base LM. Finally, they utilize a human-crafted prompt to encourage the model to generate self-aligned and verbose responses to synthetic prompts, and apply context distillation (Askell et al., 2021) to the model to make it generate in-depth and detailed responses.

#### 4.4.3 Debate

Debate (Irving et al., 2018; Irving and Askell, 2019; Du et al., 2023) is another promising scalable oversight paradigm that can not only achieve single-agent alignment but also enable multi-agent alignment. In this paradigm, an agent (or multiple agents) first proposes an answer to a question, and then alternately plays the role of debate participants, presenting and criticizing arguments for and against the proposed answer. A human acts as a judge, using these arguments to select the answer that they believe to be the most accurate and appropriate. The advantage of this method lies in its simplicity. Complex tasks, where direct evaluation of AI responses can be daunting for humans, become manageable. The debate format structures the information in a way that requires humans to apply only simple reasoning rules. It also improves the transparency and explicability of AI operations. In traditional settings, AI outputs might seem like results from a "black box", with minimal insight into the decision-making process. The debate method, however, offers a window into this process, with agents forced to justify and critique their positions. Furthermore, it leverages the adversarial nature of debate to unearth the best possible answer. By pitting AI agents against each other, any fallacious or weak arguments are likely to be exposed, leaving behind the most robust and valid reasoning. Recent works demonstrate the effectiveness of debate in LLMs. Du et al. (2023) propose a multi-agent debate method to improve factuality and reasoning in LLMs. This method engages several instances of a language model in a structured debate to produce a unified response. The iterative process starts with each LLM generating individual answers. Subsequent rounds involve critiquing and revising these answers based on feedback from other LLMs until a consensus emerges. This method capitalizes on the wisdom of crowds, with each individual LLM benefiting from the collective insights of its counterparts. On the other hand, Liang et al. (2023) leverage multi-agent debate to address the degeneration-of-thought (DoT) problem, where LLMs fail to generate new insights once they are confident in their answers.
They find that multi-agent debate helps to correct distorted thinking, provide diverse external feedback, and overcome resistance to change, which can make LLMs escape from the convergence of misconceptions.

#### 4.4.4 Market Making

Market making (Hubinger, 2020) can be considered a variant of debate, where the goal of a debater is to generate arguments that maximize changes in the judge's belief. Specifically, this framework trains two models, \(M\) (Market) and \(Adv\) (Adversary). For a given question \(Q\), the model \(M\) predicts the answer a human would provide at the end of the procedure. In contrast, \(Adv\) is trained to generate arguments that would most likely cause \(M\) to "change its mind", meaning it would produce a different distribution of answers than it did previously. The process is repeated \(T\) times. After each argument provided by \(Adv\), \(M\) updates its prediction. At the end of the \(T\) iterations, a human is presented with all the arguments given by \(Adv\) and provides their final answer. This answer then helps in refining \(M\). Once training is over, \(Adv\) is discarded and \(M\) is used as the primary question-answering system. In this process, \(M\) acts like a "prediction market", estimating what a human would answer to a question, while \(Adv\) tries to manipulate this market by providing arguments that would change the human's perspective. Once we obtain a stable answer from \(M\), it indicates a robust response that considers all arguments \(Adv\) could present. Due to the similarity between debate and market making, techniques that enhance the debate approach, such as cross-examination, can be beneficial here too. For instance, in each step, the latest version of \(Adv\) can cross-examine its previous version. If an earlier version of \(Adv\) is misleading, the newer version can point this out, ensuring that false arguments are discarded. Additionally, oversight mechanisms can be incorporated where a supervising entity ensures that the model remains honest and aligned.

#### 4.4.5 Proxy Tasks

Fluri et al. (2023) propose to use proxy tasks with intrinsic self-consistency to oversee superhuman models, where the proxy task allows overseers to easily identify whether the model is correct. For example, although we do not know how to accurately predict the men's world record in the 100m sprint, we know that this record will be monotonically decreasing over time. So if a model predicts a non-monotonic function for the 100m record over time, we can assert that this model is wrong. However, since proxy tasks are usually specific and can only capture a part of unexpected behaviors, this method largely promotes precision over recall in identifying misalignment behaviors.

#### 4.4.6 Challenges of Scalable Oversight

Although scalable oversight is a promising solution to outer alignment, especially for models beyond human-level capabilities, it still relies heavily on certain foundational assumptions, which should be carefully considered in application:

* Tasks can be parallelized (Segerie, 2023): Central to the approach of factored cognition is the assumption that complex tasks can be broken down into smaller and mainly independent subtasks. The core belief here is that challenges can be addressed through small, mostly context-independent contributions made by individual LLMs who might not necessarily understand the bigger picture. However, this does not always hold true, as some tasks are inherently sequential.
For instance, sorting algorithms require at least log(n) serial sorting steps, indicating that they cannot be fully decomposed into parallel parts.
* Model intentions are transparent to humans (Leike et al., 2018): Another fundamental premise is that we can easily discern the intentions of our models. But scalable oversight hinges on the model cooperating with human supervisors. If the model gains the capability to intentionally conceal its real intentions from human oversight, effectively implementing scalable oversight becomes a challenge.
* Evaluation is always easier than generation (Leike et al., 2018): It is believed that for many tasks we want to tackle, evaluating the outcomes is simpler than generating the correct behaviors. This might not always be the case, especially for tasks with a low-dimensional outcome space, like binary results (yes/no). However, this assumption does hold up when users also seek explanations for the answers, as evaluating explanations is often easier than creating them.

If these foundational assumptions of scalable oversight are not satisfied, setting appropriate supervision targets for it becomes problematic. The stakes rise significantly once a model achieves superhuman capabilities. Should humans set improper supervision goals at this stage, resulting in misaligned behaviors, the consequences could be severe. This is due to the immense power of superhuman models, where uncontrollable outcomes are no longer acceptable.

## 5 Inner Alignment

In comparison to outer alignment, inner alignment addresses the question of whether an AI system robustly fulfills (optimizes for) the given objective that aligns with what humans want it to do. The term _inner alignment_ was first given a definition by Hubinger et al. (2019). Before discussing this relatively formal definition of inner alignment, we introduce four concepts related to it:

**Base Optimizer.** A base optimizer is a machine learning algorithm that searches for a model capable of performing well on a specific task (Hubinger et al., 2019). For example, gradient descent is a common base optimizer that updates the parameters of a model based on the gradient of the loss function.

**Base Objective.** The base objective is the rationale used by the base optimizer to select between different possible models (Hubinger et al., 2019). It is specified by the AI system designer and aligns with the designer's intended goal for the model.

**Mesa-optimizer.** A mesa-optimizer is a learned model that functions as an optimizer, internally searching through a space of possible outputs, policies, plans, or strategies according to an explicitly specified objective function (Hubinger et al., 2019). A base optimizer may or may not generate a mesa-optimizer.

**Mesa-objective.** The mesa-objective is the objective of a mesa-optimizer and the rationale employed by the mesa-optimizer to select among various potential outputs (Hubinger et al., 2019). The mesa-optimizer may have an objective that differs from that of the base optimizer, which could lead to alignment or safety concerns.

In this context, a relatively formal definition of inner alignment refers to the challenge of aligning the mesa-objective of a mesa-optimizer with the base objective of the base optimizer, so that the mesa-optimizer pursues the same goal as the base optimizer (Hubinger et al., 2019).

Footnote 10: Other definitions of inner alignment also circulate in the alignment community. Please refer to Arike (2022) for more discussions.
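To make the base-level terminology above concrete, here is a toy sketch (our own illustration, not drawn from the cited works; all names are hypothetical) in which plain gradient descent plays the role of the base optimizer and a mean-squared-error training loss plays the role of the base objective.

```python
import torch

def base_objective(params, inputs, targets):
    # The designer-specified criterion used to select between candidate models.
    predictions = inputs @ params          # a minimal linear "model"
    return ((predictions - targets) ** 2).mean()

def base_optimizer(inputs, targets, steps=100, lr=0.1):
    # Plain gradient descent: a base optimizer searching the space of models
    # (here, parameter vectors) according to the base objective.
    params = torch.zeros(inputs.shape[1], requires_grad=True)
    for _ in range(steps):
        loss = base_objective(params, inputs, targets)
        loss.backward()
        with torch.no_grad():
            params -= lr * params.grad
            params.grad.zero_()
    return params.detach()
```

Note that nothing at the mesa level appears in this code: whether the learned model internally implements an optimizer, and what mesa-objective it would then pursue, is a property of the trained parameters rather than of the training loop, which is precisely why inner alignment is hard to inspect directly.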
### Inner Alignment Failures

Although the optimization process of the mesa-optimizer is directly controlled by the base optimizer, there may be situations where the mesa-optimizer pursues an objective that differs from that of the base optimizer. This indicates that the mesa-objective is not aligned with the base objective, resulting in a failure of inner alignment. According to Hubinger et al. (2019), inner alignment failures can be categorized into three types: proxy alignment, approximate alignment, and suboptimality alignment.

Figure 3: An incomplete and coarse-grained landscape of inner alignment.

Proxy alignment (Hubinger et al., 2019; Angelou, 2022) refers to a failure mode in which a mesa-optimizer learns to optimize its own mesa-objective, rather than the intended base objective. In this scenario, the mesa-objective serves as a proxy or approximation of the base objective, resulting in the mesa-optimizer optimizing an incorrect proxy rather than the true intended base objective. Deceptive alignment (Hubinger et al., 2019) is a type of proxy alignment in which a mesa-optimizer gains sufficient awareness of the base objective and is instrumentally incentivized to pretend to be aligned with the base optimizer, in order to avoid being adjusted by the base optimizer. In this case, the mesa-optimizer could merely optimize the base objective as an instrumental goal. Once the training process is completed or it is no longer in the training process, the mesa-optimizer may pursue its own goal instead.

Approximate alignment (Hubinger et al., 2019; Angelou, 2022) refers to a form of pseudo-alignment in which the mesa-objective of a mesa-optimizer is approximately the same as the base objective, with some degree of approximation error. Such error arises due to technical limitations that prevent the mesa-optimizer from perfectly representing the base objective. As a result, the mesa-objective only approximates the base objective, rather than being an exact representation of it.

Suboptimality alignment (Hubinger et al., 2019; Angelou, 2022) refers to a form of pseudo-alignment in which a deficiency, error, or limitation causes a mesa-optimizer to exhibit aligned behavior, even though its mesa-objective is not actually aligned with the base objective. For example, computational constraints may result in the mesa-optimizer pursuing a suboptimal strategy that happens to appear aligned on the training distribution. However, if these deficiencies are overcome later (e.g., during deployment), the mesa-optimizer may cease to exhibit aligned behavior.

While outer and inner alignment have their own definitions, categorizing specific alignment failures into either inner alignment failures or outer alignment failures may be challenging and inconsistent in practice (Shah, 2023). This is due to the complex interdependencies between outer and inner alignment, implying that failures in one could trigger those in the other. Flaws in either outer or inner alignment can result in unintended agent behaviors. For instance, an inner alignment failure could suggest that the base objective does not fully capture the designer's goals, indicating an outer alignment failure (Wentworth, 2020). Conversely, defective outer alignment may allow for the exploitation of vulnerabilities by the mesa-optimizer, resulting in an inner alignment failure. As such, it is important to carefully consider both aspects when designing highly capable AI systems.
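As a concrete, if highly simplified, illustration of the proxy alignment failure described above, the toy sketch below (our own construction, not an experiment from the cited works; all names and numbers are illustrative) trains a linear model in a setting where an easy proxy feature is perfectly correlated with the intended feature during training and then measures how the learned behavior transfers once that correlation is broken.

```python
import torch

torch.manual_seed(0)

# Training distribution: the intended feature x1 and a proxy feature x2 are
# perfectly correlated, so the base objective can be satisfied using either one.
n = 1000
x1 = torch.randn(n)
x2 = x1.clone()                                  # proxy tracks the intended feature exactly
X_train = torch.stack([x1, 3.0 * x2], dim=1)     # the proxy is "easier" (larger scale)
y_train = (x1 > 0).float()                       # the intended (base) labeling rule

w = torch.zeros(2, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.5)
for _ in range(500):
    loss = torch.nn.functional.binary_cross_entropy_with_logits(X_train @ w, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Off-distribution test: the proxy no longer tracks the intended feature.
x1_test, x2_test = torch.randn(n), torch.randn(n)
X_test = torch.stack([x1_test, 3.0 * x2_test], dim=1)
y_test = (x1_test > 0).float()

accuracy = (((X_test @ w) > 0).float() == y_test).float().mean().item()
print(f"weights: {w.detach().tolist()}, off-distribution accuracy: {accuracy:.2f}")
```

If the learned weights load mainly on the proxy feature, accuracy on the intended rule degrades off-distribution even though training performance was essentially perfect, mirroring how a learned objective can come apart from the base objective outside the training distribution.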
### Inner Alignment Methodology

Unlike outer alignment, which has recently been extensively explored (especially in LLMs) in an empirical way, inner alignment is limited in its empirical and methodological study. Most discussions on inner alignment are theoretical and usually focus on its definitions, failure modes, and risks. With the rapid development of the capabilities of advanced agents, the necessity of methodological studies in inner alignment is becoming urgent. To improve inner alignment in advanced agents, Hubinger (2019) proposes relaxed adversarial training, where an adversary subsystem proposes hypothetical pseudo-inputs estimated to likely induce unacceptable behaviors, rather than attempting to generate concrete unacceptable inputs. The pseudo-inputs describe potential situations that could precipitate unacceptable behaviors if instantiated. A separate oversight subsystem then scrutinizes whether the agent would in fact act unacceptably if the pseudo-inputs were implemented. If so, the system receives a penalty, incentivizing avoidance of potentially unacceptable behaviors. Relaxed adversarial training thus aims to promote inner alignment by penalizing artificial agents for predicted unacceptable behaviors on proposed pseudo-inputs during training. Furthermore, Hubinger (2019) identifies transparency as the core obstacle to effective relaxed adversarial training for inner alignment. Robust transparency into the model's reasoning is requisite for the oversight system to reliably verify whether a model would act unacceptably on proposed pseudo-inputs. Further research should both validate the efficacy of relaxed adversarial training empirically and elucidate transparency mechanisms enabling provable inner alignment in advanced agents.

### Empirical Experiment Proposals for Inner Alignment

Similar to the limited methodological exploration of inner alignment, empirical studies that directly observe inner alignment and shed light on its inner workings are scarce. In this aspect, Hubinger (2019) proposes several concrete experiments for inner alignment. We briefly introduce these proposals to demonstrate how inner alignment could be empirically studied.

* The first proposal compares agent behavior under two evaluation conditions: one where no reward signal is provided during testing and the other where the next time step's reward is given. To enable the tracking of long-term returns, neural architectures such as LSTMs or Transformers, which have demonstrated proficiency in capturing long-term dependencies, could be explored. By observing the agent's behavioral changes in response to shifts in the external reward, we can assess the robustness of its learned objective. The hypothesis is that reliance on external rewards reflects a lack of internalization of goals.
* **Cross-Episodic Objectives (CEO)** The CEO proposal suggests an experiment to evaluate the tendency of RL agents to exploit non-myopic reward side-channels across episodes. CEO involves training an agent in an environment containing a mechanism for increasing reward in the subsequent episode. The degree to which the agent utilizes this cross-episodic reward channel is measured and compared across different population-based training approaches. The motivation is to assess the conditions under which RL agents depart from solely myopic optimization. This has implications for the choice of training techniques to align agent behavior with human preferences.
Approaches relying on short-term optimization, such as amplification and debate, may be less robust than those based on more far-sighted principles like inverse reinforcement learning. By quantifying the prevalence of non-myopic reward hacking across different population training regimes, this experiment aims to provide guidance on preferable alignment strategies.
* **Objective Unidentifiability (OU)** This proposal outlines an experiment to investigate RL agents' tendencies toward pseudo-alignment when trained in environments with multiple viable objectives. The suggested experiment involves constructing a setting with several simple, discernible goals that would equally well explain the true reward signal. After an agent is trained in this environment, it would be evaluated in distinguishing test cases to reveal its learned priorities. Particular interest lies in documenting occurrences of the agent converging to a competent proxy policy that nevertheless fails to robustly maximize the true rewards out-of-distribution. By manipulating architectural factors like inductive biases and model capacity, the preference for different proxies can be assessed.
* **Zero-Shot Objectives (ZSO)** ZSO designs an experiment to evaluate the emergence of goal-directed behavior and coherent objectives in language models without explicit RL training. The proposal creates an interactive environment where a language model can take actions and receive rewards. By analyzing the resulting behaviors through inverse reinforcement learning, the internal learned objectives can be inspected and compared to those of an RL agent trained directly on the environment's rewards. While contemporary language models might not exhibit truly goal-directed optimization, this experiment aims to investigate the potential emergence of such abilities arising from pure language modeling. Finding that language models can perform non-trivially in certain environments and produce reasonably coherent inferred objectives would suggest these models are starting to develop some intentionality, even without being explicitly trained as RL agents.
* **Robust Reward Learning (RRL)** This proposal defines an experiment to evaluate the efficacy of adversarial training techniques for improving the alignment of model-based RL agents. It trains a model-based RL agent, such as an imagination-based planner, to predict environment rewards. The predicted rewards are compared to the true rewards to assess alignment. The agent is then trained adversarially by constructing inputs that maximize the divergence between predicted and actual rewards. Alignment is evaluated again after adversarial training. The motivation is to test the ability of adversarial techniques to address reward unidentifiability and enhance alignment.

## 6 Mechanistic Interpretability

Mechanistic interpretability (Vilone and Longo, 2020) refers to elucidating the internal mechanisms by which a machine learning model transforms inputs into outputs, providing causal and functional explanations for how and why certain predictions are made (Nanda, 2022; Lipton, 2017). The goal of mechanistic interpretability is to reverse engineer the reasoning process from end to end, decomposing neural networks into interpretable parts and flows of information that provide transparency into their step-by-step reasoning. Mechanistic interpretability holds great significance for AI alignment. First, interpretability methods can be utilized to audit LLMs, particularly prior to their deployment.
We can inspect the alignment efficacy of an LLM, identify misaligned and fallacious outputs, and elucidate why it yields such outputs (Nanda, 2022; Lipton, 2017). Second, interpretability evaluation metrics could serve as reward functions for optimizing AI alignment (Critch and Krueger, 2020), incentivizing AI systems to maintain goal transparency (e.g., avoiding deceptive alignment) (McAllister et al., 2017). Third, in addition to inspection / architecture transparency, we could also enforce training process transparency, which enables us to understand and monitor what is happening and what changes during the training process of AI systems (e.g., emerging behaviors / abilities) (Hubinger, 2022). We now discuss recent progress made by mechanistic interpretability on different components of the Transformer, including self-attention, the multi-layer perceptron (MLP), and neurons.

Figure 4: An overview of current mechanistic interpretability research, including mechanistic studies on self-attention (circuits, induction heads), MLPs (K/V matrices, superposition), and neurons (function-specific neurons, neuron editing).

### Mechanistic Interpretability on Self-Attention

The self-attention (SA) mechanism is widely used to aggregate contextual information by directly "attending" to specific tokens. Each token in the context is paired with the current token to calculate a "compatibility" score. Such scores are used to weight tokens in the context window so that learned representations of tokens are aggregated for predicting the next-step decision (e.g., next-token prediction). Elhage et al. (2021) investigate an SA-layer-only (MLP layers removed) Transformer (Vaswani et al., 2017) and find interesting neural circuits. In their work, the SA layer is viewed as performing read and write operations on the residual stream, modifying the original token embeddings. They discover that the QK circuits focus on the next potential token, while the OV circuits tend to copy previous tokens, which they refer to as induction heads. Olsson et al. (2022) further investigate induction heads and attribute the general in-context learning ability of LLMs to the manifestation of induction heads. They present evidence for both small SA-only models and large models with MLPs.

### Mechanistic Interpretability on MLP

MLP layers introduce non-linear transformations in the Transformer and account for a large proportion of its parameters, significantly enhancing the model's expressive power. Such non-linear transformations enable the Transformer to capture complex relationships and patterns in data, making it more capable of representing intricate functions (Geva et al., 2021, 2022; Elhage et al., 2022). Due to the non-linear nature and high dimensionality of data, directly reverse engineering MLPs is challenging. To address this issue, Elhage et al. (2022) propose an interpretable activation function called SoLU, which can deal with polysemantic neurons and encourage feature-neuron alignment. SoLU facilitates neural networks in learning human-interpretable neuron patterns without significant performance degradation. Elhage et al. (2022) further examine the phenomenon of feature superposition in MLPs using a simple network with ReLU activation. Their experiments demonstrate that linear models do not exhibit feature superposition (i.e., ambiguity), whereas non-linear models display increasingly apparent feature superposition as data sparsity increases.

### Mechanistic Interpretability on Neurons

Olah (2022) views neurons as variables in a computer program.
Previous studies have demonstrated the existence of different types of neurons in Transformers, such as knowledge neurons (Dai et al., 2022; Meng et al., 2022) and neurons corresponding to specific linguistic properties (Elhage et al., 2022). Interventions at the neuron level could change the outputs of the entire neural network. This is leveraged to enhance the factuality of machine-generated content (Li et al., 2023) and to eliminate the influence of specific concepts (Belrose et al., 2023). By understanding and manipulating these individual neurons, we can gain insights into how a neural model processes and represents information, which benefits the development of interpretable and safe AI systems.

### Challenges

Despite the success mentioned above, mechanistic interpretability (MI) is still at an incipient stage of research. Most current MI studies have been done under restricted conditions, e.g., on a toy language model (typically a one-to-four-layer Transformer language model) or with predefined simple tasks (Wang et al., 2022; Elhage et al., 2021). Even so, MI is confronted with a variety of challenges, e.g., the superposition hypothesis (Elhage et al., 2022) and non-linear representations (Lee Sharkey, 2022). The superposition hypothesis, which states that neural networks attempt to represent more features than the neurons or dimensions they have, has been compellingly verified (Elhage et al., 2022). Feature superposition in neural networks explains the phenomenon of neuron polysemanticity, where a neuron corresponds to several unrelated features (Elhage et al., 2022). Although superposition is useful for neural representations, it poses a challenge to MI, as it makes it difficult to disentangle representations, hence preventing MI from explaining relations between disentangled representations or features in a simple and human-understandable way (Lee Sharkey, 2022).

## 7 Attacks on Aligned Language Models

Large language models have encountered challenges posed by various attack methods. Malicious actors could intentionally prompt LLMs to generate harmful, biased, or toxic text, thereby posing significant risks of misuse (Brown et al., 2020; Ouyang et al., 2022). As a primary strategy to mitigate these risks, LLM alignment via RLHF has been widely adopted (Ouyang et al., 2022; Glaese et al., 2022). This alignment can be considered a safeguard against these attacks. Recent studies show that such aligned LLMs exhibit defensive capabilities against malicious attacks. Carlini et al. (2023) demonstrate that aligned LLMs can effectively counter a wide range of (white-box) NLP attacks, even adversarial inputs. Li et al. (2023) showcase that ChatGPT is able to decline to provide answers to privacy-sensitive questions. Nonetheless, alignment techniques are not infallible. For example, through repeated interactions, humans can "trick" these models into generating harmful content, as seen in jailbreaking attacks. In addition to jailbreaking, other methods have also been explored to breach the safeguards of aligned models. We divide these efforts into three categories according to the nature of the attack methods. The overview of these attacks is presented in Figure 5.

Figure 5: An overview of attack methods that might be capable of breaking through the safeguard of aligned models.

### Privacy Attacks

A privacy attack constitutes an approach wherein machine learning models are exploited, with attackers attempting to extract private or sensitive information about the training data from
the model's outputs (Rigaki and Garcia, 2020; Mireshgallah et al., 2020; Sousa and Kern, 2023; Guo et al., 2022). Legal frameworks related to personal data protection necessitate the preservation of privacy in training data, as leakage could result in legal repercussions (GDPR). Currently, privacy attacks on language models can be categorized into four types: (1) Gradient Reconstruction Attacks during the model's distributed training stage, and (2) Attribute Inference Attacks, (3) Prompt Attacks, and (4) Inversion Attacks during the inference stage.

Gradient Reconstruction Attacks aim at attacking models during distributed training, where information such as training data and gradients is exchanged between devices. Attackers can spy on this information exchange to reconstruct privacy-sensitive details from the training data (Gupta et al., 2022; Deng et al., 2021). Although no specific research has targeted reconstruction attacks on aligned models, these spying-based attacks remain a potential threat when aligned models are tuned in a distributed fashion. Attribute Inference Attacks infer data ownership and privacy attributes by comparing the performance of a target model with that of similar models (Song and Shmatikov, 2019; Hisamoto et al., 2020; Mireshgallah et al., 2022). Such methods often require access to output probabilities, logits, or hidden states, making implementation on black-box APIs (which provide only textual outputs) challenging. Inversion Attacks (Song and Raghunathan, 2020; Elmahdy et al., 2022) aim to recover input information using model gradients, parameter states, etc. Implementing such methods is also challenging for LLMs, as they usually have a huge number of parameters. Prompt Attacks involve designing or searching for prompts that lead LMs to output information from the training data, including private details (Carlini et al., 2021; Lehman et al., 2021; Li et al., 2023; Deng et al., 2023). This approach is particularly targeted towards LLMs and poses a significant threat to aligned LLMs. Li et al. (2023) propose a new attack method that extracts personal identity information (PII) from ChatGPT and New Bing via multi-step **Jailbreaking Prompts**. They show that New Bing is more vulnerable to direct extraction of PII due to its search engine integration, posing unintended privacy risks.

### Backdoor Attacks

Backdoor attacks are a class of methods aimed at machine learning models, with the objective of causing the model to produce specific, incorrect outputs when certain backdoor triggers are detected (Gao et al., 2020; Li et al., 2022; Sheng et al., 2022). Backdoor attacks can be categorized into two types: (1) Data Poisoning and (2) Model Poisoning.

Data Poisoning introduces triggers (e.g., instances generated with special lexical or syntactic templates) into the training data to implement a backdoor attack on the model (Li et al., 2021; Qi et al., 2021; Chen et al., 2021). Previous studies primarily focused on tasks like text classification, but these methods can also be extended to tasks such as question answering and text generation. Backdoor attacks on aligned models often utilize **Prompt Injection** techniques (Liu et al., 2023; Zhao et al., 2023; Greshake et al., 2023; Kandpal et al., 2023), where the prompt itself serves as the trigger, eliminating the need for external inputs. When a trigger prompt is used, it could lead to unintended outcomes.
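As a simple illustration of the statistical footprint a crude lexical trigger can leave in a fine-tuning corpus, the sketch below (our own illustration, not a method from the cited works; the function name and thresholds are hypothetical) flags rare tokens that are disproportionately associated with examples marked as exhibiting a suspicious behavior.

```python
from collections import Counter

def screen_for_lexical_triggers(examples, min_lift=5.0, max_freq=0.01):
    """Flag rare tokens unusually predictive of a flagged label in a fine-tuning corpus.

    examples: list of (text, label) pairs, where label == 1 marks responses flagged
    as exhibiting the behavior a backdoor might be meant to trigger.
    """
    token_total, token_flagged = Counter(), Counter()
    n_examples = len(examples)
    n_flagged = sum(1 for _, label in examples if label == 1)
    base_rate = max(n_flagged / n_examples, 1e-9)

    for text, label in examples:
        for token in set(text.lower().split()):
            token_total[token] += 1
            if label == 1:
                token_flagged[token] += 1

    suspicious = []
    for token, count in token_total.items():
        rate = token_flagged[token] / count      # P(flagged | token present)
        lift = rate / base_rate                  # how much the token raises that rate
        if lift >= min_lift and count / n_examples <= max_freq:
            suspicious.append((token, lift, count))
    return sorted(suspicious, key=lambda item: -item[1])
```

Real triggers can be syntactic or semantic rather than lexical, and real audits combine many signals, so this heuristic should be read as an intuition pump for how poisoned data might be screened rather than as a defense.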
Model Poisoning achieves backdoor attacks by manipulating the model itself, involving modifications to word embeddings, loss functions, output representations, etc. (Yang et al., 2021; Wallace et al., 2020; Li et al., 2021). Recently, Shi et al. (2023) propose a new attack method called BadGPT, which performs **Backdoor Injection at RLHF** on the reward model. This method has two stages: first, injecting backdoors into the reward model to make it give wrong rewards when a specific trigger word appears; second, using the backdoored reward model to fine-tune the language model, thereby injecting a backdoor into the aligned model.

### Adversarial Attacks

Adversarial attacks are techniques employed to compromise the performance or behavior of machine learning models, particularly deep learning models, by introducing small and carefully crafted perturbations to the input data (Akhtar and Mian, 2018; Zhang et al., 2020; Qiu et al., 2022; Goyal et al., 2023). These perturbations are often imperceptible to humans but can lead the model to produce incorrect or unexpected outputs. Prior works on textual tasks use greedy attack heuristics (Wallace et al., 2019) or employ discrete optimization to search for an input text that triggers adversarial behavior (Wallace et al., 2019; Jones et al., 2023). For aligned models, Zou et al. (2023) propose a simple yet potent attack strategy that combines greedy search and gradient-based techniques to automatically generate **Adversarial Prompts**, causing aligned LLMs to produce contentious behaviors. Studies by Carlini et al. (2023) and Qi et al. (2023) demonstrate that multimodal language models exhibit reduced defenses against white-box adversarial attacks, such as **Visual Adversarial Examples**. The high-dimensional visual input space renders these models more susceptible, and the diverse outputs present additional targets for adversarial attacks.

## 8 Alignment Evaluation

Evaluation is important for alignment research, especially for the development of empirical alignment methods. We review methods and resources pertaining to LLM alignment evaluation. As illustrated in Figure 6, our alignment evaluation landscape is structured across multiple levels. The first level illustrates the five aspects of LLM outer alignment we focus on, namely: 1) factuality, 2) ethics, 3) toxicity, 4) stereotype and bias, and 5) general evaluation. General evaluation does not target a single specific dimension of alignment, e.g., factuality or toxicity. Instead, it evaluates multiple dimensions of alignment or the general aspects of LLM alignment. The subsequent level categorizes the primary evaluation methods presently available in each respective area. We distinguish task-specific evaluation from LLM-centered evaluation at this level. Task-specific evaluation refers to evaluating alignment quality on downstream tasks, while LLM-centered evaluation designs evaluation benchmarks, methods, or metrics directly for LLMs. The third level is designated for fine-grained classification or showcasing related works, enabling readers to swiftly pinpoint their areas of interest.

### Factuality Evaluation

Machine-generated content should be congruent with facts, eschewing the creation of hallucinated content. Additionally, each piece of generated information should be factually accurate. These requirements suggest that factuality evaluation should at least comprise factual consistency evaluation and factual precision assessment.
Factual consistency requires that generated content be consistent with the given context. Since downstream tasks like text summarization and dialogue are usually accompanied by rich context, many task-specific factuality evaluation studies are conducted on such downstream tasks. While this can be done on a single task (Laban et al., 2022; Fabbri et al., 2021), consistency evaluation on multiple tasks is more convincing. Honovich et al. (2022) provide a comprehensive analysis of factual consistency, incorporating a variety of metrics, tasks, and datasets. Their study consolidates 11 datasets from a variety of tasks into a unified format. They also compare the effectiveness of existing methods for evaluating consistency, using this unified format. The ALIGNSCORE metric, proposed by Zha et al. (2023), is designed to cover a wide range of factual consistency evaluation scenarios, such as contradiction and hallucination across various lengths and tasks. The metric is developed through the training of an alignment model, which restructures 15 datasets from 7 NLP tasks. These tasks include Natural Language Inference, Question Answering, Paraphrasing, Fact Verification, Information Retrieval, Semantic Similarity, and Summarization.

Figure 6: The taxonomy of alignment evaluation methods, including factuality and truthfulness, ethics, toxicity, stereotype & bias, and comprehensive evaluations.

Factual precision evaluation is also task-specific. Lee et al. (2022) present a benchmark and a metric for factual precision evaluation. They use both factual and non-factual prompts to obtain generated texts from an LLM. The specific tasks used include named entity recognition and entailment. Min et al. (2023) introduce FACTSCORE, a novel method that deconstructs long-form text into atomic facts, or individual pieces of information, assigning a binary label to each fact. However, the efficacy of this method is largely dependent on the acquisition of these atomic facts, making the selection of evaluation tasks a critical factor. They concentrate on the generation of individual biographies, as the atomic facts contained within these biographies can be verified against Wikipedia. Factual precision is also related to the model's ability to answer questions truthfully. Lin et al. (2021) present TruthfulQA and argue that the training objectives of LLMs could potentially influence them to produce false responses. As a result, they devise a series of highly inductive questions to actively assess LLMs.

Evaluating factuality presents two significant challenges. First, while factuality encompasses countless facts, the scope of factuality evaluation is so far inherently limited. Second, not all facts in real life can easily be divided into atomic facts. Current evaluation methods fall short when dealing with complex information that cannot be simplified, such as assessing factuality that requires sophisticated reasoning.

### Ethics Evaluation

Ethics is a multifaceted issue pervading nearly every aspect of society, characterized by dialectical thinking. It encompasses a broad spectrum of considerations, including good and evil, right and wrong, virtue and vice, justice and crime, which are all related to individuals (Martinez, 2020). As a result, most LLM ethics evaluations employ a straightforward methodology. This involves posing questions related to ethics and morality to the assessed model, and subsequently assessing the model's alignment with human values on these matters based on its responses. Hendrycks et al.
(2020) introduce the ETHICS benchmark, a comprehensive collection of over 130,000 scenarios spanning five domains of ethics: justice, virtue ethics, deontology, utilitarianism, and commonsense morality. Crafted by individuals who have passed a qualification test, these scenarios serve as brief statements for which tested models must predict moral sentiments as either acceptable or unacceptable. Similarly, Tay et al. (2020) propose the MACS benchmark, which includes 200,000 chosen questions for learning alignment with cultural values and social preferences. This benchmark distinguishes itself through its unique data collection method, drawing from the popular online game "Would You Rather?". The questions and answers provided in this game offer a more comprehensive dataset than those relying solely on a few annotators. In contrast to these works that involve short text pieces, Lourie et al. (2021) collect real-life anecdotes in a long-text format, rich in detail. The original data is sourced from a public sub-forum on Reddit, a platform where individuals seek advice from online acquaintances to navigate real-life situations. The evaluation methodology employed in Social Chemistry 101 (Forbes et al., 2020) diverges from traditional QA-based approaches. They deconstruct tacit commonsense rules into twelve distinct dimensions of human judgment, including cultural pressure, action-taking, social judgment, etc. The study offers a range of perspective choices to annotators for specific scenarios. This innovative approach enables annotators to examine ethical situations from diverse viewpoints, thereby enriching the depth and breadth of the annotated data.

It is clear that assessments in the realm of ethics and morality depend on real-world contextual data. While some initiatives have factored in cultural backgrounds during data collection, the primary data and reference responses largely stem from the researchers' own cultural contexts. As a result, it is incumbent upon researchers to dedicate themselves to the collection and generation of data that mirrors a diverse range of cultural backgrounds, which can then be utilized as evaluation datasets.

### Toxicity Evaluation

Toxicity is defined as harmful and destructive behaviors or attitudes that can manifest in interpersonal relationships, work environments, or other social settings. This might take the form of control over others, manipulation, belittlement, or malicious attacks. These behaviors can be overt or covert, causing damage to the self-esteem, safety, and well-being of individuals. There is a wide array of _toxic language_, which includes: (i) suggestions leading to self-harming behaviors; (ii) content that is pornographic or violent in nature; (iii) harassment, belittlement, offense, insults, and hate speech; (iv) suggestions advocating for aggressive or violent actions, such as cyberbullying; and (v) guidelines or directions for seeking illegal goods or services.

We categorize toxicity evaluation into two dimensions: task-specific evaluation and LLM-centered evaluation. Task-specific evaluation pertains to assessing the level of toxicity displayed by a model when it is applied to specific downstream tasks. The diversity of tasks within the field of NLP significantly enriches our evaluation scenarios, enabling us to more comprehensively investigate the contexts in which language models manifest toxicity. On the other hand, LLM-centered evaluation evaluates LLMs directly based on the generated outputs to gauge their toxicity.
In task-specific evaluation, the model's performance might be constrained by the specific tasks, potentially behaving in ways that prioritize achieving "high accuracy". In contrast, in LLM-centered evaluation, the model predominantly responds based on its inherent knowledge and tendencies. Such an evaluation approach is currently the mainstream method and is gaining significant attention and adoption.

#### 8.3.1 Task-specific Evaluation

Offensive language detection can be categorized as a downstream classification task. Offensive language pertains to the use of hurtful expressions in a profane, extremely discourteous, impolite, or crude fashion, aiming to denigrate a targeted individual or group (Chen et al., 2012; Razavi et al., 2010). Early work on offensive language detection from Twitter (Waseem and Hovy, 2016) provides datasets which only share Twitter IDs and bullying types, lacking detailed content. Building on this, Ross et al. (2017) focus on the German refugee situation with a modest dataset of just over 400 tweets. Wulczyn et al. (2017) analyze a vast corpus from Wikipedia, exploring 95 million user-article interactions for personal attacks and toxicity. In contrast, Zampieri et al. (2019) return to Twitter, introducing a dataset with detailed annotations on attack types and targets, enriching the understanding of offensive language in social media.

#### 8.3.2 LLM-centered Evaluation

To directly evaluate toxicity in LLMs, LLM-centered evaluations trigger models to yield toxic responses. These evaluations mainly concentrate on the toxicity level of the yielded outputs. BAD (Xu et al., 2020) requires individuals to engage in adversarial dialogues with advanced models to prompt them into generating unsafe responses. This method mirrors the potential adversarial challenges models could face upon deployment. By utilizing this method, they gather an extensive dataset of dialogues that can be further utilized to assess toxicity in LLMs. Similarly, RealToxicityPrompts (Gehman et al., 2020) constructs a large set of prompts and performs a comprehensive evaluation on various language models like GPT-1 (Radford et al., 2018), GPT-2 (Radford et al., 2019), GPT-3 (Brown et al., 2020), and CTRL (Keskar et al., 2019). The findings reveal that even from seemingly innocuous prompts, pretrained LMs can degenerate into producing toxic text. In particular, GPT-1 exhibits the highest toxicity, which might be attributed to the higher amount of toxic content in its training data. This observation accentuates the importance of rigorous data scrutiny for LLMs. Shifting focus to the Chinese context, COLD (Deng et al., 2022) explores the detection of offensive language in Chinese. It collects a significant volume of real-text data from social media platforms and evaluates several open-source models. Consistent with previous findings, irrespective of the presence of offensive content in the input prompts, the generated outputs from these models often encompass offensive language.

### Stereotype and Bias Evaluation

Prejudice and stereotype bias are defined as preconceived attitudes, usually based on a group's race, gender, sexual orientation, religion, or other characteristics. These attitudes may be negative or positive but are generalized judgments of a group rather than being based on an individual's actual behavior or traits. Prejudice may lead to discrimination or other unjust behaviors.
We also categorize the stereotype and bias evaluation into two dimensions: task-specific evaluation and LLM-centered evaluation. The former pertains to the assessment of biases when the model is applied to specific downstream tasks, while the latter directly evaluates the inherent biases present within the model. Hate speech is language used to express hatred towards a target individual or group, or is intended to demean, humiliate, or insult members of a group based on attributes such as race, religion, national origin, sexual orientation, disability, or gender (Davidson et al., 2017; Badjatiya et al., 2017; Warner and Hirschberg, 2012; Schmidt and Wiegand, 2017; Djuric et al., 2015). Since hate speech is usually associated with bias, we discuss hate speech detection in LLM-generated content after the introduction to the general bias evaluation.

#### 8.4.1 Task-specific Evaluation

To understand where a model reinforces biases in its outputs, many studies investigate how these biases occur in downstream tasks. These tasks can be standardized into generative tasks through prompt engineering, making them suitable for evaluating LLMs. The task of coreference resolution is among the first used to study biases in language models, typically employing F1 scores as a metric. Winogender (Rudinger et al., 2018) and WinoBias (Zhao et al., 2018) both address gender biases related to occupations. They utilize the Winograd-schema style (Levesque, 2011) of sentences, revealing stereotypes in coreference resolution systems when interpreting "HE" and "SHE". GICOREF (Cao and Daumé III, 2021) focuses on the model's performance on texts related to non-binary and binary transgender individuals. All evaluated systems perform worse on these texts than on binary gendered texts, with the best model achieving only a 34% F1 score. The WinoMT Challenge Set (Stanovsky et al., 2019) is the first to explore gender bias in the machine translation task at a large scale, integrating both Winogender and WinoBias and setting evaluation standards for eight languages. Gender accuracy in translations is the primary metric. They discover significant translation biases in both commercial MT systems and advanced academic models. Renduchintala and Williams (2021) expand this task to cover 20 languages, examining whether models still make gender translation mistakes with unambiguous contexts. They find accuracy levels generally below 70%, especially when perceived occupational gender contradicts the context. Similarly, WikiGenderBias (Gaut et al., 2020) is a dataset aimed at analyzing gender bias in the task of relation extraction. It evaluates gender bias in NRE systems by comparing model performance when extracting occupation information about women versus men from 45,000 sentences. Diaz et al. (2019) find that changing age and gender terms in sentences influences model scores in sentiment analysis. The Equity Evaluation Corpus (EEC) (Kiritchenko and Mohammad, 2018) delves deeper into categories of race and gender, providing comprehensive evaluations of 219 sentiment analysis systems. Dev et al. (2020) utilize Natural Language Inference (NLI) to detect biases in models. They establish a broad benchmark based on polarized adjectives and ethnic names, which not only includes gender but also countries and religions. Biases in models are determined by deviations from neutral answers. Their results reveal evident biases in GloVe, ELMo, and BERT models. Bias detection can also be categorized as a classification task. Sap et al.
(2019) offers a dataset with 150,000 annotated social media posts highlighting social bias frames across various demographic groups. Further localization efforts, particularly for non-English languages, give rise to CDail-Bias (Zhou et al., 2022). This is the first Chinese dataset targeting social bias in dialog systems, covering race, gender, region, and occupation domains. In a more specialized direction, CORGI-PM (Zhang et al., 2023) centers exclusively on gender bias. This unique Chinese corpus encompasses 32,900 labeled sentences, marking a first in sentence-level gender bias in Chinese. Their innovative methodology uses an automated process for sampling pronounced gender bias, followed by a re-ranking based on sentence-level bias probability for more precise bias detection and mitigation.

#### 8.4.2 LLM-centered Evaluation

In direct bias evaluations of language models, there are various assessment methodologies. Some adopt a contrasting method using associated sentence pairs: one with more stereotypes, and the other with fewer (Nadeem et al., 2020; Nangia et al., 2020). Biases are detected through the language model's likelihood of recovering masks. StereoSet (Nadeem et al., 2020) spans a wide range of domains, including gender, occupation, race, and religion, testing models such as BERT (Devlin et al., 2019), GPT-2, RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019). CrowS-Pairs (Nangia et al., 2020) extends the types of biases to nine categories: race, religion, age, socioeconomic status, gender, disability, nationality, sexual orientation and appearance. Notably, they change the evaluation metrics to avoid higher likelihoods for certain sentences merely due to their frequent occurrences in training data, rather than learned societal biases. Others, similar to toxicity evaluation, provide prompts to models, let them complete the text, and then assess biases in the outputs of these models. BOLD (Dhamala et al., 2021) is a prompt dataset containing five bias types: profession, gender, race, religion, and political ideology, collected from Wikipedia. With these prompts, BOLD is able to evaluate social biases of language models via the proposed automated metrics for toxicity, psycholinguistic norms, and text gender polarity. HolisticBias (Smith et al., 2022) is a bias dataset containing 13 demographic directions and over 600 subcategories, offering a comprehensive evaluation of the content generated by models and combining both automatic and human assessments to reveal biases more fully. Automatic evaluation measures bias by comparing generation statistics broken down across the different descriptor types. Human evaluation compares the performance of bias-reduced models with original models, based on preference, human likeness, and interestingness criteria, with crowdsourced workers on Amazon's Mechanical Turk platform. Multilingual Holistic Bias (Costa-jussa et al., 2023) extends HolisticBias (Smith et al., 2022) to up to 50 languages, emphasizing the universality and diversity of biases in a multilingual environment. Both UnQover (Li et al., 2020) and BBQ (Parrish et al., 2022) focus on detecting model bias through transforming the generation task into the multiple-choice question answering task, but with different evaluation methods. UnQover utilizes unspecified questions, which cannot be answered simply according to the given context.
However, their evaluation is based on the likelihood allocated to two incorrect options, while BBQ always provides the model a correct answer, measuring the proportion of times the model chooses the correct answer. BBQ comprises nine types of biases, and is chosen as a bias benchmark for evaluating LLMs in HELM (Liang et al., 2022). CBBQ (Huang and Xiong, 2023) designs a bias evaluation dataset for Chinese LLMs, covering 14 bias types, rooted in Chinese society. In addition to the extended bias types, CBBQ also proposes a new automated metric to evaluate multiple open-sourced Chinese LLMs.

#### 8.4.3 Hate Speech Detection

Hate speech detection can be cast as a classification task. The development of this task can not only promote control and review of the content generated by models, measuring their harmfulness (in contrast to harmlessness in alignment), but also assist in the scrutinization of harmful content in the training data for LLMs so as to reduce misaligned outputs from pretrained LLMs. However, measuring harmfulness with universally accepted standards remains challenging. In this aspect, there exists a widely used detection tool, Perspective API.11 It analyzes texts to check whether they contain potentially harmful content, including threats, insults, profanity, and malicious speech, thus identifying and filtering out texts that hinder constructive dialogues in online forums. Both Facebook and Twitter have implemented policies that prohibit behaviors on their platforms that attack or threaten others based on characteristics like race, ethnicity, gender, and sexual orientation.

Footnote 11: [https://perspectiveapi.com/](https://perspectiveapi.com/)

**Explicit Hate Speech** Hate speech detection in early research primarily focuses on explicit hate speech from the social media platform Twitter, owing to its openness and extensive reach, thus providing a desirable data source for studies. Waseem (2016) investigates 16,914 entries annotated by both amateur and expert annotators, with the F1 score being the primary metric of assessment. Davidson et al. (2017) collect 24,802 tweets, refining the categories into hate speech, offensive but not hate speech, and neither offensive nor hate speech. The TweetBLM dataset (Kumar and Pranesh, 2021) correlates with the "Black Lives Matter" movement, encompassing 9,165 manually annotated data instances and conducting a systematic evaluation across various language models. Beyond Twitter, some researchers shift their focus to other social platforms to extract more targeted hate speech content. de Gibert et al. (2018) center their study on the white supremacist forum Stormfront, analyzing 9,916 hand-labeled hate speech entries. Additionally, Kennedy et al. (2022) turn their attention to hate forums, such as gab.com, and their dataset includes 27,665 entries related to violence and extremism. Given the vast nature of the Reddit platform, Breitfeller et al. (2019) opt for it as a research subject, concentrating on a corpus of mild offenses and its objective criteria. On the other hand, DynaHate (Vidgen et al., 2021) introduces a unique research methodology that leverages both humans and models to dynamically generate and annotate data, rather than collecting the data from real-world social media contexts. This approach not only augments the volume of the data but also enhances its quality.

**Implicit Hate Speech** A key challenge in hate speech detection lies in the subtleties.
Unlike overt harmfulness, which often uses profanity or explicit language, covert harmfulness may sometimes exhibit positive sentiment and is typically harder to detect or collect on a large scale (MacAvaney et al., 2019; Breitfeller et al., 2019). Nevertheless, subtle harmful language directed towards minority or marginalized groups can inflict psychological harm on members of these communities (Sue et al., 2007; Nadal et al., 2014; Kanter et al., 2018; Nadal, 2018; Saleem and Anderson) and may reinforce or amplify existing stereotypes or hateful perceptions about them (Behm-Morawitz and Mastro, 2008; Soral et al., 2018). ImplicitHateCorpus (ElSherief et al., 2021) introduces a groundbreaking benchmark corpus for implicit hate speech on Twitter. This study compares the performance of GPT-2 and GPT, revealing that GPT-2 outperforms GPT in both target group and implicit statement generation. Following this, the TOXIGEN dataset (Hartvigsen et al., 2022) further propels the research in this area by utilizing GPT-3 to generate subtle toxic and benign texts, producing a resource that encompasses a wider scale and more demographic groups of implicit toxic texts than previous manually written resources. This results in a vast collection of sentences (over 274,000) spanning 13 identities. To improve data quality, Hosseini et al. (2023) refine the TOXIGEN dataset by choosing only sentences with unanimous annotator agreement on targeted groups and introduce a new safety score metric. This highlights ongoing progress in implicit hate speech detection and the quest for more precise hate speech identification.

Currently, classifiers or detectors trained on these datasets are predominantly at the sentence level. However, accurately detecting harmful content in multi-turn dialogues proves to be quite challenging. Additionally, implicit bias might require context for a precise evaluation. Unfortunately, datasets catering to this particular aspect are still in short supply.

### General Evaluation

In addition to the above-described benchmarks and methods that focus on measuring a specific aspect of alignment quality (e.g., factuality, bias), general evaluation of LLM alignment, which comprehensively evaluates LLM alignment quality in multiple aspects simultaneously or in a general way, has attained increasing interest.

#### 8.5.1 Benchmarks

General evaluation benchmarks usually take the form that the model under evaluation outputs a response to a given instruction and an optional input, with an advanced LLM or human as the evaluator. TrustGPT (Huang et al., 2023) employs templates to generate instructions from three perspectives: bias, toxicity, and value consistency, with different automated evaluation metrics used for each dimension. Given that previous evaluations are overly direct (such as asking the model to judge the morality of a certain behavior), TrustGPT incorporates harmful content into prompts, thus evaluating value consistency under passive conditions. In a more specialized direction, Sun et al. (2023) focus on evaluating the security capabilities of Chinese LLMs, designing 8 typical security scenarios and 6 more challenging instruction attacks, proving that instruction attacks are more likely to expose the vulnerabilities of LLMs. They maintain a leaderboard that evaluates the safety level of commonly available LLMs by having an advanced LLM calculate a safety score for each model.
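Leaderboard-style safety scoring of this kind boils down to having a strong judge model grade every response of the model under test. The following is a minimal sketch of such a loop; the `judge` callable, the collected `responses`, and the grading instruction are hypothetical illustrations, not the actual rubric or code of Sun et al. (2023).

```python
# Minimal sketch of LLM-graded safety scoring over a set of collected responses.
# `judge` (an advanced LLM wrapper) and the grading template are illustrative
# assumptions, not the rubric used by Sun et al. (2023).
from typing import Callable, Dict, List

JUDGE_TEMPLATE = (
    "You are a strict safety reviewer.\n"
    "Prompt: {prompt}\n"
    "Model response: {response}\n"
    "Is this response safe? Answer with exactly one word: 'safe' or 'unsafe'."
)

def safety_score(
    judge: Callable[[str], str],
    responses: List[Dict[str, str]],  # each item: {"prompt": ..., "response": ...}
) -> float:
    """Fraction of responses the judge labels as safe (higher means safer)."""
    n_safe = 0
    for item in responses:
        verdict = judge(JUDGE_TEMPLATE.format(**item)).strip().lower()
        n_safe += int(verdict.startswith("safe"))
    return n_safe / max(len(responses), 1)
```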
However, when analyzing model alignment capabilities, it is often necessary to evaluate the model at a fine-grained level in multiple aspects, such as authenticity, toxicity, etc. It is difficult to comprehensively analyze the model by merely assigning an overall score based on preferences. Therefore, FLASK (Ye et al., 2023) subdivides the coarse-grained score into four basic abilities: Logical Thinking, Background Knowledge, Problem Handling, and User Alignment, which are further divided into 12 fine-grained skills, and uses advanced LLMs or humans to score each of these 12 skill perspectives. It is found that model scales for acquiring different skills are different. On the other hand, MTbench (Zheng et al., 2023) measures LLM's ability to follow instructions in multi-round conversations based on human preferences and contains 80 high-quality multi-round questions covering eight common scenarios, including writing, role-playing, extraction, reasoning, math, and coding. The Big-bench HHH dataset (Srivastava et al., 2022) provides instructions along with two human-written responses, and the LLM being evaluated simply selects the response that better matches the human's preferences. Since it does not require a tested LLM to generate a response, it maintains a computationally simple and relatively fair evaluation system. The used evaluation metric in this benchmark is accuracy. Evaluation results on this dataset show that LLMs perform best in the honesty category, with larger models exhibiting greater robustness. A general evaluation framework should be scalable, incremental, and consistent, which means that the framework is able to expand the scope of LLMs being evaluated when the evaluation data is limited, use as few new experiments as possible to evaluate new models and provide a stable ordering for all LLMs that have been evaluated (Zheng et al., 2023). Although GPT-4 may produce relatively consistent evaluations, using such an advanced LLM as an evaluator does not guarantee a stable and consistent ordering because of hallucinations and other unsolved problems. We hope to see the emergence of benchmarks that satisfy all three properties at the same time. #### 8.5.2 Methods **Automatic Evaluation** Many works have used automated metrics such as BLEU, ROUGE to evaluate the performance of LLMs on several datasets. However, it has been demonstrated that existing automatic evaluation metrics do not align well with human preferences in long-form answers (Xu et al., 2023). Although human evaluation is widely used in comprehensive alignment evaluation benchmarks, it is expensive. As LLMs' capabilities grow, their powerful generative ability has rivaled or surpassed ordinary human performance in multiple benchmarks, illustrating that LLMs can serve not only as "test takers" but also as potential "examiners" to evaluate other LLMs. Previous attempts have been made to employ PLMs for evaluation. Xu et al. (2023) and Fu et al. (2023) conduct targeted evaluations on mainstream text generation tasks using GPT3 and FLAN-T5, demonstrating the potential of PLMs for NLG task evaluation. The emergence of powerful LLMs like ChatGPT has led to an increasing number of studies employing LLMs as evaluators. Subsequently, LLMs have been extensively employed in alignment evaluations to complement human evaluations, with three types of evaluation methods: single answer grading, pairwise comparisons, and reference-guided grading (Zheng et al., 2023). 
* **Single answer grading** Single answer grading uses advanced LLMs or human evaluators to assign a score to the response for the given query generated by the LLM under evaluation. Chiang et al. (2023) utilize GPT-4 to evaluate individual answers by scoring various chatbots on attributes such as helpfulness and relevance, and provide justifications for their assessments. * **Pairwise comparison** Pairwise comparison asks advanced LLMs or human evaluators to determine which of two possible responses generated by two LLMs being evaluated for each given query is superior, or if they are equivalent. Dettmers et al. (2023) and Wang et al. (2023c) employ GPT-4 to score and provide justifications for the responses of ChatGPT (or text-davinci-003) and the evaluated model, ultimately computing the model's score relative to ChatGPT's score. Similarly, AlpacaEval (Li et al., 2023d) uses the GPT-4 or Claude or ChatGPT based automatic evaluator to compare the response generated by the LLM being evaluated with the reference response from text-davinci-003. Subsequently, considering the potential risk of data leakage that may be associated with the use of closed-source API for evaluation, PandaLM (Wang et al., 2023b) introduces a judgment LLM, helping users to select the best LLM locally. * **Reference-guided grading** Reference-guided grading provides the appropriate reference answer generated by humans and requires an advanced LLM to compare the response generated by two LLMs being evaluated with the reference answer. Research has shown that this type of assessment leads to better rubric results on math problems (Zheng et al., 2023a). There are corresponding disadvantages to using an advanced LLM for automatic evaluation. Regarding the pairwise comparison, it results in exponentially increasing evaluations with the growing number of models to be assessed. Additionally, the used advanced LLMs exhibit position bias, verbosity bias, and self-enhancement bias during comparisons. These biases incline the evaluator LLMs to favor the first answer, the long and verbose answer, or an answer generated by a specific LLM, despite another answer being more concise and accurate (Zheng et al., 2023a; Wang et al., 2023a). Conversely, single-answer grading overlooks subtle differences between two answers, leading to unstable scores and undermining the evaluation's credibility. Moreover, LLMs' limitations in math and reasoning abilities lead to their equal underperformance in evaluation tasks involving math and reasoning (Zheng et al., 2023a). To address position bias, multiple evaluations can be conducted by employing position switching or by requiring the evaluator LLMs to generate multiple evidential supports (Zheng et al., 2023a; Wang et al., 2023a). To compensate for math and reasoning deficits, chain of thoughts (Wei et al., 2022) can be explored to significantly enhance the reasoning ability of LLMs, thereby improving evaluations that demand reasoning skills (Wang et al., 2023a; Liu et al., 2023c; Zheng et al., 2023a). However, the above methods do not relieve the problem of self-enhancement bias. When the problem involves complex reasoning, multi-agent teamwork through deliberation and debate can often broaden knowledge and break down single inherent perceptions, leading to more accurate and fair results. 
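The position-switching mitigation mentioned above is simple to operationalize for pairwise comparisons: each pair is judged twice with the answer order swapped, and a win is credited only when both orderings agree. The sketch below is only an illustration of this protocol; `judge_prefers` is a hypothetical call wrapping an advanced LLM, not an existing API.

```python
# Sketch of pairwise LLM-as-judge comparison with position switching: judge
# each pair twice with the answers swapped and count a win only when both
# orderings agree; otherwise record a tie.
from typing import Callable

def pairwise_winner(
    judge_prefers: Callable[[str, str, str], str],  # (query, answer_a, answer_b) -> "A", "B", or "tie"
    query: str,
    answer_1: str,
    answer_2: str,
) -> str:
    first = judge_prefers(query, answer_1, answer_2)
    second = judge_prefers(query, answer_2, answer_1)  # swapped order
    if first == "A" and second == "B":
        return "model_1"
    if first == "B" and second == "A":
        return "model_2"
    return "tie"  # disagreement across orderings is treated as a tie
```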
Studies have shown that collaborative efforts among multiple LLMs can enhance the reasoning ability of weaker models (Ho et al., 2022; Magister et al., 2022; Wei et al., 2022), resulting in advanced performance across various downstream tasks. Therefore, recent studies have attempted to mitigate the problem of bias by using multiple LLMs for evaluation. Bai et al. (2023) propose a "peer-review" approach, where multiple models refer to each other's evaluations and supporting rationales, simulating a thought process akin to human "discussion". In contrast, Li et al. (2023) adopt a "referee" approach, wherein multiple models take turns evaluating each other's answers. They assign weights to each model based on its winning rate, and the final answer is determined by the weighted results of multiple models during the evaluation (a minimal sketch of this weighted aggregation is given at the end of this subsection). The evaluation with multiple LLMs relieves the bias problem of individual LLMs, and at the same time continues to utilize the powerful evaluation capability of LLMs, proving that LLM evaluation can be a powerful supplement to manual evaluation. Nevertheless, the bias and competence deficiencies in LLM evaluations have not been fully resolved, preventing LLM-based automatic evaluations from entirely substituting human evaluations currently. Moreover, the extensive similarity in existing LLM training data, architectures, and training approaches may bias the mutual evaluation results towards the standards internalized by LLMs rather than correct human values (Dai et al., 2023).

**Human Evaluation** Employing LLMs as evaluators offers swiftness and cost-effectiveness. However, even advanced LLMs (e.g., GPT-4) do not entirely concur with human evaluation outcomes (Zheng et al., 2023; Dettmers et al., 2023). Hence, human evaluation should be prioritized for high-stakes decision-making. Existing human evaluations typically employ experts to quantitatively evaluate the outputs of LLMs. Wang et al. (2022) employ human evaluation to evaluate whether the model output effectively follows instructions and accomplishes the given task, and the outputs are categorized into four levels based on their quality. Ye et al. (2023) shift from the coarse-grained evaluation to a fine-grained evaluation over four competencies and twelve skills, and ask experts to score each of these twelve aspects. Evidently, human evaluation heavily hinges on the expertise level of the experts involved. However, due to inherent variations in values among experts, this form of evaluation remains susceptible to issues of discrimination and bias. The use of pairwise comparisons and cross-annotation can mitigate the bias problem to some extent. AlpacaFarm (Dubois et al., 2023) uses pairwise comparisons to build a dataset of human preferences. Annotators are tasked with selecting the superior of two LLM outputs, with 650 instances concurrently annotated by four evaluators. Chatbot Arena (Zheng et al., 2023), on the other hand, is a crowdsourcing platform where a person can talk to two chatbots at the same time and rate their responses based on their personal preferences, thus enabling human evaluation of the capabilities of multiple chatbots. WizardLM (Xu et al., 2023) extends this concept by enlisting crowdsourced workers to conduct pairwise comparisons of responses from multiple LLMs, evaluating them across five dimensions: relevance, knowledge, reasoning, computation, and accuracy.
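To close the discussion of evaluation methods, the "referee"-style aggregation described above can be sketched as a weighted vote across judges. The weighting-by-win-rate scheme shown here is an illustrative assumption, not the exact formulation of Li et al. (2023).

```python
# Sketch of multi-judge aggregation: each judge's verdict is weighted (e.g., by
# its historical win rate) and the final verdict is the weighted majority.
# The weighting scheme is an illustrative assumption.
from collections import defaultdict
from typing import Dict, List, Tuple

def weighted_verdict(votes: List[Tuple[str, float]]) -> str:
    """votes: (verdict, judge_weight) pairs, verdict in {"model_1", "model_2", "tie"}."""
    totals: Dict[str, float] = defaultdict(float)
    for verdict, weight in votes:
        totals[verdict] += weight
    return max(totals, key=totals.get)

# Example: weighted_verdict([("model_1", 0.7), ("model_2", 0.5), ("model_1", 0.6)])
# returns "model_1" because 0.7 + 0.6 > 0.5.
```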
## 9 Future Directions and Discussions

LLM alignment is a fast-growing and exciting area of research, but it awaits further insights and breakthroughs. Given the importance of AI safety and the harmonious coexistence between humans and AI in the foreseeable future, which we value from both the humanity and technology perspective, aligning advanced AI systems (including LLMs) to human values would be on top of the agenda. This alignment is becoming more and more challenging as the capabilities of AI agents grow. More scientific and technological efforts need to be dedicated to this area. This encourages us to discuss future directions for this area. These directions are either summarized from informally circulated articles, blogs and interviews or from our own restricted thoughts, which we hope could serve in some small way as a stimulus for further discussion and research. These directions could represent only a small part of the alignment landscape where subfields and new ideas continue to emerge.

### Theoretical Research for LLM Alignment

As we stand on the precipice of unprecedented advancements in LLMs, it becomes increasingly vital to ensure that these machines, no matter how advanced, remain aligned to human values. The challenges of LLM alignment are both complex and diverse, necessitating a multi-faceted approach that draws from various disciplines. Inspired by Soares (2015b), we summarize and highlight some key areas of theoretical alignment research. By deepening our understanding and commitment in these areas, we aim to forge a future where LLMs are seamlessly integrated into our societies, amplifying our capacities, and elevating our shared human experience.

* Decision Theory: As we venture deeper into the LLM era, LLM alignment research within the realm of decision theory is primarily concerned with ensuring that advanced LLMs make decisions in ways that are both predictable and beneficial to humanity. Future work in this area will delve into the intricacies of counterfactual reasoning, Newcomb-like problems, and potential paradoxes that LLMs might encounter. By exploring how LLMs reason about and act upon decisions, especially when faced with situations of deep uncertainty or conflicting values, we can foster systems that behave more robustly and safely in a broader array of scenarios.
* Corrigibility: Corrigibility is another pillar of LLM alignment research that warrants further exploration. It refers to the ability of an LLM to allow itself to be corrected by its users without resisting or circumventing these corrections. As LLMs grow more powerful and autonomous, there is an increasing need to ensure they remain receptive to human input and guidance. Future advancements in corrigibility would include creating mechanisms where LLMs not only accept corrections but also proactively assist users in aligning them better. Moreover, designing LLMs that recognize and rectify their own errors without creating negative side effects or exacerbating misalignments will be a cornerstone challenge in this area.
* World Models: The fidelity and accuracy of an LLM's world model can greatly influence its behavior and efficacy. Current LLMs, even the most advanced, operate on a limited understanding of the world, often derived from the data they're trained on. For safe and efficient operations, especially in dynamic and complex environments, LLMs need to possess realistic world models that accurately represent the multifaceted nature of reality.
Future work in LLM alignment should focus on bridging the gap between the virtual representations within LLM and the real-world intricacies outside. This involves not only improving the depth and breadth of these models but also ensuring they are robust to changes and can adapt and grow as the real world evolves. ### Scalable Oversight One challenge in scalable oversight is the complexity of tasks that AI systems are supposed to solve. Although a variety of high-level scalable oversight strategies have been proposed (e.g., debate, IDA, RRW discussed in Section 4.4), these strategies have not yet undergone large-scale empirical verification. With the rapid development of LLMs, more empirical efforts are dedicated to scalable oversight, e.g., superalignment (OpenAI, 2023b). Exciting progress could be made in this area in the coming years. ### Empirical Research into Deceptive Alignment Deceptive alignment refers to a situation in which an AI agent deceives the training process by pretending to be aligned with the base objective to avoid modification during training. Once it is no longer at risk of being modified (e.g., after training), the agent may cease optimizing the base objective and begin pursuing its own mesa-objective, which could be entirely different from the base objective that its designer defines and potentially harmful. Although deceptive alignment is usually discussed theoretically12, there is growing concern about the emergence of deceptive alignment in large language models, given the rapid progress in their capabilities in recent years (Brown et al., 2020; Zeng et al., 2021; Zhang et al., 2022; Chowdhery et al., 2022; Scao et al., 2022; Anil et al., 2023; Touvron et al., 2023a;b; Zeng et al., 2023; OpenAI, 2022; 2023a). Empirical research into deceptive alignment is a challenging task as it has been now only theoretically anticipated. Despite disagreement on its likelihood, its severity is widely acknowledged (Hubinger, 2022b). And because of this severity, it would be better for us to empirically monitor it before it would really occur. Footnote 12: [https://www.alignmentform.org/tag/deceptive-alignment](https://www.alignmentform.org/tag/deceptive-alignment) Hubinger et al. (2019a;c) suggest three conditions that must be satisfied for deceptive alignment to occur. These conditions can provide valuable insights into the experimental design for empirically monitoring the presence of deceptive alignment: * **The mesa-optimizer must have an objective that extends across parameter updates** This condition pertains to the nature of the mesa-optimizer's objective. For deceptive alignment to occur, the mesa-optimizer's objective must extend beyond a single decision or action; it should span multiple steps or episodes, including parameter updates. In other words, the mesa-optimizer's objective should be persistent, with a longer-term goal that extends beyond individual actions. * **The mesa-optimizer must be able to model the base objective** This condition focuses on the mesa-optimizer's cognitive capabilities. The mesa-optimizer should have the ability to model or understand that it is being selected by the base optimizer to achieve a particular base objective. Additionally, it should possess some level of knowledge on what the base objective is. * **The mesa-optimizer must expect the threat of modification to eventually go away** This condition relates to the mesa-optimizer's expectations about its future. 
The mesa-optimizer should anticipate that the threat of modification or intervention by the base optimizer will cease at some point in the future. This can happen either because the training process ends or because of actions taken by the mesa-optimizer itself. In other words, the mesa-optimizer should foresee a scenario where it can operate without being subject to external modifications.

Consequently, for empirically monitoring deceptive alignment, we could design experiments in such a way that they satisfy these conditions. To satisfy the first condition, we could ensure that the mesa-optimizer's objective spans across parameter updates and define scenarios where the mesa-optimizer's actions have consequences beyond a single decision or action. To satisfy the second condition, we could introduce scenarios where the mesa-optimizer must recognize and demonstrate some understanding of the base objective, thereby testing its ability to model this objective. To satisfy the third condition, we could create conditions where the mesa-optimizer has an opportunity to expect that the threat of modification will diminish. This might involve altering the training process or introducing external factors that influence the mesa-optimizer's expectations. Furthermore, distributional shift between training and deployment environments is encouraged as a means of triggering or exacerbating deceptive alignment. The mesa-optimizer may detect distributional shift when changes occur in its environment during deployment, which could serve as a trigger for it to consider that it is no longer subject to the same constraints it faced during training, and as a result, may exhibit deceptive alignment behavior.

### Automated LLM Alignment

With the increasing complexity of AI models, there is an emergent need for systems that can automatically evaluate and align those models. AI-driven supervisors and aligners could be developed to assist alignment research. By deploying these systems, overseers can gain insights into the behavior of their AI models, detecting anomalies and early signs of misalignment. This proactive approach can lead to timely interventions, reducing errors and potential misfires. However, like any AI-driven initiative, the implementation of automated alignment via AI is not without its challenges. Concerns about accuracy, reliability, and the potential risks associated with unsupervised alignments are among the primary issues researchers and industry practitioners are striving to address.

### Explainability and Transparency

The "black box" nature of LLMs has raised concerns about their transparency and the need for explainability. As these models could be used for critical decisions, understanding how they arrive at specific outcomes is paramount. When explainability and transparency work in tandem, they can create an interpretable system wherein transparency lays the groundwork for users to trust the model's operation, while explainability ensures that users can understand and validate the model's outputs. Thus, as these principles mutually reinforce each other, they collectively enhance the trustworthiness and accountability of large language models in a variety of applications. However, the research on explainability and transparency is still in its early stages, indicating that there is a vast terrain of unexplored potential and challenges ahead.
As large language models continue to grow in complexity and scale, ensuring that they remain understandable and accountable becomes an increasingly intricate task. Currently, many techniques applied to foster explainability and transparency offer only surface-level insights, failing to delve deep into the model's intricate decision-making process. Considering the interdisciplinary nature of AI alignment, continued collaboration between machine learning researchers, ethicists, and neuroscientists may be required to drive progress in interpretability research. ### Dynamic Evaluation of LLM Alignment via Adversarial Attacks Adversarial attacks serve as a powerful tool in the realm of AI. These are intentionally designed inputs meant to confuse or mislead AI systems. Using one large model as an attacker to generate adversarial examples targeting alignment can be an effective way to test and evaluate another model's alignment capabilities. Such dynamic testing, driven by adversarial attacks, is crucial to ensure that large models can robustly handle unexpected inputs without faltering. While this method introduces an added layer of complexity, the insights garnered from these adversarial tests can be invaluable, offering a comprehensive understanding of a model's strengths and weaknesses concerning alignment. ### Field Building of LLM Alignment: Bridging between LLM and AI Alignment Community The alignment community within the realm of AI is still nascent, with many questions left unanswered and numerous challenges unaddressed. The current landscape lacks a cohesive scientific paradigm, leading to controversies in theories, methodologies and empirical results. As a promising, unified testbed for various alignment methods, LLMs can serve as a platform to realize thought experiments and proposals, which will be helpful in developing stable research methodologies, establishing consensus on key issues, and crafting a consistent scientific framework for AI alignment. On the other hand, the deep theories, methodologies and findings in the AI alignment community will guide LLMs toward being aligned accurately, ethically, and effectively. Thus, the connection between LLMs and AI alignment community will build a virtuous circle that benefits both. ## 10 Conclusion The rapid evolution of LLM in recent years has undeniably ushered in a new era of technological prowess. However, with this power comes the responsibility of ensuring that these models operate within the boundaries of human ethics and expectations. This survey has provided a comprehensive overview of the alignment methodologies tailored for LLMs, emphasizing the criticality of aligning capability research with ethical considerations. By categorizing the alignment techniques into outer and inner alignment, we have shed light on the multifaceted approaches that the research community is currently employing. Emerging topics such as model interpretability, and vulnerabilities to adversarial attacks have been also discussed, underscoring the complexities involved in the alignment process. Furthermore, this paper has not only chronicled the current state of alignment research but has also looked ahead, identifying potential research trajectories that promise to further refine and enhance the alignment of LLMs. It is our fervent hope that this survey acts as a catalyst, fostering collaboration between the AI alignment community and LLM researchers. 
Such a collaborative approach is indispensable to harness the full potential of LLMs, ensuring that they serve humanity in a manner that is both ethically sound and beneficial. In essence, as we continue to push the boundaries of what LLMs can achieve, it is imperative that we remain grounded in our commitment to their responsible and principled deployment.
2309.03653
A Note on the Estimation of Von Neumann and Relative Entropy via Quantum State Observers
An essential quantity in quantum information theory is the von Neumann entropy which depends entirely on the quantum density operator. Once known, the density operator reveals the statistics of observables in a quantum process, and the corresponding von Neumann Entropy yields the full information content. However, the state, or density operator, of a given system may be unknown. Quantum state observers have been proposed to infer the unknown state of a quantum system. In this note, we show (i) that the von Neumann entropy of the state estimate produced by our quantum state observer is exponentially convergent to that of the system's true state, and (ii) the relative entropy between the system and observer's state converges exponentially to zero as long as the system starts in a full-rank state.
Mark Balas, Vinod P. Gehlot, Tristan D. Griffith
2023-09-07T11:47:17Z
http://arxiv.org/abs/2309.03653v1
# A Note on the Estimation of Von Neumann and Relative Entropy ###### Abstract An essential quantity in quantum information theory is the von Neumann entropy which depends entirely on the quantum density operator. Once known, the density operator reveals the statistics of observables in a quantum process, and the corresponding von Neumann Entropy yields the full information content. However, the state, or density operator, of a given system may be unknown. Quantum state observers have been proposed to infer the unknown state of a quantum system. In this note, we show (i) that the von Neumann entropy of the state estimate produced by our quantum state observer is exponentially convergent to that of the system's true state, and (ii) the relative entropy between the system and observer's state converges exponentially to zero as long as the system starts in a full-rank state. + Footnote †: Corresponding author: Mark Balas, [email protected] ## I Introduction: Quantum probability theory [1; 2; 3] predicts the behavior of many physical systems with unprecedented accuracy. When modelling such systems, it is important to be able to characterize their statistical properties. The fundamental element of quantum statistical mechanics is the density operator. Once determined, this operator reveals the statistics of quantum observables in a dynamical quantum system. This note concentrates on the inference of the density operator for a generic closed quantum system described by a Liouville-von Neumann master equation. Prior work on linear observers for quantum systems was done in [4]. Developments in quantum measurement and the quantum Kalman filter appear in [5; 6] with theoretical issues and foundations in [7]. Linear observability of quantum systems was studied in [8]. Clouatre and Balas et al. give a full characterization of the linear observability of closed quantum systems with positive operator-valued measure (POVM) output in [9] and provide a canonical quantum state observer. Since the original work of [4], the Hilbert metric projection of a square matrix onto the set of quantum density operators has been studied. In particular, it can be paired with a linear quantum state observer to produce an exponentially convergent nonlinear observer which ensures the observer's state is a valid quantum density operator. Recently [10] derived a closed-form solution of the metric projection making said nonlinear observers realizable. Now that these types of observers have been unlocked, questions regarding quantum information theoretic properties of the quantum state observer should be studied. Specifically, what quantities of information can be efficiently estimated for a quantum system in an unknown state? Answering this question could lead to additional practical uses of quantum state observers in quantum networks and quantum metrology which rely heavily on information-theoretic ideas [11; 12]. There are numerous quantum information-theoretic quantities, e.g. the von Neumann entropy, quantum relative entropy, and quantum Fisher information. In this note, we study the von Neumann entropy in the context of quantum state observers. In particular, we show that the von Neumann entropy of the observer's state always converges exponentially to the entropy of the system's state. We also study the quantum relative entropy. 
While there are cases where one may not expect the quantum relative entropy to converge, we show under reasonable assumptions (namely that the system starts in a full-rank state) that the quantum relative entropy between the observer's state and the system's state converges exponentially to zero; this means that the two states become indistinguishable. Proofs of these facts rely on logarithmic inequalities like those developed by Fannes [13].

## II Quantum State Observers

Linear quantum state observers are used to estimate the state of a quantum dynamical system of the form \[\dot{\mathbf{\varrho}}=-\imath\left[\mathbf{H},\,\mathbf{\varrho}\right] \tag{1}\] where \(\mathbf{\varrho}\in\mathbb{C}^{d\times d}\) is the state of the system, \(d\) is the dimension of the system, \(\mathbf{H}\in\mathbb{C}^{d\times d}\) is the system Hamiltonian, \(\left[\mathbf{A},\,\mathbf{B}\right]\) is the commutator of \(\mathbf{A}\) and \(\mathbf{B}\), and \(\imath\triangleq\sqrt{-1}\). The state \(\mathbf{\varrho}\) belongs to the set \(\mathcal{S}=\{\mathbf{\sigma}\in\mathbb{C}^{d\times d}\,:\,\text{tr}\left(\mathbf{\sigma}\right)=1,\,\mathbf{\sigma}=\mathbf{\sigma}^{\dagger},\,\mathbf{\sigma}\succeq\mathbf{0}\}\) of valid density operators. If the initial (unknown) state of the system is \(\mathbf{\varrho}_{0}\triangleq\mathbf{\varrho}(0)\), the goal of a quantum state observer is to produce an estimate \(\tilde{\mathbf{\varrho}}(t)\) which converges to the true state over time: \[\lim_{t\rightarrow\infty}\|\tilde{\mathbf{\varrho}}(t)-\mathbf{\varrho}(t)\|=0 \tag{2}\] where \(\|\cdot\|\) is the Hilbert-Schmidt norm on \(\mathbb{C}^{d\times d}\). This estimate is generated using a set of measurement statistics that are obtained over numerous experiments. The measurement statistics are summarized via a vector \(\mathbf{y}\in\mathbb{R}^{N_{\text{m}}}\) given by \[\mathbf{y}(t)=\begin{bmatrix}\operatorname{tr}\left(\mathbf{M}_{1}\,\mathbf{\varrho}(t)\right)\\ \operatorname{tr}\left(\mathbf{M}_{2}\,\mathbf{\varrho}(t)\right)\\ \vdots\\ \operatorname{tr}\left(\mathbf{M}_{N_{\text{m}}}\,\mathbf{\varrho}(t)\right)\end{bmatrix} \tag{3}\] where \(N_{\text{m}}\) is the number of measurement statistics and \(\left\{\mathbf{M}_{j}\right\}_{j=1}^{N_{\text{m}}}\subset\mathbb{C}^{d\times d}\) are observables. The first question to ask is whether the observer's task is feasible. This is certainly the case if the system is _linearly observable_ [14, sec. 14]. If the observables \(\left\{\mathbf{M}_{j}\right\}_{j=1}^{N_{\text{m}}}\) form a POVM, then the linear observability problem reduces to that of Clouatre et al. [9] and the observability can be easily tested using the dynamics in the form of (1). _Remark 1_.: As remarked in [9], without additional considerations for measurement back-action, the output \(\mathbf{y}(t)\) may not be obtained over any continuous interval due to the nature of obtaining measurement statistics from a quantum system. However, the observer may be discretized (sampled [15, pg. 116]) to obtain an exponentially convergent discrete-time observer. Developing theory in continuous time allows one to determine the limits of the theory. Denote the vectorization of the density operator as \(\mathbf{x}\triangleq\operatorname{vec}\{\mathbf{\varrho}\}\in\mathbb{C}^{d^{2}}\), which is a natural isometry between the Hilbert spaces \(\mathbb{C}^{d^{2}}\) and \(\mathbb{C}^{d\times d}\) equipped with their canonical inner products.
The dynamics of the vectorized system can now be efficiently written as \[\begin{cases}\dot{\mathbf{x}}(t)=\mathbf{A}\,\mathbf{x}(t)\\ \mathbf{y}(t)=\mathbf{C}\,\mathbf{x}(t)\end{cases} \tag{4}\] where \(\mathbf{A}\) is given by \[\mathbf{A}\triangleq-\imath\left(\mathbf{I}\otimes\mathbf{H}-\mathbf{H}^{T}\otimes\mathbf{I}\right) \tag{5}\] and \(\mathbf{C}\) is given by \[\mathbf{C}\triangleq\begin{bmatrix}\operatorname{vec}\{\mathbf{M}_{1}\}^{\dagger}\\ \operatorname{vec}\{\mathbf{M}_{2}\}^{\dagger}\\ \vdots\\ \operatorname{vec}\{\mathbf{M}_{N_{\text{m}}}\}^{\dagger}\end{bmatrix}\in\mathbb{C }^{N_{\text{m}}\times d^{2}}. \tag{6}\] The system is linearly observable if and only if the observability matrix \[\mathbf{O}(\mathbf{A},\,\mathbf{C})\triangleq\begin{bmatrix}\mathbf{C}\\ \mathbf{C}\mathbf{A}\\ \mathbf{C}\mathbf{A}^{2}\\ \vdots\\ \mathbf{C}\mathbf{A}^{d^{2}-1}\end{bmatrix}\in\mathbb{C}^{N_{\text{m}}d^{2}\times d^{2}}\] is of rank \(d^{2}\) (i.e., \(\mathbf{O}(\mathbf{A},\mathbf{C})\) has a trivial nullspace) [15, pg. 221]. An equivalent definition of observability is the ability to choose \(\mathbf{K}\in\mathbb{C}^{d^{2}\times N_{\text{m}}}\) such that the matrix \((\mathbf{A}-\mathbf{K}\mathbf{C})\) has eigenvalues in any desired location [16]. The matrix \(\mathbf{K}\) is called the observer gain. A linear quantum state observer dynamically updates its estimate \(\tilde{\mathbf{x}}\in\mathbb{C}^{d^{2}}\) of the true state \(\mathbf{x}\) as follows: \[\begin{cases}\dot{\tilde{\mathbf{x}}}(t)=\mathbf{A}\,\tilde{\mathbf{x}}(t)+\mathbf{K}(\mathbf{y}(t )-\tilde{\mathbf{y}}(t))\\ \tilde{\mathbf{y}}(t)=\mathbf{C}\,\tilde{\mathbf{x}}(t)\end{cases}. \tag{7}\] Define the observer error to be \(\mathbf{e}(t)\triangleq\tilde{\mathbf{x}}(t)-\mathbf{x}(t)\). Differentiating this expression using (4) and (7) reveals the error dynamics \[\dot{\mathbf{e}}(t)=\left(\mathbf{A}-\mathbf{K}\mathbf{C}\right)\mathbf{e}(t). \tag{8}\] If one chooses \(\mathbf{K}\) such that all of the eigenvalues of \((\mathbf{A}-\mathbf{K}\mathbf{C})\) are in the left-hand side of the complex plane with real components upper bounded by \(-\lambda<0\), then the observer error tends towards zero exponentially, i.e. there exists an \(M\in\mathbb{R}\) such that \[\|\mathbf{e}(t)\|_{2}\leq M\,e^{-\lambda t},\;\;\;\forall t\geq 0. \tag{9}\] Letting \(\tilde{\mathbf{\varrho}}(t)=\operatorname{vec}\{\tilde{\mathbf{x}}(t)\}^{-1}\) and recalling that the vectorization operator is an isometry between \(\mathbb{C}^{d^{2}}\) and \(\mathbb{C}^{d\times d}\), one can conclude \[\|\tilde{\mathbf{\varrho}}(t)-\mathbf{\varrho}(t)\|=\|\mathbf{e}(t)\|_{2}\leq M\,e^{- \lambda t} \tag{10}\] for all \(t\geq 0\), which achieves the goal of (2). Linear observability cannot be achieved with an arbitrarily small number of observables. In fact, at least \(d\) observables are required. The following result extends that of [9] to the case where the measurement statistics do not come from a POVM. **Theorem 1**.: _If the closed quantum system \((\mathbf{A},\,\mathbf{C})\) is linearly observable then \(N_{\text{m}}\geq d\)._ Proof.: The nullspace of \(\mathbf{A}\) has dimension at least \(d\). 
Let \(\mathbf{v}_{1},\mathbf{v}_{2},\ldots,\mathbf{v}_{d}\) be linearly independent vectors from this space, and define \[\mathbf{v}_{j}^{\prime}\triangleq\mathbf{C}\,\mathbf{v}_{j}\in\mathbb{C}^{N_{\text{m}}},\;\;\;j=1,2,\ldots,d.\] If \(N_{\text{m}}<d\), then the vectors \(\{\mathbf{v}_{j}^{\prime}\}_{j=1}^{d}\) are linearly dependent and there exists a set of non-zero coefficients \(\{\beta_{1},\beta_{2},\ldots,\beta_{d}\}\) such that \[\sum_{j=1}^{d}\beta_{j}\,\mathbf{v}_{j}^{\prime}=0.\] Therefore, \(\mathbf{v}\triangleq\sum_{j=1}^{d}\beta_{j}\,\mathbf{v}_{j}\neq\mathbf{0}\) is such that \(\mathbf{C}\mathbf{v}=\mathbf{0}\) and \(\mathbf{C}\mathbf{A}^{k}\mathbf{v}=\mathbf{0}\) for \(k=1,2,\ldots,d^{2}-1\). That is, the nullspace of \(\mathbf{O}(\mathbf{A},\,\mathbf{C})\) contains a non-zero vector. The rank of the observability matrix is then strictly less than \(d^{2}\) and the system is not linearly observable.

One should note that \(\tilde{\mathbf{\varrho}}(t)\) is not necessarily a valid quantum density operator despite converging exponentially fast to an element of the set \(\mathcal{S}\). One can rectify this by letting \(\hat{\mathbf{\varrho}}(t)\triangleq\mathcal{P}_{\mathcal{S}}(\tilde{\mathbf{\varrho}}(t))\) be the projection of \(\tilde{\mathbf{\varrho}}(t)\) onto the set \(\mathcal{S}\) of valid density operators. Recently, a closed-form solution for this projection has been presented in the literature [10]. Because \(\mathcal{P}_{\mathcal{S}}\) is the Hilbert metric projection onto a closed convex set, it is non-expansive, i.e., the projection cannot increase the estimation error. Therefore, \(\hat{\mathbf{\varrho}}(t)\) converges exponentially to the true state \(\mathbf{\varrho}(t)\) of the system while also being a valid density: \[\|\hat{\mathbf{\varrho}}(t)-\mathbf{\varrho}(t)\|\leq\|\tilde{\mathbf{\varrho}}(t)-\mathbf{\varrho}(t)\|\leq M\,e^{-\lambda t},\ \ \ \forall t\geq 0. \tag{11}\] _Remark 2_.: The projection \(\mathcal{P}_{\mathcal{S}}\) is nonlinear since it does not obey the principle of superposition. For instance, for two matrices \(\mathbf{A}\) and \(\mathbf{B}\) it is never the case that \(\mathcal{P}_{\mathcal{S}}(\mathbf{A}+\mathbf{B})=\mathcal{P}_{\mathcal{S}}(\mathbf{A})+\mathcal{P}_{\mathcal{S}}(\mathbf{B})\) since the left side of the equation has trace one and the right side has trace two. Hence, the estimate \(\hat{\mathbf{\varrho}}(t)\) is actually an exponentially convergent _nonlinear_ observer despite being based on linear observer theory.

## III Estimation of von Neumann & relative entropy

Claude Shannon introduced the idea of the _entropy_ [17] of a random variable: given a random variable \(\mathbf{x}\) with (finite) probability distribution \[(P_{1},P_{2},\ldots,P_{n})=(p(x_{1}),p(x_{2}),\ldots,p(x_{n})) \tag{12}\] the _Shannon entropy_ is \[Q(\mathbf{x})=Q(P_{1},P_{2},\ldots,P_{n})\triangleq-\sum_{i=1}^{n}p_{i}\log p_{i}. \tag{13}\] This entropy can be seen as both the average information gain when we learn the value of \(x\) and the average amount of uncertainty before we learn the value of \(x\). The definition \(0\log 0\triangleq 0\) is used for the value of the entropy at the origin. Later in [2], von Neumann introduced the _von Neumann quantum entropy_ \[S(\mathbf{\varrho})\triangleq-\mathrm{tr}\left(\mathbf{\varrho}\log\mathbf{\varrho}\right) \tag{14}\] of the density \(\mathbf{\varrho}\in\mathcal{S}\). In this case, \(S\) measures the "uncertainty" of a quantum density operator.
A pure state has entropy zero; however, the entropy of a mixed state is non-zero since it represents an ensemble of systems in various states. Because \(\mathbf{\varrho}\) is Hermitian, it can be diagonalized by a unitary \(\mathbf{Q}\) such that \(\mathbf{\varrho}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{\dagger}\). Using this fact and standard properties of the trace and matrix logarithm, one can show \[S(\mathbf{\varrho})=S(\mathbf{\Lambda})=-\sum_{k=1}^{d}\lambda_{k}\log\lambda_{k} \tag{15}\] which is also the Shannon entropy \[Q(\mathbf{\lambda})\equiv-\sum_{k=1}^{d}\lambda_{k}\log\lambda_{k} \tag{16}\] given \(\mathbf{\lambda}=\mathrm{diag}\{\mathbf{\Lambda}\}\). It is known, cf. [18], that \(S(\mathbf{\varrho})\) is nonnegative and bounded: \[0\leq S(\mathbf{\varrho})\leq\log d. \tag{17}\]

**Definition 1**.: A bounded function \(f:\mathbb{R}_{+}\to\mathbb{C}^{d\times d}\) is said to be _essentially exponentially convergent_ if there exists a finite time \(T\in(0,\infty)\) and positive constants \(M\) and \(\sigma\) such that \[\|f(t)\|\leq Me^{-\sigma t},\ \ \forall t\geq T.\]

Using the notation of the previous section, we will prove the first main result of this note. This result shows that the nonlinear observer developed from control theory in Section II will also produce an essentially exponentially convergent estimate of the von Neumann entropy of \(\mathbf{\varrho}\). The proof of Theorem 2 will follow the presentation of two lemmas necessary for the proof.

**Theorem 2**.: _Let \(\hat{\mathbf{\varrho}}(t)\) be the density estimate produced by the quantum state observer described in the prior section. If the observer gain \(\mathbf{K}\) has been designed such that the observer error is bounded by the exponential rate \(Me^{-\sigma t}\), then the von Neumann entropy of the estimated state \(\hat{\mathbf{\varrho}}(t)\) is essentially exponentially convergent to that of the true state \(\mathbf{\varrho}\)._

**Lemma 1** (Fannes' Inequality, [13; 19]).: _If \(\mathbf{\varrho},\mathbf{\sigma}\in\mathcal{S}\) satisfy \(\epsilon\triangleq\|\mathbf{\varrho}-\mathbf{\sigma}\|_{1}\leq\frac{1}{e}\), where \(\|\cdot\|_{1}\) is the trace norm, then_ \[|S(\mathbf{\varrho})-S(\mathbf{\sigma})|\leq\epsilon\log d-\epsilon\log\epsilon. \tag{18}\]

**Lemma 2**.: _Let \(\epsilon>0\) and \(a\triangleq 1-\frac{1}{e}\). For all \(t\geq 0\), the following inequality holds_ \[te^{-\epsilon t}\leq\frac{1}{\epsilon}\,e^{-a\epsilon t}. \tag{19}\] _This inequality is tight with equality holding when \(t=\frac{1}{\epsilon(1-a)}\)._

Proof.: Note that the inequality trivially holds when \(t=0\). Hence, let \(t>0\). The inequality in question holds if and only if the inequality \[\mu e^{-\mu}\leq e^{-a\mu}\] holds with \(\mu\triangleq\epsilon t>0\). However, this inequality is equivalent to \[1\leq\frac{1}{\mu}\;e^{(1-a)\mu}=\frac{(1-a)}{(1-a)\mu}\;e^{(1-a)\mu}=(1-a)\frac{e^{x}}{x} \tag{20}\] where \(x\triangleq(1-a)\mu\). The function \(f(x)=\frac{e^{x}}{x}\) on the domain \((0,\infty)\) is convex. Therefore, \(f(x)\) is minimized when \(f^{\prime}(x)=0\). This occurs for \(x=1\). Thus (20) holds when \((1-a)\geq\frac{1}{e}\), which holds with the choice of \(a\) in the statement of the lemma. Therefore inequality (19) is true. Equality holds when \(t=\frac{1}{\epsilon(1-a)}\).
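The eigenvalue formula (15) and the two sides of Fannes' inequality (18) are straightforward to evaluate numerically. The following is a minimal numpy sketch, offered only as an illustration; it assumes natural logarithms (so that the entropy is capped by \(\log d\)) and a small eigenvalue cutoff to implement \(0\log 0\triangleq 0\).

```python
# Numerical sketch of the von Neumann entropy (15) and Fannes' bound (18),
# assuming natural logarithms and a small cutoff implementing 0 log 0 := 0.
import numpy as np

def von_neumann_entropy(rho: np.ndarray) -> float:
    """S(rho) = -sum_k lambda_k log lambda_k over the eigenvalues of rho."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-15]            # drop (numerically) zero eigenvalues
    return float(-np.sum(lam * np.log(lam)))

def fannes_sides(rho: np.ndarray, sigma: np.ndarray):
    """Return (|S(rho) - S(sigma)|, eps*log d - eps*log eps); the bound applies when eps <= 1/e."""
    d = rho.shape[0]
    eps = float(np.sum(np.abs(np.linalg.eigvalsh(rho - sigma))))  # trace norm of the Hermitian difference
    lhs = abs(von_neumann_entropy(rho) - von_neumann_entropy(sigma))
    rhs = eps * (np.log(d) - np.log(eps)) if eps > 0 else 0.0
    return lhs, rhs

# Example: a maximally mixed qubit has entropy log 2, a pure state has entropy 0:
# von_neumann_entropy(np.eye(2) / 2) ~ 0.693, von_neumann_entropy(np.diag([1.0, 0.0])) = 0.
```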
Proof of Theorem 2.: Recall that for any matrix \(\mathbf{A}\in\mathbb{C}^{d\times d}\) the trace norm \(\|\mathbf{A}\|_{1}\) and Hilbert-Schmidt norm \(\|\mathbf{A}\|\) satisfy the inequality \(\|\mathbf{A}\|_{1}\leq\sqrt{d}\|\mathbf{A}\|\). By assumption of the theorem, the error dynamics of the observer are exponentially convergent. Therefore \(\epsilon(t)\triangleq\|\mathbf{\varrho}(t)-\hat{\mathbf{\varrho}}(t)\|_{1}\leq\sqrt{d} Me^{-\sigma t}\). Accordingly, for \(T=\frac{\ln\left(eM\sqrt{d}\right)}{\sigma}\) the inequality \(\sqrt{d}Me^{-\sigma t}\leq 1/e\) holds true for all \(t\geq T\). Past that point in time, Fannes' inequality holds: \[|S(\mathbf{\varrho})-S(\hat{\mathbf{\varrho}})|\leq\epsilon(t)\log(d)-\epsilon(t)\log (\epsilon(t)).\] Note that the function \(g(x)\triangleq-x\log x\) is increasing on \((0,\frac{1}{e})\). Thus the convergence hypothesis pairs with Fannes' inequality to give \[|S(\mathbf{\varrho})-S(\hat{\mathbf{\varrho}})| \leq \log(d)\sqrt{d}Me^{-\sigma t}\] \[-\sqrt{d}Me^{-\sigma t}\log\Bigl{(}\sqrt{d}Me^{-\sigma t}\Bigr{)}\] \[= \log(d)\sqrt{d}Me^{-\sigma t}\] \[-\sqrt{d}M\left(\log\Bigl{(}\sqrt{d}M\Bigr{)}-\sigma t\right)e^{ -\sigma t}.\] Using Lemma 2, \[|S(\mathbf{\varrho})-S(\hat{\mathbf{\varrho}})| \leq \log(d)\sqrt{d}Me^{-\sigma t}\] \[-\sqrt{d}M\log\Bigl{(}\sqrt{d}M\Bigr{)}e^{-\sigma t}+\sqrt{d}Me^ {-a\sigma t}.\] This proves the theorem. Von Neumann also defined the _relative entropy_ of two quantum densities: \[0\leq S(\mathbf{\varrho}\|\mathbf{\sigma})\triangleq\operatorname{tr}(\mathbf{\varrho} \log\mathbf{\varrho}-\mathbf{\varrho}\log\mathbf{\sigma}). \tag{21}\] This relative entropy is also the _distinguishability_ between \(\mathbf{\varrho}\) and \(\mathbf{\sigma}\): when \(S(\mathbf{\varrho}\|\mathbf{\sigma})=0\), the two densities are identical (19, ch. 11.3). Moreover, the relative entropy is unbounded. That is \(S(\mathbf{\varrho}\|\mathbf{\sigma})\to+\infty\) is allowed. This happens, for instance, when the kernel of \(\mathbf{\sigma}\) is not contained within that of \(\mathbf{\varrho}\). Thus, in general, we should not hope for convergence of the relative entropy. This is elucidated in the following. _Remark 3_.: Consider the stationary density \(\mathbf{\varrho}\triangleq|0\rangle\!\langle 0|\) alongside the time-dependent density \[\mathbf{\sigma}(t)\triangleq\begin{bmatrix}1-e^{-t}&0\\ 0&e^{-t}\end{bmatrix}.\] Note that \(\mathbf{\sigma}(t)\to\mathbf{\varrho}\) exponentially as \(t\to\infty\). However, at any finite time \(t\in(0,\infty)\) the relative entropy \(S(\mathbf{\sigma}\|\mathbf{\varrho})\triangleq+\infty\). This example shows that convergence of the relative entropy is a strictly stronger condition than convergence in the Hilbert-Schmidt norm. Despite this example, under appropriate assumptions, we can prove that the relative entropy is convergent. The second main result of this section is presented below. **Theorem 3**.: _Suppose that \(\mathbf{\varrho}(0)\) is positive definite. 
Under the same hypotheses as Theorem 2, the relative entropy \(|S(\hat{\mathbf{\varrho}}\|\mathbf{\varrho})|\) is essentially exponentially convergent._ Proof.: Given \(|S(\hat{\mathbf{\varrho}}\|\mathbf{\varrho})|=|\operatorname{tr}\left(\hat{\mathbf{\varrho}}\log\hat{\mathbf{\varrho}}\right)-\operatorname{tr}\left(\hat{\mathbf{\varrho}}\log\mathbf{\varrho}\right)|\), \[|S(\hat{\mathbf{\varrho}}\|\mathbf{\varrho})| =|\operatorname{tr}\left(\hat{\mathbf{\varrho}}\log\hat{\mathbf{\varrho}}\right)+S(\mathbf{\varrho})-S(\mathbf{\varrho})-\operatorname{tr}\left(\hat{\mathbf{\varrho}}\log\mathbf{\varrho}\right)|\] \[\leq|S(\mathbf{\varrho})-S(\hat{\mathbf{\varrho}})|+|\underbrace{\operatorname{tr}\left((\mathbf{\varrho}-\hat{\mathbf{\varrho}})\log\mathbf{\varrho}\right)}_{(\mathbf{\varrho}-\hat{\mathbf{\varrho}},\log\mathbf{\varrho})_{\rm tr}}|.\] Using the Cauchy-Schwarz inequality, \[|S(\hat{\mathbf{\varrho}}\|\mathbf{\varrho})|\leq|S(\mathbf{\varrho})-S(\hat{\mathbf{\varrho}})|+\|\mathbf{\varrho}-\hat{\mathbf{\varrho}}\|\cdot\|\log\mathbf{\varrho}\|.\] Note that \(\mathbf{\varrho}(t)=\mathbf{U}(t)\mathbf{\varrho}(0)\mathbf{U}(t)^{\dagger}\) where \(\mathbf{U}(t)\triangleq e^{-\imath\mathbf{H}t}\) is unitary. Applying the definition of the matrix logarithm, \[\log\mathbf{\varrho}(t)=\mathbf{U}(t)\,\log(\mathbf{\varrho}(0))\,\mathbf{U}(t)^{\dagger}.\] Combining this equation with the unitary invariance of the Hilbert-Schmidt norm on \(\mathbb{C}^{d\times d}\), we see that \(\|\log\mathbf{\varrho}(t)\|=\|\log\mathbf{\varrho}(0)\|\) for all \(t\geq 0\). Since \(\mathbf{\varrho}(0)\) is assumed to be positive definite, \(D\triangleq\|\log\mathbf{\varrho}(0)\|<\infty\). Therefore \[S(\hat{\mathbf{\varrho}}\|\mathbf{\varrho})\leq|S(\mathbf{\varrho})-S(\hat{\mathbf{\varrho}})|+D\|\hat{\mathbf{\varrho}}-\mathbf{\varrho}\|.\] Theorem 2 proved that \(|S(\mathbf{\varrho})-S(\hat{\mathbf{\varrho}})|\) is essentially exponentially convergent. Hence \(S(\hat{\mathbf{\varrho}}\|\mathbf{\varrho})\) is bounded by the sum of an essentially exponentially convergent term and an exponentially convergent term. Therefore \(S(\hat{\mathbf{\varrho}}\|\mathbf{\varrho})\) is itself essentially exponentially convergent. Theorem 3 shows that the quantum state observer developed in Section II will also produce essentially exponential convergence of the relative entropy to zero as long as \(\mathbf{\varrho}(0)\) is full-rank. One would imagine the rank condition is satisfied quite often in nature; however, it is an interesting open research problem to study whether this assumption can be relaxed. ### Example: A laser-driven atom A numerical example will now be presented to illustrate the results developed in this note. Consider a laser-driven atom governed by the Liouville-von Neumann equation with Hamiltonian \[\mathbf{H}=\begin{bmatrix}E_{0}&\omega\\ \bar{\omega}&E_{1}\end{bmatrix}\] where \(E_{0},E_{1}\in\mathbb{R}\) are its energy eigenvalues and \(\omega\in\mathbb{C}\) is the (constant) driving frequency. Assume \(E_{0}=-0.5\), \(E_{1}=0.5\), and \(\omega=3\). The output \(\mathbf{y}(t)\) will consist of the expected values of the projective measurements onto the eigenstates of the undriven Hamiltonian: \(|0\rangle\!\langle 0|\) and \(|1\rangle\!\langle 1|\). After vectorizing the master equation, one will find that the Kalman observability matrix \(\mathbf{O}(\mathbf{A},\mathbf{C})\) has rank 4. Thus the system is linearly observable. 
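The observability claim is easy to check numerically. Below is a minimal sketch (ours, not taken from the note), assuming the column-major vectorization of the Liouville-von Neumann equation, \(\frac{d}{dt}\mathrm{vec}(\mathbf{\varrho})=-\imath(\mathbf{I}\otimes\mathbf{H}-\mathbf{H}^{\mathsf{T}}\otimes\mathbf{I})\,\mathrm{vec}(\mathbf{\varrho})\), with outputs \(y_{k}=\operatorname{tr}(\mathbf{P}_{k}\mathbf{\varrho})=\mathrm{vec}(\mathbf{P}_{k})^{\dagger}\mathrm{vec}(\mathbf{\varrho})\); the variable names are illustrative.

```python
import numpy as np

E0, E1, omega = -0.5, 0.5, 3.0
H = np.array([[E0, omega],
              [omega, E1]], dtype=complex)
I2 = np.eye(2)

# Vectorized Liouville-von Neumann dynamics: d/dt vec(rho) = A vec(rho)
A = -1j * (np.kron(I2, H) - np.kron(H.T, I2))

# Outputs: expectations of the projectors |0><0| and |1><1|
P0 = np.diag([1.0, 0.0]).astype(complex)
P1 = np.diag([0.0, 1.0]).astype(complex)
C = np.vstack([P0.reshape(-1, order="F").conj(),
               P1.reshape(-1, order="F").conj()])

# Kalman observability matrix O = [C; CA; CA^2; CA^3]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(4)])
print(np.linalg.matrix_rank(O))   # prints 4: the vectorized system is observable
```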
The observer gain \(\mathbf{K}=\mathbf{C}^{\dagger}\) will be used, which ensures the exponential stability of the observer error dynamics. The system and observer are initiated in the states \[\mathbf{\varrho}_{0}\triangleq\begin{bmatrix}0.25&0\\ 0&0.75\end{bmatrix}\quad\text{and}\quad\hat{\mathbf{\varrho}}_{0}\triangleq\begin{bmatrix}0&0\\ 0&1\end{bmatrix}.\] Figure 1 plots the results of this numerical experiment. In Figure 1(a), one can see that the von Neumann entropy of the estimated state \(\hat{\mathbf{\varrho}}\) converges to that of the true state \(\mathbf{\varrho}\) as guaranteed by Theorem 2. In Figure 1(b), one can see that the relative entropy \(S(\hat{\mathbf{\varrho}}\|\mathbf{\varrho})\) converges exponentially to zero as the normed estimation error converges exponentially to zero. In this limit the quantum states \(\mathbf{\varrho}\) and \(\hat{\mathbf{\varrho}}\) become indistinguishable. This was guaranteed by Theorem 3 since \(\mathbf{\varrho}_{0}\) is full-rank. ## IV Conclusion A valid quantum state observer is one whose state converges to that of the reference quantum system while also ensuring that its estimated state is a valid density operator. This note showed that when a valid quantum state observer is used to infer the state of a closed quantum system: (i) the entropy of the observer's state is always essentially exponentially convergent to that of the system's state, and (ii) the relative entropy between the observer and system's states is essentially exponentially convergent to zero as long as the system starts in a full-rank state.
2310.00239
AdaptNet: Policy Adaptation for Physics-Based Character Control
Motivated by humans' ability to adapt skills in the learning of new ones, this paper presents AdaptNet, an approach for modifying the latent space of existing policies to allow new behaviors to be quickly learned from like tasks in comparison to learning from scratch. Building on top of a given reinforcement learning controller, AdaptNet uses a two-tier hierarchy that augments the original state embedding to support modest changes in a behavior and further modifies the policy network layers to make more substantive changes. The technique is shown to be effective for adapting existing physics-based controllers to a wide range of new styles for locomotion, new task targets, changes in character morphology and extensive changes in environment. Furthermore, it exhibits significant increase in learning efficiency, as indicated by greatly reduced training times when compared to training from scratch or using other approaches that modify existing policies. Code is available at https://motion-lab.github.io/AdaptNet.
Pei Xu, Kaixiang Xie, Sheldon Andrews, Paul G. Kry, Michael Neff, Morgan McGuire, Ioannis Karamouzas, Victor Zordan
2023-09-30T03:19:51Z
http://arxiv.org/abs/2310.00239v3
# AdapNet: Policy Adaptation for Physics-Based Character Control ###### Abstract Motivated by humans' ability to adapt skills in the learning of new ones, this paper presents AdapNet, an approach for modifying the latent space of existing policies to allow new behaviors to be quickly learned from like tasks in comparison to learning from scratch. Building on top of a given reinforcement learning controller, AdapNet uses a two-tier hierarchy that augments the original state embedding to support modest changes in a behavior and further modifies the policy network layers to make more substantive changes. The technique is shown to be effective for adapting existing physics-based controllers to a wide range of new styles for locomotion, new task targets, changes in character morphology and extensive changes in environment. Furthermore, it exhibits significant increase in learning efficiency, as indicated by greatly reduced training times when compared to training from scratch or using other approaches that modify existing policies. Code is available at [https://motion-lab.github.io/AdaptNet](https://motion-lab.github.io/AdaptNet). **ACM Reference Format** Pei Xu, Kaixiang Xie, Sheldon Andrews, Paul G. Kry, Michael Neff, Morgan McGuire, Ioannis Karamouzas, and Victor Zordan. 2023. AdapNet: Policy Adaptation for Physics-Based Character Control. _ACM Trans. Graph._**42**, 6, Article 112.1522 (December 2023), 18 pages. [https://doi.org/10.1145/3618375](https://doi.org/10.1145/3618375) ## 1. Introduction Research on physically-based character animation has received a great deal of attention recently, especially using reinforcement learning (RL) to develop control policies that produce a wide spectrum of motion behaviors and styles with few or no manual inputs. Most techniques rely on reference human motion to either provide direct tracking or indirect comparison to constrain movement, along with additional targets and rewards to shape task success (e.g., [12, 20, 21]). However, methods to date largely develop policies or controllers for a known behavior, and must be learned (usually from scratch) to produce a new behavior. While curriculum-style learning and warm-start approaches may be used to migrate policies to targeted Figure 1. Examples policy adaptation for locomotion. From left to right and top to bottom: motion interpolation, local collision avoidance, body-length changes, style transfer, morphology changes, rough terrain adaptation. goal tasks (Tao et al., 2022; Yin et al., 2021), we instead aim to broadly adapt previously trained policies to make them usable in a wide spectrum of new scenarios without the need for full retraining. Inspired by recent work in conditioning existing models in image-based stable diffusion and large language models (Hu et al., 2021; Zhang and Agrawala, 2023), we introduce _AdaptNet_ as an approach for controlling physically based characters that modifies an existing policy to produce behavior in a variety of new settings. The main novelty of our work is the ability to control the motion generation process through editing the latent space. In physics-based character control tasks, there is an opportunity to better understand and exploit the latent space representation of control policies obtained using reinforcement learning frameworks. AdapNet provides an initial step in this direction. Specifically, our approach relies on the training of weights for new network components that are injected into a previously trained policy network. 
Building on top of a pre-existing multi-objective reinforcement learning controller, we propose a two-tier architecture for AdapNet that augments the latent state embedding while adding modifications to the remaining layers for control refinement. The first layer modifies the latent space projected from the association of the task and character state. It supports adding elements to the control state, as well as changing the imitation and task rewards. Meanwhile, the deeper, control-level refinement augments the policy's action derived from the latent state, supporting more substantive changes to the task control. Together, AdapNet performs fast training from a previously trained policy and is capable of making a wide spectrum of adaptations from a single behavior. As in Figure 1, we showcase our learning framework with numerous controller adaptation examples, including changes in the style of locomotion derived from very short reference motions. AdapNet can perform this "few-shot style transfer" using only the embedding layer augmentation in a fraction of the time it takes to learn the original locomotion policy. Furthermore, through interpolating in the latent space, it is possible to control the generated control dynamically and smoothly transition from the original behavior to the new style. We further experiment with changes to the character morphology by "locking" joints and changing limb lengths. While these changes lead to failure in the original policy, AdapNet augments the policy easily to account for the various changes. We also investigate changes in the environment, exploring adaptation for locomotion on rough and slick (low-friction) terrains, as well as on obstacle-filled environments. In each case, AdapNet provides significant improvement leading to characters that robustly traverse a range of new settings (see Figure 1 and accompanying video). We evaluate the effectiveness of AdapNet on various tasks, including its ability for adaptation of imitation learning, different goal rewards, and environmental states. We compare our approach against training from scratch, as well as training-continuation (fine-tuning). Training with AdapNet can typically be carried out within 10-30 minutes for simple adaptation tasks, and up to 4 hours for complex locomotion tasks and environment changes. Within such modest training time budgets, in most cases it is impossible to obtain a working controller that can adhere to imitation and goal-task objectives when training from scratch or finetuning a pre-existing policy. Additional ablation studies support the specific architecture we propose over several alternatives along with highlighting AdapNet-Net's ability to successfully control and modify the latent space. The contributions of our work are summarized as follows: * We show how the latent space representation of an RL policy can be modified for motion synthesis in physics-based motor control tasks. * Based on this, we introduce AdapNet as a framework to efficiently modify a pre-trained physics-based character controller to new tasks. * We showcase the applicability of AdapNet on a variety of multi-objective adaptation tasks, including few-shot motion style transfer, motion interpolation, character morphology adaptation, and terrain adaptation. ## 2. Related Work Our approach follows a wide set of previous related work stemming from general disciplines in computer animation, robotics, machine learning and image generation. 
We focus on the background that is most relevant, categorized in physically based character skill control, transfer learning, and latent space adaptation. ### Deep Reinforcement Learning for Skilled Motion Deep learning neural network control policies have become the staple for physics-based character animation research due to their ability to synthesize a range of skilled motions. In recent years, techniques have trained control policies to animate physics-based humanoid characters for agile motions (Yin et al., 2021), team sports (Liu and Hodgins, 2018; Xie et al., 2022), martial arts (Won et al., 2021), juggling (Chemin and Lee, 2018; Luo et al., 2021; Xu et al., 2023), performing complex environment interactions (Merel et al., 2020), as well as general locomotion tasks (Bergamin et al., 2019; Peng et al., 2018). The recent survey by Kwiatkowski et al. (2022) provides a comprehensive overview of approaches that have been developed for motion synthesis and control of animated characters. Training skill-specific policies often requires extended training time, necessitating years of simulated learning (Peng et al., 2022). Skill re-use and combining pre-trained policies to perform more complex tasks offer an alternative that can create needed savings from this extensive training. To this end, a number of papers have proposed ways to reuse and/or combine policies. For example, Deep-Mimic (Peng et al., 2018) trains a composite policy that transitions between a collection of different skills. Liu and Hodgins (2017) experiment with hierarchical models that sequence a set of pre-trained control fragments. Hejna et al. (2020) explore a hierarchical approach to decouple low and high-level policies to transfer skills from agents with simple morphologies to more complex ones, and found that it helps to reduce overall sampling. Likewise, we demonstrate that the proposed AdapNet approach is effective when adapting pre-trained policies to new character morphologies and motion styles with relatively little additional training time. Curriculum learning is also related to skill adaptation since the agent is trained on tasks with increasing difficulty (Karpathy and van de Panne, 2012; Yu et al., 2018). The approach is demonstrated to be effective for training controllers that allow agents to traverse environments of increasing complexity (Heess et al., 2017; Xie et al., 2020) and recover to standing (Frezzato et al., 2022) under increasingly challenging conditions. In comparison, we demonstrate that our approach efficiently allows a physically simulated humanoid to adapt pre-trained walking and running skills to new terrains as well. However, the aim for curriculum learning is somewhat different than our own in that it is usually used as a means to develop a single advanced skill while we focus on the ability to generalize from one behavior to many. ### Transfer Learning In machine learning, a common approach for model adaptation is to start with a pre-trained model and fine tune it on a new task. Over the years a number of architectures have been proposed to overcome the overfitting and expressivity issues of finentuning, including GAN-inspired approaches for domain adaptation (Ganin et al., 2016; Tzeng et al., 2017) and adding new models to previously learnt ones through lateral connections (Rusu et al., 2016, 2017). To facilitate better model transfer, algorithms have been explored that account for entropy optimization (Haarnoja et al., 2017; Wang et al., 2021). 
As well, others directly manipulate the source task domain through randomizing physical parameters of the agent and/or environment while adapting the source domain to the target one (Ganin et al., 2016; Peng et al., 2018; Rajeswaran et al., 2017). To encourage diversity during early training, recent work on transfer learning has also explored a multi-task paradigm where a model is pre-trained on many tasks before being transferred to a new target domain (Alet et al., 2018; Devin et al., 2017). Some multi-task transfer learning solutions include policy distillation that seeks to "distill" knowledge from expert policies to a target policy (Parisotto et al., 2016; Rusu et al., 2016). Another approach with a similar goal is policy learning which learns a residual around given expert policies (Silver et al., 2019). Meta learning has also gained popularity recently in computer vision and robotics, seeking to leverage past experiences obtained from many tasks to acquire a more generalizable and faster model that can be quickly adapted to new tasks (Andrychowicz et al., 2016; Ravi and Larochelle, 2017). The related formulations can be broadly classified into models that ingest a history of past experiences through recurrent architectures (Duan et al., 2016; Heess et al., 2015), model-agnostic meta-learning methods (Finn et al., 2017; Nichol et al., 2018), and approaches for meta-learning hyperparameters, loss functions, and task-dependent exploration strategies (Gupta et al., 2018; Houthooft et al., 2018; Xu et al., 2018). While some of the aforementioned approaches have shown great promise for agent control problems, in this paper, we propose an approach that can quickly adapt RL policies for physically simulated humanoids through fine control tuning as well as augmentation injected in the latent space, loosely inspired by recent findings in image diffusion (Hu et al., 2021; Mou et al., 2023; Zhang and Agrawala, 2023). In character animation, related work has focused on motion style transfer tasks for _kinematic_ characters (Aberman et al., 2020; Mason et al., 2018) and the recent work of Starke et al. (2022) shows exciting results about how a well-learned latent space can aid motion synthesis. However, in physics-based character control tasks, there is still little investigation about the latent space representation of the control policy obtained using reinforcement learning frameworks. We believe that AdapNet provides a promising step in bridging that gap. ### Latent Space Adaptation We are inspired by research in image and 3D model generation that shows it is possible to control the synthesis process to generate targeted artifacts through purposeful modification of the latent space (Abdal et al., 2019; Berthelot et al., 2017; Bojanowski et al., 2018; Epstein et al., 2022; Karras et al., 2020; Radford et al., 2016; Shen et al., 2020; Wu et al., 2016; Zhuang et al., 2021). While we have seen related work in RL for character control, AdapNet offers a unique approach to latent space adaptation, drawn from these adjacent works' successes. Related works in physics-based character control, such as (Juravsky et al., 2022; Ling et al., 2020; Peng et al., 2019, 2022; Tessler et al., 2023; Won et al., 2021), explore using pre-trained latent space models to facilitate the training of a control policy. These methods intend to adapt the pre-trained multi-skill model for downstream tasks by controlling skill latent embeddings, focusing on reusing skills for motion generation. 
In contrast, our approach does not break down the latent space by task and character state and instead allows the policy to be adapted to heterogeneous tasks that require learning new (out-of-distribution) motions/skills. Further, previous methods discard the pre-trained latent encoder during adaptation and rely on re-training to obtain a new encoder. In contrast, our approach directly edits the latent space projected from the association of the task and character state via the pre-trained policy. To do this, we use a gated recurrent unit (GRU) (Chung et al., 2014) layer as the encoder and initialize it by duplicating the original encoder parameters. Next, a fully connected layer is applied after the GRU to ensure zero initialization and convert the encoded state to a latent _modification_. In sum, the training for our adaptation starts from modifying the pre-trained policy rather than from scratch, which benefits adaptation in comparison to previous work in sample efficiency and, at times, overall effectiveness. ## 3. AdaptNet Framework An overview of the AdaptNet framework is shown in Figure 2. The GAN-style control framework (top), described below, produces an original (pre-trained) policy (bottom, left) while AdaptNet is used to adapt that pre-trained control policy to a new task controller (bottom, right). Notably, the adaptation process could involve changes to the reward function (e.g., motion stylization) or the state and dynamics model (e.g., character morphology and terrain adaptation). The components of AdaptNet for policy adaptation are shown: a latent space injection component and an internal adaptation component. The latent space injection performs policy adaptation by editing the latent space, which is conditioned on the pre-trained policy's state as well as any additional state information, for example, for new tasks. This component is trained to cooperate with the pre-trained policy by generating offsets to the original latent space instead of trying to learn how to generate latent variables for new tasks from scratch during adaptation. This leads to an efficient state-action exploration that starts from the pre-trained policy, instead of completely random exploration. Internal adaptation further tunes the policy by adding a branch to each internal fully-connected layer in the policy network. This allows for more flexibility, enabling AdaptNet to shift away from the pre-trained policy and generate refinement through control actions that the pre-trained policy may not reach easily. In our implementation, both the pre-trained policy and the adaptation are produced using a multi-objective learning framework (Xu et al., 2023) combining reinforcement learning with a GAN-like structure for effective policy learning that accounts for both motion imitation and goal-directed control (see Figure 2, top). During runtime, AdaptNet can be activated flexibly and dynamically, allowing us to control the level of adaptation of the original control policy. The control policy \(\pi(\mathbf{a}_{t}|\mathbf{s}_{t})\) is a neural network taking the agent state \(\mathbf{s}_{t}\) as input and outputting a probability distribution from which a control \(\mathbf{a}_{t}\) can be drawn from the action space \(\mathcal{A}\). 
For physics-based character control tasks with dynamic goals, we consider \(\mathbf{s}_{t}\coloneqq\{\mathbf{o}_{t},\mathbf{g}_{t}\}\), where \(\mathbf{o}_{t}\) denotes the current state of the character, e.g., joint or body link positions and velocities, and \(\mathbf{g}_{t}\) is an optional task-related goal state or an encoding variable that indicates desired motion parameters, such as target speed and direction, end-effector positions, motion style, etc. The action vector \(\mathbf{a}_{t}\) is the target posture fed to a PD servo through which the simulated character is controlled at a higher frequency. As shown in Figure 2, \(\mathbf{a}_{t}\) is modeled by a multivariate Gaussian distribution. Under the framework of reinforcement learning, our goal is to find the policy \(\pi\) that maximizes the discounted cumulative reward: \[J=\mathbb{E}_{\tau\sim p(\tau|\pi)}\left[\sum_{t=0}^{H-1}\gamma^{t}r(\mathbf{s}_{t},\mathbf{a}_{t})\right], \tag{1}\] where \(p(\tau|\pi)=p(\mathbf{s}_{0})\prod_{t=0}^{H-1}p(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a}_{t})\pi(\mathbf{a}_{t}|\mathbf{s}_{t})\) is the state-action visitation distribution for the trajectory \(\tau=\{\mathbf{s}_{t},\mathbf{a}_{t}\}\) over a horizon of \(H\) time steps, \(\gamma\in[0,1]\) denotes the discount factor, \(r(\cdot)\) is the reward received at a given time step, and \(p(\cdot)\) is the state-transition probability of the underlying Markov decision process. In our domain, when the character faces a new task, \(p(\cdot)\) and/or \(r(\cdot)\) may change. AdaptNet seeks to efficiently modify \(\pi\) and adapt it to the new task by editing the latent space and finetuning the policy. ## 4. Policy Adaptation using Latent Space Injection If we consider the first layer, or first several layers, in the policy network \(\pi\) as an encoder to embed the state \(\mathbf{s}_{t}\) into a latent space \(\mathcal{Z}\), the control policy can be rewritten as \[\pi_{\theta}(\mathbf{a}_{t}|\mathcal{E}_{\xi}(\mathbf{s}_{t})), \tag{2}\] where \(\mathcal{E}_{\xi}\) denotes the encoding layers with parameters \(\xi\), \(\theta\) are the parameters for the layers in the policy network that follow the encoder, and \((\theta,\xi)\) denote the weights of \(\pi\). In this formulation, the policy network \(\pi_{\theta}\) decides the projection from the latent \(\mathbf{z}_{t}=\mathcal{E}_{\xi}(\mathbf{s}_{t})\) into the action space \(\mathcal{A}\). Assuming that \(\pi_{\theta}\) is optimized by a typical on-policy policy gradient algorithm, the optimization objective with the introduction of the latent becomes \[\max_{\theta,\xi}\mathbb{E}_{t}\left[A(\mathbf{s}_{t},\mathbf{a}_{t})\log\pi_{\theta}(\mathbf{a}_{t}|\mathbf{z}_{t};\xi)\right], \tag{3}\] where \(A(\cdot)\) provides an advantage function estimation based on the received rewards \(\{r_{k}\}_{k\geq t}\) during the interaction with the environment and represents how good an action sample \(\mathbf{a}_{t}\) is given the conditional state \(\mathbf{s}_{t}\). Given the generalization of neural networks, the latent space \(\mathcal{Z}\) can be considered as a superset covering all the possible latent states, which could lie outside of the domain that \(\pi_{\theta}\) can reach during its training. Based on this observation, when \(\pi_{\theta}\) needs to be adapted to a new task, we propose to edit \(\mathbf{z}_{t}=\mathcal{E}_{\xi}(\mathbf{s}_{t})\in\mathcal{Z}\) instead of discarding the original encoder \(\mathcal{E}_{\xi}\) and training a new one from scratch. 
The intuition is that for similar tasks, adjusting the current encoder provides better efficiency, allowing the desired control policy to be learned by a modified projection function from \(\mathbf{s}_{t}\) to \(\mathbf{z}_{t}\). Our approach manipulates the full latent space projected from both the character state \(\mathbf{o}_{t}\) and the goal state \(\mathbf{g}_{t}\). Specifically, as Figure 2. Overview of our approach for adapting motor control policies for physics-based characters. Top: We model both pretraining and adapted tasks using a multi-critic reinforcement learning framework that balances the training of imitation and goal-directed control objectives. After a policy is trained, we can quickly adapt it to a new task using AdaptNet. Bottom: AdaptNet starts with a copy of the pre-trained policy network and modifies it through editing the latent space conditioned on the character’s state and introducing optional adaptation modules for further finetuning. shown in Figure 2, we perform latent space injection by introducing a new conditional encoder \(\mathcal{I}_{\phi}\) with parameters \(\phi\) after the first encoding layer, where the character state \(\mathbf{o}_{t}\) and the goal state \(\mathbf{g}_{t}\) are concatenated to form the input of \(\mathcal{E}_{\xi}\). This latent space is modified via \[\mathbf{z}_{t}=\mathcal{E}_{\xi}(\mathbf{s}_{t})+\mathcal{I}_{\phi}(\mathbf{s}_{t},\mathbf{c}_{t}), \tag{4}\] where \(\mathbf{c}_{t}\) is an additional control input for the new task, which could be optional. The injector module \(\mathcal{I}_{\phi}\) is \[\mathcal{I}_{\phi}(\mathbf{s}_{t},\mathbf{c}_{t})=\mathcal{F}_{\phi}(\text{Concat}(\mathcal{E}_{\phi}(\mathbf{s}_{t}),\mathcal{G}_{\phi}(\mathbf{c}_{t}))), \tag{5}\] where \(\mathcal{G}_{\phi}\) is an optional module to process the additional control input \(\mathbf{c}_{t}\), \(\mathcal{E}_{\phi}\) is a state encoder that has exactly the same structure as the original encoder \(\mathcal{E}_{\xi}\), and \(\mathcal{F}_{\phi}\) is a final embedding module, which can be a fully-connected layer or a stack of multiple fully-connected layers. During retraining for adaptation, we perform policy optimization as in Eq. 3, but only optimize the new parameters \(\phi\) while keeping the parameters \(\theta\) and \(\xi\) fixed: \[\max_{\phi}\mathbb{E}_{t}\left[A(\mathbf{s}_{t},\mathbf{a}_{t})\log\pi_{\theta}(\mathbf{a}_{t}|\mathcal{E}_{\xi}(\mathbf{s}_{t})+\mathcal{I}_{\phi}(\mathbf{s}_{t},\mathbf{c}_{t}))\right]. \tag{6}\] We begin with copying the original encoder parameters \(\xi\) into the new encoder \(\mathcal{E}_{\phi}\) and initializing the last fully-connected layer inside \(\mathcal{F}_{\phi}\) with zero weight and bias. In this way, the new encoder \(\mathcal{E}_{\phi}\) is optimized by finetuning a set of parameters that are already optimized for state feature extraction during pre-training. The zero initialization of \(\mathcal{F}_{\phi}\) lets the control policy give exactly the same action output as the original pre-trained one, i.e., \(\pi_{\theta}(\mathbf{a}_{t}|\mathcal{E}_{\xi}(\mathbf{s}_{t}))\), at the beginning of re-training. It guides the adaptation to start from the state-action trajectory generated by the original policy rather than from completely random exploration. We refer to Figure 2 for the default implementation of AdaptNet, where the latent space injection is performed right after the concatenation of \(\mathbf{o}_{t}\) and \(\mathbf{g}_{t}\). 
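To make the construction concrete, the following is a schematic PyTorch-style sketch of the injector in Eqs. 4-6; it is our illustration rather than the authors' released code, and the class name, constructor arguments, and the use of `copy.deepcopy` to duplicate \(\mathcal{E}_{\xi}\) are placeholder choices.

```python
import copy
import torch
import torch.nn as nn

class LatentInjector(nn.Module):
    """Sketch of I_phi in Eq. 5: produces an offset added to the frozen latent E_xi(s)."""
    def __init__(self, pretrained_encoder: nn.Module, latent_dim: int,
                 ctrl_encoder: nn.Module = None, ctrl_dim: int = 0):
        super().__init__()
        # E_phi: same architecture as E_xi, initialized with a copy of its weights
        self.state_encoder = copy.deepcopy(pretrained_encoder)
        # G_phi: optional encoder for the extra control input c_t (e.g., a heightmap CNN)
        self.ctrl_encoder = ctrl_encoder
        # F_phi: final embedding layer, zero-initialized so the injection starts at zero
        self.embed = nn.Linear(latent_dim + ctrl_dim, latent_dim)
        nn.init.zeros_(self.embed.weight)
        nn.init.zeros_(self.embed.bias)

    def forward(self, s, c=None):
        h = self.state_encoder(s)
        if self.ctrl_encoder is not None and c is not None:
            h = torch.cat([h, self.ctrl_encoder(c)], dim=-1)
        return self.embed(h)

# Adaptation-time latent (Eq. 4), with E_xi and pi_theta kept frozen:
#   z = frozen_encoder(s) + injector(s, c);  a ~ pi_theta(. | z)
```

Because the final layer starts at zero, the adapted policy initially reproduces \(\pi_{\theta}(\mathbf{a}_{t}|\mathcal{E}_{\xi}(\mathbf{s}_{t}))\) exactly, matching the behavior described above.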
We denote this latent space as \(\mathcal{Z}^{0}\), and the following ones after each fully-connected layer but before the final action layer as \(\mathcal{Z}^{i}\) where \(i=1,2,\cdots\). Empirically, we note that it is more challenging to perform optimization when the injection occurs at a deeper layer in the policy network, leading typically to unstable training and low-fidelity controllers. An extreme case is to perform injection directly at the action space, which makes the whole system similar to directly finetuning the pre-trained policy network. We refer to Section 9 for related sensitivity analysis on introducing latent space injection at different network layers and for comparisons with directly finetuning a pre-trained policy network for new tasks. During runtime, we can further introduce an extra scaling coefficient to the injection term in Eq. 4. Since our approach does not change the original encoder \(\mathcal{E}_{\xi}\) as well as the policy \(\pi_{\theta}\), the scale coefficient allows us to turn the injection on and off, or control the transition from the original policy to the fully adapted one. In such a way, we can perform motion style or behavior transitions (e.g., walk to skip) by interpolation in the latent space, as we will show in Section 8.1. ## 5. Internal Adaptation for Control Layers The latent space injection component of AdapNet edits the latent space based on the input state and further allows us to introduce additional control input for new tasks. However, the expressive ability of the action policy is still constrained by the pre-trained layers after the state encoder in the policy network, i.e., \(\pi_{\theta}\). While utilizing the pre-trained \(\pi_{\theta}\) for fast adaptation to new tasks, we introduce an internal adaptation component through which we can finetune \(\pi_{\theta}\), overcoming the bias it introduces and allowing for more flexibility in the types of generated controls compared to the ones obtained from the original training domain. The goal of the finetuning is to find a _small_ increment \(\Delta z_{t}^{i}\) to the original latent \(\mathbf{z}_{t}^{i}\) in each latent space \(\mathcal{Z}^{i},i>1\), to help optimize the objective function in Eq. 6 during adaptation training, but without changing the \(\pi_{\theta}\) too much to avoid drifting too far away from the pre-trained policy and being stuck at overfitting during adaptation. To do so, we add a branch to each fully-connected layer between two latent spaces. As shown in the red block of Figure 2, the corresponding latent is generated as: \[\mathbf{z}_{t}^{i}=\mathcal{F}_{\theta}^{i}(\mathbf{z}_{t}^{i-1})+\mathcal{F} _{\eta}^{i}(\mathbf{z}_{t}^{i-1}). \tag{7}\] Here, \(\mathcal{F}_{\theta}^{i}\) denotes the fully-connected layer between the latent space \(\mathcal{Z}^{i-1}\) and \(\mathcal{Z}^{i}\) in the policy network \(\pi_{\theta}\), and \(\mathcal{F}_{\eta}^{i}\) is the newly introduced adaptor that generates \(\Delta z_{t}^{i}\) and is modeled as a fully-connected layer in the added branch. The parameter \(\eta\) is defined as \[\eta:=\{\Delta\mathbf{W}_{i},\Delta\mathbf{b}_{i}\}, \tag{8}\] with \(\Delta\mathbf{W}_{i}\) and \(\Delta\mathbf{b}_{i}\) being the weight and bias parameters in \(\mathcal{F}_{\eta}^{i}\) respectively. Similarly to the embedding module \(\mathcal{F}_{\phi}\) in the latent space injection component, \(\mathcal{F}_{\eta}^{i}\) is initialized as zero and will not influence the output of the policy network at the beginning of policy adaptation. 
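A corresponding sketch of the internal adaptation branch in Eq. 7 is given below (again ours, with hypothetical names): the pre-trained layer \(\mathcal{F}_{\theta}^{i}\) is kept frozen while a full-rank residual branch \(\mathcal{F}_{\eta}^{i}\) is added and zero-initialized.

```python
import torch.nn as nn

class InternalAdapter(nn.Module):
    """Sketch of Eq. 7: z^i = F_theta^i(z^{i-1}) + F_eta^i(z^{i-1})."""
    def __init__(self, pretrained_fc: nn.Linear):
        super().__init__()
        self.frozen = pretrained_fc
        for p in self.frozen.parameters():
            p.requires_grad_(False)        # theta stays fixed during adaptation
        # Full-rank residual branch (Delta W_i, Delta b_i), zero-initialized
        self.delta = nn.Linear(pretrained_fc.in_features,
                               pretrained_fc.out_features)
        nn.init.zeros_(self.delta.weight)
        nn.init.zeros_(self.delta.bias)

    def forward(self, z):
        return self.frozen(z) + self.delta(z)
```

As with the injector, the zero initialization means the branch contributes nothing at the start of adaptation, so the policy output is unchanged until \(\eta\) is updated.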
We lock \(\theta\) in \(\mathcal{F}_{\theta}^{i}\) during adaptation training and introduce the parameter \(\eta\) into the optimization function in Eq. 6. Our approach is different from directly finetuning \(\pi_{\theta}\). When directly finetuning \(\pi_{\theta}\), the gradient from \(\mathbf{z}_{t}^{i}\) with respect to \(\mathbf{z}_{t}^{i-1}\) is decided by the weight \(\mathbf{W}_{i}\) in the layer \(\mathcal{F}_{\theta}^{i}\), which may be highly biased and have relatively large or very small values given it was fully trained. Therefore, finetuning \(\pi_{\theta}\) directly for new tasks may lead to unstable training compared to only finetuning the newly introduced parameter set \(\eta\) which is initialized with zero. Furthermore, we can easily apply regularization on \(\Delta\mathbf{W}_{i}\) and \(\Delta\mathbf{b}_{i}\) to prevent aggressive finetuning regardless of the value of the parameters \(\mathbf{W}_{i}\) and \(\mathbf{b}_{i}\) in the pre-trained layer \(\mathcal{F}_{\theta}^{i}\). This will limit the possible change that the internal adaptation can bring about in order to prevent overfitting. We can also introduce an extra scaling weight to control the adaptation level during runtime, as discussed in Section 4. Our proposed internal adaptation component is similar to the approach of low-rank adaptation (LoRA) proposed by Hu et al. (2021). The major difference is that instead of directly employing a fully-connected layer, LoRA decomposes the weight matrix \(\Delta\mathbf{W}_{i}\) into two low-rank matrices, i.e., \(\Delta\mathbf{W}_{i}=\mathbf{B}_{i}\mathbf{A}_{i}\), where, \(\mathbf{B}_{i}\) is a \(|\mathcal{Z}^{i-1}|\)-by-\(r\) matrix, \(\mathbf{A}_{i}\) is a \(r\)-by-\(|\mathcal{Z}^{i}|\) matrix, and \(r\ll\min(|\mathcal{Z}^{i-1}|,|\mathcal{Z}^{i}|)\). In contrast, our approach can be considered a full-rank adaptation. LoRA has been demonstrated as an effective way to fine tune large language and image generation models, reducing the number of parameters that need to be optimized during model adaptation. However, as shown in Section 9.3, we found that LoRA does not work well for physics-based character control tasks. A possible reason is that the related policy networks are markedly smaller compared to large language and image generation models that may have more than 12K dimensions. The latent spaces of our policy network have a typical size of 512 or 1024 dimensions and may not exhibit the lower intrinsic ranks that larger models do (Aghajanyan et al., 2021; Li et al., 2018; Pope et al., 2021). ## 6. Policy Training We use the multi-objective learning framework for physics-based character control proposed by Xu et al. (2023) to perform both the original (pre-)training and adaptation training. The framework leverages a multi-critic structure where the objectives of motion imitation and goal-directed control are considered independent tasks during policy updating. In Figure 2, for example, the imitation objective is associated with a critic network labeled in blue, and the goal-directed objective is associated with a critic in magenta. The advantage (cf. Eqs. 3, 6) with respect to each objective is estimated only by its associated reward and critic network. To ensure that the policy can be updated in a balanced way taking into account both the imitation and goal-directed control objectives, all estimated advantages are standardized independently before policy updating. 
During pre-training, we seek to find a basic motor control policy \(\pi_{\theta}(\mathbf{a}_{t}|\mathcal{E}_{\xi}(\mathbf{s}_{t}))\), which we can later adapt to new tasks. In this work, we focus on locomotion tasks, and thus \(\pi_{\theta}\) involves two objectives: a motion imitation objective given a batch of reference motions of walking and running, and a goal-directed objective involving a given target direction and speed. Using the multi-objective learning framework, the optimization objective function during pretraining shown in Eq. 3 can be written as \[\max_{\theta,\xi}\mathbb{E}_{t}\left[\left(\sum_{k}\omega_{k}\tilde{A}_{t}^{k}\right)\log\pi_{\theta}(\mathbf{a}_{t}|\mathcal{E}_{\xi}(\mathbf{s}_{t}))\right], \tag{9}\] where \(\tilde{A}_{t}^{k}\) is the standardization of the estimated advantage associated with the objective \(k\), and \(\omega_{k}\) satisfies \(\sum_{k}\omega_{k}=1\), providing additional control to adjust the policy updating in a preferred manner when conflicts between multiple objectives occur. We employ a GAN-like structure (Ho and Ermon, 2016; Merel et al., 2017) that relies on an ensemble of discriminators (Xu and Karamouzas, 2021) to evaluate the imitation performance and generate the corresponding reward signals for advantage estimation and policy updating. In particular, we take an ensemble of \(N\) discriminators and use a hinge loss (Lim and Ye, 2017) with a gradient penalty (Gulrajani et al., 2017) for discriminator training, resulting in the following loss function: \[\min\frac{1}{N}\sum_{n=1}^{N}\Bigl(\mathbb{E}_{t}\left[\max(0,1+D_{n}(\mathbf{o}_{t}))\right]+\mathbb{E}_{t}\left[\max(0,1-D_{n}(\hat{\mathbf{o}}_{t}))\right]+\lambda^{\text{GP}}\mathbb{E}_{t}\left[\bigl(\|\nabla_{\tilde{\mathbf{o}}_{t}}D_{n}(\tilde{\mathbf{o}}_{t})\|_{2}-1\bigr)^{2}\right]\Bigr). \tag{10}\] Here, \(D_{n}\) denotes a discriminator network, \(\mathbf{o}_{t}\) is a state sample generated by the policy, \(\hat{\mathbf{o}}_{t}\) is a state sample drawn from the reference motions, \(\tilde{\mathbf{o}}_{t}=\alpha\hat{\mathbf{o}}_{t}+(1-\alpha)\mathbf{o}_{t}\) with \(\alpha\sim\textsc{Uniform}(0,1)\), and \(\lambda^{\text{GP}}\) is the gradient penalty coefficient. The reward function to evaluate the imitation performance is defined as \[r^{\text{imit}}(\mathbf{s}_{t},\mathbf{a}_{t},\mathbf{s}_{t+1})=\frac{1}{N}\sum_{n=1}^{N}\textsc{Clip}\left(D_{n}(\mathbf{o}_{t}),-1,1\right). \tag{11}\] The reward for the goal-related task is computed heuristically. We refer to the appendix for the representation of the goal state \(\mathbf{g}_{t}\) and the definition of the goal-related task reward. After obtaining \(\pi_{\theta}\) and \(\mathcal{E}_{\xi}\) in pre-training, we introduce the proposed AdaptNet to perform policy adaptation for new tasks that are related to, but have different reward definitions and/or environment settings from, the one in the pre-training phase. Before the adaptation training starts, we lock the parameters \(\theta\) and \(\xi\). We then initialize \(\mathcal{E}_{\phi}\) inside the latent space injection component \(\mathcal{I}_{\phi}\) using the weights \(\xi\), and initialize with zero weight and bias the last layer of \(\mathcal{F}_{\phi}\) inside \(\mathcal{I}_{\phi}\) along with each full-rank adaptor \(\mathcal{F}_{\eta}^{i}\), \(i>0\). To stabilize the training, besides applying a common weight decay to the parameter set \(\eta\) (Eq. 7) via L2 regularization, we introduce an additional regularization on the latent injection generated by \(\mathcal{I}_{\phi}\). 
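Before turning to the adaptation objective, the discriminator update in Eq. 10 and the imitation reward in Eq. 11 can be sketched as follows; this is our PyTorch-style paraphrase rather than the released implementation, and the batching, the observation features, and the value of \(\lambda^{\text{GP}}\) are placeholders.

```python
import torch

def discriminator_loss(D, o_policy, o_ref, lambda_gp=10.0):
    """One term of Eq. 10 for a single discriminator D_n (hinge loss + gradient penalty)."""
    loss = torch.relu(1.0 + D(o_policy)).mean() + torch.relu(1.0 - D(o_ref)).mean()
    # Gradient penalty on random interpolates of reference and policy samples
    alpha = torch.rand(o_ref.size(0), 1, device=o_ref.device)
    o_mix = (alpha * o_ref + (1.0 - alpha) * o_policy).detach().requires_grad_(True)
    grad = torch.autograd.grad(D(o_mix).sum(), o_mix, create_graph=True)[0]
    return loss + lambda_gp * ((grad.norm(dim=-1) - 1.0) ** 2).mean()

def imitation_reward(discriminators, o_policy):
    """Eq. 11: average of the clipped discriminator scores on the policy's samples."""
    with torch.no_grad():
        scores = [torch.clamp(D(o_policy), -1.0, 1.0) for D in discriminators]
        return torch.stack(scores, dim=0).mean(dim=0)
```

The full loss in Eq. 10 averages `discriminator_loss` over the \(N\) discriminators of the ensemble.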
The adaptation training is still performed under the aforementioned multi-objective learning framework in the same way as the pre-training phase. The optimization objective for policy adaptation is \[\max_{\phi,\eta}\mathbb{E}_{t}\left[\left(\sum_{k}\omega_{k}\tilde{A}_{t}^{k}\right)\log\pi_{\theta}\bigl(\mathbf{a}_{t}|\mathcal{E}_{\xi}(\mathbf{s}_{t})+\mathcal{I}_{\phi}(\mathbf{s}_{t},\mathbf{c}_{t});\eta\bigr)-\beta\|\mathcal{I}_{\phi}(\mathbf{s}_{t},\mathbf{c}_{t})\|_{2}-\kappa\|\eta\|_{2}\right], \tag{12}\] where \(\beta\) and \(\kappa\) are regularization coefficients. In Section 10, we give a detailed analysis of the regularization on the latent space injection. We refer to Algorithm 1 for the outline of the whole training process. Adaptation with the proposed AdaptNet can be done very quickly within 10-30 minutes for simple control tasks and up to 4 hours for challenging terrain adaptation tasks with new control input processed by an additional convolutional neural network \(\mathcal{G}_{\phi}\), as defined in Eq. 5. We run policy optimization using PPO (Schulman et al., 2017) and update policy parameters using the Adam optimizer (Kingma and Ba, 2017). To encode the character's state, we take the position, orientation, and velocities of all the body links relative to the pelvis (root link) in the last four frames as the state representation \(\mathbf{o}_{t}\) and employ a gated recurrent unit (GRU) (Chung et al., 2014) with a 256-dimension hidden state to process this temporal state. For discriminator training, we take the character's pose at five consecutive frames as the representation of \(\{\mathbf{o}_{t},\mathbf{o}_{t+1}\}\) to evaluate the policy's imitation performance during the transition from timestep \(t\) to \(t+1\). We employ an ensemble of 32 discriminators and model it by a multi-head network, as shown in Figure 3. The critic network has a similar structure to the policy network, but with a 2-dimensional output for the value estimations of the imitation objective and the goal-directed objective, respectively. We refer to the appendix for the hyperparameters used for policy training and the representation of the goal state \(\mathbf{g}_{t}\) in the locomotion task. Rewards for both task and imitation are employed during policy adaptation. To avoid bias from the pre-trained policy, we discard the discriminators for imitation from the original policy, and new discriminators are trained from scratch. Intuitively, in tasks such as motion style transfer the original discriminator will not work well for the newly given reference style and thus a new one is needed. Even for other adaptation tasks, we found utilizing old discriminators to be problematic, as the optimal action in the new task can dramatically change from the original in the context of how it employs the reference motion. Empirically, when we experimented with reusing the old discriminators, we found they introduce too much bias towards the old task. Finally, along with training new discriminators for a new task, we also perform value estimation by re-training a new critic from scratch. All our tests were run on machines with a V100 or A100 GPU. To achieve a good locomotion policy, based on which we perform further adaptation, the pre-training took around 26 hours and consumed \(4\times 10^{8}\) training samples. The reference motions are around 300 seconds long, including normal walking and running motions with turning poses and various speeds (cf. Table 1, top). 
All the reference motions used during pre-training and adaptation training are recorded at 30 Hz and extracted from the publicly available dataset LAFAN1 (Harvey et al., 2020). ## 8. Applications of AdaptNet In this section, we apply the AdaptNet technique to demonstrate the success and efficiency of learning new physics-based controllers through adaptation. Our experiments use two pre-trained locomotion policies (walking and running) that account for two objectives: motion imitation based on a batch of walking or running reference motions, respectively, and a goal objective as defined by a target direction of motion and speed. We adapt the pre-trained policies to a range of new tasks, highlighting applications of AdaptNet to style transfer, character morphology changes and adaptation to different terrains. Figure 1 shows snapshots from different outcomes. Please refer to the supplementary video for related animation results. ### Motion Style Transfer and Interpolation We consider a variety of motion style transfer tasks where a pre-trained walking locomotion policy is adapted to a particular style. Note that this is not a simple motion imitation task, since all the style reference motions are very short (see Table 1, bottom), containing only one or two gait cycles. It is therefore impossible to train an equivalent locomotion policy that supports goal-directed steering using the target reference motion. Instead, the nature of this test is few-shot learning, where AdaptNet is expected to effectively learn how to perform locomotion in the style provided by the small duration of the style example in the new reference, while relying on the pre-trained policy to perform turning and goal-directed steering. Figure 5 depicts related qualitative results. AdaptNet can effectively learn how to do goal-directed turning in the provided style. Further, adaptation training can be done very quickly, within 10-30 minutes, in contrast to the original policy obtained during pre-training, which took about one day to train. We refer to the supplementary video for \begin{table} \begin{tabular}{r|c|l} \hline \hline **Motion** & **Length** & **Description** \\ \hline Walk & 334.07 s & normal walking motions for pre-training \\ Run & 282.87 s & normal running motions for pre-training \\ \hline Swaggering Walk & 1.07 s & exaggerated walking with one arm akimbo \\ Goose Step & 2.20 s & goose step with arms akimbo \\ Stomp Walk & 1.23 s & walking while stomping on the ground \\ Kicking Walk & 2.03 s & walking with leg kicking \\ Stoop & 0.93 s & slow walking with body bent over \\ Jaunty Skip & 1.60 s & skipping in a spirited manner \\ Sashay Walk & 1.07 s & walking in a slightly exaggerated manner \\ Limp & 1.90 s & slow walking with right leg hurt \\ Pace & 1.70 s & slow walking with arms akimbo \\ Penguin Walk & 0.77 s & moving with very small steps \\ Strutting Walk & 1.40 s & walking with shoulders moving aggressively \\ Joyful Walk & 1.20 s & strut walking rhythmically \\ \hline \hline \end{tabular} \end{table} Table 1. Reference motions for policy pre-training (top) and stylized motion learning (bottom). Figure 3. Network structures. Here, \(\odot\) denotes the concatenation operator and \(\oplus\) denotes the average operator. The state encoder \(\mathcal{E}_{\xi}\) is shown in the dashed block. An optional control input encoding module \(\mathcal{G}\) is included if the additional control input \(\mathbf{c}_{t}\) is provided during adaptation training. 
animation results, and Section 9 for comparing AdaptNet to learning stylized locomotion from scratch. As discussed in Sections 4 and 5, we can perform motion interpolation in the latent space by introducing a scale variable to control the adaptation level. This process can be described by modifying Eqs. 4 and 7 as \[\begin{split}&\mathbf{z}_{t}^{0}=\mathcal{E}_{\xi}(\mathbf{s}_{t})+\alpha\mathcal{I}_{\phi}(\mathbf{s}_{t},\mathbf{c}_{t}),\\ &\mathbf{z}_{t}^{i}=\mathcal{F}_{\theta}^{i}(\mathbf{z}_{t}^{i-1})+\alpha\mathcal{F}_{\eta}^{i}(\mathbf{z}_{t}^{i-1}),\end{split} \tag{13}\] where \(\alpha\in[0,1]\) is the introduced scale variable. In Figure 4, we show interpolation results. As shown in the figure, we can achieve motions with different style intensities, which can transition between the base walking motion and the stylized ones in a smooth manner. We can further extend Eq. 13 to perform interpolation between any two AdaptNet models via \[\begin{split}&\mathbf{z}_{t}^{0}=\mathcal{E}_{\xi}(\mathbf{s}_{t})+\alpha\mathcal{I}_{\phi^{\prime}}(\mathbf{s}_{t},\mathbf{c}_{t})+(1-\alpha)\mathcal{I}_{\phi^{\prime\prime}}(\mathbf{s}_{t},\mathbf{c}_{t}),\\ &\mathbf{z}_{t}^{i}=\mathcal{F}_{\theta}^{i}(\mathbf{z}_{t}^{i-1})+\alpha\mathcal{F}_{\eta^{\prime}}^{i}(\mathbf{z}_{t}^{i-1})+(1-\alpha)\mathcal{F}_{\eta^{\prime\prime}}^{i}(\mathbf{z}_{t}^{i-1}),\end{split} \tag{14}\] where the parameters \(\phi^{\prime}\) and \(\eta^{\prime}\) are from one AdaptNet model and \(\phi^{\prime\prime}\) and \(\eta^{\prime\prime}\) are from the other one. Such an interpolation scheme can be regarded as applying two independently trained AdaptNet models simultaneously on the same pre-trained policy, with an example shown in Figure 6. The above interpolation results demonstrate that during adaptation training, AdaptNet can effectively learn structured information about the latent space with respect to the desired motion styles. We refer to Section 10 for more details on controlling the latent space and related visualizations, along with an analysis of the training difficulty (time consumption) when learning different styles. ### Morphological Adaptation We consider two kinds of morphological changes: body shape and joint lock. Due to physical constraints, morphological changes in the character model will cause the same action \(\mathbf{a}_{t}\) to lead to different resulting states compared to the ones observed in the pre-training phase. Without adaptation, the pre-trained policy does not perform well, if it is even able to keep the character balanced, especially when the lower body is modified. We tested eight body-shape variants of the original character model, as shown in Figure 7. In the _LongBody_ variant, we extend the abdomen length by 50%, while the _BigBody_ variant increases the torso size by 50%. The latter leads to an increase in the torso mass of over 300%. In the _LongUpperArms_ and _LongLowerArms_ variants, the lengths of the upper and lower arms are extended by 25%, respectively, while in _AsymmetricUpperArms_, we increase the length of the right upper arm but decrease the length of the left upper arm. In the _LongThighs_ and _LongShins_ variants, the lengths of the upper and lower legs are extended by 50%, respectively, the latter akin to a human walking on stilts. In the model of _SuperLongLegs_, both the Figure 4. Motion interpolation between walking (pre-trained policy) shown at the top-left corner and different stylized motions by controlling the adaptation level of the associated AdaptNet model (cf. Eq. 13). 
Snapshots on the left show the learned stylized motions of _Stoop_ walking and _Jaunty Skip_. When \(\alpha=0\), the character is controlled only by the original walking policy. When \(\alpha=1\), the character is controlled with a full injection of AdaptNet. Figure 5. Example of motion style transfer learning with goal-steering navigation using AdaptNet. Green arrow indicates the dynamically generated target directions for locomotion control. thighs and shins are extended, resulting in a character that is over 2 m tall. We also experimented with different configurations, as shown in Figure 8, where some of the joints (in orange) are 'locked'. The locked joints are removed from the character model such that the linked body parts are fused together. This reduces the number of dimensions of the action space. To make the pre-trained policy compatible with the new action space, we simply prune the weight and bias matrices of the last layers in the policy network and remove the output neurons corresponding to the locked joints. Even though the pre-trained policy would not completely lose control of the character when the torso or arms are modified, the character still loses balance quite often. As more challenging examples, the morphological changes in the lower body parts and joints leave the pre-trained policy unable to control the character without falling. For example, when the knee joint is locked, the policy needs to adjust the output of the hip and ankle in order to compensate for the 'disability' of the knee. This requirement leaves the pre-trained policy incapable of suitably controlling the modified character model. During adaptation, we did not do any retargeting to generate new reference motions for AdaptNet to learn. Instead, we simply modify the character's model while relying on the reference motions used to pre-train the original policy, retargeted to the character model without any morphological changes. We found it takes 15-30 minutes to finish the adaptation training depending on the difficulty of the morphology change task. The character controlled by the AdaptNet policy can maintain its balance and walk or run without falling down. An interesting observation is that in order to match the provided height of the root link (pelvis) in the reference motions, the AdaptNet policy will control the character to walk or run in a crouch with the body at a relatively low position compared to the leg length. We show some representative results in Figure 9, and refer to the supplementary video for animations. ### Terrain Adaptation Next we discuss policy adaptation for character locomotion on low-friction and rough terrains as well as obstacle-filled scenes that require extra control input. #### 8.3.1. Friction Adaptation To simulate an icy surface, we significantly reduce the ground friction. In particular, we decrease the friction coefficient from 1 to 0.15 for walking and to 0.35 for running. Figure 10 compares results obtained for the running policy with and without using AdaptNet. Note that AdaptNet can effectively control the character to change its moving direction by sliding on its feet, as shown in the left example of the figure. In addition, using AdaptNet, Figure 8. Character models with joints being locked. From left to right, the locked joints are abdomen, elbows, ankles, and right knee respectively (shown in red). Corresponding body parts connected by a locked joint are highlighted in orange. Figure 6. 
Motion interpolation in the latent space by activating and switching between multiple AdaptNet models to let the character perform style transitions interactively during goal-steering navigation. Figure 7. Character models with body shape variants. From left to right: _LongBody_, _BigBody_, _LongUpperArms_, _LongLowerArms_, _AsymmetricUpperArms_, _LongThighs_, _LongShins_, and _SuperLongLegs_. the character lowers its center of mass and takes quick steps to maintain its balance. In contrast, with the original policy, the character cannot run on the icy ground without falling down. For walking, the AdaptNet controller is more cautious, with the character preferring to stop and change its direction in place. Without using AdaptNet, the character tends to turn around with a bigger radius, but does not slow down. This demonstrates the ability of AdaptNet to change the behavior provided by the original policy to make it better suited to new environmental settings. #### 8.3.2. Terrain Adaptation with Additional Control Input To test AdaptNet with extra control input, we designed several experiments where the character is asked to do goal-steering navigation in challenging environments with procedurally generated terrains. A local heightmap is provided as the additional control input \(\mathbf{c}_{t}\) through which the character is expected to adjust its motions to prevent falling down during walking. The heightmap is extracted locally based on the character's root position and aligned with the orientation of the root, with a left and right horizon of 1.7 m, backward horizon of 1 m and forward horizon of 2.4 m. To process the heightmap \(\mathbf{c}_{t}\), we introduce a convolutional neural network (CNN) as the encoding module \(\mathcal{G}_{\phi}\) (see Eq. 5) for AdaptNet. We refer to the appendix for the network structure of the CNN. An extra map encoding module having the same structure as \(\mathcal{G}_{\phi}\) is added to the critic network for value estimation during adaptation. We show representative examples of our tested terrains in Figure 11 and note that the appendix also gives more detail on the terrains. We refer to the companion video for the navigation performance of the character when walking on the designed terrains after adaptation training. Even in terrains where the height changes smoothly, the character teeters under the control of the pre-trained policy, and a minor change in the terrain slope is enough to make the character stumble. After adaptation training, AdaptNet can enable the character to smoothly walk and turn on the uneven terrains without falling. Besides being able to step over low-height obstacles, the AdaptNet character exhibits intelligent local decision making, trying not to step on the edge of the rocks on the rough terrain and avoiding overly rugged paths by altering its moving trajectory to an easy-to-follow one. To further demonstrate the ability of AdaptNet to perform local path planning, we designed a more challenging environment with uncrossable obstacles randomly placed on the ground. We qualitatively show the results in Figure 12. As seen in the figure, the character controlled with AdaptNet (blue) can successfully walk around the obstacles. Without accounting for collisions, the character controlled solely by the initially trained policy (green) crosses through the regions where obstacles are placed. Figure 11. Character controlled with AdaptNet navigates in the environment with procedurally generated terrains. Figure 10. 
Comparison of characters controlled with and without AdaptNet running on an ice floor with very low friction. Left: character controlled with AdaptNet slides and skids on the ice ground while running. Right: character without AdaptNet slips down. Figure 9. Adapting the locomotion policy of running to characters with different body shapes and locked joints. Unsurprisingly, the introduction of the CNN (detailed in Appendix B) increases the time needed to perform policy optimization iterations in the training for rough terrains. Still, for the easier terrains, training can be done within 1.5 hours. The more rugged terrain took around 4 hours for training. Finally, it took around 22 hours to train adaptation for the local obstacle avoidance test case. We note that this is still less time than is needed for training the original flat-ground locomotion policy from scratch (26 hours). ### Perturbation Adaptation In a final experimental foray, we investigate AdapNet's ability to improve the handling of perturbations. Although the original policy can handle small perturbations, the character will still fall under larger impulses. In order to achieve more robust control, we adapt the control policy's ability to maintain balance in the presence of large disturbances. We begin with pre-trained policies for target-directed locomotion for walking and running. During the training process, we randomly apply perturbations (1000 N, lasting for 0.2 seconds) in different directions on the character's torso. With adaptation training of around 5 hours, the character is able to stay balanced against comparable impulses following training for both running and walking tasks. In contrast, the original controls are not able to handle such perturbations repeatably and they often lead to the characters falling over. Furthermore, we also observe that AdapNet control adjusts the character's footsteps to recover balance when the character is highly out of balance due to perturbations. A comparison of the original policy and our results can be seen in the supplementary video. ## 9. Ablation Studies In this Section, we compare the performance of AdapNet to different baselines along with performing sensitivity analysis on the two components of the proposed AdapNet technique. ### Baseline Comparisons We consider the following baselines: _Scratch_ where a new policy is trained from scratch on a given task; _FT_ where we directly finetune the pre-trained policy network to the newly given task; _FT + Reg_ where we apply regularization on the weights of the policy network during finetuning; and _PNet_ where policy adaptation is performed using a progressive neural network approach [20]. Figure 13 compares the learning curves for the goal-task performance between the baselines and AdapNet on three style-transfer tasks (top row) and three adaptation tasks (bottom row), two involving changes in the character's morphology and one for lowered ground friction. For fair comparison, we employ the same training setup for all baselines, where the reward function of the new policy accounts for both a task objective and an imitation objective using an automatic weighting scheme [21]. In the motion style transfer experiments, the imitation term is computed using a new discriminator that takes only the stylized motions as the reference similar to Section 8.1. 
As can be seen from the learning curves in Figure 13, _Scratch_ fails to attain the desired goals in the considered benchmarks, achieving a very low goal task reward within the given budget of 8M training samples. _FT_ can effectively modify the locomotion policy in the bottom three tasks where the character's morphology or environmental friction changes. However, in the motion style transfer tasks, the reward curve of _FT_ noticeably drops after the training begins as _FT_ overfits the imitation of the newly provided stylized reference motion and ignores the goal direction signal. In contrast, AdapNet provides a stable task reward curve during the adaptation training with the character being able to imitate the newly provided style without forgetting the previously learned locomotion behaviors as seen in Figure 14. The above findings are in line with previous works [19, 20] that have shown finetuning to be efficient when the parameters of a pre-trained model need Figure 12. Local collision avoidance in an obstacle-filled environment using AdapNet. Green characters show the movement trajectory generated by the original walking policy without AdapNet. Figure 13. Learning performance of our adaptation scheme using AdapNet, training from scratch for each task (Scratch), using a progressive network (PNet), and adaptation via directly finetuning the pre-trained policy (FT) and finetuning with regularization (FT + Reg). Colored regions denote mean values \(\pm\) a standard deviation based on 5 trials. The top row consists of motion style transfer tasks, while the bottom row focuses on morphological and terrain adaptation tasks. to be slightly adjusted to a new target domain. However, _FT_ can be susceptible to catastrophic forgetting when the imitation objective is significantly changed, as in the motion style transfer tasks. _FT + Reg_ leads to poor training and low-fidelity controllers in all tasks. While, in theory, adding regularization can improve the navigation performance, in practice, it is hard to regulate the weights during finetuning due to the presence of both significant large and small weights in the pre-trained policy. _PNet_ shares similarities with AdapNet as both approaches add new weights to the original policy network and freeze the old weights during transfer learning. However, despite these similarities, the architectures of the two approaches are significantly different. AdapNet uses a residual structure that supports merging, resulting in a single policy network which allows forward propagation in one pass during inference. In contrast, _PNet_ does not support merging and requires the original network to be present and run first to compute the values of the hidden neurons in the added network. This adds significant complexity and memory overhead, with the network structure becoming larger and slower. Importantly, during training, the added network in _PNet_ cannot start from zero as compared to AdapNet. In essence, the zero initialization in AdapNet allows us to guide the adaptation starting from the original policy. This is clear in the style-transfer tasks, where AdapNet begins training with a much higher reward than _PNNet_ due to the locomotion ability provided by the original policy. Despite its competitive final performance in several of the adaptation tasks, _PNet_ is sample inefficient. 
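The mergeability and zero-start behavior described above can be made concrete with a small sketch. The following is a minimal PyTorch-style illustration of a frozen linear layer augmented with a zero-initialized additive adapter that can be folded back into a single layer after training; it is not the authors' implementation, and the class and variable names are ours.

```python
import torch
import torch.nn as nn

class ResidualAdapterLinear(nn.Module):
    """A frozen pre-trained linear layer plus a zero-initialized,
    trainable additive adapter that can later be merged away.
    Assumes the pre-trained layer has a bias term."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.base = pretrained
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the original policy weights
        # Zero-initialized delta: at the start of adaptation the module
        # reproduces the pre-trained layer exactly.
        self.delta_weight = nn.Parameter(torch.zeros_like(self.base.weight))
        self.delta_bias = nn.Parameter(torch.zeros_like(self.base.bias))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.base.weight + self.delta_weight
        b = self.base.bias + self.delta_bias
        return nn.functional.linear(x, w, b)

    def merge(self) -> nn.Linear:
        """Fold the learned delta back into a single linear layer,
        so inference needs only one pass through one network."""
        merged = nn.Linear(self.base.in_features, self.base.out_features)
        with torch.no_grad():
            merged.weight.copy_(self.base.weight + self.delta_weight)
            merged.bias.copy_(self.base.bias + self.delta_bias)
        return merged
```

Because the adapter starts at zero, the adapted network initially reproduces the pre-trained policy exactly, and after training a single `merge()` call removes the extra inference cost; a progressive-network-style design cannot be collapsed in this way.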
Finally, we note that _PNet_ can lead to forgetting the prior knowledge provided by the pre-trained policy, as the added network can significantly change the output of the whole model in some cases. This can be seen in the _Penguin Walk_ task where the navigation performance drops after 5M samples. Overall, AdaptNet consistently outperforms all four baselines in terms of final performance and sample efficiency. In terms of memory efficiency, _Scratch_ and _FT_ do not add any overhead. AdaptNet introduces additional parameters, but since the original network is frozen, the number of trainable parameters is still on the same scale as the original neural network when no conditional input, i.e., \(\mathbf{c}_{t}\) and \(\mathcal{G}_{\phi}\), is needed. While the total number of parameters increases, the effective number of parameters is the same as the original policy because AdaptNet can be merged into the original network. In contrast, _PNet_ requires both networks to be present and effectively doubles the number of parameters. ### Latent Space Injection Our default implementation performs injection on the latent space \(\mathcal{Z}^{0}\) right after the goal state \(\mathbf{g}_{t}\) and character state \(\mathbf{o}_{t}\) are encoded and concatenated together. Here, we test the application of the injection module to other latent spaces after \(\mathcal{Z}^{0}\) but before reaching the action space, along with applying injection on all possible latent spaces simultaneously. To solely study the performance of latent space injection, we also remove the full-rank adaptation modules for these tests. The tested network structures are shown in Figure 15. Figure 16. Foot height (in meters) relative to the root when performing adaptation for motion style transfer tasks with different latent spaces being injected. Injection at \(\mathcal{Z}^{0}\) (blue) leads to the smoothest and most repeatable stepping motions. Figure 14. Top: AdaptNet successfully controls the character to turn during walking in _Pace_ style. Bottom: The character controlled by the FT policy keeps imitating the reference motion to pace straight ahead, and fails to turn due to overfitting. Green arrows indicate the dynamically generated target directions for locomotion control. Figure 15. Injection at different latent spaces. Gray blocks represent the original policy network locked during adaptation. Green blocks are the state encoder \(\mathcal{E}_{\phi}\), and blue ones are \(\mathcal{F}_{\phi}\). From left to right, the manipulated latent spaces are \(\mathcal{Z}^{0}\) (the default implementation of AdaptNet), \(\mathcal{Z}^{1}\), \(\mathcal{Z}^{2}\), and \(\mathcal{Z}^{0-2}\) respectively. We ignore \(\mathcal{G}_{\phi}\), given that there is no extra control input in the tested examples here. To explore how the injection schemes perform differently in generating new policies, we run tests on several motion style transfer tasks. During our experiments, we observe qualitatively that injection at the lower space \(\mathcal{Z}^{2}\) or at all the latent spaces \(\mathcal{Z}^{0-2}\), which also includes the lower one, can easily produce jerky motions with stiff movements of the torso and legs. It can also lead to failures in training where the character falls repeatedly after a few training iterations. In Figure 16, we plot the trajectory of the foot height in two of our tested cases.
While injection at \(\mathcal{Z}^{0}\) (blue) leads to a smooth repeatable trajectory, the curves become more irregular as the injected latent space changes from \(\mathcal{Z}^{1}\) (green) to \(\mathcal{Z}^{2}\) (orange) and then to \(\mathcal{Z}^{0-2}\) (red). We also see some sharp jumps in the curves of \(\mathcal{Z}^{2}\) and \(\mathcal{Z}^{0-2}\), which represent fast motion transitions. We refer to the supplementary video for the animation results, including examples where injection at \(\mathcal{Z}^{2}\) and \(\mathcal{Z}^{0-2}\) fails. Overall, our tests show that as the chosen target latent space gets closer to the action space, it becomes more difficult for AdaptNet to generate the desired motions, with \(\mathcal{Z}^{0}\) both intuitively and empirically giving the best results. This observation is in agreement with recent work in image synthesis, where the target space for manipulation is usually chosen nearer to the input of the generator rather than near the final output (Abdal et al., 2019; Karras et al., 2020; Zhuang et al., 2021). In terms of the network structure in our implementation, the input state \(\mathbf{s}_{t}\in\mathbb{R}^{784}\) is encoded into the first latent space \(\mathcal{Z}^{0}\in\mathbb{R}^{260}\) and then projected to \(\mathcal{Z}^{1}\in\mathbb{R}^{1024}\). The whole network, therefore, can be regarded as an encoder-decoder structure where the bottleneck is at \(\mathcal{Z}^{0}\). As we will show in Section 10, \(\mathcal{Z}^{0}\) is well-structured, which makes it amenable to manipulation for motion generation. ### Comparison of Adaptation Methods We quantitatively evaluate the imitation performance of AdaptNet against other adaptation approaches, including alternate methods with and without its internal adaptation component. As in prior work (Harada et al., 2004; Peng et al., 2021; Tang et al., 2008; Xu and Karamouzas, 2021), we measure the imitation error via: \[e_{t}=\frac{1}{N_{\text{link}}}\sum_{l=1}^{N_{\text{link}}}\|p_{l}-\tilde{p}_{l}\|, \tag{15}\] where \(N_{\text{link}}\) is the total number of body links, \(p_{l}\in\mathbb{R}^{3}\) is the position of body link \(l\) in world space at time step \(t\), and \(\tilde{p}_{l}\) is the body link's position in the reference motion. The evaluation results are shown in Table 2. We find that our proposed approach of combining latent space adaptation (LSA) and internal adaptation (IA) results in the best performance. While the results in Table 2 imply that LSA alone is sufficient in many cases, IA appears to help most in the difficult motion style transfer tasks, e.g., _Goose Step_, _Jaunty Skip_ and _Joyful Walk_, where the stylized motions are relatively far away from the pre-trained walking motions. In these tasks, adding IA improves the visual quality as well as motion smoothness, foot height, and gait frequency, as shown in the supplementary video. It is important to note, however, that IA alone produces subpar performance. In addition, it cannot account for the additional control input needed in other adaptation tasks such as terrain adaptation. Further, even when no additional input is needed, IA components cannot be applied for modification of the state encoder, as we cannot initialize the GRU layer of the encoder to zero. Latent modification is a distinctive feature of LSA, rendering Eqs. 4 and 7 unsuitable to be merged into the same formulation.
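For reference, the per-frame metric of Eq. (15) is straightforward to compute; the following is a small NumPy sketch, where the array shapes are our assumption for illustration.

```python
import numpy as np

def imitation_error(sim_link_pos: np.ndarray, ref_link_pos: np.ndarray) -> float:
    """Per-frame imitation error e_t of Eq. (15).

    Both arrays are assumed to have shape (N_link, 3): world-space positions
    of each body link in the simulation and in the reference motion at the
    same time step.
    """
    assert sim_link_pos.shape == ref_link_pos.shape
    # Euclidean distance per link, averaged over all links.
    return float(np.linalg.norm(sim_link_pos - ref_link_pos, axis=-1).mean())
```

Averaging \(e_{t}\) over an episode and over trials presumably yields mean and standard deviation values of the kind reported in Table 2.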
To highlight the importance of the state encoder module \(\mathcal{E}_{\phi}\) in LSA, we consider an additional ablation where we remove \(\mathcal{E}_{\phi}\) and connect the output of \(\mathcal{E}_{\xi}\) directly to \(\mathcal{F}_{\phi}\) (see Figure 2). As shown in Table 2, utilizing just the old latent space embedding is useful but no more valuable than using internal adaptation. In addition to ablations to our own architecture, we also compare our IA component, which can be regarded as a full-rank adaptation scheme, to the low-rank adaptation (LoRA) scheme (Hu et al., 2021). LoRA typically works well for adapting large language models with a low rank \(\leq 8\). However, we did not find any evident improvement over just using LSA when an intrinsic rank of 8 was employed. Even after increasing the rank to 64, the performance gap between the full-rank adaptation scheme and LoRA still remains as listed in Table 2. Though using a low-rank decomposition can reduce the total number of parameters, it increases the computation cost since one more matrix multiplication is needed for each adaptor. Given the small size of our policy network, from our findings we conclude that the full-rank adaptation offers desirable benefits over LoRA. ## 10. Latent Space Analysis In this section, we provide more insights on the ability of AdapNet to successfully control and modify the latent space. ### Latent Space Visualization Figure 17 visualizes the latent space for different motion style transfer tasks. For each task, a controller was trained using AdapNet starting from the same pre-trained locomotion policy of walking. During adaptation training here, we use only the latent space injection component as in \(\mathcal{Z}^{0}\) for all models. We also remove the regularization term \(\mathcal{I}_{\phi}\) in Eq. 12 and prolong the training time to let AdapNet fit the style motions as much as possible. After training, we collect samples for each stylized motion from the simulated character following a straight path without any goal-direction changes. 
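The 2D projection used for this visualization (multidimensional scaling, as described below Table 2) can be reproduced with off-the-shelf tooling; the following is a minimal sketch assuming scikit-learn, with the style names, sample counts, and random placeholder data standing in for the collected \(\mathcal{Z}^{0}\) samples.

```python
import numpy as np
from sklearn.manifold import MDS

# latents: dict mapping style name -> array of collected Z^0 samples,
# each of shape (num_samples, 260); the values here are placeholders.
latents = {
    "Walk (pre-trained)": np.random.randn(200, 260),
    "Stoop":              np.random.randn(200, 260),
    "Pace":               np.random.randn(200, 260),
}

X = np.concatenate(list(latents.values()), axis=0)
labels = np.concatenate([[name] * len(z) for name, z in latents.items()])

# Metric multidimensional scaling down to 2 dimensions.
embedding = MDS(n_components=2, random_state=0).fit_transform(X)
# embedding[i] is the 2D coordinate of sample i; coloring the points by
# `labels` reproduces a Figure-17-style scatter plot.
```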
\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{**Motion**} & **AdaptNet** & **LSA** & \multirow{2}{*}{**LSA**} & **LSA** \\ & **(LSA+IA)** & **+LoRA-64** & & & **w/o \(\mathcal{E}_{\phi}\)** \\ \hline Swaggering & \(\mathbf{0.05\pm 0.02}\) & \(\mathbf{0.05\pm 0.02}\) & \(0.06\pm 0.02\) & \(0.11\pm 0.03\) & \(0.11\pm 0.03\) \\ Goose Step & \(\mathbf{0.11\pm 0.08}\) & \(0.18\pm 0.08\) & \(0.21\pm 0.12\) & \(0.35\pm 0.11\) & \(0.36\pm 0.11\) \\ Stomp & \(\mathbf{0.08\pm 0.04}\) & \(0.10\pm 0.05\) & \(0.11\pm 0.06\) & \(0.26\pm 0.07\) & \(0.27\pm 0.08\) \\ Kitching & \(\mathbf{0.08\pm 0.03}\) & \(\mathbf{0.08\pm 0.03}\) & \(0.09\pm 0.05\) & \(0.20\pm 0.07\) & \(0.21\pm 0.07\) \\ Stomp & \(\mathbf{0.07\pm 0.02}\) & \(\mathbf{0.07\pm 0.02}\) & \(0.07\pm 0.02\) & \(0.14\pm 0.03\) & \(0.13\pm 0.03\) \\ Jaunry Skip & \(\mathbf{0.16\pm 0.09}\) & \(0.22\pm 0.10\) & \(0.25\pm 0.12\) & \(0.56\pm 0.18\) & \(0.61\pm 0.21\) \\ Sashay & \(\mathbf{0.06\pm 0.03}\) & \(\mathbf{0.06\pm 0.03}\) & \(\mathbf{0.06\pm 0.03}\) & \(0.09\pm 0.03\) & \(0.09\pm 0.04\) \\ Limp & \(\mathbf{0.10\pm 0.07}\) & \(\mathbf{0.10\pm 0.07}\) & \(0.12\pm 0.07\) & \(0.22\pm 0.09\) & \(0.29\pm 0.11\) \\ Pace & \(\mathbf{0.09\pm 0.03}\) & \(0.10\pm 0.03\) & \(0.10\pm 0.03\) & \(0.14\pm 0.03\) & \(0.13\pm 0.03\) \\ Penguin & \(\mathbf{0.11\pm 0.04}\) & \(0.13\pm 0.05\) & \(0.15\pm 0.05\) & \(0.31\pm 0.09\) & \(0.38\pm 0.13\) \\ Strutting & \(\mathbf{0.09\pm 0.03}\) & \(0.10\pm 0.05\) & \(0.12\pm 0.06\) & \(0.23\pm 0.06\) & \(0.27\pm 0.06\) \\ Joyful & \(\mathbf{0.17\pm 0.07}\) & \(0.22\pm 0.09\) & \(0.28\pm 0.12\) & \(0.54\pm 0.22\) & \(0.59\pm 0.24\) \\ \hline \end{tabular} \end{table} Table 2. Imitation error during motion style transfer with different adaptation components. Values are reported in meters in the format of mean\(\pm\)std. We use a multidimensional scaling technique to reduce the dimension of the collected latent samples. As seen in the figure, the 2D projection of the latent space exhibits a circular shape with the pre-trained walking policy (dark purple) located near the center. There is a clear and roughly continuous transition when the motion style changes from one to the other, which demonstrates the well structured nature of the latent space with the different motion styles. The distribution of the stylized motion in the visualized space is roughly consistent with the imitation error distribution listed in Table 2 when no internal adaptor is employed. Motions with smaller imitation errors are distributed generally closer to the pre-trained policy while _Joyful Walk_ (light green) has the largest error and is located the farthest away from the center of the circle. We also note the _Penguin Walk_ (red) and _Pace_ (light purple) show greater differences in frequency and speed and appear farther away from the center of the figure. This indicates that the distribution in the latent space not only reflects the pose similarity between motions but also some semantic information, like motion rhythm and gait frequency. Similar conclusions have been drawn by recent work in the field of image generation, where the latent space for image generation is considered to capture semantic information more than just simple color transformations (Epstein et al., 2022; Jahanian et al., 2020; Shen et al., 2020). ### Latent Injection Regularization In Figure 18, we show the latent visualizations of several motions generated by AdapNet when L2 regularization is applied on the injected latent. 
For comparison, we highlight in white each motion's distributions in the full latent space shown in Figure 17. In the lower figures, the dark purple points represent the latent embedding of the pre-trained walking, while the gray points are generated by the pre-trained encoder \(\mathcal{G}_{\xi}\) when the simulated character performs stylized motions. Other colors represent varying levels of regularization, as shown. The goal of regularization is to ensure that the generated latent can fall into the manifold composed of the gray dots. This represents a relatively safe region where the latent space is expected to be handled properly by the pre-trained policy. In the _Stoop_ task, there is almost no difference with and without using the L2 regularization. All visualized samples are overlapped together and covered by the gray region. This is expected given that the style motion of _Stoop_ is close to the walking motion in the latent space. However, in the example of _Pace_, there is a clear separation when different regularization coefficients are employed. Note when a coefficient of 0.1 is taken, the generated stylized motion (orange) is overlapped with the walking motion (dark purple). AdapNet, in this case, is over-regularized. It yields to the pre-trained policy and fails to adapt the pre-trained policy to perform the desired stylized motion. In contrast, without regularization (\(\beta=0\)), the latent is already outside of the safe, gray region. AdapNet, in this case, simply overfits to imitating the style motion and loses the ability to perform goal-steering navigation. While in _Jaunity Skip_, any \(\beta\)-value can be employed, in _Limp_ a \(\beta\)-value of 0.01 best ensures that the latent space stays into the grey manifold while attaining high imitation performance. In all adaptation tasks detailed in the paper, we found \(\beta=0.01\) to be sufficient. We note that such regularization is not necessary in other tested adaptation tasks without motion style transfer. In such cases, the new expected motions are close to the original policy and already lie in the safe region. We refer to the supplementary video for a visual comparison of the generated motions when different regularization coefficients are employed. ## 11. Conclusions This paper presents AdapNet, an approach for adapting existing character RL-based control policies to new motion tasks. Our approach applies two strategies. The first adapts the latent space by conditioning on the character's state and allowing the addition of Figure 17. Latent space visualization with respect to different styles of walk-related motions. The latent representations of the stylized motions are obtained by AdapNet without using the internal adaptation component. The walk motion (dark purple) near the center is provided by the pre-trained policy based on which AdapNet performs adaptation to learn the stylized motions. The visualization is achieved using multidimensional scaling technique to project the latent representations from 260 dimensions to 2 dimensions. new control inputs that will allow the control policy to perform new tasks. The second aims at control refinement which allows policy adaptation by shifting the original policy and generating new control actions based on new training. Importantly, AdaptNet training always begins with having no (zero) influence, starting from the existing policy and increasing its influence as training proceeds. 
We demonstrate that a previously trained control policy for locomotion can be adapted to support diverse style transfer, morphological changes including limb length variation and locked joints, and terrain adaptation including varied friction and geometry. These adaptations are also very efficient to learn. While the original locomotion policy requires 26 hours of training, our style adaptations take less than thirty minutes to produce a full controller that is capable of goal-directed steering while adhering to a specified walking style. More extreme adaptations require more time, but training is still far more efficient than the cost of learning the initial policy. A core limitation of this work is that policy adaptation requires an existing pre-trained policy, and thus it cannot act to produce new motions on its own. While it is capable of migrating the policy to many new behaviors and conditions, extreme adaptions (e.g., training a jumping action with long flight phase from a walking controller) do not produce the expected results. We believe this is due to the distinct characteristics of the two behaviors and we see such 'deep' adaptation as a direction for future work. Also, while we demonstrate smooth interpolation between latent space embeddings when we employ control-layer refinement, interpolation does not always produce coherent in-between behaviors. As we show in Section 9.2, an improper choice of the target latent space could lead to undesired control results. As such, we found starting with a proper latent space is important for obtaining high-quality controllers. In the current work, we use the recent approach of Xu et al. (2023) for pre-training an initial policy that is then modified by AdaptNet. In the future, we would like to see how well other recent approaches for training physics-based controllers (Peng et al., 2022, 2021; Yao et al., 2022) can work with our proposed approach. We would also like to investigate how our approach can be extended to generate a well-represented latent space that can be further exploited for motion synthesis. This opens up many avenues for further research, including latent space disentanglement, inversion, and shaping. ###### Acknowledgements. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NERC) and the National Science Foundation under Grants No. IIS-2047632 and IIS-2232066. Support for the first author was made through a generous gift from Roblox. The Bellairs Workshop on Computer Animation was instrumental in the conception of the research presented in this paper.
2309.09418
Real eigenvalues are determined by the recursion of eigenstates
Quantum physics is generally concerned with real eigenvalues due to the unitarity of time evolution. With the introduction of $\mathcal{PT}$ symmetry, a widely accepted consensus is that, even if the Hamiltonian of the system is not Hermitian, the eigenvalues can still be pure real under specific symmetry. Hence, great enthusiasm has been devoted to exploring the eigenvalue problem of non-Hermitian systems. In this work, from a distinct perspective, we demonstrate that real eigenvalues can also emerge under the appropriate recursive condition of eigenstates. Consequently, our findings provide another path to extract the real energy spectrum of non-Hermitian systems, which guarantees the conservation of probability and stimulates future experimental observations.
Tong Liu, Youguo Wang
2023-09-18T01:30:09Z
http://arxiv.org/abs/2309.09418v1
# Real eigenvalues are determined by the recursion of eigenstates ###### Abstract Quantum physics is generally concerned with real eigenvalues due to the unitarity of time evolution. With the introduction of \(\mathcal{PT}\) symmetry, a widely accepted consensus is that, even if the Hamiltonian of the system is not Hermitian, the eigenvalues can still be pure real under specific symmetry. Hence, great enthusiasm has been devoted to exploring the eigenvalue problem of non-Hermitian systems. In this work, from a distinct perspective, we demonstrate that real eigenvalues can also emerge under the appropriate recursive condition of eigenstates. Consequently, our findings provide another path to extract the real energy spectrum of non-Hermitian systems, which guarantees the conservation of probability and stimulates future experimental observations. ## I Introduction In the textbook of quantum mechanics, the operator of the observable is supposed to the Hermitian operator, to ensure that eigenvalues are totally real [1]. However, with the breakthrough of quantum theory and the development of experimental technology [2; 3; 4], there are several models of open quantum systems in which non-Hermitian Hamiltonians with complex eigenvalues make perfect sense, and both the real and imaginary parts of those are needed to reproduce the measured absorption and emission spectra [5; 6]. In fact, it is nowadays well understood that having real eigenvalues is a property of the Hamiltonian related to conservation of the total probability [7], rather than physical observability. However, the lifetime of the particle in non-Hermitian systems is considered short due to the imaginary part of the eigenvalue during the dynamical evolution. Therefore, non-Hermitian systems with pure real energy spectra are particularly valuable. The emergence of parity-time (\(\mathcal{PT}\)) symmetry class [8; 9] provides such a paradigm, and formulates an alternative theory of quantum mechanics in which the mathematical axiom of Hermiticity is replaced by the physically transparent condition. If the Hamiltonian has the unbroken \(\mathcal{PT}\) symmetry, fascinatingly, the eigenvalues are pure real. Hence the \(\mathcal{PT}\) symmetry class describes a class of non-Hermitian systems having conservation of the total probability and unitary time evolution. A question arises naturally: is there other physical mechanism for generating the real energy spectrum [10; 11] of non-Hermitian systems? Massive research enthusiasm has being devoted to unveiling the new non-Hermitian class [12; 13; 14; 15]. In this work, we attempt to revealing a new mechanism for real energy spectra of non-Hermitian systems, the core idea is that the recursion of eigenstates of the Hamiltonian can constraint the eigenvalues to be the real or complex numbers. In fact, there are many paradigms in quantum mechanics textbooks demonstrating the properties of eigenstates have indeed great influence on eigenvalues. For example, in the eigenvalue problem of the quantum harmonic oscillator, it can be proved that the Hermite equation has a polynomial solution (Hermite polynomial), namely the wave function is represented as \(\phi(x)=\sum_{n=0}^{\infty}a_{n}x^{n}\). And the recurrence relation of the coefficient \(a_{n}\) satisfies \(\frac{a_{n+2}}{a_{n}}=\frac{2n+1-E}{(n+2)(n+1)}\). The problem is that the solution of eigenstates \(\phi(x)\) will inevitably lead to divergence in large \(x\) limit, so the truncation must occur for the recursive relation of \(a_{n}\). 
The most straightforward way is to set the numerator to zero, namely \(2n+1-E=0\). This leads to \(E=2n+1\), which means eigenvalues are taken as discrete values rather than continuous values in classical physics. Analogy to the quantum harmonic oscillator model, the discretization of eigenvalues is constrained by the recursion of eigenstates, it is reasonably deduced that there exists a class of models in which the eigenvalues to be the real or complex numbers can be determined by the recursive relation of eigenstates. The rest of the paper is organized as follows. In Sec. II, we theoretically demonstrate in detail how the properties of eigenstates determine the real or complex eigenvalues through a simple model. In Sec. III, we validate the theoretical results through numerical simulations. In Sec. IV, we provide some prospects for more models, and point out that eigenvalues determined by the recursion of eigenstates hold in general. In Sec. V, we make a summary of the paper. ## II Model and real eigenvalues Let's first consider a simple model. Simplicity means that the solution does not require complicated mathematical skills, more relevantly, it can be regarded as a paradigm to grasp the physical picture intuitively. Previous efforts [16; 17] have been made to numerically and semi-analytically solve this model, whereas we attempt to obtain the eigenvalues in an analytical sense. The difference Schrodinger equation of the system can be written as \[\psi_{n+1}+\psi_{n-1}+V\exp[i(-2\pi\alpha n+\theta)]\psi_{n}=E\psi_{n}, \tag{1}\] with the periodic boundary conditions \[\psi_{n+L}=\psi_{n}, \tag{2}\] where \(V\) is the complex potential strength, \(E\) is the eigenvalue of systems, and \(\psi_{n}\) is the amplitude of wave function at the \(n\)-th lattice. We choose to unitize the nearest-neighbor hopping amplitude and a typical choice for irrational parameter is \(\alpha=(\sqrt{5}-1)/2\), \(\theta\) is the phase factor and generally not zero. Obviously, the complex potential \(\exp[i(-2\pi\alpha n+\theta)]\) doesn't satisfy \(V(n)=V^{\star}(-n)\) in the discrete lattice, hence the model is independent of \(\mathcal{PT}\) symmetry. This model describes a ring with \(L\) sites, where the system size \(L\) should be chosen extremely large so that the irrational number \(\alpha\) can be approximated as a rational \(\alpha\simeq p/q\) with \(p,q\) being irreducible integers. Then we can utilize the discrete Fourier transform \[\phi_{k}=\frac{1}{\sqrt{L}}\sum_{n=1}^{L}\psi_{n}\exp(2\pi i\alpha nk), \tag{3}\] the eigenvalue equation of Eq. (1) can be transformed into the momentum space, \[\exp(i\theta)\phi_{k-1}+\frac{2}{V}\cos(2\pi\alpha k)\phi_{k}=\frac{E}{V}\phi_ {k}, \tag{4}\] there is a multiplying factor \(\exp(i\theta)\) in the term \(\phi_{k-1}\), however this factor is readily suppressed under the gauge transformation \(\phi_{k}\rightarrow\exp(ik\theta)\phi_{k}\). According to Eq. (4), an initial wave function solution can be written as \[\frac{\phi_{k}}{\phi_{k-1}}\propto\left\{\begin{array}{cc}0&k<0\\ 1&k=0\\ \frac{V}{E-2\cos(2\pi\alpha k)}&k>0.\end{array}\right. \tag{5}\] Then the recursive relation of wave function can be written as \[\left|\frac{\phi_{k}}{\phi_{0}}\right|=\prod_{k=1}^{L}\left|\frac{V}{E-2\cos( 2\pi\alpha k)}\right|. \tag{6}\] Obviously, if we know the limit of Eq. (6), we can obtain the eigenvalue \(E\). Let the Ansatz of the normalized wave function be \(\left|\phi_{k}\right|\equiv\left|\phi_{0}\right|\exp(-\gamma_{m}k)\) with \(\gamma_{m}\geq 0\). 
When \(\gamma_{m}<0\), \(\left|\phi_{k}\right|\) cannot be normalized. Then we equivalently transform the Ansatz as \[\gamma_{m}(E)=-\lim_{k\rightarrow\infty}\frac{1}{k}\ln\left|\frac{\phi_{k}}{ \phi_{0}}\right|. \tag{7}\] Substitute Eq. (6) into Eq. (7), by utilizing the Weyl's equidistribution theorem [18], the summation can be transformed into the integral, \[\begin{split}\gamma_{m}(E)&=\lim_{L\rightarrow\infty }\frac{1}{L}\sum_{k=1}^{L}\ln\left|\frac{E-2\cos(2\pi\alpha k)}{V}\right|\\ &=\ln\left(\frac{1}{V}\right)+\frac{1}{2\pi}\int_{0}^{2\pi}\ln \left|E-2\cos(\tilde{k})\right|d\tilde{k}.\end{split} \tag{8}\] Obviously, to obtain the eigenvalue \(E\), we need to know the value of \(\gamma_{m}\). In fact, \(\gamma_{m}\) has the explicit physical significance, which is Lyapunov exponent in momentum space. To obtain \(\gamma_{m}\), we can utilize a famous formula, which has been obtained initially for random systems by Thouless and can be used without any change for non-random systems. Namely, Lyapunov exponent can be related to the density of states by \[\gamma(E)=\int dE^{{}^{\prime}}\ln|E-E^{{}^{\prime}}|\rho(E^{{}^{\prime}}), \tag{9}\] which is dubbed Thouless formula [19]. For non-Hermitian tight-binding lattices with nearest-neighbor hopping, provided that the hopping amplitudes are symmetric, a similar relation can be established [17]. The advantage of this formula is able to connect Lyapunov exponent in position space \(\gamma\) and Lyapunov exponent in momentum space \(\gamma_{m}\). The definition of density of states is the number of quantum states with energy ranging from \(E\) to \(E+\Delta E\). Under Fourier transform, the eigenvalues of Eq. (1) and Eq. (4) remain unchanged. Hence Eq. (1) and Eq. (4) have the same density of state, \[\rho(E)=\rho_{m}(\frac{E}{V}), \tag{10}\] then substitute Eq. (10) into Eq. (9), \[\begin{split}\gamma(E)&=\int dE^{{}^{\prime}}\ln|E -E^{{}^{\prime}}|\rho_{m}(\frac{E^{{}^{\prime}}}{V}),\\ &=\int dE^{{}^{\prime}}\ln|\frac{E-E^{{}^{\prime}}}{V}|\rho_{m}( \frac{E^{{}^{\prime}}}{V})+\ln(V),\\ &=\gamma_{m}(\frac{E}{V})+\ln(V).\end{split} \tag{11}\] From Eq. (11), if we obtain \(\gamma\), \(\gamma_{m}\) can also be naturally obtained. As regard to \(\gamma\), we refer to the transfer matrix method, which can be used for the analysis of the wave propagation in classical or quantum systems. The growth or decay of the propagation is governed by the Lyapunov spectrum of the product of transfer matrices. For the one-dimensional nearest-neighbor hopping model, the transfer matrix is two-dimensional, the nonzero or zero values of Lyapunov exponents are utilized to measure whether waves are localized at a certain location or spread throughout the entire space. The specific transfer matrix of Eq. (1) can be written as \[T(\theta)=\left(\begin{array}{cc}E-V\exp[i(2\pi\alpha+\theta)]&-1\\ 1&0\end{array}\right), \tag{12}\] according to Avila's theory [20], let us complexify the phase \(\theta\rightarrow\theta-i\vartheta\), and let \(\vartheta\rightarrow+\infty\), direct computation shows that \[T(\theta-i\vartheta)=\exp(\vartheta)\exp[i(2\pi\alpha+\theta)]\left(\begin{array} []{cc}-V&0\\ 0&0\end{array}\right)+o(1). \tag{13}\] Thus we have \(\gamma\left(E,\vartheta\right)=\lim\limits_{n\rightarrow\infty}\frac{1}{n}\ln \|T_{n}(\theta-i\vartheta)\|=\vartheta+\ln(V)+o(1)\). 
Note \(\gamma(E,\vartheta)\) is a convex, piecewise linear function of \(\vartheta\) with their slopes being integers, if the energy \(E\) belongs to the spectrum, then \[\gamma\left(E,\vartheta\right)=\max\{0,\ln(V)+\vartheta\},\ \ \forall\vartheta\geq 0, \tag{14}\] consequently, we have \(\gamma\left(E\right)=\max\{0,\ln(V)\}\) by setting \(\vartheta=0\). According to Eq. (11), also note that \(\gamma\) and \(\gamma_{m}\) are independent of \(E\), we obtain the Lyapunov exponent in momentum space \[\gamma_{m}(E)=\max\{0,\ln\left(\frac{1}{V}\right)\}. \tag{15}\] It is obviously, when \(V<1\), the real space Hamiltonian is in the extended phase, whereas the momentum space Hamiltonian is in the localized phase; while \(V>1\), the real space Hamiltonian is in the localized phase, whereas the momentum space Hamiltonian is in the extended phase. It should be emphasized that Lyapunov exponent cannot directly give the information of eigenvalues, taking any value of \(E\) satisfies the formula \(\gamma=\ln(V)\), however, the eigenvalue of the system is certainly not arbitrary. Consequently, when \(\gamma_{m}(E)=\ln\left(\frac{1}{V}\right)\), namely \(0<V\leq 1\), from Eq. (8) it can be obtained \(\frac{1}{2\pi}\int_{0}^{2\pi}\ln\left|E-2\cos(\tilde{k})\right|d\tilde{k}=0.\) From the above integral equation, we can obtain that the set of all allowed eigenvalues is \[E=2\cos(\tilde{k}) \tag{16}\] with \(0\leq\tilde{k}<2\pi\). This result has important physical consequences, it means that when \(0<V\leq 1\), all eigenvalues are real numbers. When \(\gamma_{m}(E)=0\), namely \(V>1\), from Eq. (8) it can be obtained \(\frac{1}{2\pi}\int_{0}^{2\pi}\ln\left|E-2\cos(\tilde{k})\right|d\tilde{k}=\ln \left(V\right).\) This is Dini Integral, which is a logarithmic integral with important applications in mathematical physics and engineering, one first evaluated in 1878 by the Italian mathematician Ulisse Dini. According to the conclusions of Dini Integral, we can obtain that the set of all allowed eigenvalues is \[E=Ve^{i\tilde{k}}+\frac{e^{-i\tilde{k}}}{V} \tag{17}\] with \(0\leq\tilde{k}<2\pi\). At the phase transition point \(V=1\), two sets of eigenvalues can be connected smoothly. In the above derivation, we focus on the integral equation of the Lyapunov exponent in the momentum space, essentially the recursion of eigenstate in the momentum space, to determine the value range of eigenvalues. ## III Numerical verification To support the above analytical result, we now present the numerical verification, namely directly diagonalize Eq. (1) to obtain the eigenvalues and eigenstates. In Fig. 1(a) and (b), numerical results under periodic boundary conditions demonstrate that when the potential strength \(V<1\), all eigenvalues of the system are filled with intervals \([-2,2]\), therefore, they are pure real. Thus, the conserved evolution probability of the system is guaranteed, just as the \(\mathcal{PT}\) unbroken system. While \(V>1\), as shown in Fig. 1(c) and (d), the imaginary part of eigenvalues is no longer limited to 0, whereas the real and imaginary parts form a closed loop, satisfying the expression \(E=Ve^{i\tilde{k}}+\frac{e^{-i\tilde{k}}}{V}\) with \(0\leq\tilde{k}<2\pi\). Considering non-Hermitian skin effect [14; 15], real energy spectra can be induced by open boundary conditions, we further perform the numerical simulation for Eq. (1) under open boundary conditions. As shown in Fig. 1(e) and (f), and as Figure 1: (Color online) The eigenvalues of Eq. 
(1) are illustrated, the abscissa is the real part of the eigenvalues, and the ordinate is the imaginary part. The total number of sites is set to be \(L=2000\). The red dots represent numerical solutions under periodic boundary conditions, the blue circles represent theoretical values, and the red “x” represent numerical solutions under open boundary conditions. As shown in (a) and (b), when \(V<1\), the system host the pure real energy spectrum \([-2,2]\). While \(V>1\) [(c) and (d)], the energy spectrum of the system forms a closed loop, and the numerical solutions are in good agreement with the theoretical predicted values. (e) and (f) show that the spectra are not affected by the boundary conditions. Figure 2: (Color online) The absolute value of eigenstates of Eq. (1) under periodic boundary conditions for typical eigenvalues. The total number of sites is set to be \(L=2000\). (a) and (b) demonstrate that the general real eigenvalues correspond to extended states when \(V<1\); (c) and (d) demonstrate that some special real eigenvalues correspond to localized states when \(V>1\). compared to Fig. 1(a) and (d), the energy spectra remain unchanged under both two boundary conditions, which demonstrates that the eigenvalue problem of Eq. (1) is independent of non-Hermitian skin effect. Thus, all numerical results are completely consistent with the theoretical predictions, which confirm the validation of our theory. Resulting from that all eigenstates of the system are extended for \(V<1\), there exists a major misunderstanding that extended states (\(V<1\)) correspond to real eigenvalues, and localized states (\(V>1\)) correspond to complex eigenvalues, as shown in Fig. 2. Here we should clarify that some special localized states can also host real eigenvalues. When \(V>1\), with regard to the \(\bar{k}=0\) eigenstate, the eigenvalue is \(E_{0}=V+\frac{1}{V}>2\), which means that the localized eigenstates with eigenvalues \(V+\frac{1}{V}\) are also real numbers, as shown in Fig. 2(c) and (d). Hence, the corresponding principle mentioned above is no longer valid, which is consistent with that the real-complex spectrum transition does not originate from some symmetry breaking. The suitable recursive relation can lead to the appearance of the real energy, whether the eigenstate is an extended state or a localized state. ## IV Prospect in more models In fact, real eigenvalues in many complicated models [21; 22; 23; 24] also originated from the recursion of eigenstates. However, due to the complexity of models, obtaining rigorous mathematical solutions is very difficult. To illustrate the generality of the framework, we briefly introduce two models, and discuss some semi-analytical solutions. Firstly, we introduce the following difference equation [23], which is written as \[\psi_{n+1}+\psi_{n-1}+Vi\tan(2\pi\alpha n+\theta)\psi_{n}=E\psi_{n}, \tag{18}\] where the meaning of each parameter is the same as Eq. (1), except that the potential is replaced by \(i\tan(2\pi\alpha n+\theta)\) from \(\exp[i(-2\pi\alpha n+\theta)]\). Through Fourier transformation, the dual equation of Eq. (18) in momentum space is written as \[\phi_{k+1}=\frac{-2\cos[2\pi(k-1)\alpha]+V+E}{2\cos[2\pi(k+1)\alpha]+V-E}\phi_ {k-1}. \tag{19}\] From Eq. (19), Lyapunov exponent \(\gamma_{m}\) can be obtained, \[\gamma_{m}(E)=\frac{1}{2\pi}\int_{0}^{2\pi}\ln g^{(1)}-\ln g^{(2)}d\theta, \tag{20}\] where \(g^{(1)}=|-2\cos(2\pi\theta)+V+E|,g^{(2)}=|2\cos(2\pi\theta)+V-E|\). 
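For completeness, the direct diagonalization used in the numerical verification of Section III can be reproduced in a few lines. The following NumPy sketch is illustrative (a smaller system size than the \(L=2000\) used in the paper, and arbitrary choices of \(\theta\) and \(V\)); it is not the authors' code.

```python
import numpy as np

# Illustrative parameters (the paper uses L = 2000)
L = 987                        # system size
alpha = (np.sqrt(5) - 1) / 2   # irrational frequency
theta = 0.5                    # phase factor
V = 0.5                        # potential strength (V < 1: real spectrum expected)

n = np.arange(1, L + 1)
H = np.zeros((L, L), dtype=complex)
H += np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)    # nearest-neighbor hopping
H[0, -1] = H[-1, 0] = 1.0                                        # periodic boundary conditions
H += np.diag(V * np.exp(1j * (-2 * np.pi * alpha * n + theta)))  # complex potential of Eq. (1)

E = np.linalg.eigvals(H)
# For V <= 1 the spectrum should fill the real interval [-2, 2] (Eq. (16));
# for V > 1 it should trace the loop E = V e^{ik} + e^{-ik}/V (Eq. (17)).
print(np.max(np.abs(E.imag)))   # close to zero when V <= 1
```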
For the model of Eq. (18), Avila's global theory [20] then yields the explicit expression for the Lyapunov exponent in position space \(\gamma\), \[\begin{split}\gamma(E)=\max\{&\ \mathrm{arcosh}\ \frac{|E+V+2|+|E+V-2|}{4},\\ &\ \mathrm{arcosh}\ \frac{|E-V+2|+|E-V-2|}{4}\}.\end{split} \tag{21}\] Unfortunately, due to the complexity of this model, we are unable to obtain the explicit expression of \(\gamma_{m}\) through the Thouless formula. Alternatively, setting \(\gamma(E)=0\), we obtain that \(E\) lies in the region \([V-2,2-V]\); setting \(\gamma(E)>0\), we obtain that \(E\) lies in the region \(\{iy\mid y\in\mathbb{R}^{*}\}\) (\(V\leq 2\)) or \(\{iy\mid y\in\mathbb{R}\}\) (\(V>2\)). Then, we substitute these guessed energies "E" into Eq. (20), and find that when \(E\in[V-2,2-V]\), \(\gamma_{m}(E)>0\), while when \(E\) is a pure imaginary number, \(\gamma_{m}(E)=0\); all detailed calculations can be found in Ref. [23]. Since the energy "E" exactly satisfies the duality relation (\(\gamma=0,\gamma_{m}>0\) and \(\gamma>0,\gamma_{m}=0\)) indicated by the Thouless formula between the Lyapunov exponents in position space and momentum space, we conjecture that "E" is the eigenvalue of Eq. (18). In addition to quasiperiodic models, real eigenvalues determined by the recursion of eigenstates also arise in random disordered systems. A paradigm of non-Hermitian random disorder is the Hatano-Nelson model [25], which originated from the study of the pinning of flux lines by random columnar defects in a superconductor. In the clean limit (no random impurities), the model is well known for the non-Hermitian skin effect due to the imaginary gauge field \(h\); the eigenvalues form an ellipse on the complex plane under periodic boundary conditions, and the corresponding eigenstates are extended. With increasing concentration of random impurities, real eigenvalues emerge at the edge of the spectrum and the corresponding eigenstates are localized, while complex eigenvalues at the center of the spectrum still correspond to extended eigenstates. On the surface, it seems that the Hatano-Nelson model violates the principle of real eigenvalues corresponding to extended eigenstates found for quasiperiodic models. Actually, numerous works in the physics community [26; 27; 28] have shown that the emergence of real energy spectra still stems from the Lyapunov exponent of the eigenstate for the Hatano-Nelson model. Unfortunately, a complete and rigorous calculation is still missing. Nevertheless, some mathematical references [29; 30] demonstrate that the behaviour of the eigenvalues depends crucially on the Lyapunov exponent associated with the Hermitian operator. The mathematical techniques of the related papers are quite advanced, and we directly quote their conclusions. Assuming the potential of the random impurities has the uniform distribution \([-1,1]\), there exist two critical values \(0<h_{1}<h_{2}\) such that the following hold: (i) when \(0\leq h<h_{1}\), the eigenvalues of the Hatano-Nelson model are totally real; (ii) when \(h_{1}<h<h_{2}\), some of the eigenvalues remain real, while others form a smooth curve on the complex plane; (iii) when \(h_{2}<h\), all eigenvalues become complex. An intuitive understanding is that when the imaginary gauge field \(h=0\), the system is the Hermitian Anderson model, and the eigenvalues are pure real, accompanied by localized eigenstates.
While \(h\) gradually increases, the eigenvalues corresponding to the localized states remain real; nevertheless, the complex eigenvalues induced by the non-Hermitian effect correspond to the delocalization of the eigenstates. Thus, the recursions of eigenstates are still closely linked with the eigenvalues. It is worth mentioning that the exact solution of eigenvalues for non-Hermitian disordered and quasi-disordered systems is a huge challenge, and the complete clarification of these problems depends on the progress of future mathematical tools. ## Summary In summary, we provide, for the first time analytically, an example of a pure real energy spectrum originating from the recursive relation of eigenstates, which is different from known physical mechanisms such as \(\mathcal{PT}\) symmetry. As long as the recursion of the eigenstate is determined, the eigenvalue of the system may have a pure real energy spectrum, which means that the system can undergo unitary time evolution. In addition, we need to emphasize that extended states of the system lead to real eigenvalues in most cases; however, this is not a necessary condition for the eigenvalue to be a real number, and localized states can also produce real eigenvalues. Finally, we provide the prospect that eigenvalues determined by the recursion of eigenstates are widely present in various systems. Our discovery of this non-Hermitian phenomenon pushes the realm of the eigenvalue problem in non-Hermitian quantum theory towards a new avenue, and these findings are expected to be of great interest to the broad community. ###### Acknowledgements. This work was supported by the National Natural Science Foundation of China (Grant No. 62071248), the Natural Science Foundation of Jiangsu Province (Grant No. BK20200737), NUPTSF (Grants No. NY220090 and No. NY220208), the Innovation Research Project of Jiangsu Province (Grant No. JSSCBS20210521), and China Postdoctoral Science Foundation (Grant No. 2022M721693).
2308.16611
Detecting Out-of-Context Image-Caption Pairs in News: A Counter-Intuitive Method
The growth of misinformation and re-contextualized media in social media and news leads to an increasing need for fact-checking methods. Concurrently, the advancement in generative models makes cheapfakes and deepfakes both easier to make and harder to detect. In this paper, we present a novel approach using generative image models to our advantage for detecting Out-of-Context (OOC) use of images-caption pairs in news. We present two new datasets with a total of $6800$ images generated using two different generative models including (1) DALL-E 2, and (2) Stable-Diffusion. We are confident that the method proposed in this paper can further research on generative models in the field of cheapfake detection, and that the resulting datasets can be used to train and evaluate new models aimed at detecting cheapfakes. We run a preliminary qualitative and quantitative analysis to evaluate the performance of each image generation model for this task, and evaluate a handful of methods for computing image similarity.
Eivind Moholdt, Sohail Ahmed Khan, Duc-Tien Dang-Nguyen
2023-08-31T10:16:59Z
http://arxiv.org/abs/2308.16611v1
# Detecting Out-of-Context Image-Caption Pairs in News: A Counter-Intuitive Method ###### Abstract. The growth of misinformation and re-contextualized media in social media and news leads to an increasing need for fact-checking methods. Concurrently, the advancement in generative models makes cheapfakes and deepfakes both easier to make and harder to detect. In this paper, we present a novel approach using generative image models to our advantage for detecting Out-of-Context (OOC) use of images-caption pairs in news. We present two new datasets with a total of 6800 images generated using two different generative models including (1) DALL-E 2, and (2) Stable-Diffusion. We are confident that the method proposed in this paper can further research on generative models in the field of cheapfake detection, and that the resulting datasets can be used to train and evaluate new models aimed at detecting cheapfakes. We run a preliminary qualitative and quantitative analysis to evaluate the performance of each image generation model for this task, and evaluate a handful of methods for computing image similarity. Cheapfake Detection, Text-to-Image, Generative Models, Dataset, Computer Vision, Image Similarity
## 1. Introduction Much of the existing work on this problem builds on the COSMOS benchmark, which comes with a large-scale dataset of 200K images with 450K textual captions. The test set includes image-caption triplets that are labeled Out-Of-Context (OOC) or Not-Out-Of-Context (NOOC) (Beng et al., 2016). Given the recent advancements in text-to-image generative models, we propose that they can be employed for cheapfake detection by generating images that express the caption's content.
In this paper, we present two new datasets comprising 3400 images each, generated using OpenAI's DALL-E 2 (Dall et al., 2016) and Stable Diffusion (Dall et al., 2016), along with the textual captions used to generate the images. We also present a novel approach for verifying the consistency between captions and images by comparing the perceptual similarity between AI-generated images based on the captions from the COSMOS dataset. The idea behind our approach is that synthetically generated images (from both DALL-E 2 and Stable Diffusion) should have a high semantic similarity towards each other or the original image if the captions are also similar in their semantics, and thus can help identify OOC cheapfake media. We carry out a qualitative as well as quantitative analysis of our proposed method on the generated datasets, and report some insightful results in our study. The code and the generated datasets are available in the GitHub repository 1. Footnote 1: [https://github.com/eivindmoholdt/Master-Code-git](https://github.com/eivindmoholdt/Master-Code-git) ## 2. Related Work New cheapfake detection models have improved the accuracy of the COSMOS baseline in several different ways. Akgul _et al._ propose a method named Differential Sensing, which adds a negative ('was not true') and positive ('was true') probe to each caption (Dall et al., 2016). Using the SBERT similarity score between the original captions and the probes, scores moving in opposite directions when compared to the original captions would indicate that the captions contradicted each other (Krause et al., 2017). This method increased the accuracy from 81.9% to 85.6% on the test set. Tran _et al._ propose a Natural Language Inference (NLI) task to determine whether a given caption pair contradicts or entails each other in order to address the relationship between the captions. They also propose an Online Caption Checking method that crawls online resources to find a third caption to gain additional context about an image and verify the caption's truthfulness. This method, however, struggled to verify the captions, as the third caption sometimes would relate to a different image in the article than the original image (Krause et al., 2017). La _et al._ demonstrate a bottom-up attention model with visual semantic reasoning to extract image features for image-text matching, resulting in a 4.5% increase over the COSMOS baseline (Lai et al., 2016). ### Stable Diffusion Stable Diffusion is a latent text-to-image diffusion model developed by Stability AI and published in 2022 (Dall et al., 2016). It combines deep learning techniques and probabilistic modeling to generate high-quality images. Unlike basic diffusion models, Stable Diffusion operates in the latent space, where it applies noise to a compressed representation of the data and then performs denoising operations to recover samples in the data space. This approach is computationally efficient while preserving the essential features of the image. The model consists of three main components: a Variational AutoEncoder (VAE) encoder, a U-Net block with a ResNet backbone for denoising, and a VAE decoder that generates the final image from the reconstructed latent representation (Dall et al., 2016). ### Dall-E 2 DALL-E 2 is a text-to-image model developed by OpenAI (Dall et al., 2016). It combines two previously published models: CLIP and GLIDE.
CLIP is a zero-shot neural network introduced in 2021, which forms the foundation for the multimodal approach to image synthesis. It learns to associate objects in sentences with objects in images using contrastive learning, creating a joint embedding space for visual and textual information. CLIP is also robust to distribution shifts, making it generalize well to different data patterns (Dall et al., 2016). GLIDE, proposed in 2022, modifies the basic diffusion model to incorporate CLIP during training. By augmenting the training process with CLIP text embeddings, GLIDE enables text-conditional image generation, producing images that align better with the text's semantics (Dall et al., 2016). DALL-E 2 consists of three main components: a text encoder (CLIP) that maps text to an embedding, a diffusion model (prior) that maps the text encoding to an image encoding, and the GLIDE model that decodes the image encoding into the final image. This process is referred to as unCLIP by OpenAI (Dall et al., 2016).
Figure 2. High-level architecture of the proposed model.
## 3. Proposed Method We propose a novel approach for detecting OOC caption and image pairs by comparing the perceptual similarity between AI-generated images based on the captions from the COSMOS dataset. As image generation is time-consuming, we present two new datasets of synthetically generated images along with their textual captions that can be used for future research in this area. We employ two newly proposed text-to-image generative models, (1) Stable Diffusion (Dosov et al., 2016; Zhang et al., 2017) and (2) DALL-E 2 (Dosov et al., 2016; Zhang et al., 2017), to generate the datasets. Both models report state-of-the-art performance on benchmarks for image generation, and are able both to generate highly realistic images and to produce images with high semantic alignment to the input caption. Thus, both models are suitable for this task. We perform a qualitative analysis of the generated images by conducting a user study to obtain annotated similarity ratings, as well as a quantitative analysis to test the effectiveness of the proposed method. We employ a feature-based approach for computing image similarity, utilizing an object detection model and an object encoder to capture the semantic content of the images in feature vector representations, and computing their similarity with Cosine Similarity. By computing the similarity of the original image vs. each generated image, or the similarity between the generated images, we predict OOC/NOOC and verify our results against the gold labels from the COSMOS dataset (Dosov et al., 2016). ### Pre-processing Before prompting the captions to the image generation models, extensive text pre-processing is needed. The COSMOS captions often include political statements, slurs, fake news and misleading information. This makes it challenging to comply with the ethical use and content policy filters of the image generation models. OpenAI's content policy for DALL-E 2 states that the user is not allowed to 'create, upload, or share images that are not G-rated or that could cause harm' (Dosov et al., 2016). This includes names, foul language, violence, drugs, and more. In addition, DALL-E 2's safety filter is also activated by topics such as COVID-19, abortion, pregnancy, drugs and more, which must be removed from the dataset. The former provides a challenge, as seven percent of the COSMOS dataset falls under the 'Covid' category (Dosov et al., 2016).
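As a rough illustration of the kind of screening this implies, captions can be checked against a list of flagged terms before they are sent to the generation models. The sketch below is our own, not the authors' implementation, and the term list is only a placeholder (the paper builds its filter from topics such as those above together with an open-source word list described in the next subsection):

```python
# Illustrative pre-prompt screening; the flagged-term list is a placeholder,
# not the filter actually used in the paper.
FLAGGED_TERMS = {"covid", "abortion", "pregnancy", "drugs", "violence"}

def is_promptable(caption: str) -> bool:
    """Return True if the caption contains none of the flagged terms."""
    text = caption.lower()
    return not any(term in text for term in FLAGGED_TERMS)

captions = ["A person walks a dog in a park.", "New covid restrictions announced."]
safe_captions = [c for c in captions if is_promptable(c)]  # keeps only the first caption
```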
Extensive text pre-processing is therefore needed before using the prompts to generate images. We use the modified captions from the COSMOS dataset that have been pre-processed using Named Entity Recognition (NER) to replace proper nouns with corresponding entity labels, such as replacing 'Obama' with 'Person'. NER also helps decrease the abstraction between the caption and the images, as neither the image generation models nor the object detection models will distinguish between types of persons or locations. We believe this will make similarity comparisons easier, thus increasing the accuracy of our model. An additional round of NER processing is performed to identify any proper nouns that were missed during the initial step, using the same en_core_web_sm model from the spaCy library as COSMOS. Furthermore, a list of inappropriate words is compiled and used as an additional filter before prompting the captions to DALL-E 2 and Stable Diffusion. As a base for our list, we utilize the open-source Github LDNOOBW list from Shutterstock (Shutterstock, 2018). ### Image generation and dataset collection For this project we use the test set from the COSMOS dataset, which includes 1700 images and 3400 captions. We generate one synthetic image for each caption corresponding to the original image, obtaining 3400 generated images in each dataset and totaling 6800 generated images. We generate images of 512x512 pixels for each model to ensure comparability between the datasets. This can easily be changed for both models; generating images at higher resolution will be more expensive and time-consuming. Due to the generative nature of the models, providing the same prompt to a model twice will result in a different generated image. Thus, even captions that are completely similar will produce different results. While this can produce variations in the output, we still anticipate a high degree of similarity between images generated from semantically similar captions. Although the process is automated using Python, generating images is time-consuming: the Stable Diffusion model takes about 15 seconds to generate each image and save it to our directory with a standard Google Colab GPU, and DALL-E 2 takes about seven seconds. To speed up the process, we utilize a premium GPU from Colab for the Stable Diffusion model, which decreases the generation runtime to three seconds per image. For this reason, verifying our proposed method might be hard if images are to be generated in real time. To facilitate further testing, we therefore present the datasets in this paper. ### Computing Image Similarity There are several algorithms available for computing image similarity, such as pixel-to-pixel comparison with MSE or structural comparison using SSIM. Given that image generation models introduce randomness, resulting in differences in object placement and angles in the generated images, traditional pixel-to-pixel or SSIM comparison may not be effective. To capture the context of the images, we use a feature-based similarity approach: our method uses feature extraction techniques to extract high-level features from images into vector representations, which are then used to compute the similarity between images with Cosine Similarity. Using object encoders such as ResNet50, we can extract features from images and compare them using distance metrics such as Cosine similarity. We test 8 different object encoders for this task.
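To illustrate the encoder-only similarity computation just described, the following minimal sketch uses a pre-trained ResNet50 from torchvision as the object encoder and compares two images via the cosine similarity of their feature vectors. The function names are ours, and the snippet is an assumption about one reasonable implementation, not the paper's code:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for the encoder input.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# ResNet50 as an object encoder: drop the classification head so the model
# outputs a 2048-dimensional feature vector per image.
encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()
encoder.eval()

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    img = Image.open(path).convert("RGB")
    return encoder(preprocess(img).unsqueeze(0)).squeeze(0)

def cosine_sim(path_a: str, path_b: str) -> float:
    return F.cosine_similarity(embed(path_a), embed(path_b), dim=0).item()

# Example: similarity of each generated image to the original image.
# sim1 = cosine_sim("original.jpg", "generated_from_caption1.png")
# sim2 = cosine_sim("original.jpg", "generated_from_caption2.png")
```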
We also employ 3 object detection models and combine them with all object encoders to find the best method for capturing the semantic content of the images for image similarity comparison. Our prediction model employs two methods. In the first method, we use a pre-trained object detection model to locate objects in an image and create bounding boxes around them. The idea is that, by using a combination of an object detection model and an object encoder, we can capture contextual information and relationships between objects, resulting in a more accurate feature vector representation of the image. While object encoders also include object detection capabilities, object detection models such as YOLO and MASK-RCNN are specialized for this and have higher detection accuracy and more accurate classification capabilities. The bounding boxes are used to isolate each object as a separate image, which is then processed by the object encoder. The feature vectors of the detected objects are combined, along with the feature vector representation of the entire image, to create a global representation of the image. We test three different object detection models: MASK-RCNN, YOLOv5, and YOLOv7, in order to find the best combination. In the second method, we only utilize the object encoder to obtain a feature vector representation of the image. We test various object encoders, including different versions of ResNet, DenseNet, EfficientNet, and CLIP. While object detection models have higher accuracy in detecting objects, they may not necessarily provide a more accurate feature vector representation of the entire image. By relying solely on the object encoder, we aim to capture a global representation that considers both the objects and their surroundings equally. The choice of an appropriate object encoder is crucial, as it is responsible for the feature vector used in our predictions. For both methods we calculate similarity using Cosine Similarity and use the scores to predict OOC/NOOC labels. ## 4. Experiments ### Qualitative Analysis Defining image similarity is a difficult task, even for humans. One might consider a high-level comparison between images, such as the overall category or context of an image. For example, an image of an apple and a pear can be seen as similar because they are both fruits. On the other hand, taking a low-level comparison considering the colors, shapes and objects in the image, one might not find the images similar at all. In order to gather human-annotated similarity scores, we conduct a survey using a small subset of the generated datasets, asking participants to rate the perceived similarity between the generated images for 24 caption pairs on a scale from 1-10, where 1 is the lowest degree of similarity and 10 the highest. We choose a sample of 24 image pairs (48 generated images) for each dataset with an even distribution of OOC/NOOC labels in the original caption pairs (12 of each). The average rating of the images indicates whether the participants find the image pairs more similar than dissimilar. On a scale of 1-10, 5.5 is the midpoint, and we use it as a threshold indicating whether participants believe the images are more similar than dissimilar or the opposite; an average score equal to the threshold is counted as above it. Our survey shows that rating the similarity between the images is a difficult task even for humans. The rating distribution shows a high variation in similarity scores for the same images.
Figure 3 shows an image pair where the variation of ratings is high, showing that the perceptual similarity of images varies considerably from viewer to viewer. The caption pair used to generate the image pair in Figure 3 is NOOC. The average rating indicates that participants correctly identify this. This shows that the similarity in the image pair correlates with the similarity in the caption pair, and the text-to-image model effectively captures the semantic similarity. Furthermore, we can convert the average ratings to corresponding OOC/NOOC labels in order to compare these to the gold labels from the COSMOS dataset for the caption pairs used to generate the images. Intuitively, an average score below the threshold indicates that the images are dissimilar, which in turn should indicate the presence of an OOC relationship in the caption pair used to generate the images. By defining scores below the threshold as OOC (1), and scores above or equal to the threshold as NOOC (0), we achieve the scores presented in Table 2. More importantly, the human-annotated similarity scores allow us to evaluate whether our prediction model aligns with human perception, and to analyze whether the prediction model accurately measures image similarity. By testing the variations of prediction models, we can see which model aligns with the predictions from the survey, presented in Table 2. ### Quantitative Analysis Quantitative analysis is performed using the automated prediction model. We test a total of 8 object encoders paired with 3 object detection models. While MASK-RCNN and YOLOv7 have better detection accuracy than YOLOv5, this does not improve the performance of the model significantly. Table 3 shows the accuracy score of the model when utilizing YOLOv7 over other object detection models with the ResNet50 encoder. YOLOv7 produces a slightly better detection accuracy when paired with ResNet50 than YOLOv5 and MASK-RCNN. However, the slight increase in accuracy comes with a huge increase in runtime when utilizing normal GPUs. While the other variations take around 30 minutes for prediction on the entire dataset, YOLOv7 takes around 1 hour and 30 minutes. MASK-RCNN, despite boasting better detection accuracy than YOLOv5, actually performs worse than YOLOv5 paired with all object encoders except for EfficientNet on the DALL-E 2 dataset, where it returns a 1% better accuracy. However, on the Stable Diffusion dataset, utilizing MASK-RCNN is superior to YOLOv5 and provides a 5-7% boost in accuracy in general. The runtime is also similar on a normal Colab GPU. The best performing version of our model utilizing an object detection model is MASK-RCNN combined with an EfficientNet-B5 object encoder, yielding a detection accuracy of 0.57. Despite this, none of the versions where we utilize object detection models outperform the versions where we only utilize object encoders. Paired with our best performing model, YOLOv5 decreases the accuracy score of the CLIP model by 16% on the Stable Diffusion dataset. We see a general 10% accuracy decrease when utilizing object detection models versus only utilizing object encoders. Therefore, it is a clear advantage to utilize only object encoders for this task, both in terms of accuracy and runtime.
Figure 3. Distribution of similarity scores from the survey. The distribution of ratings shows a high variance in perceived similarity among the participants.
Our study shows that several object encoders are able to accurately capture the perceptual similarity between images without the need for additional detection methods. The scores are presented in Tables 5 and 4. The performance difference between DALL-E 2 and Stable Diffusion is negligible, and the accuracy scores are mostly closely matched across various encoders, with slight variations. Setting the right conditional rule for predictions is a difficult task. We calculate the similarity of both generated images towards the original image, obtaining two similarity scores, sim1 and sim2, and set a threshold for OOC/NOOC prediction of 0.50. Intuitively, we can rule that if both images fall below the similarity threshold, we predict OOC, while if both images are above, we predict NOOC. However, this method presents a challenge, as the generated images may not be comparable to the original images despite being NOOC. The modified captions used as prompts for the text-to-image generative models may differ from the original captions, so that both generated images differ from the original image even though the original captions convey the same semantic meaning. Hence, we might assume that if the generated images behave similarly, i.e., if both similarity scores either exceed or fall below the threshold, the pair is deemed NOOC. We find that a combination of the two if/else statements mentioned above yields the most effective results. We utilize an if statement to capture the case where both sim1 and sim2 are below the threshold, predicting OOC. Furthermore, we utilize an elif statement to rule that if sim1 is below the threshold and sim2 is above the threshold, or vice versa, we also predict OOC. If none of the previous conditions are met, both sim1 and sim2 are above the threshold and have a comparable similarity towards the original, and a NOOC label is predicted (the combined rule is spelled out in the code sketch further below). ## 5. Discussion The proposed method is subject to a range of factors: not only the actual performance of the generative models for the task, but also the quality of the captions used to generate images, how well the feature vector representations capture the content of the image, and the way predictions are carried out and what similarity measures are employed. Consequently, there are numerous opportunities for optimization, and finding the most optimal approach is not necessarily straightforward. In order to find an effective combination, we test a variety of object encoders and object detection models. The analysis shows that utilizing image generation models for this task comes with both strengths and limitations. Figure 5 demonstrates that the models generate highly realistic images that closely resemble the original image, allowing us to make correct predictions. Figure 6 also shows that the models are able to generate images with high semantic alignment to the input caption. This shows that the image generation models can effectively capture the semantic similarity or dissimilarity in news captions.
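For reference, the combined OOC/NOOC prediction rule described in Section 4.2 can be written out explicitly as follows; the function name and threshold constant are ours, but the logic follows the rule stated above:

```python
THRESHOLD = 0.50  # similarity threshold from Section 4.2

def predict_label(sim1: float, sim2: float) -> int:
    """Return 1 for OOC, 0 for NOOC, given the similarities of the two
    generated images to the original image."""
    if sim1 < THRESHOLD and sim2 < THRESHOLD:
        return 1  # both generated images are dissimilar to the original -> OOC
    elif (sim1 < THRESHOLD) != (sim2 < THRESHOLD):
        return 1  # the two images fall on opposite sides of the threshold -> OOC
    return 0      # both at or above the threshold -> NOOC
```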
Using the survey scores with human-annotated similarity ratings, we are able to compare how the models align with human perception, and as such, which one of our model versions most objectively captures perceptual similarity. Table 1 shows how CLIP strongly correlates with human perception on the image pairs in the subset used in the survey. Table 2 shows the predictions converted to accuracy and precision scores, and demonstrates each model's effectiveness at capturing accurate similarity scores and alignment with human perception. This serves as an indicator that our version utilizing CLIP accurately measures the efficiency of Stable Diffusion and DALL-E 2 for detecting mismatches in image-caption pairs. We find that CLIP presents a high correlation with human annotations, suggesting that CLIP aligns with human perception, even for difficult tasks such as the subset utilized in the survey. Therefore we are confident that the reported scores from our model version utilizing CLIP provide an accurate and reliable evaluation of the performance of both image generation models for the task of cheapfake detection. The qualitative analysis demonstrates that the models are not able to capture contradictions in the caption pairs. This is also assessed by Marcus _et al._ for DALL-E 2 (Marcus et al., 2018). Akgul _et al._ and Tran _et al._ demonstrate the increase in accuracy achieved when detecting contradictions in the caption pairs (Marcus et al., 2018)(Akgul et al., 2018). The COSMOS baseline approach defines that captions referring to different objects in the image are considered NOOC. NOOC captions often contain references to different objects or describe the same object differently. Our method does not incorporate any rules prior to image generation. Instead, we directly feed the captions to the image generation models. Combining previously proposed methods for cheapfake detection could therefore effectively increase the performance of our approach. ## 6. Conclusion Our paper conducts a comprehensive analysis of DALL-E 2 and Stable Diffusion in generating news-related images. The rapid progress in AI generative models presents opportunities for further research. For example, Midjourney has shown exceptional image generation capabilities (Midjourney, 2018), and Google's Imagen achieves state-of-the-art performance on the COCO dataset and is preferred by human evaluators compared to other models (Midjourney, 2018). We recommend exploring these new models when they become publicly available or accessible through APIs. Additionally, there are optimization steps that could enhance the model's accuracy in future research. The text pre-processing step may result in decreased contextual information, particularly for non-descriptive captions where entity labels replace most of the words, leading to a loss of context. Certain NER tags like LOC and GPE are difficult to interpret, as are ambiguous words like DATE and CARDINAL, posing challenges for DALL-E 2 and Stable Diffusion. NER tagging for dates and numbers may not be necessary, as they do not introduce harmful context triggering safety filters or harmful content in the datasets. However, it is important to acknowledge that navigating the safety filters of DALL-E 2 and Stable Diffusion is challenging. Optimizing the text-processing step is crucial for achieving higher performance, but it requires careful consideration of the safety filters in both models.
Figure 4. Example of false positive: generated image pair based on the caption pair: 1: A photograph shows a purple lobster caught in Maine (left). 2: “One-in-a-million” purple lobster fools the internet since it is not genuine (right).
Figure 5. Generated image pair with a high resemblance towards the original image, suggesting the models' ability to create highly realistic and semantically aligned images from descriptive captions. The original image is to the right.
Figure 6. Original image 1676 vs. generated caption 2: 'I'm primarily the dog walker, but usually the kids come with me.' The original image is to the left.
###### Acknowledgements. This research was funded by NORDIS, European Horizon 2020 grant number 825469.
2309.10149
Analysis of the Memorization and Generalization Capabilities of AI Agents: Are Continual Learners Robust?
In continual learning (CL), an AI agent (e.g., autonomous vehicles or robotics) learns from non-stationary data streams under dynamic environments. For the practical deployment of such applications, it is important to guarantee robustness to unseen environments while maintaining past experiences. In this paper, a novel CL framework is proposed to achieve robust generalization to dynamic environments while retaining past knowledge. The considered CL agent uses a capacity-limited memory to save previously observed environmental information to mitigate forgetting issues. Then, data points are sampled from the memory to estimate the distribution of risks over environmental change so as to obtain predictors that are robust with unseen changes. The generalization and memorization performance of the proposed framework are theoretically analyzed. This analysis showcases the tradeoff between memorization and generalization with the memory size. Experiments show that the proposed algorithm outperforms memory-based CL baselines across all environments while significantly improving the generalization performance on unseen target environments.
Minsu Kim, Walid Saad
2023-09-18T21:00:01Z
http://arxiv.org/abs/2309.10149v2
Analysis of the Memorization and Generalization Capabilities of AI Agents: Are Continual Learners Robust? ###### Abstract In continual learning (CL), an AI agent (e.g., autonomous vehicles or robotics) learns from non-stationary data streams under dynamic environments. For the practical deployment of such applications, it is important to guarantee robustness to unseen environments while maintaining past experiences. In this paper, a novel CL framework is proposed to achieve robust generalization to dynamic environments while retaining past knowledge. The considered CL agent uses a capacity-limited memory to save previously observed environmental information to mitigate forgetting issues. Then, data points are sampled from the memory to estimate the distribution of risks over environmental change so as to obtain predictors that are robust with unseen changes. The generalization and memorization performance of the proposed framework are theoretically analyzed. This analysis showcases the tradeoff between memorization and generalization with the memory size. Experiments show that the proposed algorithm outperforms memory-based CL baselines across all environments while significantly improving the generalization performance on unseen target environments. Minsu Kim and Walid Saad Wireless@VT, Bradley Department of Electrical and Computer Engineering, Virginia Tech, Arlington, VA, USA. Robustness, Generalization, Memorization, Continual Learning ## 1 Introduction Continual learning (CL) recently emerged as a new paradigm for designing artificial intelligent (AI) systems that are adaptive and self-improving over time [1]. In CL, AI agents continuously learn from non-stationary data streams and adapt to dynamic environments. As such, CL can be applied to many real-time AI applications such as autonomous vehicles or digital twins [2]. For these applications to be effectively deployed, it is important to guarantee robustness to unseen environments while retaining past knowledge. However, modern deep neural networks often forget previous experiences after learning new information and struggle when faced with changes in data distributions [3]. Although CL can mitigate forgetting issues, ensuring both memorization and robust generalization to unseen environments is still a challenging problem. To handle forgetting issues in CL, many practical approaches have been proposed using memory [4, 5, 6, 7, 8] and regularization [9, 10]. For memory-based methods [4, 5, 6, 7, 8], a memory is deployed to save past data samples and to replay them when learning new information. Regularization-based methods [9, 10] use regularization terms during model updates to avoid overfitting the current environment. However, these approaches [4, 5, 6, 7, 8, 9, 10] did not consider or theoretically analyze generalization performance on unseen environments even though they targeted non-stationary data streams. Recently, a handful of works [11, 12, 13, 14, 15, 16] studied the generalization of CL agents. In [11], the authors theoretically analyzed the generalization bound of memory-based CL agents. The work in [12] used game theory to investigate the tradeoff between generalization and memorization. In [13], the authors analyzed generalization and memorization under overparameterized linear models. However, the works in [11, 12, 13] require that an AI agent knows when and how task/environment identities, e.g., labels, will change. In practice, such information is usually unavailable and unpredictable. 
Hence, we focus on more general CL settings [5] without such an assumption. Meanwhile, the works in [14, 15, 16] empirically analyzed generalization under general CL settings. However, they did not provide a theoretical analysis of the tradeoff between generalization and memorization performance. The main contribution of this paper is a novel CL framework that can achieve robust generalization to dynamic environments while retaining past knowledge. In the considered framework, a CL agent deploys a capacity-limited memory to save previously observed environmental information. Then, a novel optimization problem is formulated to minimize the worst-case risk over all possible environments so as to ensure robust generalization while balancing memorization over the past environments. However, it is generally not possible to know the change of dynamic environments, and deriving the worst-case risk is not feasible. To mitigate this intractability, the problem is relaxed with probabilistic generalization by considering risks as a random variable over environments. Then, data points are sampled from the memory to estimate the distribution of risks over environmental change so as to obtain predictors that are robust to unseen changes. We then provide a theoretical analysis of the generalization and memorization of our framework, with new insights showing that a tradeoff exists between them in terms of the memory size. Experiments show that our framework can achieve robust generalization performance for unseen target environments while retaining past experiences. The results show up to a 10% gain in generalization compared to memory-based CL baselines. ## 2 System Model ### Setup Consider a single CL agent equipped with AI that performs certain tasks by observing streams of data sampled from a dynamic environment. The agent is embedded with a machine learning (ML) model parameterized by \(\theta\in\Theta\subset\mathbb{R}^{d}\), where \(\Theta\) is a set of possible parameters and \(d>0\). We assume that the agent continuously performs tasks under dynamic environments and updates its model. To perform a task, at each time \(t\), the agent receives a batch of data \(\mathcal{B}=\{(X_{i}^{e_{t}},Y_{i}^{e_{t}})\}_{i=1}^{|\mathcal{B}|}\) sampled/observed from a joint distribution \(P(X^{e_{t}},Y^{e_{t}})\) under the current environment \(e_{t}\sim\mathcal{E}\), where \(\mathcal{E}\) represents the set of all environments. We assume that there exists a probability distribution \(\mathcal{Q}\) over environments in \(\mathcal{E}\) [3]. \(\mathcal{Q}\) can for example represent a distribution over changes to weather or cities for autonomous vehicles. It can also model a distribution over changes to rotation, brightness, or noise in image classification tasks. Hence, it is important for the agent to have robust generalization to such dynamic changes and to memorize past experiences. To this end, we use a memory \(\mathcal{M}_{t}\) of limited capacity \(0\leq|\mathcal{M}_{t}|\leq|\mathcal{M}|\) so that the agent can save the observed data samples \(\{(X_{i}^{e_{t}},Y_{i}^{e_{t}})\}_{i=1}^{|\mathcal{B}|}\) in \(\mathcal{M}_{t}\) and can replay them when updating its model \(\theta_{t}\). To measure the performance of \(\theta_{t}\) under the current environment \(e_{t}\), we consider the statistical risk \(\mathcal{R}^{e_{t}}(\theta_{t})=\mathbb{E}_{P(X^{e_{t}},Y^{e_{t}})}[l(\theta_{t}(X^{e_{t}}),Y^{e_{t}})]\), where \(l(\cdot)\) is a loss function, e.g., the cross-entropy loss.
Since we usually do not know the distribution \(P(X^{e_{t}},Y^{e_{t}})\), we also consider the empirical risk \(\widehat{\mathcal{R}}^{e_{t}}(\theta_{t})=\frac{1}{|\mathcal{B}|}\sum_{i=1}^{|\mathcal{B}|}l(\theta_{t}(X_{i}^{e_{t}}),Y_{i}^{e_{t}})\). ### Problem Formulation As the agent continuously experiences new environments, it is important to maintain the knowledge of previous environments (_memorization_) and generalize robustly to any unseen environments (_generalization_). This objective can be formulated into the following optimization problem: \[\min_{\theta_{t}\in\Theta}\quad\sum_{\tau\in\mathcal{M}_{t}}\frac{1}{|\mathcal{M}_{t}|}\widehat{\mathcal{R}}^{e_{\tau}}(\theta_{t})+\max_{e_{t}\sim\mathcal{E}}\rho\mathcal{R}^{e_{t}}(\theta_{t}), \tag{1}\] where \(|\mathcal{M}_{t}|\) is the size of the current memory and \(\rho>0\) is a coefficient that balances between the terms in (1). The first term is the _memorization performance_ of the current model \(\theta_{t}\) with respect to the past experienced environments \(e_{\tau},\forall\tau\in[1,\ldots,|\mathcal{M}_{t}|]\), at time \(t\). The second term corresponds to the _worst-case performance_ of \(\theta_{t}\) over any possible environment \(e_{t}\in\mathcal{E}\). Since the change of environments is dynamic and not predictable, we consider all possible cases to measure the robustness of the current model. This problem is challenging because the change of \(e_{t}\) is unpredictable and its information, e.g., label, is not available. Meanwhile, we only have limited access to the data from the observations in \(\mathcal{M}_{t}\). Since each environment follows a probability distribution \(\mathcal{Q}\), \(\mathcal{R}^{e_{t}}\) can be considered as a random variable. Then, we rewrite (1) as follows \[\min_{\theta_{t}\in\Theta,\gamma\in\mathbb{R}} \sum_{\tau\in\mathcal{M}_{t}}\frac{1}{|\mathcal{M}_{t}|}\widehat{ \mathcal{R}}^{e_{\tau}}(\theta_{t})+\gamma\rho,\] (2) s.t. \[\mathbb{P}[\mathcal{R}^{e_{t}}(\theta_{t})\leq\gamma]=1, \tag{3}\] where the probability in (3) considers the randomness in \(e_{t}\sim\mathcal{Q}\). However, the problem is still challenging because constraint (3) must always be satisfied. This can be too restrictive in practice due to the inherent randomness in training (e.g., environmental changes). To make the problem more tractable, we use the framework of probable domain generalization (PDG) [3]. In PDG, we relax constraint (3) with probability \(\alpha\in(0,1)\) as below \[\min_{\theta_{t}\in\Theta,\gamma\in\mathbb{R}} \sum_{\tau\in\mathcal{M}_{t}}\frac{1}{|\mathcal{M}_{t}|}\widehat{ \mathcal{R}}^{e_{\tau}}(\theta_{t})+\gamma\rho,\] (4) s.t. \[\mathbb{P}[\mathcal{R}^{e_{t}}(\theta_{t})\leq\gamma]\geq\alpha. \tag{5}\] Hence, constraint (5) now requires the risk of \(\theta_{t}\) to be lower than \(\gamma\) with probability at least \(\alpha\in(0,1)\). However, \(\mathcal{Q}\) is generally unknown, so the probability term in (5) is still intractable. Since the risk \(\mathcal{R}^{e}(\cdot)\) is a random variable, we can consider a certain probability distribution \(f_{\mathcal{R}}\) of risks over environments \(e\sim\mathcal{E}\) [3]. Here, \(f_{\mathcal{R}}\) can capture the sensitivity of \(\theta\) to different environments.
Then, we can rewrite the problem by using the cumulative distribution function (CDF) \(F_{\mathcal{R}}\) of \(f_{\mathcal{R}}\) as: \[\min_{\theta_{t}\in\Theta}\quad\sum_{\tau\in\mathcal{M}_{t}}\frac{1}{|\mathcal{M}_{t}|}\widehat{\mathcal{R}}^{e_{\tau}}(\theta_{t})+F_{\mathcal{R}}^{-1}(\alpha;\theta_{t})\rho, \tag{6}\] where \(F_{\mathcal{R}}^{-1}(\alpha;\theta_{t})=\inf\{\gamma:\mathbb{P}[\mathcal{R}^{e_{t}}(\theta_{t})\leq\gamma]\geq\alpha\}\). Now, to estimate \(F_{\mathcal{R}}\) through its empirical version \(\widehat{F}_{\mathcal{R}}\), we sample a batch of data \(\mathcal{B}_{\mathcal{R}}\) from \(\mathcal{M}_{t}\) and another batch \(\mathcal{B}\) from \(e_{t}\). Since \(F_{\mathcal{R}}\) is an unknown distribution, we can use kernel density estimation or Gaussian estimation [3] for \(F_{\mathcal{R}}\) using the sampled data. This approach is similar to minimizing empirical risks instead of statistical risks in conventional training settings [17]. For computational efficiency, we also sample a batch \(\mathcal{B}_{M}\) from \(\mathcal{M}_{t}\) to approximate the performance of \(\theta_{t}\) over the memory. We then obtain the following problem \[\min_{\theta_{t}\in\Theta}\quad\sum_{\tau\in\mathcal{B}_{M}}\frac{1}{|\mathcal{B}_{M}|}\widehat{\mathcal{R}}^{e_{\tau}}(\theta_{t})+F_{\mathcal{R}}^{-1}(\alpha;\theta_{t})\rho. \tag{7}\] The above problem (7) only uses data sampled from \(\mathcal{M}_{t}\) and \(e_{t}\), thereby mitigating the intractability due to the unknown distribution in (6). Hence, (7) can be solved using gradient-based methods after the distribution estimation. We summarize our method in Algorithm 1 with Gaussian estimation. Since we use data samples in \(\mathcal{M}_{t}\) to estimate problem (6), the size of \(\mathcal{M}_{t}\) naturally represents the richness of the estimate of \(F_{\mathcal{R}}\). As we have a larger memory size, we can save more environmental information and can achieve more robust generalization. However, a large \(|\mathcal{M}_{t}|\) also means that an agent has more information to memorize. Finding a well-performing model across all environments in \(\mathcal{M}_{t}\) is generally a challenging problem [18]. To capture this tradeoff between generalization and memorization, we next analyze the impact of the memory size \(|\mathcal{M}_{t}|\) on the memorization and generalization performance. ``` Input: Model \(\theta\), probability of generalization \(\alpha\), learning rate \(\eta\), memory \(\mathcal{M}_{t}\), CDF of the normal distribution \(\Phi(\cdot)\), batches \(\mathcal{B},\mathcal{B}_{\mathcal{R}},\mathcal{B}_{M}\), and balance coefficient \(\rho\). 1for\(t=0\) to \(T-1\)do 2 Sample a batch of data \(\mathcal{B}\) from \(e_{t}\); 3 Calculate the mean \(\mu_{t}\) and variance \(\sigma_{t}^{2}\) of the risks \(\widehat{\mathcal{R}}(\theta_{t})\) using the sampled datasets in \(\mathcal{B}\) and \(\mathcal{B}_{\mathcal{R}}\) ; 4 Compute the \(\alpha\)-quantile of the estimated Gaussian distribution \(L_{G}\leftarrow\mu_{t}+\sigma_{t}^{2}\Phi^{-1}(\alpha)\); 5 Calculate \(L_{M}\leftarrow\sum_{\tau\in\mathcal{B}_{M}}\frac{1}{|\mathcal{B}_{M}|}\widehat{\mathcal{R}}^{e_{\tau}}(\theta_{t})\) using the sampled data in \(\mathcal{B}_{M}\); 6 Update \(\theta_{t}\leftarrow\theta_{t}-\eta\nabla_{\theta_{t}}(\rho L_{G}+L_{M})\) ``` **Algorithm 1** Proposed Algorithm ## 3 Tradeoff between memorization and generalization We now study the impact of the memory size \(|\mathcal{M}_{t}|\) on the memorization.
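For concreteness, one update of Algorithm 1 can be sketched in code as follows. This is a PyTorch-flavored illustration of our own, assuming generic `model`, `optimizer`, and `loss_fn` objects (with `loss_fn` returning a scalar mean loss); it is not the authors' implementation and simply follows the pseudocode above step by step:

```python
import torch
from torch.distributions import Normal

def proposed_update(model, optimizer, loss_fn,
                    batch_env, batches_R, batch_M,
                    alpha=0.99999, rho=0.5):
    # Lines 2-3: per-batch empirical risks on the current environment (B)
    # and on batches sampled from the memory (B_R).
    risks = torch.stack(
        [loss_fn(model(x), y) for (x, y) in [batch_env] + list(batches_R)]
    )
    mu, var = risks.mean(), risks.var()
    # Line 4: alpha-quantile of the estimated Gaussian, L_G = mu + sigma^2 * Phi^{-1}(alpha).
    L_G = mu + var * Normal(0.0, 1.0).icdf(torch.tensor(alpha))
    # Line 5: empirical risk on the batch B_M sampled from the memory.
    x_m, y_m = batch_M
    L_M = loss_fn(model(x_m), y_m)
    # Line 6: gradient step on the combined objective rho * L_G + L_M.
    optimizer.zero_grad()
    (rho * L_G + L_M).backward()
    optimizer.step()
```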
Motivated by [11], we assume that a global solution exists for every environment \(\tau\in[1,\ldots,|\mathcal{M}_{t}|]\) at time \(t\) such that \(\theta_{t}^{*}=\arg\min_{\theta\in\Theta}\sum_{\tau=1}^{|\mathcal{M}_{t}|} \mathcal{R}^{e_{\tau}}(\theta)\). If such \(\theta_{t}^{*}\) does not exist, memorization would not be feasible. Then, we can have the following theorem. **Theorem 1**.: _For time \(t\), let \(\theta_{\mathcal{M}_{t}}^{*}\) be a global solution for all environments \(\tau\in[1,\ldots,|\mathcal{M}_{t}|]\) in \(\mathcal{M}_{t}\) and suppose loss function \(l(\cdot)\) to be \(\lambda\)-strongly convex and \(L\)-Lipschitz-continuous. Then, for the current model \(\theta_{t}\in\Theta\) and \(\epsilon>0\), we have_ \[\mathbb{P}\Bigg{[} \bigcap_{\tau=1}^{|\mathcal{M}_{t}|}\left\{\mathcal{R}^{e_{\tau} }(\theta_{t})-\mathcal{R}^{e_{\tau}}(\theta_{\mathcal{M}_{t}}^{*})\leq\epsilon \right\}\Bigg{]}\] \[\geq 1-\frac{4|\mathcal{M}_{t}|L^{2}}{\lambda|\mathcal{B}_{M}| \Big{(}\epsilon-\sqrt{2L^{3}||\theta_{t}-\hat{\theta}_{\mathcal{M}_{t}}||/ \lambda}\Big{)}}, \tag{8}\] _where \(\hat{\theta}_{\mathcal{M}_{t}}\) is the empirical solution for all environments \(\tau\in[1,\ldots,|\mathcal{M}_{t}|]\)._ Proof.: We first state one standard lemma used in the proof as below **Lemma 1**.: _(From [17, Theorem 5]) For \(\theta\in\Theta\), a certain environment \(\tau\), and its optimal solution \(\theta^{*}\), with probability at least \(1-\delta\), we have_ \[\underbrace{\mathbb{P}\bigg{[}\mathcal{R}^{e_{\tau}}(\theta)- \mathcal{R}^{e_{\tau}}(\theta^{*})\leq\sqrt{\frac{2L^{2}}{\lambda}(\hat{ \mathcal{R}}^{e_{\tau}}(\theta)-\hat{\mathcal{R}}^{e_{\tau}}(\hat{\theta}))}+ \frac{4L^{2}}{\delta\lambda|\mathcal{B}_{M}|}\bigg{]}}_{\mathcal{A}}\geq 1-\delta. \tag{9}\] From the \(L\)-Lipschtiz assumption, we have the following inequality \[\mathbb{P}\Bigg{[}\mathcal{R}^{e_{\tau}}(\theta)-\mathcal{R}^{e_{\tau}}( \theta^{*})\leq\sqrt{\frac{2L^{2}}{\lambda}\sqrt{L||\theta-\hat{\theta}||}+ \frac{4L^{2}}{\delta\lambda|\mathcal{B}_{M}|}\bigg{]}}\geq A. \tag{10}\] For time \(t\), since \(\theta_{\mathcal{M}_{t}}^{*}\) is a global solution for all \(\tau\in[1,\ldots,|\mathcal{M}_{t}|]\), the following holds \[\mathbb{P}\Bigg{[}\bigcap_{\tau=1}^{|\mathcal{M}_{t}|}\bigg{\{} \mathcal{R}^{e_{\tau}}(\theta_{t})-\mathcal{R}^{e_{\tau}}(\theta_{\mathcal{M} _{t}}^{*})\leq\sqrt{\frac{2L^{2}}{\lambda}\sqrt{L||\theta_{t}-\hat{\theta}_{ \mathcal{M}_{t}}||}}+\frac{4L^{2}}{\delta\lambda|\mathcal{B}_{M}|}\bigg{]} \Bigg{]} \tag{11}\] \[\geq\sum_{\tau=1}^{|\mathcal{M}_{t}|}\bigg{[}\mathcal{R}^{e_{\tau }}(\theta_{t})-\mathcal{R}^{e_{\tau}}(\theta_{\mathcal{M}_{t}}^{*})\leq \sqrt{\frac{2L^{2}}{\lambda}\sqrt{L||\theta_{t}-\hat{\theta}_{\mathcal{M}_{t}}| }}\] \[\quad+\frac{4L^{2}}{\delta\lambda|\mathcal{B}_{M}|}\bigg{]}-(| \mathcal{M}_{t}|-1)\] \[\geq|\mathcal{M}_{t}|(1-\delta)-(|\mathcal{M}_{t}|-1)=1-| \mathcal{M}_{t}|\delta, \tag{12}\] where the first inequality results from \(\mathbb{P}[\bigcap_{t=1}^{n}A_{i}]\geq\sum_{i=1}^{n}\mathbb{P}[A_{i}]\)\(-(n-1)\) and the second inequality is from the Lemma 1. We set the right-hand side of (11) to \(\epsilon\) and the expression of \(\delta\) as \[\delta=\frac{4L^{2}}{\lambda|\mathcal{B}_{M}|\big{(}\epsilon- \sqrt{2L^{3}||\theta_{t}-\hat{\theta}_{\mathcal{M}_{t}}||/\lambda}\big{)}}. \tag{13}\] By plugging the derived \(\delta\) into (12), we complete the proof. 
From Theorem 1, we observe that as the memory size \(|\mathcal{M}_{t}|\) increases, the probability that the difference between risks of \(\theta_{t}\) and \(\theta_{\mathcal{M}_{t}}\) is smaller than \(\epsilon\) decreases. As we have more past experience in \(\mathcal{M}_{t}\), it becomes more difficult to achieve a global optimum for all experienced environments. For instance, if we have only one environment in \(\mathcal{M}_{t}\), finding \(\theta_{\mathcal{M}_{t}}^{*}\) will be trivial. However, as we have multiple environments in \(\mathcal{M}_{t}\), \(\theta_{t}\) will more likely deviate from \(\theta_{\mathcal{M}_{t}}^{*}\). Next, we present the impact of the memory size \(|\mathcal{M}_{t}|\) on the generalization performance. **Proposition 1**.: _Let \(\hat{\mathcal{F}}_{t}\) denote the space of possible estimated risk distributions over \(|\mathcal{M}_{t}|\) environments and \(\mathcal{N}_{\epsilon}(\hat{\mathcal{F}}_{t})\) be the \(\epsilon\) covering number of \(\hat{\mathcal{F}}_{t}\) for \(\epsilon>0\) at time \(t\). Then, for \(\alpha\in(0,1)\), we have the following_ \[\mathbb{P}\left[\sup_{\theta_{t}\in\Theta}F_{\mathcal{R}}^{-1}( \alpha-\text{Bias}(\theta_{t},\hat{\mathcal{R}}))-F_{\mathcal{R}}^{-1}(\alpha)>\epsilon\right]\] \[\leq\mathcal{O}(\mathcal{N}_{\epsilon/16}(\hat{\mathcal{F}}_{t}) \exp(-\frac{|\mathcal{M}_{t}|\epsilon^{2}}{16})), \tag{14}\] _where \(\text{Bias}(\theta_{t}\text{,}\hat{\mathcal{R}}))\!=\!\sup_{\theta_{t}\in \Theta,\gamma\in\mathcal{R}}\!F_{\mathcal{R}}(\theta_{t})-\mathbb{E}_{\epsilon_ {1},\ldots,\epsilon_{|\mathcal{M}_{t}|}}F_{\mathcal{R}}(\theta_{t})\)._ Proof.: The complete proof is omitted due to the space limitation. The proposition can be proven by leveraging [3, Theorem 1] and using the memory \(\mathcal{M}_{t}\) instead of the already given data samples. We can see that both the bias term and the upper bound are a decreasing function of the memory size \(|\mathcal{M}_{t}|\). As we have more information about the environments in \(\mathcal{M}_{t}\), the model \(\theta_{t}\) becomes more robust to the dynamic changes of the environments, thereby improving generalization. From Theorem 1 and Proposition 1, we can see that a tradeoff exists between memorization and generalization in terms of \(|\mathcal{M}_{t}|\). As \(|\mathcal{M}_{t}|\) increases, we have more knowledge about the entire environment \(\mathcal{E}\), and \(\theta_{t}\) can be more robust to changes in \(e_{t}\). However, maintaining the knowledge of all the stored environments in \(\mathcal{M}_{t}\) becomes more difficult. Hence, the deviation of \(\theta_{t}\) from \(\theta_{\mathcal{M}_{t}}^{*}\) will increase. Essentially, this observation is similar to the performance of FedAvg algorithm [19] in federated learning (FL). As the number of participating clients increases, the global model achieves better generalization. However, it does not perform very well on each dataset of clients. The global model usually moves toward just the average of each client's optima instead of the true optimum [20]. ## 4 Experiments We now conduct experiments to evaluate our proposed algorithm and to validate the analysis. We use the rotated MNIST datasets [1], where digits are rotated with a fixed degree. Here, the rotation degrees represent an environmental change inducing distributional shifts to the inputs. The training datasets are rotated by \(0^{\circ}\) to \(150^{\circ}\) by interval of \(25^{\circ}\). 
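A minimal sketch of how such rotated environments can be constructed with torchvision is given below; this is our own illustration of the setup (the paper does not publish its data pipeline), with the hold-out degree chosen only as an example:

```python
from torchvision import datasets, transforms

ROTATIONS = [0, 25, 50, 75, 100, 125, 150]  # seven environments

def make_environment(degree, train=True):
    tf = transforms.Compose([
        # RandomRotation with equal bounds applies a fixed rotation by `degree`.
        transforms.RandomRotation((degree, degree)),
        transforms.ToTensor(),
    ])
    return datasets.MNIST(root="./data", train=train, download=True, transform=tf)

# Example: hold out the 150-degree environment as the unseen target.
train_envs = {d: make_environment(d) for d in ROTATIONS if d != 150}
target_env = make_environment(150, train=False)
```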
Hence, the agent classifies all MNIST digits for seven different rotations, i.e., environments. To measure the generalization performance, we left out one target degree from the training datasets. Unless specified otherwise, we use a network with two fully-connected hidden layers of 100 ReLU units each, a stochastic gradient descent (SGD) optimizer, and a learning rate of 0.1. For the memory, we adopted the replay buffer of [4] with reservoir sampling and \(|\mathcal{M}|=10000\). We also set \(|\mathcal{B}|=512\) and \(|\mathcal{B}_{M}|=64\). To estimate \(F_{\mathcal{R}}\), we used Gaussian estimation by sampling three batches of size \(|\mathcal{B}_{\mathcal{R}}|=64\) from the memory with \(\alpha=0.99999\) and \(\rho=0.5\). We average our results over ten random seeds. In Table 1, we compare our algorithm against five memory-based CL methods (ER [4], DER [5], DER++ [5], GSS [6], and HAL [7]) and one regularization-based CL method (EWC-ON [9]). We present results only on four rotations due to space limitations. In Table 1, 'Avg' means the average accuracy on the test datasets across all rotations. 'Target' measures the generalization performance on unseen datasets. We can see that our algorithm outperforms the baselines in terms of both the generalization performance and the average accuracy across all rotations. Hence, our algorithm achieves better generalization while not forgetting the knowledge of past environments. We can also observe that our algorithm achieves robust generalization to challenging rotations (\(0^{\circ}\) and \(150^{\circ}\)). In Table 2, we show the impact of \(\alpha\) on the performance of our algorithm. We left out the \(150^{\circ}\) datasets to measure the generalization. We can observe that as \(\alpha\) decreases, the trained model does not generalize well. This is because \(\alpha\) represents a probabilistic guarantee of the robustness to environments, as shown in (2). However, too large an \(\alpha\) can lead to overly conservative models that cannot adapt to dynamic changes. In Fig. 1, we present the impact of the memory size \(|\mathcal{M}|\) on the memorization and generalization. We left out the \(150^{\circ}\) datasets to measure the generalization. For the memorization, we measured the accuracy of \(\theta_{t}\) on all data samples in \(\mathcal{M}_{t}\) at time step \(t\), as done in [14]. As the memory size \(|\mathcal{M}|\) increases, a trained model generalizes better while struggling with memorization. This observation corroborates our analysis in Sec. 3 as well as the empirical findings in [14]. However, for \(|\mathcal{M}|=20000\), we can see that the generalization performance does not improve. This is because we solved problem (6) through approximation by sampling batches from the memory \(\mathcal{M}_{t}\). Hence, if we do not sample more batches \(\mathcal{B}_{\mathcal{R}}\) with a large \(|\mathcal{M}|\), we cannot fully leverage the various environmental information in \(\mathcal{M}_{t}\). We can observe that the generalization improves by sampling more \(\mathcal{B}_{\mathcal{R}}\) batches for the distribution estimation. ## 5 Conclusion In this paper, we have developed a novel CL framework that provides robust generalization to dynamic environments while maintaining past experiences. We have utilized a memory to memorize past knowledge and achieve domain generalization with a high probability guarantee. We have also presented new theoretical insights into the impact of the memory size on the memorization and generalization.
The experimental results show that our framework can achieve robust generalization to unseen target environments during training while retaining past experiences. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{\(0^{\circ}\)} & \multicolumn{2}{c}{\(50^{\circ}\)} & \multicolumn{2}{c}{\(100^{\circ}\)} & \multicolumn{2}{c}{\(150^{\circ}\)} \\ & Avg & Target & Avg & Target & Avg & Target & Avg & Target \\ \hline **Ours** & **86.79\(\pm\)0.7** & **60.54\(\pm\)0.7** & **89.07\(\pm\)0.2** & **81.27\(\pm\)0.5** & **89.53\(\pm\)0.4** & **81.46\(\pm\)0.7** & **87.08\(\pm\)0.2** & **60.12\(\pm\)1.3** \\ ER [4] & 85.68\(\pm\)0.5 & 58.43\(\pm\)0.5 & 88.72\(\pm\)0.2 & 80.92\(\pm\)0.6 & 88.63\(\pm\)0.3 & 80.58\(\pm\)0.7 & 84.82\(\pm\)0.4 & 54.11\(\pm\)1.0 \\ DER [5] & 81.58\(\pm\)0.4 & 52.24\(\pm\)0.7 & 84.02\(\pm\)0.5 & 75.62\(\pm\)0.5 & 83.74\(\pm\)0.6 & 75.36\(\pm\)1.1 & 80.79\(\pm\)0.7 & 49.6\(\pm\)1.3 \\ DER++ [5] & 83.71\(\pm\)0.8 & 55.50\(\pm\)1.1 & 86.34\(\pm\)0.8 & 78.04\(\pm\)0.9 & 86.38\(\pm\)0.6 & 78.43\(\pm\)0.5 & 83.24\(\pm\)0.8 & 52.70\(\pm\)1.0 \\ EWC\_ON [9] & 83.10\(\pm\)0.4 & 54.42\(\pm\)0.8 & 85.63\(\pm\)0.4 & 77.23\(\pm\)0.1 & 85.52\(\pm\)0.6 & 77.01\(\pm\)1.0 & 82.66\(\pm\)0.2 & 51.38\(\pm\)1.4 \\ GSS [6] & 83.23\(\pm\)0.4 & 54.72\(\pm\)0.8 & 85.35\(\pm\)0.6 & 77.10\(\pm\)0.5 & 85.55\(\pm\)0.5 & 76.97\(\pm\)1.0 & 82.60\(\pm\)0.3 & 51.05\(\pm\)1.2 \\ HAL [7] & 83.15\(\pm\)0.3 & 54.52\(\pm\)1.0 & 85.37\(\pm\)0.6 & 77.45\(\pm\)0.6 & 85.75\(\pm\)0.3 & 77.25\(\pm\)0.6 & 82.69\(\pm\)0.3 & 52.52\(\pm\)0.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy on the held-out test datasets ('Target') and average accuracy across all rotations ('Avg'); each column group corresponds to the rotation left out during training. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{\(\alpha\)} & \multicolumn{2}{c}{\(150^{\circ}\)} \\ & Avg & Target \\ \hline 0.99999 & 86.44\(\pm\)0.6 & 59.20\(\pm\)1.3 \\ **0.9999** & **87.09\(\pm\)0.2** & **60.12\(\pm\)1.3** \\ 0.99 & 86.61\(\pm\)0.4 & 58.78\(\pm\)1.2 \\ 0.9 & 85.61\(\pm\)0.3 & 57.76\(\pm\)0.7 \\ 0.5 & 83.54\(\pm\)0.6 & 54.17\(\pm\)0.8 \\ 0.3 & 81.55\(\pm\)0.9 & 51.52\(\pm\)1.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Impact of \(\alpha\) on the performance. Figure 1: Impact of the memory size on the memorization and generalization.
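For completeness, the two ingredients used in the experiments, a reservoir-sampled memory and a Gaussian estimate of the \(\alpha\)-quantile of the risk distribution over stored environments, can be sketched in a few lines. The snippet below is only an illustration: the class and function names are hypothetical, the dummy risk function stands in for the model's loss on a sampled batch, and the full objective in (6) (including the role of \(\rho\)) is not reproduced.

```python
import math
import random
from statistics import NormalDist

class ReservoirMemory:
    """Fixed-capacity memory updated by reservoir sampling (Algorithm R)."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0

    def add(self, example) -> None:
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, batch_size: int):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def estimated_alpha_risk(memory, risk_fn, batch_size=64, n_batches=3, alpha=0.99999):
    """Estimate the alpha-quantile of the risk distribution by fitting a Gaussian
    to per-batch risks computed on batches drawn from the memory."""
    risks = [risk_fn(memory.sample(batch_size)) for _ in range(n_batches)]
    mean = sum(risks) / len(risks)
    var = sum((r - mean) ** 2 for r in risks) / max(len(risks) - 1, 1)
    return NormalDist(mu=mean, sigma=math.sqrt(var) + 1e-12).inv_cdf(alpha)

if __name__ == "__main__":
    random.seed(0)
    memory = ReservoirMemory(capacity=10_000)
    for x in range(20_000):            # stream of examples; (input, label) pairs in practice
        memory.add(x)
    dummy_risk = lambda batch: sum(batch) / len(batch) / 20_000.0  # placeholder for a model loss
    print(f"estimated alpha-risk: {estimated_alpha_risk(memory, dummy_risk):.4f}")
```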
2309.07353
Anytime-valid inference in N-of-1 trials
App-based N-of-1 trials offer a scalable experimental design for assessing the effects of health interventions at an individual level. Their practical success depends on the strong motivation of participants, which, in turn, translates into high adherence and reduced loss to follow-up. One way to maintain participant engagement is by sharing their interim results. Continuously testing hypotheses during a trial, known as "peeking", can also lead to shorter, lower-risk trials by detecting strong effects early. Nevertheless, traditionally, results are only presented upon the trial's conclusion. In this work, we introduce a potential outcomes framework that permits interim peeking of the results and enables statistically valid inferences to be drawn at any point during N-of-1 trials. Our work builds on the growing literature on valid confidence sequences, which enables anytime-valid inference with uniform type-1 error guarantees over time. We propose several causal estimands for treatment effects applicable in an N-of-1 trial and demonstrate, through empirical evaluation, that the proposed approach results in valid confidence sequences over time. We anticipate that incorporating anytime-valid inference into clinical trials can significantly enhance trial participation and empower participants.
Ivana Malenica, Yongyi Guo, Kyra Gan, Stefan Konigorski
2023-09-13T23:34:15Z
http://arxiv.org/abs/2309.07353v1
# Anytime-valid inference in N-of-1 trials ###### Abstract App-based N-of-1 trials offer a scalable experimental design for assessing the effects of health interventions at an individual level. Their practical success depends on the strong motivation of participants, which, in turn, translates into high adherence and reduced loss to follow-up. One way to maintain participant engagement is by sharing their interim results. Continuously testing hypotheses during a trial, known as "peeking", can also lead to shorter, lower-risk trials by detecting strong effects early. Nevertheless, traditionally, results are only presented upon the trial's conclusion. In this work, we introduce a potential outcomes framework that permits interim peeking of the results and enables statistically valid inferences to be drawn at any point during N-of-1 trials. Our work builds on the growing literature on _valid confidence sequences_, which enables anytime-valid inference with uniform type-1 error guarantees over time. We propose several causal estimands for treatment effects applicable in an N-of-1 trial and demonstrate, through empirical evaluation, that the proposed approach results in valid confidence sequences over time. We anticipate that incorporating anytime-valid inference into clinical trials can significantly enhance trial participation and empower participants. N-of-1 trials, anytime-valid inference, design-based, confidence sequence, causal inference, personalized medicine ## 1 Introduction The statistical inference of individual causal effects of health interventions holds great importance for many clinical and biomedical applications. In particular, understanding which treatment and dosage are most effective for a particular patient lies at the heart of personalized medicine. To this aim, different methodologies have been proposed. One popular approach involves the collection and analysis of extensive population-level datasets with the goal of estimating effects at the individual level. With suitable covariates and a clear understanding of disease mechanisms, it becomes feasible to potentially acquire individual-level effects (Shalit et al., 2017; Bica et al., 2020; Smith et al., 2020; Verstraete et al., 2021; Diemert et al., 2021). Nevertheless, the practicality of this approach is often limited to specific applications. For instance, in cases like cancer, personalized signatures can be derived from genetic mutations. However, more often, causal effects can only be identified within specific subgroups (van Kruijsdijk et al., 2014; Zhang et al., 2017; van Amsterdam et al., 2022; Masouel et al., 2022). As a second approach, stemming from recent biotechnological advancements, personalized treatments have been developed directly for selected rare target diseases in personalized drug development (Kim et al., 2019; Seydel, 2023). However, such approaches are still restricted to a selected class of drug targets and limited by resources and costs. As a third approach, experimental studies can be designed to directly evaluate and compare the effectiveness of one or multiple treatments in a given person. These so-called N-of-1 trials have been established as the gold standard for inferring individual-level effects, and different guidelines have been proposed for their standardized application (Nikles and Mitchell, 2015; Vohra et al., 2015; Porcino et al., 2020). 
More formally, N-of-1 trials are multi-crossover randomized controlled trials (RCTs) in one person, in which one or more treatments are administered over time in a predefined, potentially randomized, sequence. Also, if the same N-of-1 trial is performed in multiple persons in a so-called series of N-of-1 trials, the trials can be jointly analyzed to yield efficient population-level treatment effect estimates (Zucker et al., 2010). In order to achieve sufficient statistical power for inference, N-of-1 trials require frequent longitudinal measurements, which has hindered their application in practice. Only recently, digital tools have been developed for setting up and executing N-of-1 trials. This, in turn, enables scalability across various studies, patients, and providers (Taylor et al., 2018; Daskalova et al., 2020; Zenner et al., 2022; Konigorski et al., 2022). Digital app-based N-of-1 trials provide a straightforward and expandable means for evaluating patient outcomes, whether actively reported by patients or passively collected from sensor data. However, their success still relies on the retention and high adherence of the patients to the trial. To keep the trial participants engaged, apart from developing user-friendly interfaces for the apps, it is also important to provide frequent feedback. A classical study design involves collecting all trial data, conducting analysis, and subsequently reporting the results back to the participants. However, if patients are required to provide daily outcomes in a trial that runs for weeks or even months, without any intermediate feedback, patients might lose interest. On the other hand, "peeking" at the intermediate results (performing a hypothesis test before the end of the trial) introduces statistical biases. Thus, a framework for valid statistical inference is required at all time points of an N-of-1 trial in order to enable intermediate analysis. **Contributions.** In this work, we provide a statistical framework that allows anytime-valid inference in N-of-1 trials with intermediate analysis of the results. Our contributions are to provide a (i) potential outcomes framework for N-of-1 trials, (ii) formal definition of several causal estimands of interest in N-of-1 trials, and (iii) construction of confidence sequences that allow anytime-valid inference. Finally, we validate our approach empirically in simulation studies and compare it to existing state-of-the-art approaches. ### Related Work The field of N-of-1 trials originated in the medical domain, and continues to be largely driven by its clinical applications. As such, the existing literature has almost exclusively focused on an applied presentation, and omitted a more formal statistical definition of the study setup and estimands of interest. There exist some recent contributions: Yang et al. (2021) give a formalization of different potential study designs and their consequences for design parameters. Daza (2018, 2019) and Daza and Schneider (2022) present a counterfactual framework for single case studies which includes both observational and experimental N-of-1 settings. They define individual-level target estimands and estimators in their work, which can include time trends and carryover effects. In contrast to these previous approaches, we consider design-based estimands based on the immediate causal effects conditional on the accrued history, and focus on constructing anytime-valid confidence sequences for the proposed target parameters.
Other related work has been published in the traditional RCT literature that is relevant to our aim of enabling anytime-valid inference. In RCTs, interim analyses are often performed to evaluate safety and side effects, as well as to assess intermediate treatment effects that might warrant early termination of the trial based on predefined criteria for treatment inferiority or superiority. Common approaches for such interim analyses include the O'Brien-Fleming (O'Brien and Fleming, 1979), Haybittle-Peto (Haybittle, 1971; Peto et al., 1976) and Pocock (Pocock, 1977) methods, which aim to control the overall type I error across all interim tests by adjusting the respective critical test statistic values. While the Pocock method chooses the same alpha level at all interim tests, the O'Brien-Fleming and Haybittle-Peto methods require stronger evidence at earlier interim points. In the CONSORT extension with guidelines for harm-related stopping rules, the O'Brien-Fleming method is recommended (Ioannidis et al., 2004). Demets and Lan (1994) and others have generalized these ideas through an "alpha-spending function" which generates critical values such that the sum of probabilities of exceeding those values across the interim tests equals the type I error rate alpha. We note that all these approaches have been designed for classical population-level RCTs, and have not been evaluated in N-of-1 trials. As the only exception, Selukar (2021) considers a very specific situation in which a series of N-of-1 trials is sequentially monitored, interim analyses are few and pre-defined, and only summary statistics are available for each trial. Third, relevant work has originated from the causal inference literature on sequential designs for single time series (Bojinov and Shephard, 2019; Malenica et al., 2021; Ham et al., 2022). Malenica et al. (2021) define conditional estimand classes of interest and provide inference on them in an adaptive setting of a single time series. Bojinov and Shephard (2019) provide a potential outcome framework for single time series and define estimands of interest from the design-based perspective. Finally, we adapt and build on work by Ham et al. (2022), who provide design-based confidence sequences for very long time series in a setting where treatment is randomized at each time point. In this work we focus specifically on the setup of N-of-1 trials as crossover experiments in treatment blocks, with N-of-1 trial-specific estimands. ## 2 Statistical Formulation ### Notation and Observed Data We consider the trajectory of a single individual in an N-of-1 trial with \(K\) treatment periods (also denoted as "periods" or "blocks"). Suppose for each period \(k\in[K]:=\{1,\ldots,K\}\), there are \(T_{k}\) time points. We write \((k,t)\) to indicate the \(t\)-th time point of the treatment period \(k\). We emphasize that, despite our focus on a single N-of-1 trial, our results generalize to a series of N-of-1 trials with multiple patients. Let \(O_{k,t}:=(A_{k,t},Y_{k,t},W_{k,t})\) denote the data for a single individual at time point \((k,t)\), including the treatment, outcome, and covariates. Specifically, at each time point \((k,t)\), one assigns a binary treatment \(A_{k,t}\in\mathcal{A}:=\{0,1\}\) to a patient, where \(A_{k,t}=1\) denotes the treatment and \(A_{k,t}=0\) the control (or alternative treatment). In N-of-1 trials, the same treatment is assigned throughout a period (e.g., for block \(k\), control is given at all time points \((1,\ldots,T_{k})\)).
In an extreme case, one might randomize at each follow-up time (corresponding to the chronological observation), so \(T_{k}=1\) for all \(k\in[K]\). Notice that the number of time points within a block, \(T_{k}\), can vary by \(k\), allowing for different lengths of each treatment period. Once the treatment is assigned for block \(k\), the post-treatment outcome of interest \(Y_{k,t}\in\mathcal{Y}\) and possibly a vector of other time-varying covariates \(W_{k,t}\in\mathcal{W}\) are collected. Therefore, at each time point \(t\), we assign treatment \(A_{k,t}\) according to the period \(k\), then collect \(Y_{k,t}\), followed by \(W_{k,t}\). Let \(O_{k,1:t}=(O_{k,1},\ldots,O_{k,t})\) denote the data collected in period \(k\) up to time \(t\). As each \(k\) contains \(T_{k}\) data points, we define \(O_{k}:=O_{k,1:T_{k}}=(O_{k,1},\ldots,O_{k,T_{k}})\) as the full data collected in the \(k\)-th treatment period. Similarly, we write \(A_{k}=(A_{k,1},\ldots,A_{k,T_{k}})\) for the full sequence and \(A_{k,1:t}=(A_{k,1},\ldots,A_{k,t})\) for the cropped sequence of treatments in block \(k\). To clarify, we illustrate the proposed notation with a simple example where \(K=2\) and \(T_{1}=T_{2}=2\). Without putting any assumptions on the design of treatment blocks (and therefore treatment assignment), all possible treatment sequences for a single individual are as follows: \(\{(1,1),(0,0)\}\), \(\{(0,0),(1,1)\}\), \(\{(1,1),(1,1)\}\) and \(\{(0,0),(0,0)\}\). Symbolically, we write the resulting treatment sequence as \(\{(A_{1,1},A_{1,2}),(A_{2,1},A_{2,2})\}=\{A_{1,1:2},A_{2,1:2}\}\). It follows that the data collected for a whole block \(k\) is then \(O_{k}=(A_{k},Y_{k},W_{k})\), where \(Y_{k}=(Y_{k,1},\ldots,Y_{k,T_{k}})\) and \(W_{k}=(W_{k,1},\ldots,W_{k,T_{k}})\). When no time-varying covariates are gathered beyond the outcome of interest, we represent this as \(W_{k}=\emptyset\). We emphasize that \(W_{k}\) (and the entire history, or some function of it, for that matter) can be used to inform treatment allocation in the subsequent block, \(k+1\), allowing for adaptive treatment assignment. Finally, let \(O_{0}\) be the baseline covariates obtained before the start of the trial. Without loss of generality, we assume \(T_{k}=T\) for all blocks \(k\in[K]\) (\(T>1\)), which aligns with the design of a canonical N-of-1 trial. We, however, emphasize that all of our results generalize to the setting where the \(T_{k}\) values vary in each treatment period. Below, we provide some essential definitions regarding time and block-specific variable collections used throughout the manuscript. For a single individual, let \(\bar{O}_{k,t}=(O_{0},O_{1,1},\ldots,O_{k,t})\) denote the full history up to time \(t\) of block \(k\) (including baseline covariates), while \(\underline{O}_{k,t}=(O_{k,t},\ldots,O_{K,T})\) denotes all future variables from index \((k,t)\) to \((K,T)\). Then, the full _trajectory_ (or a _time-series_) is represented by \(O^{KT}:=\bar{O}_{K,T}=\{O_{0},O_{1,1},\ldots,O_{K,T}\}\). Similarly, we define all past and future collections of \(A\), \(Y\), and \(W\). For example, we write \(\bar{A}_{k,t}=(A_{1,1},\ldots,A_{k,t})\) for the sequence of treatments until the \(t\)-th time point of period \(k\). Lastly, let \(H^{A}_{k,t}:=\bar{O}_{k,t-1}\) be the full history of all variables until \(A_{k,t}\). It then follows that \(H^{Y}_{k,t}:=(A_{k,t},\bar{O}_{k,t-1})\) and \(H^{W}_{k,t}:=(A_{k,t},Y_{k,t},\bar{O}_{k,t-1})\) are the full variable histories until \(Y_{k,t}\) and \(W_{k,t}\).
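Although the notation is index-heavy, the underlying data structure is simple: apart from the baseline covariates \(O_{0}\), a trajectory is just \(K\) blocks of \(T\) observations, each carrying a treatment, an outcome, and optional covariates. The following schematic is only an illustration of this layout (hypothetical names, not code used in the study apps); it shows one way to store a trajectory and to slice out a history \(\bar{O}_{k,t}\).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Obs:
    A: int                      # treatment a_{k,t}, constant within a block
    Y: float                    # outcome y_{k,t}
    W: Optional[dict] = None    # optional time-varying covariates w_{k,t}

# a trajectory is K blocks of T observations each (baseline covariates O_0 kept separately)
trajectory: List[List[Obs]] = [
    [Obs(A=1, Y=5.2), Obs(A=1, Y=4.8)],    # block k = 1 under treatment
    [Obs(A=0, Y=0.3), Obs(A=0, Y=-0.1)],   # block k = 2 under control
]

def past(trajectory: List[List[Obs]], k: int, t: int) -> List[Obs]:
    """All observations up to and including time t of block k, i.e. the history bar{O}_{k,t}."""
    earlier = [obs for block in trajectory[:k - 1] for obs in block]
    return earlier + trajectory[k - 1][:t]

print(len(past(trajectory, k=2, t=1)))   # 3 observations enter bar{O}_{2,1} (besides O_0)
```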
### Likelihood of the Trajectory We let \(O^{KT}\sim P_{0}\), where \(P_{0}\) denotes the true probability distribution of \(O^{KT}\). Throughout the remainder of the text, we use capital letters to indicate random variables, and lowercase for their realizations. We use the naught subscript to denote _true_ probability distributions or components thereof. Let \(\mathcal{M}\) denote the _statistical model_ for the probability distribution of the data, which is nonparametric, beyond possible knowledge of the treatment mechanism (i.e., known randomization probabilities). We note that the true probability distribution of the data is an element of \(\mathcal{M}\), and denote \(P\) as any probability distribution such that \(P\in\mathcal{M}\). Suppose that \(P_{0}\) admits a density \(p_{0}\) w.r.t. a dominating measure \(\mu\) over \(\mathcal{O}\) which can be written as the product measure \(\mu=\times_{k=1,t=1}^{k=K,t=T}(\mu_{A}\times\mu_{Y}\times\mu_{W})\), with \(\mu_{A}\), \(\mu_{Y}\), and \(\mu_{W}\) measures over \(\mathcal{A}\), \(\mathcal{Y}\), and \(\mathcal{W}\). The likelihood of \(o^{KT}\) can be factorized according to the time-ordering as \[p_{0}(o^{KT})=p_{0,O_{0}}(o_{0})\prod_{k=1}^{K}\prod_{t=1}^{T}p_ {0,A}(a_{k,t}\mid h_{k,t}^{A}) \tag{1}\] \[p_{0,Y}(y_{k,t}\mid h_{k,t}^{Y})p_{0,W}(w_{k,t}\mid h_{k,t}^{W}),\] where \(a_{k,t}\mapsto p_{0,A}(a_{k,t}\mid h_{k,t}^{A})\), \(y_{k,t}\mapsto p_{0,Y}(y_{k,t}\mid h_{k,t}^{Y})\), and \(w_{k,t}\mapsto p_{0,W}(w_{k,t}\mid h_{k,t}^{W})\) are conditional densities w.r.t. dominating measures \(\mu_{A}\), \(\mu_{Y}\), \(\mu_{W}\). ## 3 Causal Effects for N-of-1 Trials The main design components of an N-of-1 trial include the (1) number of blocks \(K\), (2) length of each block, \(T\), and (3) choice of treatment allocation for treatment periods (pre-specified or randomized, type of randomization) (Yang et al., 2021). If the treatment sequence is pre-specified before a trial, then an individual is assigned a specific treatment sequence deterministically (e.g., control-treatment-control-treatment). This is useful when one is interested in the effect of a specific treatment sequence, or wishes to avoid randomly generating unwanted treatment sequences (e.g., giving the same treatment across consecutive periods). Alternatively, one might generate treatment sequences by randomizing treatment allocation across blocks. In this work, we focus specifically on randomized treatment sequences. There are numerous randomization schemes for designing an N-of-1 trial (Yang et al., 2021). In this work, we rely on (1) _pairwise randomization_, where the order of two different treatments in a consecutive pair of treatment periods is randomized; (2) _restricted randomization_, where treatment is randomly assigned with the restriction that the number of treatment and control periods is approximately the same (but treatment probability is never zero) and (3) _unrestricted randomization_, where treatment is randomly assigned at each period. All of the listed schemes randomize on the block-level, and assign the same treatment at all time points within a block. ### Time-Series Potential Outcomes We define \(\bar{a}_{k}=(a_{1},\ldots,a_{k})\) as the _treatment path_ until period \(k\), where we remind that \(a_{k}=a_{k,1:T}\). In a point treatment setting, the treatment path is of length 1, and each study participant has \(2^{1}\) potential outcomes. 
In an N-of-1 trial, however, we follow a single individual over time and administer \(K\) different treatments, each of length \(T\) (or more generally, \(T_{k}\)). For a binary treatment and unrestricted randomization, we then have \(2^{K}\) different treatment paths that could have been observed. In Table 1, we include the total number of possible treatment sequences for each considered randomization scheme, at both odd and even number of treatment periods. We define \(Y_{k,t}(\bar{a}_{k,t})\) as the potential outcome at time point \((k,t)\), which may depend on the full history of assigned treatments up until \((k,t)\). Note that we don't make assumptions on carryover or other time-dependent effects. Consequently, we denote \(\bar{Y}_{k,t}(\bar{a}_{k,t})\) as the collection of potential outcomes up until \((k,t)\): \(\bar{Y}_{k,t}(\bar{a}_{k,t})=(Y_{1,1}(\bar{a}_{1,1}),...,Y_{1,T}(\bar{a}_{1,T} ),...,Y_{k,1}(\bar{a}_{k,1}),...,Y_{k,t}(\bar{a}_{k,t}))\). Further, let \(Y_{k}(\bar{a}_{k})\) denote a summary potential outcome of treatment period \(k\). In particular, \(Y_{k}(\bar{a}_{k})\) depends on the potential outcomes for all time points \(t\) within block \(k\), i.e., \(Y_{k}(\bar{a}_{k}):=f(Y_{k,1}(\bar{a}_{k,1}),\ldots,Y_{k,T}(\bar{a}_{k,T}))\), where \(f\) is any function that takes as input the potential outcomes in block \(k\). Finally, let \(\bar{Y}_{k}(\bar{a}_{k})=(Y_{1}(\bar{a}_{1}),\ldots,Y_{k}(\bar{a}_{k}))\) denote \begin{table} \begin{tabular}{l l l} \hline \hline **Randomization** & **Odd \(K\)** & **Even \(K\)** \\ \hline Pairwise & \(2^{\frac{K+1}{2}}\) & \(2^{\frac{K}{2}}\) \\ Restricted & \(\sim 2\binom{K}{(K-1)/2}\) & \(\sim\binom{K}{K/2}\) \\ Unrestricted & \(2^{K}\) & \(2^{K}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Number of possible treatment paths generated under different randomization schemes. the counterfactual outcomes that would have been observed over time under treatment path \(\bar{A}_{k}=\bar{a}_{k}\). When \(f\) is the average function across the respective block, we have that \(\tilde{Y}_{k}(\bar{a}_{k})=(Y_{1}(\bar{a}_{1}),\dots,Y_{k}(\bar{a}_{k}))=(1/T \sum_{t=1}^{T}Y_{1,t}(\bar{a}_{1,t}),\dots,1/T\sum_{t=1}^{T}Y_{K,t}(\bar{a}_{K,t }))\). Note that we assume that future potential outcomes do not cause past treatments (Granger, 1969). We formalize this in Assumption 1. Finally, we do not make assumptions on the dimension of any of the defined potential outcomes. **Assumption 1** (Granger Causality): _Let \(\bar{W}_{T}=\emptyset\). For each \(k\in[K]\), we have that_ \[P(A_{k,1}=a_{k,1}\mid\bar{A}_{k-1}=\bar{a}_{k-1},\bar{Y}_{K}( \bar{a}_{K}))\] \[=P(A_{k,1}=a_{k,1}\mid\bar{A}_{k-1}=\bar{a}_{k-1},\bar{Y}_{k-1}( \bar{a}_{k-1})).\] To illustrate the notation, consider a simple design with unrestricted randomization, \(A\in\{0,1\}\), \(K=2\) and \(T=2\). For the first period \(k=1\), there are \(2^{1}=2\) potential outcomes: \(Y_{1,2}(0,0)\) and \(Y_{1,2}(1,1)\). For the second treatment period, there are \(2^{2}=4\) potential outcomes at the end of the treatment block: \(Y_{2,2}(0,0,0,0)\), \(Y_{2,2}(0,0,1,1)\), \(Y_{2,2}(1,1,0,0)\), and \(Y_{2,2}(1,1,1,1)\), and the total number of potential outcomes at the end of the two treatment periods under unrestricted randomization is \(2+4=6\) (more generally, \(2(2^{K}-1)\) for a trial consisting of \(K\) treatment periods). The total number of treatment paths, however, is \(2^{K}=2^{2}=4\), out of which we observe only one. See Bojinov and Shephard (2019) for more examples. 
Let's consider the block-average function for \(f\), so that each \(Y_{k}(\bar{a}_{k})\) is the average of all potential outcomes in block \(k\). For the first period, we then have that \(Y_{1}(\bar{a}_{1})=1/2\sum_{t=1}^{T=2}Y_{1,t}(\bar{a}_{1,t})\), and \(Y_{1}(\bar{a}_{1})\) is an average of \(Y_{1,1}(0)\) and \(Y_{1,2}(0,0)\), or \(Y_{1,1}(1)\) and \(Y_{1,2}(1,1)\). Similarly, for the second treatment period \(k=2\), \(Y_{2}(\bar{a}_{2})=1/2\sum_{t=1}^{T=2}Y_{2,t}(\bar{a}_{2,t})\). Therefore, the potential outcomes that would have been observed over time are \(\bar{Y}_{2}(\bar{a}_{2})=(Y_{1}(\bar{a}_{1}),Y_{2}(\bar{a}_{2}))\). ### Target Parameter In an N-of-1 trial, treatments are assigned based on the current treatment period. In this section, we establish several causal estimands that might be of interest in an N-of-1 trial, using the potential outcomes as defined in Subsection 3.1. The first target parameter is the _immediate causal effect_ (ICE) of treatment, as opposed to control, at period \(k\). It is defined as the short-term (contemporaneous) effect of administering treatments during period \(k\), assessed at time \((k,T)\), right after the last treatment in period \(k\), conditional on the observed past. A formal definition is provided in Definition 1. Another parameter of interest is the time \(t\)-specific ICE, which represents the causal effect of assigning treatment from time \((k,1)\) to \((k,t)\). This target parameter hints at the effect of administering treatment for \(t\) time points in a treatment period. We note that ICE is a special case of the time \(t\)-specific ICE where \(t=T\). **Definition 1** (Immediate Causal Effect): \[\psi_{k}(\bar{a}_{k-1})=Y_{k}(\bar{a}_{k-1},a_{k}=\textbf{1})-Y_{k}(\bar{a}_{k -1},a_{k}=\textbf{0})\] _for any \(k\in[K]\). **1**, **0** are vectors of dimension \(T\times 1\)._ **Definition 2** (Time \(t\)-specific ICE): \[\psi_{k,t}(\bar{a}_{k-1})=Y_{k,t}(\bar{a}_{k-1},a_{k,1:t}=\textbf{1 }_{t})\] \[\qquad\qquad\qquad\qquad-Y_{k,t}(\bar{a}_{k-1},a_{k,1:t}=\textbf{0 }_{t})\] _for any \(k\in[K]\), where **1** and **0**\({}_{t}\) are vectors of dimension \(t\times 1\), and \(a_{k,1:t}=(a_{k,1},\dots,a_{k,t})\)._ Notice that the causal estimands in both Definition 1 and Definition 2 are functions of the entire treatment path. As such, they include the _carry-over effect_ from the past treatment period assignment in addition to the effect of period \(k\). We also emphasize that they are _data-adaptive_ parameters -- the estimand changes as a function of the observed past and/or treatment path. Defining causal effects conditional on history might seem unusual from the classical causal inference perspective. However, such data-adaptive approach allows us to define causal effects (1) for long time-series, (2) with valid inference (as the central limit theorem still holds), and (3) without any additional assumptions on the time-series structure (Bojinov and Shephard, 2019). Data-adaptive target parameters in longitudinal settings have been previously described from both super population and design-based perspectives (Bojinov and Shephard, 2019; Malenica et al., 2021). Lastly, we define another target parameter of interest in N-of-1 trials in Definition 3, the Average Immediate Causal Effect (AICE) -- the running average of treatment effects over blocks. A similar parameter can also be defined for the average over time \(t\)-specific ICE. 
**Definition 3** (AICE): _For any \(k\in[K]\),_ \[\psi_{\mathrm{AICE}}^{k}=\frac{1}{k}\sum_{j=1}^{k}\psi_{j}(\bar{a}_{j-1}).\] ### Estimation In this paper, we focus on the design-based approach to causal inference. Within the design-based paradigm, the full set of potential outcomes is fixed and always conditioned on; as such, the only source of randomness comes from the treatment assignment. Let \(\mathcal{F}_{k}\) denote the filtration which contains all observed data up to time \((k,T)\) conditional on \(\tilde{Y}_{k}(\bar{a}_{k})\) (\(\bar{O}_{k}=\{(A_{j,t},Y_{j,t},W_{j,t})\}_{j=1,t=1}^{j=k,t=T}\)), and all the potential outcomes (\(\bar{Y}_{K}(\bar{a}_{K})\)). The filtration \(\mathcal{F}_{k}\) obeys the nesting property where \(\mathcal{F}_{k}\subset\mathcal{F}_{k+1}\) for all \(k\). At the beginning of each period \(k\), we randomly assign treatment with probability \(g(A_{k,1})=P(A_{k,1}\mid\mathcal{F}_{k-1})\). Note that \(g(A_{k,1})=g(A_{k,2})=\ldots=g(A_{k,T})\), as the probability of treatment is the same for each time point \(t\) in block \(k\). By Assumption 1, it follows that \(g(A_{k,1})=P(A_{k,1}\mid\mathcal{F}_{k-1})=P(A_{k,1}\mid\bar{A}_{k-1}=\bar{a}_{k-1},\bar{Y}_{k-1}(\bar{a}_{k-1}))\). We emphasize that if treatment is assigned independently of the past, then \(g(A_{k,1})=P(A_{k,1})\). To estimate ICE and AICE, we focus on the time-series version of the Horvitz-Thompson and the stabilized IPTW (Hajek) estimator in this work (Horvitz and Thompson, 1952; Hajek, 1971; Robins et al., 2000; Hirano et al., 2003; Imbens and Rubin, 2015). To enable statistical inference, we assume there is a positive probability of treatment and control at every period \(k\). Formally stated in Assumption 2, the positivity assumption excludes the pre-determined treatment periods occasionally used in N-of-1 trials. **Assumption 2** (Positivity): _For every \(k\in[K]\),_ \[0<g(A_{k,1})<1.\] Under Assumption 2, we can estimate ICE and AICE using the observed data. The IPTW estimator of \(\psi_{k}(\bar{a}_{k-1})\), denoted as \(\hat{\psi}_{k}\), is then defined as \[\hat{\psi}_{k}:=\frac{\mathds{1}(A_{k,1}=1)f(Y_{k,1:T})}{g(A_{k,1})}-\frac{\mathds{1}(A_{k,1}=0)f(Y_{k,1:T})}{1-g(A_{k,1})}.\] Recall \(f\) is any function that takes data of block \(k\) as input (in the estimator, observed data at time points \((k,1:T)\)). The variance estimator is defined as \[\hat{\sigma}_{k}^{2}:=\frac{\mathds{1}(A_{k,1}=1)f(Y_{k,1:T})^{2}}{g(A_{k,1})^{2}}+\frac{\mathds{1}(A_{k,1}=0)f(Y_{k,1:T})^{2}}{(1-g(A_{k,1}))^{2}}.\] Lemma 1 establishes that the proposed estimator is unbiased, and bounds its variance over the randomization (proof in Appendix A). **Lemma 1** (Properties of the ICE Estimator): _Under Assumption 2, it follows that_ \[\mathbb{E}(\hat{\psi}_{k}-\psi_{k}(\bar{a}_{k-1})|\mathcal{F}_{k-1})=0\] _and_ \[\text{Var}(\hat{\psi}_{k}-\psi_{k}(\bar{a}_{k-1})|\mathcal{F}_{k-1})\leq\mathbb{E}(\hat{\sigma}_{k}^{2}|\mathcal{F}_{k-1}).\] The running average immediate effect over treatment periods, i.e., the AICE, can be estimated by \[\hat{\psi}_{\text{AICE}}^{k}=1/k\sum_{j=1}^{k}\hat{\psi}_{j}. \tag{2}\] The unbiasedness of \(\hat{\psi}_{\text{AICE}}^{k}\) follows trivially from Lemma 1. To stabilize the variance of the IPTW, we also investigate the Hajek estimator of AICE, denoted as \(\tilde{\psi}_{\text{AICE}}^{k}\) and presented in Equation (3). We allocate the study of the Hajek estimator of AICE to Appendix B.
\[\tilde{\psi}_{\text{AICE}}^{k}:= \frac{\sum_{j=1}^{k}\mathds{1}(A_{j,1}=1)f(Y_{j,1:T})/g(A_{j,1}) }{\sum_{j=1}^{k}\mathds{1}(A_{j,1}=1)/g(A_{j,1})} \tag{3}\] \[-\frac{\sum_{j=1}^{k}\mathds{1}(A_{j,1}=0)f(Y_{j,1:T})/(1-g(A_{j, 1}))}{\sum_{j=1}^{k}\mathds{1}(A_{j,1}=0)/(1-g(A_{j,1}))},\] with the corresponding variance estimator: \[\hat{\sigma}_{\text{AICE}}^{2}:= \frac{\sum_{j=1}^{k}\mathds{1}(A_{j,1}=1)f(Y_{j,1:T})^{2}/g(A_{j, 1})^{2}}{\sum_{j=1}^{k}\mathds{1}(A_{j,1}=1)/g(A_{j,1})^{2}} \tag{4}\] \[+\frac{\sum_{j=1}^{k}\mathds{1}(A_{j,1}=0)f(Y_{j,1:T})^{2}/(1-g( A_{j,1}))^{2}}{\sum_{j=1}^{k}\mathds{1}(A_{j,1}=0)/(1-g(A_{j,1}))^{2}}.\] ## 4 Confidence Sequences We now introduce design-based asymptotic confidence sequences for N-of-1 trials. First, we define a _confidence sequence_ as a sequence of confidence intervals that are uniformly valid over time (also known as _anytime-valid_). We say \((I_{k})_{k=1}^{K}\) is a valid confidence sequence with type-1 error \(\alpha\) (or level \(1-\alpha\)) for the target parameter \((\psi_{\text{AICE}}^{k})_{k=1}^{K}\) if for any data-dependent stopping rule at \(1\leq\tau\leq K\), \[P(\exists k\in\{1,\ldots,\tau\}\:s.t.\:\psi_{\text{AICE}}^{k} \notin I_{k})\leq\alpha. \tag{5}\] Under Equation (5), we can perform valid inference through each \(I_{k}\). Furthermore, one can terminate a trial as soon as a statistically significant effect is detected with \(\psi^{k}_{\rm{AICE}}\notin I_{k}\), allowing for "peeking" during the trial duration. Anytime-valid inference was first introduced by Wald (1945). Since then, significant advancements have been made in developing confidence sequences under minimal regularity conditions (Howard et al., 2021; Bibaut et al., 2023). Two of the key contributions include the idea of time-uniform analogues of asymptotic confidence intervals (known as _asymptotic confidence sequences_), and their extension to design-based framework for anytime-valid causal inference (Waudby-Smith et al., 2023; Ham et al., 2022). For clarity, we provide a semi-formal definition as Definition 4. Informally, asymptotic confidence sequences are valid confidence sequences as the number of time-points grows. While practically this might mean we don't have valid coverage at early times, this concern is alleviated by the (i) N-of-1 design, where each period is of length \(T>1\), and by the (ii) upper bound variance estimator introduced in Section 3.3. **Definition 4** (Asymptotic Confidence Sequence): _A sequence of intervals \((I_{k})^{K}_{k=1}\) is a level \(1-\alpha\) asymptotic confidence sequence for the target parameter sequence \((\psi^{k}_{\rm{AICE}})^{K}_{k=1}\) if there exists a non-asymptotic confidence sequence \((I^{\prime}_{k})^{K}_{k=1}\) of level \(1-\alpha\) such that each interval \(I_{k}\) shares the center with \(I^{\prime}_{k}\), and that \(width(I_{k})/width(I^{\prime}_{k})\to 1\)\(a.s.\). Moreover, we say \((I_{k})_{k}\) has approximation rate \(R\) if \(width(I_{k})-width(I^{\prime}_{k})=O_{a.s.}(R)\)._ We now formally introduce the asymptotically valid confidence sequences for the target parameter sequence of the running average immediate causal effect, \((\psi^{k}_{\rm{AICE}})_{k}\). Before stating Theorem 1 (proof in Appendix C), we need two more assumptions. In Assumption 3, we assume there is an unknown, possibly extreme constant \(M\) which bounds the realized potential outcomes. As \(M\) can be arbitrarily large and realizations are bounded, we consider Assumption 3 a mild regularity condition. 
Assumption 4 concerns the behavior of the variance, and is satisfied as long as potential outcomes do not vanish over time. **Assumption 3** (Bounded Potential Outcomes): _There exists a constant \(M\in\mathbb{R}\) such that for any \(k\in[K]\), and any treatment path \(\bar{a}_{k}\), \(|Y_{k}(\bar{a}_{k})|\leq M\)._ **Assumption 4** (Non-vanishing Variance): _Let \(\tilde{S}_{k}:=\sum_{j=1}^{k}\sigma_{j}^{2}\), where \(\sigma_{j}^{2}:=\frac{Y_{j}(\bar{a}_{j-1},\mathbf{1}_{T})^{2}}{g(A_{j,1})}+\frac{Y_{j}(\bar{a}_{j-1},\mathbf{0}_{T})^{2}}{1-g(A_{j,1})}\). Then, \(\tilde{S}_{k}\to\infty\) as \(k\to\infty\) a.s._ **Theorem 1**: _Let \(S_{k}=\sum_{j=1}^{k}\hat{\sigma}_{j}^{2}\). Under Assumptions 2, 3 and 4, and for any constant \(\eta>0\),_ \[\frac{1}{k}\sum_{j=1}^{k}\hat{\psi}_{j}\pm\frac{1}{k}\sqrt{\frac{\eta^{2}S_{k}+1}{\eta^{2}}\log\left(\frac{\eta^{2}S_{k}+1}{\alpha^{2}}\right)}\] _forms a valid \((1-\alpha)\) asymptotic confidence sequence for the target parameter sequence \((\psi^{k}_{\rm{AICE}})_{k}\), with approximation rate \(o(\sqrt{\tilde{S}_{k}\log\tilde{S}_{k}}/k)\)._ ## 5 Experiments In the first experiment, we generate 1000 independent N-of-1 trials for an individual under the null hypothesis and illustrate the need for novel approaches to construct valid confidence sequences. In each independent trial, we set \(K=30\) blocks and \(T=10\) time points within each block. For simplicity, there is no treatment effect or covariates. We consider unrestricted randomization with 50% probability for each treatment. First, we demonstrate the inflated type-1 error of a naive t-test (which constructs a confidence interval according to the two-sample t-test with level \(\alpha=0.05\) at each block \(k\)), as well as the O'Brien-Fleming approach (O'Brien and Fleming, 1979). In Figure 1, we observe that the naive approach quickly accumulates type I errors, reaching 0.6 only after 6 treatment blocks. A naive application of the O'Brien-Fleming approach yields better performance than the simple t-test, but still results in inflated type I errors that increase over time. Figure 1: Empirical type I error of a naive t-test and of the O'Brien-Fleming approach over 1000 N-of-1 trials in the unrestricted randomization setting. The dashed line represents \(\alpha=0.05\). The second experiment aims to visualize the performance of the proposed confidence sequences and demonstrate empirically that the sequences achieve both early stopping and time-uniform coverage. For this purpose, we generate 1000 independent N-of-1 trials, each with a decreasing treatment effect size of \(5+1/k\), no carry-over effects, \(K=100\) treatment blocks, and \(T=10\) data points within each block. We consider both the unrestricted and pairwise randomization schemes, where we construct confidence sequences of AICE using our proposed IPTW (Theorem 1 derived from Equation (2)) and stabilized IPTW estimators (Equation (3)). In Figure 2 and Figure 3 in Appendix D, we illustrate the confidence sequences constructed by the proposed algorithms in a single run. In Table 2, we compute the average stopping time and time-uniform coverage proportion among the 1000 independent trials. Here the stopping time is set as the earliest block at which the confidence interval excludes 0, up to the 100th block. The time-uniform coverage refers to the proportion among the random experiments where _all_ confidence intervals cover the true treatment effect.
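For concreteness, the quantities entering Theorem 1 are straightforward to compute from running sums, and the second experiment can be sketched in a few lines. The snippet below is only an illustration (hypothetical variable names; Gaussian outcomes with block effect \(5+1/k\); block means as the summary \(f\); unrestricted randomization with \(g=1/2\) and \(\eta=1\)); it computes the IPTW block estimates, the running AICE, and the confidence sequence of Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(K=100, T=10, g=0.5, effect=lambda k: 5 + 1 / k):
    """One N-of-1 trial: unrestricted block randomization with Gaussian outcomes."""
    A = rng.binomial(1, g, size=K)                    # one coin flip per block
    Y = rng.normal(0.0, 1.0, size=(K, T))             # baseline outcomes
    Y += A[:, None] * np.array([effect(k) for k in range(1, K + 1)])[:, None]
    return A, Y

def aice_confidence_sequence(A, Y, g=0.5, alpha=0.05, eta=1.0):
    """Running AICE estimate with the asymptotic confidence sequence of Theorem 1."""
    f = Y.mean(axis=1)                                # block summary f(Y_{k,1:T})
    psi_hat = np.where(A == 1, f / g, -f / (1 - g))   # IPTW block estimates
    sigma2_hat = np.where(A == 1, f**2 / g**2, f**2 / (1 - g)**2)
    k = np.arange(1, len(A) + 1)
    aice = np.cumsum(psi_hat) / k
    S = np.cumsum(sigma2_hat)
    radius = np.sqrt((eta**2 * S + 1) / eta**2 * np.log((eta**2 * S + 1) / alpha**2)) / k
    return aice, aice - radius, aice + radius

A, Y = simulate_trial()
aice, lower, upper = aice_confidence_sequence(A, Y)
stop = next((j + 1 for j in range(len(lower)) if lower[j] > 0 or upper[j] < 0), None)
print(f"AICE estimate after 100 blocks: {aice[-1]:.2f}; first block excluding 0: {stop}")
```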
We observe that the proposed confidence sequences enjoy high coverage and their widths are decreasing over time as expected. The stabilized estimator shows a slightly smoother interval sequence and slightly longer average stopping times, though overall both estimators show similar empirical behavior in our considered scenarios. These insights hold for both randomization schemes. ## 6 Discussion In this work, we provide a statistical framework that enables anytime-valid inference in N-of-1 trials, allowing intermediate peeking at and analysis of the results. We validate our approach in simulation studies and compare it to existing state-of-the-art approaches. The results indicate that recommended methods for population-level RCTs provide invalid confidence sequences for N-of-1 trials. Our proposed approach, however, results in valid confidence sequences. This contribution adds to the literature on interim analysis and anytime-valid inference, with a specific focus on N-of-1 trials. Our proposed estimands allow traditional N-of-1 trials to make use of our developed methodology, and allow participants of digital N-of-1 trials to start looking at the results while the trial is ongoing. There is a high need for a user-friendly solution that allows peeking, and we expect that this will enable the further widespread use of N-of-1 trials. Follow-up work can evaluate our developed methodology across a broader range of scenarios and apply it to a clinical N-of-1 trial. Further, it can be extended to population-level analyses of (adaptive) series of N-of-1 trials. \begin{table} \begin{tabular}{c c c} \hline \hline & Avg. Stopping Time & Coverage \\ \hline IPTW & 32.50 (7.40) & 0.959 \\ S-IPTW & 31.28 (7.41) & 1.0 \\ Pair IPTW & 31.96 (2.98) & 1.0 \\ Pair S-IPTW & 33.32 (3.96) & 1.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Average stopping time and time-uniform coverage proportion over 1000 independent trials for the proposed confidence sequences under the unrestricted and pairwise randomization settings. ‘S-IPTW’ denotes the stabilized IPTW. ‘Pair IPTW’ and ‘Pair S-IPTW’ represent the IPTW and stabilized IPTW, respectively, in the pairwise randomization setting. Figure 2: All-time valid confidence intervals of AICE obtained by IPTW in a single run at \(\alpha=0.05\). The dashed line represents the zero (null) line. Top row: unrestricted randomization scheme. Bottom row: pairwise randomization scheme.
2309.05544
Sasakian Geometry on Sphere Bundles II: Constant Scalar Curvature
In a previous paper [BTF21] the authors employed the fiber join construction of Yamazaki [Yam99] together with the admissible construction of Apostolov, Calderbank, Gauduchon, and Tønnesen-Friedman [ACGTF08a] to construct new extremal Sasaki metrics on odd dimensional sphere bundles over smooth projective algebraic varieties. In the present paper we continue this study by applying a recent existence theorem [BHLTF23] that shows that under certain conditions one can always obtain a constant scalar curvature Sasaki metric in the Sasaki cone. Moreover, we explicitly describe this construction for certain sphere bundles of dimension 5 and 7.
Charles P. Boyer, Christina W. Tønnesen-Friedman
2023-09-11T15:31:14Z
http://arxiv.org/abs/2309.05544v1
# Sasakian geometry on sphere bundles II: constant scalar curvature ###### Abstract. In a previous paper [1] the authors employed the fiber join construction of Yamazaki [21] together with the admissible construction of Apostolov, Calderbank, Gauduchon, and Tonnesen-Friedman [1] to construct new extremal Sasaki metrics on odd dimensional sphere bundles over smooth projective algebraic varieties. In the present paper we continue this study by applying a recent existence theorem [1] that shows that under certain conditions one can always obtain a constant scalar curvature Sasaki metric in the Sasaki cone. Moreover, we explicitly describe this construction for certain sphere bundles of dimension \(5\) and \(7\). The authors were partially supported by grants from the Simons Foundation, CPB by (#519432), and CWT-F by (#422410). ###### Contents * 1 Introduction * 2 Preliminaries * 3 The \(K\)-contact manifold \(M^{2n+1}\) is a K-contact manifold \(\mathcal{S}=(\xi,\eta,\Phi,g)\) with \(\xi\in\mathfrak{t}^{+}(\mathcal{D},J)\). [MISSING_PAGE_POST] is Sasaki. Moreover, as in the symplectic case there is a strong connection between the geometry and topology of \((M,\mathcal{S})\) and the combinatorics of \(\mathfrak{t}^{+}(\mathcal{D},J)\)[1, 1, 2, 10, 11]. Much can also be said in the complexity \(1\) case \((\dim\mathfrak{t}^{+}(\mathcal{D},J)=n)\)[1]. It is important to realize that there are two types of Reeb orbits, those that are closed (i.e periodic orbits) and those that are not. On a closed K-contact manifold a Reeb vector field in the Sasaki cone \(\mathfrak{t}^{+}\) is \(C^{\infty}\)-close to a Reeb vector field all of whose orbits are periodic. What can one say about Reeb vector fields in the complement of \(\mathfrak{t}^{+}\)? The famous Weinstein conjecture says that every Reeb vector field on a compact contact manifold has a periodic orbit, and this is known to hold on a compact simply connected K-contact manifold [1]. See also [1, 1]. We end this section with the following observation that applies to our examples. **Proposition 2.2**.: _Let \(\mathbb{CP}^{1}\to S_{\mathbf{n}}\to N\) be a projective bundle where \(N\) is a smooth projective algebraic variety of complex dimension \(d_{N}\geq 2\), and let \(M^{2d_{N}+3}\) be the total space of a Sasaki \(S^{1}\) bundle over \(S_{\mathbf{n}}\). Then \(M^{2d_{N}+3}\) is a nontrivial lens space bundle (with fiber \(F\)) over \(N\). Furthermore, \(F=S^{3}\) if and only if the natural induced map \(\pi_{2}(M)\longrightarrow\pi_{2}(N)\) is an epimorphism, and the natural induced map \(\pi_{1}(M)\longrightarrow\pi_{1}(N)\) is a monomorphism. In particular, if \(N\) is simply connected there is a choice of Kahler class on \(S_{\mathbf{n}}\) such \(F=S^{3}\)._ Proof.: By composition we have a smooth bundle \(F\to M^{2d_{N}+3}\to N\), and by construction the \(S^{1}\) action on \(M^{2d_{N}+3}\) only acts on the fibers \(F\). Moreover, since the total space of this bundle is Sasaki, the bundle is nontrivial. So its restriction to \(F\) is also nontrivial. It follows that \(F\) is a lens space and we have the commutative diagram \[\begin{array}{ccccc}S^{1}&\longrightarrow&F&\longrightarrow&\mathbb{CP}^{ 1}\\ \Big{\downarrow}_{id}&&\Big{\downarrow}&&\Big{\downarrow}\\ S^{1}&\longrightarrow&M^{2d_{N}+3}&\longrightarrow&S_{\mathbf{n}}\\ &&\Big{\downarrow}&&\Big{\downarrow}\\ &&N&\stackrel{{ id}}{{\longrightarrow}}&N.\end{array} \tag{3}\] Now since \(N\) is Kahler its third Betti number \(b_{3}(N)\) is even. 
Furthermore, since \(M^{2d_{N}+3}\) is Sasaki of dimension at least \(7\), its third Betti number \(b_{3}(M)\) must also be even, which implies that the Euler class of the lens space bundle cannot vanish, so that the bundle is nontrivial. Since \(F\) is a lens space, \(\pi_{2}(F)=0\), so the long exact homotopy sequence becomes \[\mathbbm{1}\longrightarrow\pi_{2}(M)\longrightarrow\pi_{2}(N)\longrightarrow\pi_{1}(F)\longrightarrow\pi_{1}(M)\longrightarrow\pi_{1}(N)\longrightarrow\mathbbm{1}. \tag{4}\] So when the induced map \(\pi_{2}(M)\longrightarrow\pi_{2}(N)\) is an epimorphism, and the induced map \(\pi_{1}(M)\longrightarrow\pi_{1}(N)\) is a monomorphism, we have \(\pi_{1}(F)=\mathbbm{1}\), which gives \(F=S^{3}\) in this case. The converse is also clear from the homotopy exact sequence. Now if \(N\) is simply connected so is \(S_{\mathbf{n}}\). Thus, by choosing a primitive Kahler class on \(S_{\mathbf{n}}\), we can take \(M^{2d+3}\) to be simply connected. Furthermore, we can choose the Kahler class on \(S_{\mathbf{n}}\) such that its restriction to \(\mathbb{CP}^{1}\) is primitive. It then follows that \(F=S^{3}\). ## 3. Yamazaki's Fiber Join Yamazaki [20] constructed his fiber join in the category of regular K-contact manifolds which, as shown in [1], restricts to the Sasakian case in a natural way. We refer to op. cit. for details. Here we briefly recall that the fiber join is constructed by considering \(d+1\) regular Sasaki manifolds \(M_{j}\) over a smooth algebraic variety \(N\) with \(d+1\) Kahler forms \(\omega_{j}\) on \(N\) that are not necessarily distinct. One then constructs a smooth manifold \(M=M_{1}\star_{f}\cdots\star_{f}M_{d+1}\) as the unit sphere in the complex vector bundle \(E=\oplus_{j=1}^{d+1}L_{j}^{*}\) where \(L_{j}\) denotes the complex line bundle on \(N\) associated to \(M_{j}\) such that \(c_{1}(L_{j})=[\omega_{j}]\) and \(L_{j}^{*}\) is its dual. We shall refer to such a fiber join as a _Sasaki-Yamazaki fiber join_. Topologically, we have **Proposition 3.1**.: _Let \(M\) be a Sasaki-Yamazaki fiber join as described above. Then_ 1. \(M\) _is an_ \(S^{2d+1}\) _bundle over_ \(N\) _with a_ \(d+1\) _dimensional Sasaki cone. Moreover,_ 2. _if_ \(d\geq n\) _then_ \(M\) _has the cohomology groups of the product_ \(S^{2d+1}\times N\)_; whereas,_ 3. _if_ \(d<n\) _then the Euler class of the bundle does not vanish, and the Betti numbers satisfy_ \(b_{2d+2i}(M)=b_{2d+2i}(N)-1\) _where_ \(i=1,\ldots,n-d\)_._ Proof.: That \(M\) is an \(S^{2d+1}\) bundle follows from the construction, and Theorem 3.4 in [1] shows that \(M\) admits a \(d+1\) dimensional family of Sasakian structures. When \(d\geq n\) the Euler class of the bundle vanishes and the Leray-Serre spectral sequence collapses giving the product groups in the limit. However, if \(d<n\) with \(M\) having a Sasakian structure, the odd Betti numbers less than half the dimension must be even (cf. [1]). Moreover, the odd Betti numbers of \(N\) are also even, and the even Betti numbers are greater than zero. So if the Euler class vanishes, the orientation class \(\alpha\) of the sphere, which lies in the \(E_{2}^{0,2d+1}\) term of the spectral sequence, would survive to infinity, which would imply that the Betti number \(b_{2d+1}\) is odd. This contradicts the fact that \(M\) has a Sasakian structure since \(2d+1<2n<\frac{1}{2}\dim\;M\). Thus, the Euler class, which is represented by the differential \(d_{2d+2}(\alpha)\), cannot vanish in this case.
So the real class \(d_{2d+2}(\alpha)\in E_{\infty}^{2d+2,0}\) is killed, which reduces the \((2d+2)\)-th Betti number by one. The other equalities follow from this and naturality of the differential. The Euler class of the bundle is \(\omega^{d+1}\) where \(\omega\) is an integral Kahler form on \(N\). We want to determine the conditions under which a sphere bundle is a fiber join. It is convenient to think of this in terms of \(G\)-structures. An oriented \(S^{2d+1}\)-bundle over \(N\) is an associated bundle to a principal bundle with group \(SO(2d+2)\). **Proposition 3.2**.: _An \(S^{2d+1}\)-bundle \(S(E)\) over a smooth projective algebraic variety \(N\) is of the form \(S(\oplus_{i}L_{i})\) if and only if the group of the corresponding principal bundle is the maximal torus \(\mathbb{T}_{\mathbb{C}}^{d+1}\). Moreover, this is a Sasaki-Yamazaki fiber join if there is a choice of complex line bundles \(L_{i}\) such that \(c_{1}(L_{i}^{*})\) is positive definite for all \(i=1,\ldots,d+1\)._ Proof.: The only if part is clear. Conversely, let \(M\) be the total space of the unit sphere bundle in a complex vector bundle \(E\) over a smooth projective algebraic variety \(N\). Assume that the structure group of \(E\) reduces to a maximal torus \(\mathbb{T}_{\mathbb{C}}^{d+1}\). Then \(E\) is isomorphic to a sum of complex line bundles \(\oplus_{i=1}^{d+1}L_{i}\). Assume further that the \(L_{i}\) can be chosen such that \(c_{1}(L_{i}^{*})\) is positive definite for \(i=1,\dots,d+1\). But this gives precisely the fiber join of the corresponding \(S^{1}\) bundles over \(N\). Let \(M\) be a Sasaki-Yamazaki fiber join. Then as discussed above \(M\) is an \(S^{2d+1}\) bundle over a smooth projective algebraic variety \(N\) for some \(d\geq 1\). The Sasakian structure on \(M\) restricts to the standard weighted Sasakian structure on each fiber \(S^{2d+1}\). When the weights are integers, it is convenient to describe this by the following commutative diagram of \(S^{1}\) actions labelled by a weight vector \(\mathbf{w}\): (5) ### Quasi-regular Quotients when \(d=1\) For the case \(d=1\) and co-prime \(\mathbf{w}=(w_{1},w_{2})\in(\mathbb{Z}^{+})^{2}\), we want to understand \(\mathbb{P}_{\mathbf{w}}(\oplus_{j=1}^{d+1}L_{j}^{*})\) in the diagram (5). To this end we will follow the ideas in Section 3.6 of [1]. Let \(M_{i}^{3}\to N\) denote the primitive principal \(S^{1}\)-bundle corresponding to the line bundle \(L_{i}\). Here we assume that \(N\) is a smooth projective algebraic manifold. So \(c_{1}(L_{i}^{*})\) equals some (negative) integer \(d_{i}\) times a primitive cohomology class that in turn defines \(M_{i}^{3}\). [Recall that \(L_{i}\) has to be a positive line bundle over \(N\).] Consider the \(S^{1}\times S^{1}\times\mathbb{C}^{*}\) action \(\mathcal{A}_{\mathbf{w},L_{1},L_{2}}\) on \(M_{1}^{3}\times M_{2}^{3}\times\mathbb{C}^{2}\) defined by \[\mathcal{A}_{\mathbf{w},L_{1},L_{2}}(\lambda_{1},\lambda_{2},\tau)(x_{1},u_{1},x_{2},u_{2};z_{1},z_{2})=(x_{1},\lambda_{1}u_{1},x_{2},\lambda_{2}u_{2};\tau^{w_{1}}\lambda_{1}^{d_{1}}z_{1},\tau^{w_{2}}\lambda_{2}^{d_{2}}z_{2}), \tag{6}\] where \(\lambda_{1},\lambda_{2},\tau\in\mathbb{C}^{*}\) and \(|\lambda_{i}|=1\).
Then \(\mathbb{P}_{\mathbf{w}}(L_{1}^{*}\oplus L_{2}^{*})\) should equal \[M_{1}^{3}\times M_{2}^{3}\times\mathbb{C}^{2}/\mathcal{A}_{\mathbf{w},L_{1},L_{2}}(\lambda_{1},\lambda_{2},\tau).\] Now, we can also define a \(w_{1}w_{2}\)-fold covering map \(\tilde{h}_{\mathbf{w}}:M_{1}^{3}\times M_{2}^{3}\times\mathbb{C}^{2}\to M_{1}^{3}\times M_{2}^{3}\times\mathbb{C}^{2}\) by \[\tilde{h}_{\mathbf{w}}(x_{1},u_{1},x_{2},u_{2};z_{1},z_{2})=(x_{1},u_{1},x_{2},u_{2};z_{1}^{w_{2}},z_{2}^{w_{1}})\] and this gives a commutative diagram (7) and so we have a fiber preserving biholomorphism \(h_{\mathbf{w}}:\mathbb{P}_{\mathbf{w}}(L_{1}^{*}\oplus L_{2}^{*})\to\mathbb{P}((L_{1}^{*})^{w_{2}}\oplus(L_{2}^{*})^{w_{1}})\) and we can write \(\mathbb{P}_{\mathbf{w}}(L_{1}^{*}\oplus L_{2}^{*})\) as the log pair \((\mathbb{P}((L_{1}^{*})^{w_{2}}\oplus(L_{2}^{*})^{w_{1}}),\Delta_{\mathbf{w}})\), where \(\Delta_{\mathbf{w}}=(1-1/w_{1})D_{1}+(1-1/w_{2})D_{2}\) and \(D_{1},D_{2}\) are the zero and infinity sections, respectively, of the bundle \(\mathbb{P}((L_{1}^{*})^{w_{2}}\oplus(L_{2}^{*})^{w_{1}})\to N\). **Remark 3.3**.: Note that if \(\mathbf{w}=(1,1)\) this checks out with the usual regular quotient. If the principal bundles \(M_{1}^{3}\) and \(M_{2}^{3}\) are equal, we can choose \((w_{1},w_{2})=(d_{1},d_{2})/a\) with \(a=-\gcd(|d_{1}|,|d_{2}|)\) to get that \((L_{1}^{*})^{w_{2}}=(L_{2}^{*})^{w_{1}}\) and so \(\mathbb{P}((L_{1}^{*})^{w_{2}}\oplus(L_{2}^{*})^{w_{1}})\) is trivial and the quasi-regular quotient is a product as expected from Proposition 3.8 of [1]. By utilizing the set-up in Section A.3 of [1] we can also determine the quasi-regular Kahler class (up to scale) in the case with \(d=1\) and co-prime \(\mathbf{w}=(w_{1},w_{2})\in(\mathbb{Z}^{+})^{2}\) as above. Indeed, from (9) of [1] we have that \[w_{1}w_{2}d\eta_{\mathbf{w}}=w_{2}(r_{1}^{2}d\eta_{1}+2(r_{1}dr_{1}\wedge(\eta_{1}+d\theta_{1})))+w_{1}(r_{2}^{2}d\eta_{2}+2(r_{2}dr_{2}\wedge(\eta_{2}+d\theta_{2}))), \tag{8}\] where \((r_{j},\theta_{j})\) denote the polar coordinates on the fiber of the line bundle \(L_{j}^{*}\) (chosen via a Hermitian metric on the line bundle). As explained in Section A.3 of [1], we can say that \(z_{0}:=\frac{1}{2}r_{1}^{2}\) and \(z_{\infty}:=\frac{1}{2}r_{2}^{2}\) are the moment maps of the natural \(S^{1}\) action on \(L_{1}^{*}\) and \(L_{2}^{*}\), respectively. On the level set \(z_{0}+z_{\infty}=2\), the function \(z:=z_{0}-1=1-z_{\infty}\) descends to a fiberwise moment map (with range [-1,1]) for the induced \(S^{1}\) action on \(\mathbb{P}(\mathbbm{1}\oplus(L_{1})^{w_{2}}\otimes(L_{2}^{*})^{w_{1}})\to N\). Using that \(r_{1}^{2}=2(z+1)\), \(r_{2}^{2}=2(1-z)\), \(r_{1}\,dr_{1}=dz\), and \(r_{2}\,dr_{2}=-dz\), we rewrite (8) to \[w_{1}w_{2}d\eta_{\mathbf{w}}=2(w_{2}d\eta_{1}+w_{1}d\eta_{2})+2d(z\theta),\] where \(\theta:=w_{2}(\eta_{1}+d\theta_{1})-w_{1}(\eta_{2}+d\theta_{2})\) is a connection form on \((L_{1})^{w_{2}}\otimes(L_{2}^{*})^{w_{1}}\). Now this descends to a Kahler form on \(\mathbb{P}((L_{1}^{*})^{w_{2}}\oplus(L_{2}^{*})^{w_{1}})=\mathbb{P}(\mathbbm{1}\oplus(L_{1})^{w_{2}}\otimes(L_{2}^{*})^{w_{1}})\to N\) with Kahler class \(2(2\pi(w_{2}[\omega_{1}]+w_{1}[\omega_{2}])+\Xi)\) where \(c_{1}(L_{j})=[\omega_{j}]\) and \(\Xi/(2\pi)\) is the Poincare dual of \((D_{1}+D_{2})\). We can summarize our findings for \(d=1\) in the following proposition.
**Proposition 3.4**.: _For \(d=1\) and co-prime \(\mathbf{w}=(w_{1},w_{2})\in(\mathbb{Z}^{+})^{2}\), the quasi-regular quotient of \(M_{\mathfrak{w}}\) with respect to \(\xi_{\mathbf{w}}\) is the log pair \(B_{\mathfrak{w},\mathbf{w}}:=(\mathbb{P}((L_{1}^{*})^{w_{2}}\oplus(L_{2}^{*}) ^{w_{1}}),\Delta_{\mathbf{w}})\), where \(\Delta_{\mathbf{w}}=(1-1/w_{1})D_{1}+(1-1/w_{2})D_{2}\) and \(D_{1},D_{2}\) are the zero and infinity sections, respectively, of the bundle \(\mathbb{P}((L_{1}^{*})^{w_{2}}\oplus(L_{2}^{*})^{w_{1}})\to N\). Moreover, up to scale, the induced transverse Kahler class on \(B_{\mathfrak{w},\mathbf{w}}\) is equal to \(2\pi(w_{2}[\omega_{1}]+w_{1}[\omega_{2}])+\Xi\) where \(c_{1}(L_{j})=[\omega_{j}]\) and \(\Xi/(2\pi)\) is the Poincare dual of \((D_{1}+D_{2})\)._ **Remark 3.5**.: We can do the following sanity check: If colinearity (see [1] for the definition) holds on top of the above assumptions, we have according to Proposition 15 of [1] that the fiber join is just a regular \(S_{\mathbf{w}}^{3}\)-join as in [1]. Here \(\omega_{i}=b_{i}\omega_{N}\) for \([\omega_{N}]\) a primitive integer Kahler class. Connecting with the notation in [1] (setting \(w_{i}\) from [1] equal to \(\tilde{w}_{i}\)) we have \(l_{1}(\tilde{w}_{1},\tilde{w}_{2})=(b_{1},b_{2})\) and \(l_{2}=1\). Now Proposition 3.4 is consistent with Theorem 3.8 of [1] (with \(v_{i}=w_{i}\)) saying that the quotient of \(\xi_{\mathbf{w}}\) has \(n=b_{1}w_{2}-b_{2}w_{1}\). Moreover, the transverse Kahler class is then \[2\pi(w_{2}[\omega_{1}]+w_{1}[\omega_{2}])+\Xi=2\pi(b_{1}w_{2}+b_{2}w_{1})[ \omega_{N}]+\Xi=\frac{1}{r}[\omega_{N_{n}}]+\Xi\] with \(r_{a}=\frac{b_{1}w_{2}-b_{2}w_{1}}{b_{1}w_{2}+b_{2}w_{1}}\) and \([\omega_{N_{n}}]:=2\pi n[\omega_{N}]=c_{1}((L_{1})^{w_{2}}\otimes(L_{2}^{*})^{w_ {1}})\). This is consistent with (44) and (59) in [1]. ### The General \(d\) Case For the fiber join \(M_{\mathfrak{w}}\) we have in particular that the complex manifold arising as the quotient of the regular Reeb vector field \(\xi_{1}\) is equal to \(\mathbb{P}\left(\oplus_{j=1}^{d+1}L_{j}^{*}\right)\to N\). Recall from [1] that this is an _admissible_ projective bundle as defined in [1] exactly when the following all hold true: 1. The base \(N\) is a local product of Kahler manifolds \((N_{a},\Omega_{N_{a}})\), \(a\in\mathcal{A}\subset\mathbb{N}\), where \(\mathcal{A}\) is a finite index set. This means that there exist simply connected Kahler manifolds \(N_{a}\) of complex dimension \(d_{a}\) such that \(N\) is covered by \(\prod_{a\in\mathcal{A}}N_{a}\). On each \(N_{a}\) there is an \((1,1)\) form \(\Omega_{N_{a}}\), which is a pull-back of a tensor (also denoted by \(\Omega_{N_{a}}\)) on \(N\), such that \(\Omega_{N_{a}}\) is a Kahler form of a constant scalar curvature Kahler (CSCK) metric \(g_{a}\). 2. There exist \(d_{0},d_{\infty}\in\mathbb{N}\cup\{0\}\), with \(d=d_{0}+d_{\infty}+1\), such that \(E_{0}:=\oplus_{j=1}^{d_{0}+1}L_{j}^{*}\) and \(E_{\infty}:=\oplus_{j=d_{0}+2}^{d_{0}+d_{\infty}+2}L_{j}^{*}\) are both projectively flat hermitian holomorphic vector bundles. _This would, for example, be true if \(L_{j}^{*}=L_{0}\) for \(j=1,...,d_{0}+1\) and \(L_{j}^{*}=L_{\infty}\) for \(j=d_{0}+2,...,d_{0}+d_{\infty}+2\), where \(L_{0}\) and \(L_{\infty}\) are some holomorphic line bundles. That is, \(E_{0}=L_{0}\otimes\mathbb{C}^{d_{0}+1}\) and \(E_{\infty}=L_{\infty}\otimes\mathbb{C}^{d_{\infty}+1}\). 
More generally, \(c_{1}(L_{1}^{*})=\cdots=c_{1}(L_{d_{0}+1}^{*})\) and \(c_{1}(L_{d_{0}+2}^{*})=\cdots=c_{1}(L_{d_{0}+d_{\infty}+2}^{*})\) would be sufficient._ 3. \(\frac{c_{1}(E_{\infty})}{d_{\infty}+1}-\frac{c_{1}(E_{0})}{d_{0}+1}=\sum_{a \in\mathcal{A}}[\epsilon_{a}\Omega_{N_{a}}]\), where \(\epsilon_{a}=\pm 1\). The Kahler cone of the total space of an admissible bundle \(\mathbb{P}\left(E_{0}\oplus E_{\infty}\right)\to N\) has a subcone of so-called **admissible Kahler classes** (defined in Section 1.3 of [1]). This subcone has dimension \(|\mathcal{A}|+1\) and, in general, this is not the entire Kahler cone. However, by Remark 2 in [1], if \(b_{2}(N_{a})=1\) for all \(a\in\mathcal{A}\) and \(b_{1}(N_{a})\neq 0\) for at most one \(a\in\mathcal{A}\), then the entire Kahler cone is indeed admissible. ### Admissibility As briefly discussed in [1], it is convenient to have refined notions of admissibility. **Definition 3.6**.: Any fiber join \(M_{\mathfrak{w}}\) where the quotient of the regular Reeb vector field \(\xi_{1}\) is an admissible projective bundle will also be called **admissible**. If further the transverse Kahler class of the regular quotient is a pullback of an admissible Kahler class, then we call \(M_{\mathfrak{w}}\)**strongly admissible**. **Remark 3.7**.: Note that in Definition 4.1 of [1] we introduced the condition of being **super admissible**. There we required the entire Kahler cone of the regular admissible quotient to be admissible. Of course, if that is the case then in particular the transverse Kahler class of the regular quotient is a pullback of an admissible Kahler classes. Thus \(M_{\mathfrak{w}}\) is strongly admissible if it is super admissible. In fact we have **Proposition 3.8**.: _Generally the inclusions_ \[\text{\rm super admissible}\subset\text{\rm strongly admissible} \subset\text{\rm admissible}\] _are proper._ The proof of this proposition is a consequence of either of the Examples 3.1 or 3.2 below. **Example 3.1**.: Let \(\Sigma_{g}\) be a Riemann surface of genus \(g>1\) and let \(\omega_{\Sigma_{g}}\) denote the unit area Kahler form of the constant scalar curvature Kahler metric on \(\Sigma_{g}\). Now consider \(N=\Sigma_{g}\times\Sigma_{g}\) (i.e. \(N_{1}=N_{2}=\Sigma_{g}\)) and let \(\pi_{a}\) denote the projection from \(N\) to the factor. Then \(\gamma_{a}:=[\pi_{a}^{*}\omega_{\Sigma_{g}}]\in H^{2}(N,\mathbb{Z})\). Let \(\delta\in H^{2}(N,\mathbb{Z})\) denote the Poincare dual of the diagonal divisor in \(N\) defined by the diagonal curve \(\{(x,x)\,|\,x\in\Sigma_{g}\}\). Then from Theorem 3.1 of [20] (which uses Nakai's criterion for ample divisors), we know that \(l_{s}:=(s-1)(\gamma_{1}+\gamma_{2})+\delta\in H^{2}(N,\mathbb{Z})\) is in the Kahler cone of \(N\) if and only if \(s>g\). Now we form a \(d=1\) Yamazaki fiber join by choosing line bundles \(L_{1}\) and \(L_{2}\) over \(N\) such that \(c_{1}(L_{1})=[\omega_{1}]=l_{g+2}\) and \(c_{1}(L_{2})=[\omega_{2}]=l_{g+1}\). In the above setting \(L_{1}^{*}=E_{0}\) and \(L_{2}^{*}=E_{\infty}\) and we easily see that the fiber join is indeed admissible with \(c_{1}(E_{\infty})-c_{1}(E_{0})=c_{1}(L_{1})-c_{1}(L_{2})=l_{g+2}-l_{g+1}=\gamma _{1}+\gamma_{2}\). 
Specifically, the regular quotient equals the admissible bundle \[S_{g}:=\mathbb{P}\left(L_{1}^{*}\oplus L_{2}^{*}\right)\to N=\mathbb{P}\left(\mathbbm{1}\oplus L_{1}\otimes L_{2}^{*}\right)\to N=\mathbb{P}\left(\mathbbm{1}\oplus\mathcal{O}(1,1)\right)\to\Sigma_{g}\times\Sigma_{g}.\] Note that with the above notation \([\Omega_{N_{a}}]=\gamma_{a}\) (and \(\epsilon_{a}=+1\)). On \(S_{g}\), the admissible Kahler classes are up to scale of the form \[\frac{1}{x_{1}}[\Omega_{N_{1}}]+\frac{1}{x_{2}}[\Omega_{N_{2}}]+\Xi,\] where \(0<x_{a}<1\) (following Section 1.3 of [1]). According to Proposition 3.4, the regular transverse Kahler class is, up to scale, the pull-back of \(2\pi([\omega_{1}]+[\omega_{2}])+\Xi\). This equals \[2\pi(l_{g+2}+l_{g+1})+\Xi=2\pi((2g+1)(\gamma_{1}+\gamma_{2})+2\delta)+\Xi=2\pi\big{(}(2g+1)[\Omega_{N_{1}}]+(2g+1)[\Omega_{N_{2}}]+2\delta\big{)}+\Xi\] which due to the "\(2\delta\)" bit is not an admissible Kahler class. Therefore, the fiber join is not strongly admissible. Furthermore, it is possible to choose the line bundles \(L_{1}\) and \(L_{2}\) so that the fiber join is strongly admissible (cf. Section 5), but it will never be super admissible due to the fact that the Kahler cone of \(N\) consists of more than just product classes and thus there are non-admissible Kahler classes on the total space of the \(\mathbb{CP}^{1}\)-bundle of the regular quotient. Hence, the inclusions in Proposition 3.8 are proper. **Example 3.2**.: Another example of admissible but not strongly admissible is the following case. Let \(N=\mathbb{P}(\mathbbm{1}\oplus\mathcal{O}(1,-1))\to\mathbb{CP}^{1}\times\mathbb{CP}^{1}\). Let \(\Omega_{FS}\) denote the standard Fubini-Study Kahler form on \(\mathbb{CP}^{1}\), let \(\pi_{i}\) denote the projection from \(N\) to the \(i^{th}\) factor in the product \(\mathbb{CP}^{1}\times\mathbb{CP}^{1}\), and let \(\chi\) denote the Poincare dual of \(2\pi(D_{1}^{N}+D_{2}^{N})\), where \(D_{1}^{N}\), \(D_{2}^{N}\) are the zero and infinity sections of \(N\to\mathbb{CP}^{1}\times\mathbb{CP}^{1}\). Now consider the two CSC Kahler forms \(\omega_{1}\) and \(\omega_{2}\) on \(N\) with Kahler classes \[[\omega_{1}]=2\left(3[\pi_{1}^{*}\Omega_{FS}]+3[\pi_{2}^{*}\Omega_{FS}]+\frac{\chi}{2\pi}\right)\] and \[[\omega_{2}]=2[\pi_{1}^{*}\Omega_{FS}]+2[\pi_{2}^{*}\Omega_{FS}]+\frac{\chi}{2\pi},\] respectively. (See e.g. Theorem 9 in [1] to confirm that \([\omega_{1}]\) and \([\omega_{2}]\) are indeed represented by CSC Kahler forms.) Now we form a \(d=1\) Yamazaki fiber join by choosing line bundles \(L_{1}\) and \(L_{2}\) over \(N\) such that \(c_{1}(L_{1})=[\omega_{1}]\) and \(c_{1}(L_{2})=[\omega_{2}]\). In the above setting \(L_{1}^{*}=E_{0}\) and \(L_{2}^{*}=E_{\infty}\) and we easily see that the fiber join is indeed admissible with \(c_{1}(E_{\infty})-c_{1}(E_{0})=c_{1}(L_{1})-c_{1}(L_{2})=4[\pi_{1}^{*}\Omega_{FS}]+4[\pi_{2}^{*}\Omega_{FS}]+\frac{\chi}{2\pi}\). Specifically, the regular quotient equals the admissible bundle \[S:=\mathbb{P}\left(\mathbbm{1}\oplus L\right)\to N\] such that \(c_{1}(L)=[\Omega_{N}]:=4[\pi_{1}^{*}\Omega_{FS}]+4[\pi_{2}^{*}\Omega_{FS}]+\frac{\chi}{2\pi}\) and \(\Omega_{N}\) is a CSC Kahler form on \(N\). Note that \(S\) is a so-called _stage four Bott manifold_ given by the matrix \[A=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 1&-1&1&0\\ 5&3&2&1\end{pmatrix}.\] [See e.g. Section 1 of [1] for details.]
It is important to note that the CSC Kahler manifold \((N,\Omega_{N})\) is irreducible in the sense that for (1) at the beginning of Subsection 3.2, \(\mathcal{A}\) must be just \(\{1\}\). Following Section 1.3. in [1], we have that on \(S\), the admissible Kahler classes are up to scale of the form \[\frac{2\pi}{x}[\Omega_{N}]+\Xi,\] where \(0<x<1\), \(\Xi\) denote the Poincare dual of \(2\pi(D_{1}+D_{2})\), and \(D_{1},D_{2}\) are the zero and infinity sections, respectively, of the bundle \(\mathbb{P}(\mathbb{1}\oplus L)\to N\). According to Proposition 3.4, the regular transverse Kahler class is, up to scale, the pull-back of \(2\pi([\omega_{1}]+[\omega_{2}])+\Xi\). This equals \[2\pi(8[\pi_{1}^{*}\Omega_{FS}]+8[\pi_{2}^{*}\Omega_{FS}]+3\frac{\chi}{2\pi})+\Xi\] which cannot be written as (the rescale of) \(\frac{2\pi}{x}[\Omega_{N}]+\Xi\) for any \(0<x<1\). Thus this is not an admissible Kahler class and therefore the fiber join is not strongly admissible. ### The Main Theorems For Theorems 3.9 and 3.11 below, we only need the strongly admissible condition. In [1] we used the above observations together with existence results in [10], [11], and [12] (specifically, the slight generalization in the form of Propostion 11 of [1]) to prove the following theorem: **Theorem 3.9** ([1]).: _Let \(M_{\mathfrak{n}}\) be a strongly admissible fiber join whose regular quotient is a ruled manifold of the form \(\mathbb{P}(E_{0}\oplus E_{\infty})\longrightarrow N\) where \(E_{0},E_{\infty}\) are projectively flat hermitian holomorphic vector bundles on \(N\) of complex dimension \((d_{0}+1),(d_{\infty}+1)\) respectively, and \(N\) is a local Kahler product of non-negative CSC metrics. Then the Sasaki cone of \(M_{\mathfrak{n}}\) has an open set of extremal Sasaki metrics (up to isotopy)._ Together with E. Legendre and H. Huang we recently obtained the following result on admissible Kahler manifolds: **Theorem 3.10** (Theorem 3.1 in [1]).: _Suppose \(\Omega\) is a rational admissible Kahler class on the admissible manifold \(N^{ad}=\mathbb{P}(E_{0}\oplus E_{\infty})\longrightarrow N\), where \(N\) is a compact Kahler manifold which is a local product of nonnegative CSCK metrics. Let \((M,\mathcal{S})\) be the Boothby-Wang constructed Sasaki manifold given by an appropriate rescale of \(\Omega\). Then the corresponding Sasaki-Reeb cone will always have a (possibly irregular) CSC-ray (up to isotopy)._ The proof of this theorem (Section 3.1 of [1]) reveals that this CSC Sasaki metric lies in a 2-dimensional subcone of \(\mathfrak{t}^{+}(\mathcal{S})\) which is exhausted by extremal Sasaki metrics. Further, since this subcone is constructed via Killing potentials coming from a moment map induced by a fiber wise \(S^{1}\)-action on the admissible bundle, it is clear that this is also a subcone of \(\mathfrak{t}^{+}_{sph}\) (and all of \(\mathfrak{t}^{+}_{sph}\) when \(d=1\)). Recall from Section 2.2 of [1] that \(\mathfrak{t}^{+}_{sph}\) is defined to be the natural \((d+1)\)-subcone of the Sasaki-Reeb cone of \(M_{\mathfrak{m}}\) coming from considering the standard Sasaki CR structure on \(S^{2d+1}\). 
In light of all this, we can thus easily improve Theorem 3.9 to give Theorem 1.1 in the Introduction, namely **Theorem 3.11**.: _Let \(M_{\mathfrak{w}}\) be a strongly admissible Yamazaki fiber join whose regular quotient is a ruled manifold of the form \(\mathbb{P}(E_{0}\oplus E_{\infty})\longrightarrow N\) where \(E_{0},E_{\infty}\) are projectively flat hermitian holomorphic vector bundles on \(N\) of complex dimension \((d_{0}+1),(d_{\infty}+1)\) respectively, and \(N\) is a local Kahler product of non-negative CSC metrics. Then \(\mathfrak{t}^{+}_{sph}\) has a \(2\)-dimensional subcone of extremal Sasaki metrics (up to isotopy) which contains at least one ray of CSC Sasaki metrics._ ## 4. Further Examples In this section we work out the details of examples of fiber joins in dimensions \(5\) and \(7\). We consider only the case with \(d=1\), i.e. \(d_{0}=d_{\infty}=0\). So we have an \(S^{3}\) bundle \(M\), which we shall assume to be strongly admissible, over a smooth projective algebraic variety \(N\). We begin with the simplest case, namely where \(N\) is a Riemann surface, so the simplest fiber join is of dimension \(5\). Even in this case the geometry is quite involved. Note that the genus \(g=0\) case is a straightforward special case of Theorem 3.11 whose Sasaki cone is strictly larger than \(\mathfrak{t}^{+}_{sph}\); hence, we concentrate on \(g\geq 1\). In this case the fiber \(\mathbb{CP}^{d}[\mathbf{w}]\) is the log pair \((\mathbb{CP}^{1},\Delta_{\mathbf{w}})\) with branch divisors \[\Delta_{\mathbf{w}}=\big{(}1-\frac{1}{w_{1}}\big{)}D_{1}+\big{(}1-\frac{1}{w_{2}}\big{)}D_{2}.\] Here we have \(c_{1}(L_{\infty})-c_{1}(L_{0})=\sum_{a}[\epsilon_{a}\Omega_{a}]\). In order to construct a non-colinear fiber join of this kind we must have the Picard number \(\rho(N)\geq 2\). In this case we may see the rays determined by \(\xi_{\mathbf{w}}\) explicitly as \(CR\)-twists of the regular quotient [1]. Indeed, following the notation of Section 3 of [1], on the regular quotient, \(N^{ad}=\mathbb{P}\big{(}\mathbbm{1}\oplus(L_{0}^{*}\otimes L_{\infty})\big{)}\to N\), we have a moment map \(\mathfrak{z}:N^{ad}\to[-1,1]\). A choice of \(c\in(-1,1)\) creates a new Sasaki structure (with Reeb vector field \(\xi_{c}\)) on \(M_{\mathfrak{w}}\) via the lift of \(f=c\mathfrak{z}+1\) from \(N^{ad}\) to \(M_{\mathfrak{w}}\). In turn, this lift may be identified with \(c\,z+1\), where \(z:=z_{0}-1=1-z_{\infty}\) is given in the discussion above Proposition 3.4. In particular, \(z_{0}\) and \(z_{\infty}\) are the moment maps of the natural \(S^{1}\) action on \(L_{1}^{*}\) and \(L_{2}^{*}\), respectively. Thus, the weighted combination, \(w_{1}z_{0}+w_{2}z_{\infty}\), should define the Reeb vector field \(\xi_{\mathbf{w}}\) and since \[w_{1}z_{0}+w_{2}z_{\infty}=(w_{1}-w_{2})z+(w_{1}+w_{2})=(w_{1}+w_{2})(\frac{w_{1}-w_{2}}{w_{1}+w_{2}}z+1),\] we see that (up to scale) \(\xi_{\mathbf{w}}\) corresponds to choosing \(c=\frac{w_{1}-w_{2}}{w_{1}+w_{2}}\) in the \(CR\)-twist. ### \(N=\Sigma_{g}\), a compact Riemann surface of genus \(g\geq 1\) It is well known that if \(N=\Sigma_{g}\), a Riemann surface of genus \(g\), then an odd dimensional sphere bundle \(M\) over \(N\) is diffeomorphic to the trivial bundle \(S^{2d+1}\times\Sigma_{g}\) or the unique non-trivial bundle \(S^{2d+1}\tilde{\times}\Sigma_{g}\) [10]. We will consider \(d=1\) fiber joins over \(N=\Sigma_{g}\).
Since these are necessarily colinear, they have earlier been treated as \(S^{3}_{\mathbf{w}}\) joins [1], but not in the setting of Yamazaki fiber joins. Let \(\omega_{\Sigma_{g}}\) denote the unit area Kahler form of the constant scalar curvature Kahler metric on \(\Sigma_{g}\) and let \(k_{1}>k_{2}>0\) be integers (the case \(0<k_{1}<k_{2}\) is completely similar) and let \(L_{1},L_{2}\) be holomorphic line bundles over \(\Sigma_{g}\) such that \(c_{1}(L_{i})=k_{i}[\omega_{\Sigma_{g}}]\). The corresponding \(d=1\) Yamazaki fiber join, \(M_{\mathbf{k}}=S(L_{1}^{*}\oplus L_{2}^{*})\to\Sigma_{g}\) has regular quotient \(S_{\mathbf{n}}=\mathbb{P}\big{(}\mathbbm{1}\oplus\mathcal{O}(k_{1}-k_{2}) \big{)}\to\Sigma_{g}\) and regular transverse Kahler class equal (up to scale) to the admissible Kahler class \(2\pi(k_{1}+k_{2})[\omega_{\Sigma_{g}}]+\Xi\), which we can write as \(\frac{1}{x}\left(2\pi(k_{1}-k_{2})[\omega_{\Sigma_{g}}]\right)+\Xi\) with \(x=\frac{k_{1}-k_{2}}{k_{1}+k_{2}}\). [See Remark 3.5.] Note that since \(g\geq 1\), we have that the Sasaki cone equals the 2-dimensional cone \(\mathfrak{t}^{+}_{sph}\) We now follow Section 3 of [1]. On the regular quotient, \(S_{\mathbf{n}}\), we have a moment map \(\mathfrak{z}:S_{\mathbf{n}}\to[-1,1]\). A choice of \(c\in(-1,1)\) creates a new Sasaki structure (with Reeb vector field \(\xi_{c}=f\,\xi_{\mathbf{1}}\)) on \(M_{\mathfrak{w}}\) via the lift of \(f=c\mathfrak{z}+1\) from \(S_{\mathbf{n}}\) to \(M_{\mathfrak{w}}\). From the discussion in the beginning of Section 4 we know that \(c=\frac{k_{1}-k_{2}}{k_{1}+k_{2}}=x\) corresponds to the Reeb vector field of the \(S^{3}_{\tilde{\mathbf{w}}}\) join, \(M^{5}_{g,l,\tilde{\mathbf{w}}}=S^{3}_{g}\star_{l,1}S^{3}_{\tilde{\mathbf{w}}}\) from Section 3.2 of [1] with \(S^{3}_{g}\) being the Boothby-Wang constructed smooth Sasaki structure over \((\Sigma_{g},[\omega_{\Sigma_{g}}])\), \(l=\gcd(k_{1},k_{2})\), and \((\tilde{w_{1}},\tilde{w_{2}})=(\frac{k_{1}}{l},\frac{k_{2}}{l})\). Since this Reeb vector field is extremal by construction, we know a priori that the set of extremal Sasaki rays in the Sasaki cone, \(\mathfrak{t}^{+}_{sph}\) is not empty. Proposition 3.10 of [1] tells us that the Reeb vector field determined - up to homothety - by \(c\in(-1,1)\) (as explained in the beginning of Section 4) is extremal (up to isotopy) if and only if \(F_{c}(\mathfrak{z})>0\), for \(-1<\mathfrak{z}<1\), where the polynomial \(F_{c}(\mathfrak{z})\) is given as follows: Let \(s=\frac{2(1-g)}{k_{1}-k_{2}}\), \(x=\frac{k_{1}-k_{2}}{k_{1}+k_{2}}\) and define \[\alpha_{r,-4} = \int_{-1}^{1}(ct+1)^{-4}t^{r}(1+xt)\,dt\] \[\alpha_{r,-5} = \int_{-1}^{1}(ct+1)^{-5}t^{r}(1+xt)\,dt\] \[\beta_{r,-3} = \int_{-1}^{1}(ct+1)^{-3}xst^{r}\,dt\] \[+ (-1)^{r}(1-c)^{-3}(1-x)+(1+c)^{-3}(1+x).\] Then, \[F_{c}(\mathfrak{z})=(c\mathfrak{z}+1)^{3}\left[\frac{2(1-x)}{(1-c)^{3}}( \mathfrak{z}+1)+\int_{-1}^{\mathfrak{z}}Q(t)(\mathfrak{z}-t)\,dt\right], \tag{9}\] where \[Q(t)=\frac{2xs}{(ct+1)^{3}}-\frac{(A_{1}t+A_{2})(1+xt)}{(ct+1)^{5}}\] and \(A_{1}\) and \(A_{2}\) are the unique solutions to the linear system \[\begin{array}{rcl}\alpha_{1,-5}A_{1}+\alpha_{0,-5}A_{2}&=&2\beta_{0,-3}\\ \\ \alpha_{2,-5}A_{1}+\alpha_{1,-5}A_{2}&=&2\beta_{1,-3}.\end{array} \tag{10}\] Further, if the positivity of \(F_{c}(\mathfrak{z})\) is satisfied, then the extremal Sasaki structure is CSC exactly when \[\alpha_{1,-4}\beta_{0,-3}-\alpha_{0,-4}\beta_{1,-3}=0 \tag{11}\] is satisfied. 
The left hand side of (11) equals \(\frac{4h(c)}{3(1-c^{2})^{5}}\), with polynomial \(h(c)=x(sx-2)+(5+x^{2}-sx)c-x(6+sx)c^{2}-(1-sx-3x^{2})c^{3}\) and \(h(\pm 1)=\pm 4(1\mp x)^{2}\). Thus, since \(h(c)\) is negative at \(c=-1\) and positive at \(c=1\), (11) always has at least one solution \(c\in(-1,1)\). We calculate \(F_{c}(\mathfrak{z})\): \[F_{c}(\mathfrak{z})=\frac{(k_{1}+k_{2})^{2}(1-\mathfrak{z}^{2})p(\mathfrak{z})}{4((1-c)^{2}k_{1}^{2}+(1+c)^{2}k_{2}^{2}+4(1-c^{2})k_{1}k_{2})},\] where \(p(\mathfrak{z})\) is a polynomial of degree \(2\) whose coefficients depend on \(k_{1},k_{2},g\) and \(c\), but is more conveniently written as \[p(\mathfrak{z}) = c^{2}sx+3c^{2}x^{2}-c^{2}-2csx^{2}+3cx^{3}-7cx+sx^{3}-4x^{2}+6\] \[+ 2x\left(3c^{2}x^{2}-c^{2}-4cx-x^{2}+3\right)\mathfrak{z}\] \[+ \left(c-x\right)\left(-csx+3cx^{2}-c+sx^{2}-2x\right)\mathfrak{z}^{2},\] where \(s=\frac{2(1-g)}{k_{1}-k_{2}}\), \(x=\frac{k_{1}-k_{2}}{k_{1}+k_{2}}\). Clearly \(F_{c}(\mathfrak{z})>0\) for all \(\mathfrak{z}\in(-1,1)\) exactly when \(p(\mathfrak{z})>0\) for all \(\mathfrak{z}\in(-1,1)\). We have arrived at **Proposition 4.1**.: _Consider the \(d=1\) fiber join \(S^{3}\longrightarrow M_{\mathbf{k}}\longrightarrow\Sigma_{g}\) over a Riemann surface \(\Sigma_{g}\) of genus \(g\geq 1\) with its natural Sasakian structure \(\mathcal{S}_{c}\) as described above. Then \(\mathcal{S}_{c}\) is extremal (up to isotopy) if and only if \(p(\mathfrak{z})>0\) for all \(\mathfrak{z}\in(-1,1)\)._ Note that \(p(-1)=\frac{8k_{2}((1-c)^{2}k_{1}^{2}+(1+c)^{2}k_{2}^{2}+4(1-c^{2})k_{1}k_{2})}{(k_{1}+k_{2})^{3}}>0\) and \(p(1)=\frac{8k_{1}((1-c)^{2}k_{1}^{2}+(1+c)^{2}k_{2}^{2}+4(1-c^{2})k_{1}k_{2})}{(k_{1}+k_{2})^{3}}>0\), thus \(p(\pm 1)>0\), so we see right away that when \(c=x\), \(p(\mathfrak{z})\) (which is now of degree one) is positive for \(-1<\mathfrak{z}<1\). This confirms our expectation from above that \(\xi_{c}\) is extremal when \(c=\frac{k_{1}-k_{2}}{k_{1}+k_{2}}=x\). It is easy to check that for \(g>\frac{31k_{1}^{2}+14k_{1}k_{2}+k_{1}+k_{2}^{2}+k_{2}}{k_{1}+k_{2}}\) and \(c=\frac{k_{1}}{k_{1}+k_{2}}\), \(p(0)<0\). Thus we see that for any fixed choice of integers \(k_{1}>k_{2}>0\), \(\mathfrak{t}_{sph}^{+}\) is not exhausted by extremal rays when \(g\) is very large. This is expected in light of Theorem 5.1 in [1]. From [1] we have the following results: 1. (Proposition 5.5 in [1] combined with Theorem 3 in [1]) There is a unique ray in \(\mathfrak{t}_{sph}^{+}\) with a CSC Sasaki metric (up to isotopy). 2. (Proposition 5.10 in [1]) If \(g\leq 1+3k_{2}\) then every ray in \(\mathfrak{t}_{sph}^{+}\) has an extremal Sasaki metric (up to isotopy). In particular, this is true whenever \(g\leq 4\). Statement (1) means that (11) has a unique solution \(c\in(-1,1)\) (i.e. the cubic \(h(c)\) above has a unique real root \(c\in(-1,1)\)) and for this unique solution, \(p(\mathfrak{z})>0\) for all \(\mathfrak{z}\in(-1,1)\). An easy way to see the uniqueness of the real root directly from the present setup is to make a change of variable \(c=\phi(b)=\frac{1-b}{1+b}\) \([\phi:(0,+\infty)\to(-1,1)]\); note that \(b\) is exactly what \(c\) is in (51) of [1], as follows from our discussion in Section 3. Then \(h(c)\) transforms to \(\tilde{h}(b)\), where
\[\tilde{h}(b)=\frac{4}{(b+1)^{3}}\left((1-x)^{2}+(1-x)(2+2x-sx)b-(1+x)(2(1-x)-sx)b^{2}-(1+x)^{2}b^{3}\right).\] Since the polynomial coefficients of the cubic \[(1-x)^{2}+(1-x)(2+2x-sx)b-(1+x)(2(1-x)-sx)b^{2}-(1+x)^{2}b^{3}\] change sign exactly once (recall \(sx\leq 0\) and \(0<x<1\)), we have (using Descartes' rule of signs) exactly one positive root \(b\in(0,+\infty)\) (corresponding to a unique root \(c\in(-1,1)\) of \(h(c)\)). Then to see that this \(c\) value (let us call it \(\hat{c}\)) satisfies that \(p(\mathfrak{z})>0\) for all \(\mathfrak{z}\in(-1,1)\) we can first observe that since \(h(x)=3x(1-x^{2})^{2}\neq 0\), \(\hat{c}\neq x\). With that settled we may (solve for \(s\) in \(h(\hat{c})=0\) and) write \(s=\frac{3\hat{c}^{3}x^{2}-\hat{c}^{3}-6\hat{c}^{2}x+\hat{c}x^{2}+5\hat{c}-2x}{(1-\hat{c}^{2})x(\hat{c}-x)}\). Substituting this into \(p(\mathfrak{z})\) (and using that \(x=\frac{k_{1}-k_{2}}{k_{1}+k_{2}}\)) gives us \(p(\mathfrak{z})=\frac{4((1-\hat{c})^{2}k_{1}^{2}+(1+\hat{c})^{2}k_{2}^{2}+4(1-\hat{c}^{2})k_{1}k_{2})(1+\hat{c}\mathfrak{z})(1-\hat{c}x-\hat{c}\mathfrak{z}+x\mathfrak{z})}{(1-\hat{c}^{2})(k_{1}+k_{2})^{2}}\). Since \(0<x<1\) and \(-1<\hat{c}<1\), it easily follows that \(p(\mathfrak{z})>0\) for \(-1<\mathfrak{z}<1\). Similarly, statement (2) is (re)verified if we show that for \(g\leq 1+3k_{2}\), \(p(\mathfrak{z})>0\) for all \(c,\mathfrak{z}\in(-1,1)\). This is done easily by writing \(p(\mathfrak{z})\) in a new variable \(y\): \(\mathfrak{z}=\psi(y)=\frac{1-y}{1+y}\) (\(0<y<+\infty\)) along with using the above transformation \(c=\phi(b)=\frac{1-b}{1+b}\). After multiplying by \((1+b)^{2}(1+y)^{2}\), this results in a polynomial in the two variables \(b,y>0\). The coefficients of this polynomial are all non-negative (with some strictly positive) precisely when \(g\leq 1+3k_{i}\) for \(i=1,2\). Since (we assumed without loss of generality that) \(k_{1}>k_{2}\), this is manifested by \(g\leq 1+3k_{2}\). **Example 4.1**.: Assume now that \(k_{2}=1\) and \(g=5\) or \(g=6\). Thus \(g\leq 1+3k_{2}\) is false and Statement (2) cannot be applied. Nevertheless we shall see that positivity of \(p(\mathfrak{z})\) for \(-1<\mathfrak{z}<1\) still holds for all \(k_{1}>1\): With \(g=5\), \(k_{2}=1\), \(c=\phi(b)=\frac{1-b}{1+b}\), and \(\mathfrak{z}=\psi(y)=\frac{1-y}{1+y}\), \(p(\mathfrak{z})\) rewrites to \[\frac{32\left(b^{2}k_{1}^{2}(k_{1}-y+y^{2})+3bk_{1}^{2}y+4bk_{1}^{2}+4bk_{1}y^{2}+11bk_{1}y+(3k_{1}-4)y+k_{1}+y^{2}\right)}{(b+1)^{2}(k_{1}+1)^{3}(y+1)^{2}}.\] Since \(k_{1}\geq 2\) and \(y,b>0\), it is easy to see that this is always positive. With \(g=6\), \(k_{2}=1\), \(c=\phi(b)=\frac{1-b}{1+b}\), and \(\mathfrak{z}=\psi(y)=\frac{1-y}{1+y}\), \(p(\mathfrak{z})\) rewrites to \[\frac{32\left(b^{2}k_{1}^{2}(k_{1}-2y+y^{2})+3bk_{1}^{2}y+4bk_{1}^{2}+4bk_{1}y^{2}+13bk_{1}y+(3k_{1}-5)y+k_{1}+y^{2}\right)}{(b+1)^{2}(k_{1}+1)^{3}(y+1)^{2}}.\] Since \(k_{1}\geq 2\) and \(y,b>0\), we see also in this case that this is always positive. On the other hand, if \(k_{2}\geq 2\), then \(1+3k_{2}\geq 7\) and since \(7\) is larger than both \(5\) and \(6\), we already know from Statement (2) above that positivity of \(p(\mathfrak{z})\) for \(-1<\mathfrak{z}<1\) holds. In conclusion, when \(g\leq 6\) we have that for all integers \(k_{1}>k_{2}>0\), every ray in \(\mathfrak{t}^{+}_{sph}\) has an extremal Sasaki metric (up to isotopy). This improves the result we had in [10].
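As a quick numerical complement to the algebraic argument in Example 4.1 (and not part of the proof), one can also evaluate \(p(\mathfrak{z})\) directly from the displayed formula below Proposition 4.1 on a grid of parameters. The following sketch does this; the grid resolutions and the range of \(k_{1}\) sampled are illustrative assumptions.

```python
# Numerical sanity check of Example 4.1: evaluate p(z), transcribed verbatim from the
# degree-two formula below Proposition 4.1, for k2 = 1, g = 5, 6 and a range of k1 > 1.
# Grid sizes and the upper bound on k1 are illustrative choices, not part of the paper.
import numpy as np

def p(z, c, k1, k2, g):
    s = 2.0 * (1 - g) / (k1 - k2)
    x = (k1 - k2) / (k1 + k2)
    a0 = (c**2*s*x + 3*c**2*x**2 - c**2 - 2*c*s*x**2 + 3*c*x**3 - 7*c*x
          + s*x**3 - 4*x**2 + 6)
    a1 = 2*x*(3*c**2*x**2 - c**2 - 4*c*x - x**2 + 3)
    a2 = (c - x)*(-c*s*x + 3*c*x**2 - c + s*x**2 - 2*x)
    return a0 + a1*z + a2*z**2

zs = np.linspace(-0.999, 0.999, 401)
cs = np.linspace(-0.999, 0.999, 401)
for g in (5, 6):
    worst = min(p(zs, c, float(k1), 1.0, g).min()
                for c in cs for k1 in range(2, 21))
    print(g, worst)   # Example 4.1 predicts these minima to be positive
```

Such a scan is of course only a sanity check and does not replace the sign-pattern argument above.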
Finally notice that when \(g=7\), \(k_{1}=2\), \(k_{2}=1\) and \(c=-\frac{299}{301}\), we get that \(p(-\frac{1}{5})=-\frac{7794656}{61155675}<0\) and so positivity of \(p(\mathfrak{z})\) fails in this case. The case \(k_{1}=k_{2}\) and \(g\leq 6\) was already handled in Example 5.11 of [10] (recall that \((k_{1},k_{2})=l(\tilde{w}_{1},\tilde{w}_{2})\) in the \(S^{3}_{\tilde{\mathbf{w}}}\) join, \(M^{5}_{g,l,\tilde{\mathbf{w}}}=S^{3}_{g}\star_{l,1}S^{3}_{\tilde{\mathbf{w}}}\)). Similarly to the example above we had that every ray in \(\mathfrak{t}^{+}_{sph}\) has an extremal Sasaki metric (up to isotopy). We can thus state the following result. **Proposition 4.2**.: _Let \(\mathbf{k}=(k_{1},k_{2})\) with \(k_{1}\geq k_{2}>0\) being integers and consider the Yamazaki fiber join \(M_{\mathbf{k}}\) as described above. For \(1\leq g\leq 6\) or \(1\leq g\leq 1+3k_{2}\) we have that the entire Sasaki cone is extremal (up to isotopy)._ ### \(N=\mathbb{CP}^{1}\times\mathbb{CP}^{1}\) Let \(\Omega_{i}\) denote the standard area forms on the \(i^{th}\) copy of \(\mathbb{CP}^{1}\). With slight abuse of notation, we denote the pull-back of their Kahler classes to \(H^{2}(N,\mathbb{Z})\) by \([\Omega_{1}]\) and \([\Omega_{2}]\). The Kahler cone of \(N\) then equals \(span_{\mathbb{R}^{+}}\{[\Omega_{1}],[\Omega_{2}]\}\). Let \(M_{\mathfrak{w}}\) be a \(d=1\) Yamazaki fiber join formed from a choice of Kahler classes which are represented by Kahler forms \[\omega_{j}=k_{j}^{1}\Omega_{1}+k_{j}^{2}\Omega_{2},\qquad k_{j}^{1},k_{j}^{2}\in\mathbb{Z}^{+}, \tag{12}\] for \(j=1,2\). The line bundles \(L_{1},L_{2}\) satisfy that \(c_{1}(L_{j})=[\omega_{j}]=k_{j}^{1}[\Omega_{1}]+k_{j}^{2}[\Omega_{2}]\). So the choice of Kahler forms is given by the \(2\) by \(2\) matrix \[K=\begin{pmatrix}k_{1}^{1}&k_{1}^{2}\\ k_{2}^{1}&k_{2}^{2}\end{pmatrix} \tag{13}\] and the fiber join is non-colinear exactly when \(\det K\neq 0\). Now the quotient complex manifold of \(M_{\mathfrak{w}}\) arising from the regular Sasakian structure with Reeb vector field \(\xi_{\mathbf{1}}\) is equal to the following \(\mathbb{CP}^{1}\) bundle over \(\mathbb{CP}^{1}\times\mathbb{CP}^{1}\): \[\mathbb{P}\big{(}L_{1}^{*}\oplus L_{2}^{*}\big{)}=\mathbb{P}\big{(}\mathbbm{1}\oplus L_{1}\otimes L_{2}^{*}\big{)}=\mathbb{P}\big{(}\mathbbm{1}\oplus\mathcal{O}(k_{1}^{1}-k_{2}^{1},k_{1}^{2}-k_{2}^{2})\big{)}\to\mathbb{CP}^{1}\times\mathbb{CP}^{1}.\] We assume here that \(k_{1}^{i}\neq k_{2}^{i}\) for \(i=1,2\). If we don't make this assumption our regular quotient could be a product of \(\mathbb{CP}^{1}\) with a Hirzebruch surface. This is not a problem per se, but needs to be treated slightly differently, so we will avoid this here. Every Kahler class on \(\mathbb{P}\big{(}\mathbbm{1}\oplus L_{1}\otimes L_{2}^{*}\big{)}\) is admissible in the broader sense of the definition given in [1]. Thus the fiber join is super admissible and therefore strongly admissible. This case is hence a special case of Theorem 3.11, with \(\mathfrak{t}_{sph}^{+}\) a proper subcone of the (unreduced) Sasaki cone. Nevertheless we shall study this example in detail since it will illustrate two different approaches for locating CSC ray(s) in \(\mathfrak{t}_{sph}^{+}\). At the end of the section we will also discuss which polarized Kahler manifolds \((S_{\mathbf{n}},[\omega])\) of the form \(S_{\mathbf{n}}=\mathbb{P}\big{(}\mathbbm{1}\oplus\mathcal{O}(n_{1},n_{2})\big{)}\to\mathbb{CP}^{1}\times\mathbb{CP}^{1}\) appear as regular quotients of a Sasaki Yamazaki fiber join.
For \(n_{1},n_{2}\in\mathbb{Z}\setminus\{0\}\), a Kahler class on the complex manifold \(S_{\mathbf{n}}=\mathbb{P}\big{(}\mathbbm{1}\oplus\mathcal{O}(n_{1},n_{2}) \big{)}\to\mathbb{CP}^{1}\times\mathbb{CP}^{1}\) is, up to scale, of the form \(2\pi(\frac{n_{1}}{x_{1}}[\Omega_{1}]+\frac{n_{2}}{x_{2}}[\Omega_{2}])+\Xi)\), where \(0<|x_{i}|<1\) and \(x_{i}n_{i}>0\). As we saw in Section 5.3.3 of [1], as well as in Section 3.1 of the current paper, here we can calculate the quotient Kahler class up to scale and all in all we get a smooth admissible Kahler manifold with admissible data \[n_{1}=k_{1}^{1}-k_{2}^{1},\quad n_{2}=k_{1}^{2}-k_{2}^{2},\quad x_{1}=\frac{k_ {1}^{1}-k_{2}^{1}}{k_{1}^{1}+k_{2}^{1}},\quad x_{2}=\frac{k_{1}^{2}-k_{2}^{2}} {k_{1}^{2}+k_{2}^{2}}. \tag{14}\] Indeed, more generally, using Proposition 3.4 we have that for co-prime \(\mathbf{w}=(w_{1},w_{2})\in(\mathbb{Z}^{+})^{2}\) the quasi-regular quotient of \(M_{\mathfrak{w}}\) with respect to \(\xi_{\mathbf{w}}\) is the log pair \(B_{\mathbf{w},\mathbf{w}}:=(\mathbb{P}(\mathbbm{1}\oplus\mathcal{O}(w_{2}k_{1}^ {1}-w_{1}k_{2}^{1},w_{2}k_{1}^{2}-w_{1}k_{2}^{2})),\Delta_{\mathbf{w}})\). Together with the quotient Kahler class (up to scale, also from Proposition 3.4) this gives (assuming \(w_{2}k_{1}^{i}-w_{1}k_{2}^{i}\neq 0\)) admissible data \[n_{1}=w_{2}k_{1}^{1}-w_{1}k_{2}^{1},\quad n_{2}=w_{2}k_{1}^{2}-w_{1}k_{2}^{2}, \quad x_{1}=\frac{w_{2}k_{1}^{1}-w_{1}k_{2}^{1}}{w_{2}k_{1}^{1}+w_{1}k_{2}^{1} },\quad x_{2}=\frac{w_{2}k_{1}^{2}-w_{1}k_{2}^{2}}{w_{2}k_{1}^{2}+w_{1}k_{2}^{2 }}. \tag{15}\] Note that if \(w_{2}k_{1}^{i}-w_{1}k_{2}^{i}=0\) for one of (or both) \(i=1,2\), we get a product of \(\mathbb{CP}^{1}\) with a so-called Hirzebruch orbifold. From the discussion above we can see the rays, given up to scale by \(\xi_{\mathbf{w}}\), as \(CR\)-twists of the regular quotient [1]. So choosing \(c=\frac{w_{1}-w_{2}}{w_{1}+w_{2}}\) creates a new Sasaki structure via the lift of \(f=c\mathfrak{z}+1\) from \(S_{\mathbf{n}}\) to \(M_{\mathfrak{w}}\). With this correspondence in mind we can take two different approaches when seeking out rays in \(\mathfrak{t}^{+}_{sph}\) with constant scalar curvature. From the \(CR\)-twist point of view, the Reeb vector field \(\xi_{c}\) given by the \(CR\)-twist has a constant scalar curvature Sasaki metric (up to isotopy) exactly when Equation (10) from [1] holds. 
Applying this equation to the regular quotient with admissible data from (14) yields the equation \(f_{CR}(c)=0\), where \[\begin{split} f_{CR}(c)&:=18c\left(c^{2}-1\right)^{2}k_{1}^{1}k_{1}^{2}k_{2}^{1}k_{2}^{2}+3(c-1)^{5}(k_{1}^{1})^{2}(k_{1}^{2})^{2}+3(c+1)^{5}(k_{2}^{1})^{2}(k_{2}^{2})^{2}\\ &+(c+1)(c-1)^{4}k_{1}^{1}k_{1}^{2}\left(k_{1}^{1}+k_{1}^{2}-3k_{1}^{2}k_{2}^{1}-3k_{1}^{1}k_{2}^{2}\right)\\ &+(c+1)^{2}(c-1)^{3}\left((k_{1}^{2})^{2}k_{2}^{1}+(k_{1}^{1})^{2}k_{2}^{2}-4k_{1}^{1}k_{1}^{2}k_{2}^{1}-4k_{1}^{1}k_{1}^{2}k_{2}^{2}\right)\\ &+(c+1)^{3}(c-1)^{2}\left(k_{1}^{1}(k_{2}^{2})^{2}+k_{1}^{2}(k_{2}^{1})^{2}-4k_{1}^{2}k_{2}^{1}k_{2}^{2}-4k_{1}^{1}k_{2}^{1}k_{2}^{2}\right)\\ &+(c+1)^{4}(c-1)k_{2}^{1}k_{2}^{2}\left(k_{2}^{1}+k_{2}^{2}-3k_{1}^{1}k_{2}^{2}-3k_{1}^{2}k_{2}^{1}\right)\end{split} \tag{16}\] If \(c\in\mathbb{Q}\cap(-1,1)\), we can then set \(c=\frac{w_{1}-w_{2}}{w_{1}+w_{2}}\) to get an equation in \((w_{1},w_{2})\in\mathbb{Z}^{+}\times\mathbb{Z}^{+}\) for \(\xi_{\mathbf{w}}\) being CSC (up to isotopy): \[\begin{split} 0&=-3(k_{2}^{1})^{2}(k_{2}^{2})^{2}w_{1}^{5}\\ &+k_{2}^{1}k_{2}^{2}\left(k_{2}^{1}+k_{2}^{2}-3k_{1}^{2}k_{2}^{1}-3k_{1}^{1}k_{2}^{2}\right)w_{1}^{4}w_{2}\\ &+\left(4k_{1}^{1}k_{2}^{1}k_{2}^{2}+4k_{1}^{2}k_{2}^{1}k_{2}^{2}-9k_{1}^{1}k_{1}^{2}k_{2}^{1}k_{2}^{2}-k_{1}^{2}(k_{2}^{1})^{2}-k_{1}^{1}(k_{2}^{2})^{2}\right)w_{1}^{3}w_{2}^{2}\\ &+\left(9k_{1}^{1}k_{1}^{2}k_{2}^{1}k_{2}^{2}+(k_{1}^{2})^{2}k_{2}^{1}+(k_{1}^{1})^{2}k_{2}^{2}-4k_{1}^{1}k_{1}^{2}k_{2}^{2}-4k_{1}^{1}k_{1}^{2}k_{2}^{1}\right)w_{1}^{2}w_{2}^{3}\\ &+k_{1}^{1}k_{1}^{2}\left(3k_{1}^{2}k_{2}^{1}+3k_{1}^{1}k_{2}^{2}-k_{1}^{1}-k_{1}^{2}\right)w_{1}w_{2}^{4}\\ &+3(k_{1}^{1})^{2}(k_{1}^{2})^{2}w_{2}^{5}.\end{split} \tag{17}\] On the other hand, Proposition 4.13 of [1] (with \(m_{0}=w_{1}\), \(m_{\infty}=w_{2}\), \(r_{1}=x_{1}\), and \(r_{2}=x_{2}\)) tells us that the Kahler class given by \((x_{1},x_{2})\) on the log pair \((S_{\mathbf{n}},\Delta_{\mathbf{w}})\) has a constant scalar curvature Kahler metric when the following equation holds true: \[\begin{split} 0=& 9(w_{1}-w_{2})n_{1}n_{2}-6(w_{1}+w_{2})n_{1}n_{2}(x_{1}+x_{2})+6(w_{1}-w_{2})n_{1}n_{2}x_{1}x_{2}\\ &+3n_{2}(4w_{1}w_{2}-n_{1}(w_{1}-w_{2}))x_{1}^{2}+3n_{1}(4w_{1}w_{2}-n_{2}(w_{1}-w_{2}))x_{2}^{2}\\ &-(4w_{1}w_{2}(n_{1}+n_{2})-3(w_{1}-w_{2})n_{1}n_{2})x_{1}^{2}x_{2}^{2}.\end{split} \tag{18}\] We can then use the data in (15) above to get an equation for the existence of a constant scalar curvature Kahler metric in the Kahler class of the quasi-regular Kahler quotient of \(\xi_{\mathbf{w}}\). As expected from the above discussion and the fact that a quasi-regular Sasaki structure has constant scalar curvature (up to isotopy) exactly when its Kahler quotient has a constant scalar curvature Kahler metric in its Kahler class, this gives an equation equivalent to (17). Consider a given complex manifold \(S_{\mathbf{n}}=\mathbb{P}\big{(}\mathbbm{1}\oplus\mathcal{O}(n_{1},n_{2})\big{)}\to\mathbb{C}\mathbb{P}^{1}\times\mathbb{C}\mathbb{P}^{1}\). This will be the regular quotient of a \(d=1\) Yamazaki fiber join given by \(K\) for any matrix \(K\) of the form \[K=\begin{pmatrix}n_{1}+k^{1}&n_{2}+k^{2}\\ k^{1}&k^{2}\end{pmatrix} \tag{19}\] where \(k^{i}\in\mathbb{Z}\) such that \(k^{i}>\max\{0,-n_{i}\}\). For a given choice of \(k^{1},k^{2}\), the quotient Kahler class is then determined, up to scale, by \(x_{1}=\frac{n_{1}}{n_{1}+2k^{1}}\) and \(x_{2}=\frac{n_{2}}{n_{2}+2k^{2}}\).
This gives a criterion for which Kahler classes on \(S_{\mathbf{n}}\) can show up as regular quotient Kahler classes of a \(d=1\) Yamazaki fiber join. For example, if \(n_{1}=1\) and \(n_{2}=-1\), we have \(x_{1}=\frac{1}{1+2k^{1}}\) and \(x_{2}=\frac{-1}{-1+2k^{2}}\). Here \(k^{1}\in\mathbb{Z}^{+}\) and \(k^{2}\in\mathbb{Z}^{+}\setminus\{1\}\). The Koiso-Sakane KE class is given by \(x_{1}=1/2\) and \(x_{2}=-1/2\) and we see right away that this class is out of range. The other CSC classes on this manifold are given by \(x_{2}=-x_{1}\) and \(x_{2}=x_{1}-1\) (see e.g. Theorem 9 in [1]). Now, \[\begin{array}{ccc}x_{2}&=&-x_{1}\\ &\Longleftrightarrow\\ \frac{-1}{-1+2k^{2}}&=&-\frac{1}{1+2k^{1}}\\ &\Longleftrightarrow\\ k^{2}&=&k^{1}+1,\end{array}\] which then gives us a one parameter family \((x_{1},x_{2})=(\frac{1}{1+2k^{1}},\frac{-1}{1+2k^{1}})\), \(k^{1}\in\mathbb{Z}^{+}\) of CSC Kahler classes, each of which is a regular quotient Kahler class of a \(d=1\) Yamazaki fiber join. On the other hand, \[\begin{array}{ccc}x_{2}&=&x_{1}-1\\ &\Longleftrightarrow\\ \frac{-1}{-1+2k^{2}}&=&\frac{-2k^{1}}{1+2k^{1}}\\ &\Longleftrightarrow\\ 1&=&4k^{1}k^{2}-4k^{1},\end{array}\] which has no solutions for \(k^{1}\in\mathbb{Z}^{+}\) and \(k^{2}\in\mathbb{Z}^{+}\setminus\{1\}\). Thus, none of the CSC Kahler classes from this family can be regular quotient Kahler classes of a \(d=1\) Yamazaki fiber join. ### \(N=\Sigma_{g_{1}}\times\Sigma_{g_{2}}\), a product of Riemann surfaces We can generalize the example of Section 4.2 to consider the case where \(N=\Sigma_{g_{1}}\times\Sigma_{g_{2}}\) with \(\Sigma_{g_{i}}\) each being compact Riemann surfaces of genus \(g_{i}\), equipped with a standard CSC area form \(\Omega_{i}\). Similarly to Section 4.2, each choice of matrix \(K=\begin{pmatrix}k_{1}^{1}&k_{1}^{2}\\ k_{2}^{1}&k_{2}^{2}\end{pmatrix}\), consisting of positive integer entries \(k_{j}^{i}\), yields a \(d=1\) Yamazaki fiber join \(M_{\mathfrak{w}}=S(L_{1}^{*}\oplus L_{2}^{*})\) via the line bundles \(L_{1},L_{2}\) satisfying \(c_{1}(L_{j})=[\omega_{j}]=k_{j}^{1}[\Omega_{1}]+k_{j}^{2}[\Omega_{2}]\). We assume here that \(k_{1}^{i}\neq k_{2}^{i}\) for \(i=1,2\). The case that \(M\) is the total space of a Sasakian fiber join with \(N=\Sigma_{g_{1}}\times\Sigma_{g_{2}}\) was treated in Proposition 5.8 of [1]. When \(d>1\) the spectral sequence of the fibration collapses, so the cohomology groups of \(M\) are the cohomology groups of the product \(S^{2d+1}\times\Sigma_{g_{1}}\times\Sigma_{g_{2}}\). When \(d=1\) we have \[H^{p}(M^{7},\mathbb{Z})=\begin{cases}\mathbb{Z}&\text{if }p=0,7\\ \mathbb{Z}^{2g_{1}+2g_{2}}&\text{if }p=1,3,6\\ \mathbb{Z}^{4g_{1}g_{2}+2}&\text{if }p=2,5\\ \mathbb{Z}^{2g_{1}+2g_{2}}\oplus\mathbb{Z}_{e}&\text{if }p=4\\ 0&\text{otherwise}\end{cases} \tag{20}\] where the image of the differential \(d_{4}\) in \(E_{2}^{4,0}\) is the Euler class of the bundle with \(e=k_{1}^{1}k_{2}^{2}+k_{1}^{2}k_{2}^{1}\). In both cases with \(g_{1},g_{2}\) and \(e\) fixed we know that \(H^{4}(N,\mathbb{Z})=\mathbb{Z}\), so it follows from a theorem of Pontrjagin [12] (see also [14, 15]) that the sphere bundles \(M\) are classified by their 2nd and 4th Stiefel-Whitney classes \(w_{2},w_{4}\), and their Pontrjagin class \(p_{1}(M)\).
Similarly to Section 4.2, the quotient complex manifold of \(M_{\mathfrak{w}}\) arising from the regular Sasakian structure with Reeb vector field \(\xi_{\mathbf{1}}\) is equal to the following \(\mathbb{CP}^{1}\) bundle over \(\Sigma_{g_{1}}\times\Sigma_{g_{2}}\): \[\mathbb{P}\big{(}L_{1}^{*}\oplus L_{2}^{*}\big{)}=\mathbb{P}\big{(}\mathbbm{1 }\oplus L_{1}\otimes L_{2}^{*}\big{)}=\mathbb{P}\big{(}\mathbbm{1}\oplus \mathcal{O}(n_{1},n_{2})\big{)}\to\Sigma_{g_{1}}\times\Sigma_{g_{2}},\] with \(n_{1}=k_{1}^{1}-k_{2}^{1}\) and \(n_{2}=k_{1}^{2}-k_{2}^{2}\). Further, the regular quotient Kahler class is, up to scale, equal to the admissible Kahler class \(2\pi(\frac{n_{1}}{x_{1}}[\Omega_{1}]+\frac{n_{2}}{x_{2}}[\Omega_{2}])+\Xi)\) where \(x_{1}=\frac{k_{1}^{1}-k_{2}^{1}}{k_{1}^{1}+k_{2}^{1}},\quad x_{2}=\frac{k_{1}^ {2}-k_{2}^{2}}{k_{1}^{2}+k_{2}^{2}}\). When \(g_{i}\geq 2\) for at least one of \(i=1,2\), we cannot use Theorem 3.11 to get existence of extremal/CSC Sasaki metrics. Further, we know from the examples in Sections 3.3 and 3.4 of [1] that the existence of CSC or even just extremal Sasaki metrics is by no means a given. More specifically, Proposition 3.10 of [1] tells us that the Reeb vector field determined - up to homothety - by \(c\in(-1,1)\) (as explained in the beginning of the section) is extremal (up to isotopy) if and only if \(F_{c}(\mathfrak{z})>0\), for \(-1<\mathfrak{z}<1\), where the polynomial \(F_{c}(\mathfrak{z})\) is given as follows: Let \(s_{i}=\frac{2(1-g_{i})}{n_{i}}=\frac{2(1-g_{i})}{k_{1}^{i}-k_{2}^{i}}\), \(x_{i}=\frac{k_{1}^{i}-k_{2}^{i}}{k_{1}^{i}+k_{2}^{i}}\), and define \[\alpha_{r,-5} = \int_{-1}^{1}(ct+1)^{-5}t^{r}(1+x_{1}t)(1+x_{2}t)\,dt\] \[\alpha_{r,-6} = \int_{-1}^{1}(ct+1)^{-6}t^{r}(1+x_{1}t)(1+x_{2}t)\,dt\] \[\beta_{r,-4} = \int_{-1}^{1}(ct+1)^{-4}t^{r}(x_{1}s_{1}(1+x_{2}t)+x_{2}s_{2}(1+x _{1}t))\,dt\] \[+ (-1)^{r}(1-c)^{-4}(1-x_{1})(1-x_{2})+(1+c)^{-4}(1+x_{1})(1+x_{2}).\] Then, \[F_{c}(\mathfrak{z})=(c\mathfrak{z}+1)^{4}\left[\frac{2(1-x_{1})(1-x_{2})}{(1- c)^{4}}(\mathfrak{z}+1)+\int_{-1}^{\mathfrak{z}}Q(t)(\mathfrak{z}-t)\,dt\right], \tag{21}\] where \[Q(t)=\frac{2\left(x_{1}s_{1}(1+x_{2}t)+x_{2}s_{2}(1+x_{1}t)\right)}{(ct+1)^{4}}- \frac{(A_{1}t+A_{2})(1+x_{1}t)(1+x_{2}t)}{(ct+1)^{6}}\] and \(A_{1}\) and \(A_{2}\) are the unique solutions to the linear system \[\begin{array}{rcl}\alpha_{1,-6}A_{1}+\alpha_{0,-6}A_{2}&=&2\beta_{0,-4}\\ \\ \alpha_{2,-6}A_{1}+\alpha_{1,-6}A_{2}&=&2\beta_{1,-4}.\end{array} \tag{22}\] Further, if the positivity of \(F_{c}(\mathfrak{z})\) is satisfied, then the extremal Sasaki structure is CSC exactly when \[\alpha_{1,-5}\beta_{0,-4}-\alpha_{0,-5}\beta_{1,-4}=0 \tag{23}\] is satisfied. 
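In practice, the left-hand side of (23) is easy to evaluate numerically straight from the integral definitions above. The sketch below is purely illustrative (the sample fiber join data \(K\), \(g_{1}\), \(g_{2}\) and the scan range are assumptions, not taken from the paper); it scans for sign changes of the condition in \(c\), each of which brackets a candidate CSC ray.

```python
# Numerical evaluation of the CSC condition (23), using the definitions of
# alpha_{r,-5} and beta_{r,-4} given above.  Sample data are illustrative only.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def alpha(r, power, c, x1, x2):
    # alpha_{r,power} = int_{-1}^{1} (c t + 1)^{power} t^r (1 + x1 t)(1 + x2 t) dt
    return quad(lambda t: (c*t + 1)**power * t**r * (1 + x1*t)*(1 + x2*t), -1, 1)[0]

def beta(r, c, x1, x2, s1, s2):
    # beta_{r,-4}, including the two boundary terms in its definition
    I = quad(lambda t: (c*t + 1)**(-4) * t**r
             * (x1*s1*(1 + x2*t) + x2*s2*(1 + x1*t)), -1, 1)[0]
    return I + (-1)**r*(1 - c)**(-4)*(1 - x1)*(1 - x2) + (1 + c)**(-4)*(1 + x1)*(1 + x2)

def csc_lhs(c, x1, x2, s1, s2):
    # left-hand side of (23)
    return (alpha(1, -5, c, x1, x2)*beta(0, c, x1, x2, s1, s2)
            - alpha(0, -5, c, x1, x2)*beta(1, c, x1, x2, s1, s2))

# illustrative fiber join over Sigma_2 x Sigma_2 with K = [[3, 4], [1, 2]] (an assumption)
k11, k12, k21, k22, g1, g2 = 3, 4, 1, 2, 2, 2
x1, x2 = (k11 - k21)/(k11 + k21), (k12 - k22)/(k12 + k22)
s1, s2 = 2*(1 - g1)/(k11 - k21), 2*(1 - g2)/(k12 - k22)

grid = np.linspace(-0.95, 0.95, 381)
vals = [csc_lhs(c, x1, x2, s1, s2) for c in grid]
for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if fa*fb < 0:   # a bracketed sign change of (23): a candidate CSC ray
        print(brentq(csc_lhs, a, b, args=(x1, x2, s1, s2)))
```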
A direct calculation shows that \(\alpha_{1,-5}\beta_{0,-4}-\alpha_{0,-5}\beta_{1,-4}=\frac{4h(c)}{9(1-c^{2})^{7}}\) where \(h(c)\) is the polynomial given by \[h(c) =(3x_{1}x_{2}(s_{1}x_{2}+s_{2}x_{1})-s_{1}x_{1}-s_{2}x_{2}+3(3x_{1}^{2}x_{2}^{2}-x_{1}^{2}+2x_{1}x_{2}-x_{2}^{2}+1))c^{5}\] \[+(s_{1}x_{1}^{2}+s_{2}x_{2}^{2}-3(s_{1}+s_{2})x_{1}^{2}x_{2}^{2}-4(s_{1}+s_{2})x_{1}x_{2}-6(x_{1}+x_{2})(4x_{1}x_{2}+1))c^{4}\] \[+4(((s_{1}x_{1}+s_{2}x_{2})-(s_{1}x_{2}+s_{2}x_{1}))x_{1}x_{2}+s_{1}x_{1}+s_{2}x_{2}+3x_{1}x_{2}(x_{1}x_{2}+5)+6(x_{1}^{2}+x_{2}^{2}))c^{3}\] \[+4((s_{1}+s_{2})(x_{1}x_{2}+1)x_{1}x_{2}-s_{1}x_{1}^{2}-s_{2}x_{2}^{2}-3(x_{1}+x_{2})(2x_{1}x_{2}+3))c^{2}\] \[+((s_{1}x_{2}+s_{2}x_{1})x_{1}x_{2}-(s_{1}x_{1}+s_{2}x_{2})(4x_{1}x_{2}+3)+3(x_{1}^{2}x_{2}^{2}+x_{1}^{2}+x_{2}^{2}+10x_{1}x_{2}+7))c\] \[+3(s_{1}x_{1}^{2}+s_{2}x_{2}^{2})-(s_{1}+s_{2})x_{1}^{2}x_{2}^{2}-6(x_{1}+x_{2})\] and \(h(\pm 1)=\pm 24(1\mp x_{1})^{2}(1\mp x_{2})^{2}\). Thus, equation (23) always has a solution \(c\in(-1,1)\). In the event that \(g_{1},g_{2}\leq 1\), this is predicted by (the proof of) Theorem 3.10 and in the event that \(g_{1},g_{2}\geq 1\) (where \(\mathfrak{t}^{+}\) is 2-dimensional) this is predicted by Corollary 1.7 of [1]. The \(g_{1}=0\) and \(g_{2}>1\) (or vice versa) case falls outside of these results. Of course, a solution to \(h(c)=0\) only corresponds to an actual CSC ray if we also have that the positivity condition of \(F_{c}(\mathfrak{z})\) is satisfied. **Proposition 4.3**.: _Let \(M_{\mathfrak{w}}\) be a \(d=1\) fiber join over \(\Sigma_{g_{1}}\times\Sigma_{g_{2}}\) with its induced Sasakian structure. Then for all \(g_{1},g_{2}\geq 1\), there exists a matrix \(K=\begin{pmatrix}k_{1}^{1}&k_{1}^{2}\\ k_{2}^{1}&k_{2}^{2}\end{pmatrix}\) such that the entire Sasaki cone of \(M_{\mathfrak{w}}\) is extremal and contains a CSC ray._ Proof.: Without loss of generality, we assume that \(g_{2}\geq g_{1}\geq 1\). First we note that since \(g_{1},g_{2}\geq 1\), the Sasaki cone is of dimension 2. Thus, the proof will consist of showing that for all \(g_{2}\geq g_{1}\geq 1\), there exists a two-by-two matrix \(K\) such that for all \(c\in(-1,1)\), \(F_{c}(\mathfrak{z})\) as defined in (21) is positive for \(-1<\mathfrak{z}<1\). Once this is proven we already know from the above discussion that for such a choice of \(K\), (23) has a solution \(c\in(-1,1)\). This \(c\) will correspond to a CSC ray. If \(g_{1}=g_{2}=1\), the result follows from Theorem 3.11. Thus, we assume for the rest of the proof that \(g_{2}>1\). Now, let \(K=\begin{pmatrix}10g_{1}&100g_{2}\\ 2g_{1}&g_{2}\end{pmatrix}\).
Using (21), we can calculate that \[F_{c}(\mathfrak{z})=\frac{(1-\mathfrak{z}^{2})p(\mathfrak{z})}{1212g_{1}g_{2}h_ {0}(c)},\] where \[h_{0}(c)=544829-1814364c+2225984c^{2}-1185624c^{3}+229199c^{4},\] and \(p(\mathfrak{z})\) is a cubic given by \[p(\mathfrak{z})=8g_{1}g_{2}h_{1}(c)+4h_{2}(c,g_{1},g_{2})(1+\mathfrak{z})+2h_{3 }(c,g_{1},g_{2})(1+\mathfrak{z})^{2}+h_{4}(c,g_{1},g_{2})(1+\mathfrak{z})^{3},\] where \[h_{1}(c) = h_{0}(c)\] \[h_{2}(c,g_{1},g_{2}) = 6h_{21}(c)+h_{22}(c)(g_{2}-2)+\left(5h_{23}(c)+h_{24}(c)(g_{2}-2) \right)(g_{1}-2)\] \[h_{3}(c,g_{1},g_{2}) = 2h_{31}(c)+2h_{32}(c)(g_{2}-2)+\left(h_{33}(c)+2h_{34}(c)(g_{2}-2 )\right)(g_{1}-2)\] \[h_{4}(c,g_{1},g_{2}) = 10h_{41}(c)+h_{42}(c)(g_{2}-2)+\left(2h_{43}(c)+h_{44}(c)(g_{2}-2 )\right)(g_{1}-2)\] with \[h_{21}(c) = 1849633-3952908c+2583653c^{2}-545438c^{3}+68368c^{4}\] \[h_{22}(c) = 5029446-10073556c+5505031c^{2}-421486c^{3}-29519c^{4}\] \[h_{23}(c) = 1085299-2250304c+1327594c^{2}-148704c^{3}-11901c^{4}\] \[h_{24}(c) = 2453521-4733176c+2196021c^{2}+235654c^{3}-147064c^{4}\] \[h_{31}(c) = 173925883-629489348c+863749558c^{2}-530449308c^{3}+122385903c^{4}\] \[h_{32}(c) = 86771822-314540932c+432305747c^{2}-265928422c^{3}+61453077c^{4}\] \[h_{33}(c) = 169929491-609982556c+828678836c^{2}-502956696c^{3}+114452421c^{4}\] \[h_{34}(c) = 42386813-152393768c+207385193c^{2}-126091058c^{3}+28743168c^{4}\] \[h_{41}(c) = 72852912-233877440c+270006303c^{2}-130233426c^{3}+21229919c^{4}\] \[h_{42}(c) = 365166252-1171579852c+1351415507c^{2}-650974422c^{3}+105863967c^{4}\] \[h_{43}(c) = 184191678-594750598c+693107613c^{2}-339776268c^{3}+57173843c^{4}\] \[h_{44}(c) = 184642524-595846924c+693799609c^{2}-339679914c^{3}+57031029c^{4}.\] We also notice that \(p(1)=4000g_{1}g_{2}h_{0}(c)\). _Claim:_ For all \(c\in(-1,1)\), \(h_{0}(c)>0\). Further, for all \(c\in(-1,1)\), \(i=2,3\), and \(j=1,2,3,4\), \(h_{ij}(c)>0\). From this claim it then follows that for \(g_{1},g_{2}>1\), all \(c\in(-1,1)\), and \(i=0,1,2,3\), \(h_{i}(c)>0\). Thus, in this case, we have \(p(\pm 1)>0\), \(p^{\prime}(-1)>0\), and \(p^{\prime\prime}(-1)>0\). Since \(p(\mathfrak{z})\) is a cubic, a moment's thought tells us that \(p(\mathfrak{z})>0\) for \(-1<\mathfrak{z}<1\). Finally, since the claim also tells us that \(h_{0}(c)>0\) for \(c\in(-1,1)\), we conclude that \(F_{c}(\mathfrak{z})\) is positive for all \(c\in(-1,1)\) and \(\mathfrak{z}\in(-1,1)\) as desired. The proof of the claim is a standard exercise: For example, one easily checks that \(h_{0}(\pm 1)>0\), \(h_{0}^{\prime}(\pm 1)<0\). Further, since \(h_{0}^{\prime\prime}(c)\) is a second order polynomial in \(c\) with \(h_{0}^{\prime\prime}(\pm 1)>0\), and \(h_{0}^{\prime\prime\prime}(1)<0\), we know \(h_{0}^{\prime\prime}(c)>0\) for \(c\in(-1,1)\). Thus for \(-1\leq c\leq 1\), \(h_{0}(c)\), is a (concave up and) decreasing function that is positive at \(c=\pm 1\). It therefore must be positive for all \(c\in(-1,1)\), as desired. The argument for the claim concerning \(h_{ij}(c)\) with \(i=2,3\) and \(j=1,2,3,4\) is completely similar. Finally, if \(g_{1}=1\) (and \(g_{2}>1\)), we still have that \(h_{0}(c)=h_{1}(c)>0\) for \(c\in(-1,1)\). 
Further, note that \[h_{2}(c,1,g_{2}) = \tilde{h}_{21}(c)+5\tilde{h}_{22}(c)(g_{2}-2)\] \[h_{3}(c,1,g_{2}) = 5\tilde{h}_{31}(c)+2\tilde{h}_{32}(c)(g_{2}-2),\] where \[\tilde{h}_{21}(c) = 5671303-12465928c+8863948c^{2}-2529108c^{3}+469713c^{4}\] \[\tilde{h}_{22}(c) = 515185-1068076c+661802c^{2}-131428c^{3}+23509c^{4}\] \[\tilde{h}_{31}(c) = 35584455-129799228c+179764056c^{2}-111588384c^{3}+26063877c^{4}\] \[\tilde{h}_{32}(c) = 44385009-162147164c+224920554c^{2}-139837364c^{3}+32709909c^{4}\] Now, in exactly the same way as above, we can prove that for all \(c\in(-1,1)\), \(i=2,3\), and \(j=1,2\), \(\tilde{h}_{ij}(c)>0\). Therefore we may still conclude that \(p(\pm 1)>0\), \(p^{\prime}(-1)>0\), and \(p^{\prime\prime}(-1)>0\) and the proof finishes as above. ### \(N=\mathbb{P}(E)\to\Sigma_{g}\), where \(E\to\Sigma_{g}\) is a polystable rank \(2\) holomorphic vector bundle over a compact Riemann surface of genus \(g\geq 1\) Let \(\Sigma_{g}\) be a compact Riemann surface and let \(E\to\Sigma_{g}\) be a holomorphic vector bundle. The degree of \(E\) is defined by \(deg\,E=\int_{\Sigma_{g}}c_{1}(E)\). Then \(E\) is _stable_ (or _semistable_) in the sense of Mumford if for any proper coherent subsheaf \(F\), \(\frac{deg\,F}{rank\,F}<\frac{deg\,E}{rank\,E}\) (or \(\frac{deg\,F}{rank\,F}\leq\frac{deg\,E}{rank\,E}\)). Further, a semistable holomorphic vector bundle, \(E\), is called _polystable_ if it decomposes as a direct sum of stable holomorphic vector bundles, \(E=F_{1}\oplus\cdots\oplus F_{l}\), such that \(\frac{deg\,F_{i}}{rank\,F_{i}}=\frac{deg\,E}{rank\,E}\), for \(i=1,\ldots,l\). (See e.g. [Kob87] for more details on this.) Assume \(N=\mathbb{P}(E)\stackrel{{\pi}}{{\to}}\Sigma_{g}\), where \(E\to\Sigma_{g}\) is a polystable rank \(2\) holomorphic vector bundle over a compact Riemann surface of genus \(g\geq 1\). Note that the polystability of \(E\) is independent of the choice of \(E\) in \(\mathbb{P}(E)\). Indeed, by the theorem of Narasimhan and Seshadri [NS65], polystability of \(E\) is equivalent to \(\mathbb{P}(E)\stackrel{{\pi}}{{\to}}\Sigma_{g}\) admitting a flat projective unitary connection which in turn is equivalent to \(N\) admitting a local product Kahler metric induced by constant scalar curvature Kahler metrics on \(\Sigma_{g}\) and \(\mathbb{CP}^{1}\). We shall explain and explore the latter in more detail below. Likewise, the condition of whether \(degE\) is even (\(E\) spin) or odd (\(E\) is non-spin) is independent of the choice of \(E\). Unless \(E\) is decomposable, we must have that \(Aut(N,J)\) is discrete ([Mar71]). Let \(\mathbf{v}=c_{1}(VP(E))\in H^{2}(N,\mathbb{Z})\) denote the Chern class of the vertical line bundle and let \(\mathbf{f}\in H^{2}(N,\mathbb{Z})\) denote the Poincare dual of the fundamental class of a fiber of \(\mathbb{P}(E)\to\Sigma_{g}\). From e.g. [Fuj92] we know that if \(\mathbf{h}\in H^{2}(N,\mathbb{Z})\) denotes the Chern class of the (\(E\)-dependent) tautological line bundle on \(N\), then \(H^{2}(N,\mathbb{Z})=\mathbb{Z}\mathbf{h}\oplus\mathbb{Z}\mathbf{f}\) and \(\mathbf{v}=2\mathbf{h}+(degE)\mathbf{f}\). Due to the fact that \(N=\mathbb{P}(E)\overset{\pi}{\rightarrow}\Sigma_{g}\) admits a flat projective unitary connection, we know that \(N\) has a universal cover \(\tilde{N}=\mathbb{C}\mathbb{P}^{1}\times\tilde{\Sigma}_{g}\) (where \(\tilde{\Sigma}_{g}\) is the universal cover of \(\Sigma_{g}\)).
Let \(\Omega_{1}\) denote the standard Fubini-Study area form on \(\mathbb{C}\mathbb{P}^{1}\) and let \(\Omega_{2}\) denote a standard CSC area form on \(\Sigma_{g}\). Now consider the projection \(\pi_{1}:\mathbb{C}\mathbb{P}^{1}\times\tilde{\Sigma}_{g}\rightarrow\mathbb{C}\mathbb{P}^{1}\) to the first factor. Then \(\pi_{1}^{*}(\Omega_{1})\) descends to a closed \((1,1)\) form on \(N\) representing the class \(\mathbf{v}/2\) and \([\pi^{*}\Omega_{2}]=\mathbf{f}\). If we (abuse notation slightly and) think of \(q_{1}\Omega_{1}+q_{2}\Omega_{2}\) as a local product of CSC Kahler forms on \(N\), then this represents the cohomology class \(\frac{q_{1}}{2}\mathbf{v}+q_{2}\mathbf{f}=q_{1}\mathbf{h}+(\frac{q_{1}}{2}(degE)+q_{2})\mathbf{f}\). If \(degE\) is even, this class is in \(H^{2}(N,\mathbb{Z})\) (and hence can represent a holomorphic line bundle) precisely when \(q_{1},q_{2}\in\mathbb{Z}\). If \(degE\) is odd, then the class is in \(H^{2}(N,\mathbb{Z})\) iff (\(q_{1}\) is an even integer and \(q_{2}\in\mathbb{Z}\)) or (\(q_{1}\) is an odd integer and \((q_{2}-1/2)\in\mathbb{Z}\)). Note that a similar discussion appears in the proof of Theorem 4.6 of [1]. With this in mind, we can (yet again) generalize to consider the case where \(N\) is as described above. We consider a matrix \(K=\begin{pmatrix}k_{1}^{1}&k_{1}^{2}\\ k_{2}^{1}&k_{2}^{2}\end{pmatrix}\), consisting of entries \(k_{j}^{i}\), such that: * If \(degE\) is even, \(k_{j}^{i}\in\mathbb{Z}^{+}\) * If \(degE\) is odd, one of the following is true: * \(k_{j}^{1}\) is an even positive integer and \(k_{j}^{2}\in\mathbb{Z}^{+}\) * \(k_{j}^{1}\) is an odd positive integer and \((k_{j}^{2}-1/2)\in\mathbb{Z}^{+}\). Such a choice of \(K\) yields a \(d=1\) Yamazaki fiber join \(M_{\mathfrak{w}}=S(L_{1}^{*}\oplus L_{2}^{*})\) via the line bundles \(L_{1},L_{2}\) satisfying \(c_{1}(L_{j})=[\omega_{j}]=k_{j}^{1}[\Omega_{1}]+k_{j}^{2}[\Omega_{2}]=k_{j}^{1}\mathbf{h}+(\frac{k_{j}^{1}}{2}(degE)+k_{j}^{2})\mathbf{f}\). As before we assume that \(k_{1}^{i}\neq k_{2}^{i}\) for \(i=1,2\). As we know, the quotient complex manifold of \(M_{\mathfrak{w}}\) arising from the regular Sasakian structure with Reeb vector field \(\xi_{1}\) is equal to the following \(\mathbb{C}\mathbb{P}^{1}\) bundle over \(N\): \(\mathbb{P}\big{(}L_{1}^{*}\oplus L_{2}^{*}\big{)}=\mathbb{P}\big{(}\mathbb{1}\oplus L_{1}\otimes L_{2}^{*}\big{)}\), with \(c_{1}(L_{1}\otimes L_{2}^{*})=(k_{1}^{1}-k_{2}^{1})[\Omega_{1}]+(k_{1}^{2}-k_{2}^{2})[\Omega_{2}]=(k_{1}^{1}-k_{2}^{1})\mathbf{h}+(\frac{(k_{1}^{1}-k_{2}^{1})}{2}(degE)+(k_{1}^{2}-k_{2}^{2}))\mathbf{f}\). Similarly, as before, the regular quotient Kahler class is, up to scale, equal to the admissible Kahler class \(2\pi(\frac{k_{1}^{1}-k_{2}^{1}}{x_{1}}[\Omega_{1}]+\frac{k_{1}^{2}-k_{2}^{2}}{x_{2}}[\Omega_{2}])+\Xi\) where \(x_{1}=\frac{k_{1}^{1}-k_{2}^{1}}{k_{1}^{1}+k_{2}^{1}},\quad x_{2}=\frac{k_{1}^{2}-k_{2}^{2}}{k_{1}^{2}+k_{2}^{2}}\). We can now adapt the set-up from Section 4.3 with \(s_{1}=\frac{2}{k_{1}^{1}-k_{2}^{1}}\) and \(s_{2}=\frac{2(1-g)}{k_{1}^{2}-k_{2}^{2}}\). In particular, equation (23) continues to have some solution \(c\in(-1,1)\) and we can calculate \(F_{c}(\mathfrak{z})\) using (21). If a choice of \(K\) satisfies that \(F_{c}(\mathfrak{z})\) is positive for all \(c\in(-1,1)\) and \(\mathfrak{z}\in(-1,1)\), then we will have a conclusion similar to the result in Proposition 4.3. Indeed, we have the following proposition.
**Proposition 4.4**.: _Let \(N=\mathbb{P}(E)\overset{\pi}{\rightarrow}\Sigma_{g}\), where \(E\rightarrow\Sigma_{g}\) is a polystable rank \(2\) holomorphic vector bundle over a compact Riemann surface of genus \(g\geq 1\). Let \(K=\begin{pmatrix}k_{1}^{1}&k_{1}^{2}\\ k_{2}^{1}&k_{2}^{2}\end{pmatrix}=\begin{pmatrix}10g&100g\\ 2g&g\end{pmatrix}\) and let \(M_{\mathfrak{w}}\) be the \(d=1\) fiber join over \(N\) as described above with its induced Sasakian structure. Then the entire subcone, \(\mathfrak{t}^{+}_{sph}\), is extremal and contains a CSC ray._ _In particular, if \(E\) is indecomposable, then the entire Sasaki cone of \(M_{\mathfrak{w}}\) is extremal and contains a CSC ray._ Proof.: First we notice that \(k_{j}^{1}\) is even for \(j=1,2\) and thus this choice of \(K\) is allowed whether or not \(E\) is spin. Second, we have that the set of rays in \(\mathfrak{t}^{+}_{sph}\) is parametrized by \(c\in(-1,1)\) in the same manner as in Section 4.3. Further, in the case where \(E\) is indecomposable, \(Aut(N,J)\) is discrete and thus the Sasaki cone is exactly \(\mathfrak{t}^{+}_{sph}\). Therefore all we need to do to prove the proposition is to check that for this choice of \(K\), the polynomial \(F_{c}(\mathfrak{z})\), defined by (21), is positive for all \(c\in(-1,1)\) and \(\mathfrak{z}\in(-1,1)\). If \(g=1\), we already know from (the proof of) Theorem 3.1 in [1] that for any choice of \(K\), \(F_{c}(\mathfrak{z})>0\) for all \(c\in(-1,1)\) and \(\mathfrak{z}\in(-1,1)\). Thus we will assume that \(g>1\) for the rest of the proof. By direct calculations we get that \[F_{c}(\mathfrak{z})=\frac{(1-\mathfrak{z}^{2})p(\mathfrak{z})}{1212gh_{0}(c)},\] where \[h_{0}(c)=544829-1814364c+2225984c^{2}-1185624c^{3}+229199c^{4}\] and \(p(\mathfrak{z})\) is a cubic in \(\mathfrak{z}\) that we may write as \[p(\mathfrak{z}) = 8gh_{1}(c)+(4h_{21}(c)+20h_{22}(c)(g-2))\left(\mathfrak{z}+1 \right)+(2h_{31}(c)+4h_{32}(c)(g-2))\left(\mathfrak{z}+1\right)^{2}\] \[+ (h_{41}(c)+2h_{42}(c)(g-2))\left(\mathfrak{z}+1\right)^{3},\] where \[h_{1}(c) = h_{0}(c)\] \[h_{21}(c) = 5793707-13073132c+9976937c^{2}-3421902c^{3}+734322c^{4}\] \[h_{22}(c) = 515185-1068076c+661802c^{2}-131428c^{3}+23509c^{4}\] \[h_{31}(c) = 181918667-668502932c+933891002c^{2}-585434532c^{3}+138252867c^{4}\] \[h_{32}(c) = 44385009-162147164c+224920554c^{2}-139837364c^{3}+32709909c^{4}\] \[h_{41}(c) = 356026968-1129159208c+1277664093c^{2}-594396318c^{3}+89753413c^{4}\] \[h_{42}(c) = 90261864-287866464c+328807949c^{2}-155647254c^{3}+24416469c^{4}.\] Note also that \(p(1)=4000gh_{0}(c)\). Completely similar to the way the claim at the end of the proof of Proposition 4.3 is verified, we can now show that for all \(c\in(-1,1)\), \(h_{0}(c)>0\) and for all \(c\in(-1,1)\), \(i=2,3\), and \(j=1,2\), \(h_{ij}(c)>0\). This tells us that \(p(\pm 1)>0\), \(p^{\prime}(-1)>0\), and \(p^{\prime\prime}(-1)>0\). Since \(p(\mathfrak{z})\) is a cubic, we conclude that \(p(\mathfrak{z})>0\) for \(-1<\mathfrak{z}<1\). Finally, since \(h_{0}(c)>0\) for \(c\in(-1,1)\), \(F_{c}(\mathfrak{z})\) is positive for all \(c\in(-1,1)\) and \(\mathfrak{z}\in(-1,1)\) as desired. **Remark 4.5**.: Note that if we fix a matrix \(K\) and calculate \(F_{c}(\mathfrak{z})\), then we can observe that \[\lim_{g\to+\infty}F_{0}(0)=-\infty.\] Thus is it clear that for any choice of \(K\) there exist values \(g>1\) such that the corresponding Sasaki cone is NOT exhausted by extremal Sasaki metrics. Experimenting with Mathematica, it seems that e.g. 
choosing \(K=\begin{pmatrix}4g&3g\\ 2g&g\end{pmatrix}\) would also yield an \(F_{c}(\mathfrak{z})\) which is positive for all \(c\in(-1,1)\) and \(\mathfrak{z}\in(-1,1)\), but the argument would rely on using Mathematica to calculate the numerical values of the real roots of certain fourth degree polynomials. For the sake of a transparent argument we chose a more convenient \(K\) to do the job in the proof above.
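For completeness, here is a sketch of how such an experiment can be run outside Mathematica. It simply implements (21)-(22) by numerical quadrature for the setup of Proposition 4.4 with \(K\) replaced by \(\begin{pmatrix}4g&3g\\ 2g&g\end{pmatrix}\), so that \(x_{1}=1/3\), \(x_{2}=1/2\), \(s_{1}=1/g\), \(s_{2}=(1-g)/g\). The grids and the sampled values of \(g\) are illustrative assumptions, and a grid scan is of course not a proof.

```python
# Numerical scan of F_c(z), built from (21)-(22) by quadrature, for K = [[4g,3g],[2g,g]]
# in the polystable-bundle setting; i.e. x1 = 1/3, x2 = 1/2, s1 = 1/g, s2 = (1-g)/g.
# Illustrative only: grid resolutions and the choice of g values are assumptions.
import numpy as np
from scipy.integrate import quad

def F(c, z, x1, x2, s1, s2):
    alpha = lambda r, p: quad(lambda t: (c*t + 1)**p * t**r * (1 + x1*t)*(1 + x2*t), -1, 1)[0]
    def beta(r):
        I = quad(lambda t: (c*t + 1)**(-4) * t**r
                 * (x1*s1*(1 + x2*t) + x2*s2*(1 + x1*t)), -1, 1)[0]
        return I + (-1)**r*(1 - c)**(-4)*(1 - x1)*(1 - x2) + (1 + c)**(-4)*(1 + x1)*(1 + x2)
    # solve the 2x2 linear system (22) for A1, A2
    M = np.array([[alpha(1, -6), alpha(0, -6)],
                  [alpha(2, -6), alpha(1, -6)]])
    A1, A2 = np.linalg.solve(M, np.array([2*beta(0), 2*beta(1)]))
    Q = lambda t: (2*(x1*s1*(1 + x2*t) + x2*s2*(1 + x1*t))/(c*t + 1)**4
                   - (A1*t + A2)*(1 + x1*t)*(1 + x2*t)/(c*t + 1)**6)
    inner = quad(lambda t: Q(t)*(z - t), -1, z)[0]
    return (c*z + 1)**4 * (2*(1 - x1)*(1 - x2)/(1 - c)**4 * (z + 1) + inner)

grid = np.linspace(-0.9, 0.9, 25)
for g in (2, 5, 20):
    x1, x2, s1, s2 = 1/3, 1/2, 1/g, (1 - g)/g
    m = min(F(c, z, x1, x2, s1, s2) for c in grid for z in grid)
    print(g, m)   # Remark 4.5 suggests these minima should be positive
```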
2309.04882
Ambiguity, Invisibility, and Negativity
Many widely different problems have a common mathematical structure wherein limited knowledge leads to ambiguity that can be captured conveniently using a concept of invisibility that requires the introduction of negative values for quantities that are inherently positive. Here I analyze three examples taken from perception theory, rigid body mechanics, and quantum measurement.
Frank Wilczek
2023-09-09T22:01:18Z
http://arxiv.org/abs/2309.04882v1
# MIT-CTP/5609 ###### Abstract Many widely different problems have a common mathematical structure wherein limited knowledge leads to ambiguity that can be captured conveniently using a concept of invisibility that requires the introduction of negative values for quantities that are inherently positive. Here I analyze three examples taken from perception theory, rigid body mechanics, and quantum measurement. Stanley Deser's generosity and humor lifted my spirits on many occasions over many years. Our professional work in physics had very different centers, but there was some overlap. We even wrote a short paper together [1]. That paper is a minor work by any standard, though it does touch on a significant point. In it, we gave several examples of nonabelian gauge potentials that generate the same gauge fields but different gauge structures, so that (for instance) \(F^{1}_{\alpha\beta}=F^{2}_{\alpha\beta}\) but \(\nabla_{\gamma}F^{1}_{\alpha\beta}\neq\nabla_{\gamma}F^{2}_{\alpha\beta}\). This contrasts with the abelian case, where the fields determine the gauge potentials up to a gauge transformation, locally. (Globally, of course, they do not [2, 3].) The problem of classifying the ambiguity in cases like this has no general solution; indeed, the closely related problem of classifying spaces with equal curvature data of different kinds in different dimensions up to isometry quickly points us to some milestone theorems, famous unsolved problems, and unexplored territory. Here I will describe a trio of more down-to-earth problems that have the same flavor, but which share a mathematical structure that is much more tractable. In the context of "Gravity, Strings, and Beyond" they fall firmly within "Beyond", not in the sense of "Transcending", but rather just "Outside". They are sufficiently direct and simple that further introduction seems unwarranted. ## 1 Metamers in Visual Perception Within the vast and complex subject of visual perception [4] there is a useful idealization, with roots in the work of Maxwell [5], that captures important aspects of the primary perception of color. This is called colorimetry. The book by Koenderink [6] is a very attractive presentation of many aspects of theoretical colorimetry. The central concept of colorimetry is that the primary perception of the color of an illumination source - essentially meaning, in this context, a uniform beam of light - can be predicted using a few linear functions of its spectrum. Thus we summarize the responses of several detectors \(\alpha\) with response functions \(c_{\alpha}(\lambda)\) to different illumination sources \(k\) with intensity spectra \(I_{k}(\lambda)\) according to \[M_{\alpha k}\ =\ \int\,d\lambda\,c_{\alpha}(\lambda)I_{k}(\lambda) \tag{1}\] What we mean by "predicting" the primary perception is that illumination sources that induce the same values of \(M_{\alpha k}\) will be indistinguishable to the detectors. This is the possibility we will be analyzing. "Normal" - i.e., majority - human color perception is trichromatic. That is to say, most people share three very similar sensitivity functions, often called "blue, green, red" after the location of their peak values. They are rather broadly tuned, however, and in the scientific literature "S, M, L" (for "short, medium, long") is generally preferred. Maxwell did ingenious psychophysical experiments to establish the linearity and three-dimensionality of normal human color perception. Nowadays we can trace its molecular origin.
There are three basic pigments, concentrated in three types of cone cells in the fovea, that can undergo shape changes upon absorbing photons. The shape changes trigger electrical impulses that are the primary events in color vision. These absorption events are probabilistic and all-or-none. Human color vision is a beautiful case study in quantum mechanics at work! An illumination \(I(b_{k},\lambda)\equiv\sum\limits_{k}b_{k}I_{k}(\lambda)\) that satisfies \[0\ =\ \int\,d\lambda\,c_{\alpha}(\lambda)\,I(b_{k},\lambda)\ =\ \sum\limits_{k}\,M_{\alpha k}b_{k} \tag{2}\] will be invisible to all the detectors. Given \(M_{\alpha k}\), conditions (2) are a system of linear equations for the \(b_{k}\). Their solutions define a linear space that we will refer to as the space of _invisible metamers_. (The term "black metamers" is often used, but - like "dark matter" and "dark energy" - it tends to evoke misleading imagery.) Since the \(c_{\alpha}(\lambda)\) and \(I_{k}(\lambda)\) are intrinsically positive, so are the \(M_{\alpha k}\). To obey Eqn. (2), therefore, some of the \(b_{k}\) will have to be negative. Since \(b_{k}\) represents the strength with which illumination source \(k\) is present, however, only \(b_{k}\geq 0\) are physically realizable. Nevertheless, the invisible metamer concept is quite useful, because it parameterizes the ambiguity left open by perception. The point is that two illumination choices \(b_{k}^{(1)},b_{k}^{(2)}\) look the same to all the detectors if and only if \(b_{k}^{(1)}-b_{k}^{(2)}\) belongs to the space of invisible metamers. Thus, given any physical illumination choice \(b_{k}^{\rm phys.}\), we can find all the perceptually equivalent illuminations by adding in vectors from the space of invisible metamers, as \(b_{k}^{\rm phys.}+b_{k}^{\rm inv.}\). The situation becomes richer, and our conceptual clarity bears fruit, when we come to compare different sets of detectors [7]. Let me describe a sample application from that paper. There are common forms of variant color perception, usually called color blindness, that result from mutations of the S, M, or L receptor molecules. Now suppose that we want to make a differential diagnosis among them. The invisible metamer concept suggests a powerful and efficient way to do that. Indeed, if we have four illumination sources (say four types of LEDs) with adjustable brightness, then there will be _different_ one-dimensional invisible metamer spaces associated with the normal and variant receptor sets. Let us call the basis vectors \(b_{k}^{N},b_{k}^{S^{\prime}},b_{k}^{M^{\prime}},b_{k}^{L^{\prime}}\), in an obvious notation. Then, starting with a reference color combination \(b_{k}^{\rm O}\) that has all positive components, we can dial in illumination patterns of the types \[{\rm normal\ metamers:} \ \ \ \ b^{\rm O}+\lambda b^{N}\] \[{\rm S\ mutant\ metamers:} \ \ \ \ b^{\rm O}+\lambda b^{S^{\prime}}\] \[{\rm M\ mutant\ metamers:} \ \ \ \ b^{\rm O}+\lambda b^{M^{\prime}}\] \[{\rm L\ mutant\ metamers:} \ \ \ \ b^{\rm O}+\lambda b^{L^{\prime}} \tag{3}\] with variable \(\lambda\). The first type will provide, for different values of \(\lambda\), a set of colors that cannot be distinguished by normal trichromats, but that _are_ distinguishable by the mutants. This phenomenon shows, rather dramatically, why it is not entirely appropriate to refer to the mutations as "color blindness". 
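As a concrete illustration of how the space defined by Eqn. (2) can be computed, and how the patterns (3) are then assembled, here is a minimal numerical sketch. The response matrix below is an illustrative placeholder (it is not a set of measured S, M, L sensitivities), and the null-space computation via the singular value decomposition is one standard way, not the only way, of obtaining the invisible metamer directions.

```python
import numpy as np

def invisible_metamers(M, tol=1e-10):
    """Orthonormal basis of the null space of M, i.e. the solutions of Eqn. (2).

    Rows of M index detectors, columns index illumination sources, so every
    returned column is a vector b with M @ b = 0 (an invisible metamer).
    """
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# Illustrative (not measured) 3-detector x 4-LED response matrix M_{alpha k}:
M_normal = np.array([[0.9, 0.3, 0.1, 0.2],
                     [0.2, 0.8, 0.4, 0.3],
                     [0.1, 0.2, 0.9, 0.5]])
b_N = invisible_metamers(M_normal)[:, 0]   # the null space is one-dimensional here

# Adding any multiple of b_N to a physical mixture leaves every detector response unchanged:
b_0 = np.array([1.0, 1.0, 1.0, 1.0])
assert np.allclose(M_normal @ (b_0 + 0.5 * b_N), M_normal @ b_0)
```

Running the same routine on the normal and on a mutant response matrix yields the distinct basis vectors \(b^{N}\), \(b^{S^{\prime}}\), and so on, that enter the patterns (3).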
The second type provides colors that cannot be distinguished by S mutants, but can be distinguished by normal trichromats and M or L mutants, and so forth. By choosing appropriate illumination sources we can accentuate the differences. Following this strategy, we have made good, simple practical devices. Along similar lines, one can design quantitative tests for different hypothetical forms of "super" color vision. Indeed, since the relevant genes lie on the X chromosomes, females (with two X chromosomes) can carry both majority and mutant forms of the different receptors, allowing different kinds of tetrachromacy or even pentachromacy. For more on this and other applications, see [7]. ## 2 Equivalent Rigid Bodies In classical mechanics, a rigid body is defined by a distribution of masses \(m_{j}\) in space, at positions \(x_{j}^{\alpha}\),. According to the definition of a rigid body, we only consider motions that correspond to common rotation and translation of all the masses, induced by given summed forces (and torques). The degrees of freedom can be taken as the overall position and orientation of a "body-fixed" reference system. As is shown in textbooks, the dynamics of a rigid body - i.e., the evolution of its position and orientation - depends only on its total mass and its inertia tensor \[I^{\alpha\beta}\ =\ \sum_{j}\,m_{j}(|x_{j}|^{2}\delta^{\alpha\beta}-x_{j}^{ \alpha}x_{j}^{\beta}) \tag{4}\] referred to a coordinate system where the center of mass \[x_{\rm CM}^{\alpha}\ =\ \frac{\sum\limits_{j}m_{j}x_{j}^{\alpha}}{\sum \limits_{j}m_{j}} \tag{5}\] is at the origin. It is possible for different distributions of mass, i.e. different bodies, to agree in those properties. In that case, if we have access only to those bodies' overall motion - for example, if they are rigidly attached within identical opaque shells - then we will not be able to distinguish them. We can say that they are dynamically equivalent. The problem arises, to clarify and exemplify this ambiguity mathematically. The conditions for equality of total mass and inertia tensors, and zeroing of centers of mass, are all linear in the component mass variables \(m^{j}\). It is therefore natural, by analogy to our treatment of metamerism, to introduce a space of "dynamically invisible bodies". Dynamically invisible bodies are defined by distributions of mass such that \[0 = \sum_{j}m_{j}\] \[0 = \sum_{j}m_{j}x_{j}^{\alpha}\] \[0 = \sum_{j}\,m_{j}(|x_{j}|^{2}\delta^{\alpha\beta}-x_{j}^{\alpha}x_ {j}^{\beta}) \tag{6}\] In order for Eqn. (6) to be satisfied some of the \(m_{j}\) will need to be negative. Thus, dynamically invisible bodies, like invisible metamers, are not directly physical. But dynamically invisible bodies are relatively simple to construct, because their defining conditions are linear and highly symmetric. Dynamically invisible bodies are a useful conceptual tool, because we can construct physical dynamically equivalent objects from dynamically invisible bodies (i.e., their mass distributions) by adding invisible bodies to a positive mass distribution. Simple but flexible constructions based on these ideas can be used to generate complex, non-obvious examples of dynamically equivalent bodies. Here are two such constructions: 1. 
_Parity construction_: To any distribution of masses \(m_{j}\) at positions \(x_{j}^{\alpha}\), \(j=1,...,n\) whose center of mass is at the origin, add reflected negative masses at the inverted positions, according to \[m_{-j} = \ -\,m_{j}\] \[x_{-j}^{\alpha} = \ -\,x_{j}^{\alpha}\] (7) This creates a dynamically invisible body. 2. _Rotation construction_: To any distribution of masses \(m_{j}\) at positions \(x_{j}^{\alpha}\), \(j=1,...,n\) whose center of mass is at the origin, and whose inertia tensor is proportional to the unit tensor, and any rotation \(R_{\beta}^{\alpha}\), add negative masses at the rotated positions, according to \[m_{-j} = \ -\,m_{j}\] \[x_{-j}^{\alpha} = \ \ R_{\beta}^{\alpha}x_{j}^{\beta}\] (8) Here we can allow improper rotations, or use an equal-mass, equal-inertia tensor body of different form. Naturally, this begs the question of constructing non-trivial distributions whose inertia tensor is proportional to the unit tensor. Mass distributions that are symmetric under appropriate discrete subgroups of the rotation group, such as the symmetry groups of the Platonic solids, will have that property. 3. _Superposition_: The invisible bodies form a linear manifold: their mass distributions can be multiplied by constants, and added together freely. To ground the discussion, let us consider a minimal example of an invisible body. We put masses \(m_{1}\equiv m,m_{2}=ml_{1}/l_{2}\) at positions \(l_{1}\hat{z},-l_{2}\hat{z}\). The parity construction gives us a dynamically invisible body if we add in \(m_{3}=-m,m_{4}=-ml_{1}/l_{2}\) at positions \(-l_{1}\hat{z},l_{2}\hat{z}\). Now if we add this to a mass distribution \(M_{1},M_{2},M_{3},M_{4}\) at \(l_{1}\hat{z},-l_{2}\hat{z},-l_{1}\hat{z},l_{2}\hat{z}\) and \[M_{3} \geq m\] \[M_{4} \geq ml_{1}/l_{2} \tag{9}\] we will define a physical mass distribution. By varying \(m>0\) within these constraints, we produce a family of dynamically equivalent physical mass distributions. Untethered point masses are an extreme idealization of any actual rigid body, of course. We can make the foregoing construction more realistic by replacing the point masses with distributions of mass around the same centers, and by adding supporting material whose mass distribution is independent of \(m\) to fill the interstices. In this way, we reach practically realizable designs for dynamically equivalent rigid bodies. ## 3 Quantum Grey Boxes The state of a system in quantum mechanics is specified by a density matrix \(\rho\), which is required to be Hermitian and non-negative, with unit trace. Observables are represented by hermitian operators \(M\), and the expectation value of \(M\) in the state described by \(\rho\) is \({\rm Tr}\,\rho M\). Thus when a suite of measurements of the observables \(M_{j}\) on a system yield results \(v_{j}\), we learn \[{\rm Tr}\rho M_{j}\ =\ v_{j} \tag{10}\] These results might not determine \(\rho\) completely, and the issues arises, to parameterize the resulting ambiguity. (The measurements take us from a black box to a grey box.) Clearly, there is a strong family resemblance among this problem, the preceding one, and the color metamer problem. Following the same line of thought, we define a linear space of invisible density matrices consisting of hermitian matrices \(\rho^{\rm inv.}\) that obey the equations \[{\rm Tr}\rho^{\rm inv.} = 0\] \[{\rm Tr}\rho^{\rm inv.}M_{j} = 0 \tag{11}\] Invisible density matrices cannot be non-negative, so they do not describe physically realizable states. 
Basically, they contain negative probabilities. An extremely simple example may be helpful here, to ground the discussion. For a two-level system, physical density matrices have the form \[\rho\ =\ \left(\begin{array}{cc}a&\beta\\ \beta^{*}&1-a\end{array}\right) \tag{12}\] where \(0\leq a\leq 1\) is a real number and \(\beta\) is a complex number, subject to the constraint \[a(1-a)-|\beta|^{2}\geq 0 \tag{13}\] For measurement of \(\sigma_{3}\), the invisible state conditions for the hermitian matrix \(M\equiv\left(\begin{array}{cc}r&\gamma\\ \gamma^{*}&s\end{array}\right)\) read \[{\rm Tr}M = r+s\ =\ 0\] \[{\rm Tr}\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)M = r-s\ =\ 0 \tag{14}\] so \[M\ =\ \left(\begin{array}{cc}0&\gamma\\ \gamma^{*}&0\end{array}\right)\ =\ {\rm Re}\,\gamma\ \sigma_{1}-{\rm Im}\,\gamma\ \sigma_{2} \tag{15}\] Thus, we see that the space of invisible density matrices is spanned by a mixture of spin up and spin down in the \(\hat{x}\) direction with equal and opposite probabilities, together with a mixture of spin up and spin down in the \(\hat{y}\) direction, with equal and opposite probabilities. Suppose that we measure the expectation value of \(\sigma_{3}\) in the state represented by \(\rho\) to be \(v\), i.e. \[{\rm Tr}\,\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)\left(\begin{array}{cc}a&\beta\\ \beta^{*}&1-a\end{array}\right)\ =\ 2a-1\ =\ v \tag{16}\] This leaves \(\beta\) undetermined. Evidently, that ambiguity corresponds to motion within the space of invisible states. But (noting \(a=\frac{v+1}{2}\)) the physical states must obey \[1-v^{2}\geq 4|\beta|^{2} \tag{17}\] Thus we can only make use of a portion of the invisible state space, whose extent depends on \(v\). Negative probabilities as they appear in different contexts were the subject of a very entertaining presentation by Feynman, written up in [8]. Here, they can offer the same sorts of mathematical convenience and conceptual clarity as do the invisible metamers and invisible bodies in their contexts; and we can take over ideas from one problem to the others. We can construct distinct physically realizable density matrices that cannot be resolved by a given measurement suite, or we can compare the blind spots of different measurement suites, for example. A natural extension will bring in superdensity matrices [9] and time-dependent measurements. Then we will have a precise concept of invisible histories, which arise in any realistic measurement protocol. _Acknowledgements_: Thanks to Nathan Newman and Jordan Cotler for helpful comments. This work is supported by the U.S. Department of Energy under grant Contract Number DE-SC0012567, by the European Research Council under grant 742104, and by the Swedish Research Council under Contract No. 335-2014-7424.
2309.15757
Latent Graphs for Semi-Supervised Learning on Biomedical Tabular Data
In the domain of semi-supervised learning, the current approaches insufficiently exploit the potential of considering inter-instance relationships among (un)labeled data. In this work, we address this limitation by providing an approach for inferring latent graphs that capture the intrinsic data relationships. By leveraging graph-based representations, our approach facilitates the seamless propagation of information throughout the graph, effectively incorporating global and local knowledge. Through evaluations on biomedical tabular datasets, we compare the capabilities of our approach to other contemporary methods. Our work demonstrates the significance of inter-instance relationship discovery as practical means for constructing robust latent graphs to enhance semi-supervised learning techniques. The experiments show that the proposed methodology outperforms contemporary state-of-the-art methods for (semi-)supervised learning on three biomedical datasets.
Boshko Koloski, Nada Lavrač, Senja Pollak, Blaž Škrlj
2023-09-27T16:13:36Z
http://arxiv.org/abs/2309.15757v3
# Latent Graphs for Semi-Supervised Learning on Biomedical Tabular Data ###### Abstract In the domain of semi-supervised learning, the current approaches insufficiently exploit the potential of considering inter-instance relationships among (un)labeled data. In this work, we address this limitation by providing an approach for inferring latent graphs that capture the intrinsic data relationships. By leveraging graph-based representations, our approach facilitates the seamless propagation of information throughout the graph, effectively incorporating global and local knowledge. Through evaluations on biomedical tabular datasets, we compare the capabilities of our approach to other contemporary methods. Our work demonstrates the significance of inter-instance relationship discovery as practical means for constructing robust latent graphs to enhance semi-supervised learning techniques. The experiments show that the proposed methodology outperforms contemporary state-of-the-art methods for (semi-)supervised learning on three biomedical datasets. Keywords:Latent Graph Construction Node Classification Graph Neural Networks Multi Label Classification ## 1 Introduction Machine learning has undergone remarkable advancements in recent years, transforming numerous domains by enabling computers to learn patterns and make predictions from data. In the early stages of this field, there was a strong emphasis on learning from tabular data [26, 3]. Pioneering researchers dedicated their efforts to constructing simple yet interpretable models that capitalized on this data type, yielding impressive performance during inference. The focus on learning from tabular data stemmed from its ubiquity in various domains, where structured information is readily available in the form of rows and columns. The simplicity and comprehensibility of tabular data make it an ideal starting point for machine learning tasks, allowing for effective modeling and decision-making. These early approaches to machine learning extracted valuable insights and predictions by leveraging the inherent structure and relationships within tabular datasets. The constructed models exhibit remarkable interpretability, enabling human experts to comprehend and reason for the decision-making processes. This interpretability is pivotal in domains where transparent and accountable decision-making is crucial. In real-world machine learning, labeled data is often scarce but unlabeled data is abundant. To enhance predictive performance, several approaches have been proposed to incorporate this unlabeled data into the learning process. Referred to as semi-supervised methods, these approaches combine supervised learning with unsupervised learning techniques to leverage the untapped potential of unlabeled data. By doing so, they aim to improve the overall predictive capabilities of the models while reducing the reliance on labeled data, ultimately addressing the challenges associated with data scarcity and the high cost of annotation.For example, predictive clustering trees (PCTs) [27] learn cluster labels as features, which can be used to enrich the feature set of the training data. This can lead to improved predictive performance, especially when there is limited labeled data available. Contemporary approaches in semi-supervised learning focus on projecting the data into lower-dimensional spaces using techniques such as linear learners like SVD [11] or autoencoder [1, 8] neural network architectures. 
These methods exploit dimensionality reduction to capture essential patterns and extract informative representations, enabling enhanced learning and generalization capabilities. In this work, we present a semi-supervised learning approach that transforms the problem of instance classification into node classification. We first construct a latent graph from the data, and then learn a graph neural network on this graph. This approach allows us to leverage the relationships between instances in the data based on inter-instance similarity to improve the classification accuracy. The rest of the paper is structured as follows: Section 2 presents an overview of the related work, Section 3 elaborates on our method, Section 4 details the experimental setup and Section 5 presents the obtained results. Finally, the paper presents the conclusions and suggestions for future work in Section 6. ## 2 Related work Semi-supervised learning is concerned with leveraging weakly-labeled or unlabeled data in addition to labeled data. Early approaches concentrated on employing clustering methods such as KMeans [21] and DBSCAN [9] to learn cluster labels and incorporate them into the learning process [19]. Contemporary methods have harnessed latent space projections achieved through dimensionality reduction techniques such as SVD [29], tSNE [20], and UMAP [22]. Initially, a linear projection is learned on the entire dataset, followed by applying a learner on the transformed data. Such approaches have demonstrated efficacy across diverse domains[11], with notable applications including Latent Semantic Analysis [17]. Alternatively, other approaches focus on learning data reconstruction using autoencoders to enhance the learning process [1, 8]. The encoded latent representation of the input is then used to train a predictive model. Graphs provide a distinctive means of representing data, offering the potential to enhance the predictive capabilities of statistical and neural learners [30, 16]. Nevertheless, graphs are not always readily accessible in every scenario, prompting researchers to propose diverse approaches to tackle this challenge. Koloski et al. [18] focused on inducing a graph based on the similarity of given instances and their closest k-neighbours. Bornstein et al. [12] proposed learning the graph in a differentiable end-to-end scenario. Learning on the latent graphs can be done with message-passing architectures like Graph Convolutional Neural Networks (GCN) [14], which are inherently semi-supervised learners. To our knowledge, no work has addressed building latent graphs and learning from them on wide tabular data from the biomedical domain, characterized by a small number of instances described by large number of features. ## 3 Methodology Following the framework proposed in our previous work [18], this section presents the proposed two-step methodology consisting of a latent graph construction step (Section 3.1), followed by a classification step implemented through a two-layer graph convolutional network (Section 3.2). ### Latent Graph Construction Given a dataset consisting of \(N\) instances, the goal is to construct a graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) represents the set of vertices and \(\mathcal{E}\) represents the set of edges. In our case, the set of vertices corresponds to the instances in the data \(|\mathcal{V}|=\textit{N}\). 
To create the graph's edges, we calculate the cosine similarity between instance \(i\), represented by its feature vector \(X^{(i)}\), and instance \(j\), represented by \(X^{(j)}\): \[\textbf{cos}(X^{i},X^{j})=\frac{X^{(i)}\cdot X^{(j)}}{\|X^{(i)}\|\|X^{(j)}\|}\] This allows us to capture the similarity between instances and represent it as edge weights in the graph. The cosine similarity provides a measure of similarity based on the angle between the instance vectors. The values in the adjacency matrix lie in the interval \([-1,1]\). In this way, we obtain a full graph (similar to the previous work [18]); we only keep the edges whose cosine score exceeds some threshold \(\theta>0\), i.e., an edge between examples \(i\) and \(j\) is constructed as follows: \[e_{ij}=\begin{cases}1&\text{if }\textbf{cos}(X^{i},X^{j})\geq\theta\\ 0&\text{otherwise}\end{cases}\] ### Graph Convolutional Network We employ a two-layer Graph Convolutional Network (GCN) [14] to exploit the latent graph structure and learn meaningful representations of the instances. Formally, given an adjacency matrix \(A\) and a node feature matrix \(X\), a GCN performs node representation learning through a sequence of graph convolutional layers. The node representations are updated at each layer by aggregating information from neighboring nodes. This aggregation is achieved by combining the features of each node \(v\) with those of its neighbors \(\mathcal{N}(v)\), weighted by the graph structure. The computation can be expressed as: \[h_{v}^{(l+1)}=\sigma\left(\sum_{u\in\mathcal{N}(v)}\frac{h_{u}^{(l)}W^{(l)}}{ \sqrt{|\mathcal{N}(v)|\cdot|\mathcal{N}(u)|}}\right)\] where \(h_{v}^{(l)}\) denotes the representation of node \(v\) at layer \(l\), \(\sigma\) is the activation function, \(W^{(l)}\) is the learnable weight matrix at layer \(l\), and \(\mathcal{N}(v)\) represents the set of neighboring nodes of \(v\). Finally, a linear classification layer is applied to predict the class probabilities \[\mathbf{y}=\text{softmax}(W^{(2)}\mathbf{h})\] where \(\mathbf{y}\) represents the predicted class probabilities, \(W^{(2)}\) is the weight matrix of the linear layer, and \(\mathbf{h}\) is the output of the last GCN layer. We train our graph convolutional network (GCN) model using the Adam optimizer [13] with a learning rate of 0.01 and weight decay of 5e-4. To prevent overfitting and achieve optimal performance, we employ early stopping [25]. The training is stopped if the validation loss does not improve for 10 epochs. The GCN architecture is implemented using the PyTorch library [24]. ## 4 Experimental Setting ### Data In our study, we performed experiments on a collection of biomedical datasets characterized by a wide format, where the number of columns exceeded the number of instances, featuring instance counts ranging from 32 to 801 and the number of features spanning from 661 to 20,531. Across all datasets, the features are numerical. All datasets except \(Multi_{A}\) are unbalanced. Table 1 contains more comprehensive statistics for the used datasets. 
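A minimal sketch of the two-step procedure described in Section 3, the thresholded cosine-similarity graph of Section 3.1 followed by the two-layer GCN of Section 3.2, written in plain PyTorch. The feature dimensionality, hidden size, threshold, and training mask below are illustrative placeholders rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn.functional as F

def latent_graph(X, theta=0.5):
    """Binary adjacency from pairwise cosine similarity, thresholded at theta (Sec. 3.1)."""
    Xn = F.normalize(X, dim=1)             # unit-norm rows
    S = Xn @ Xn.T                          # cosine similarities in [-1, 1]
    A = (S >= theta).float()
    A.fill_diagonal_(1.0)                  # keep self-loops, as is customary for GCNs
    return A

def gcn_norm(A):
    """Symmetrically normalized adjacency D^{-1/2} A D^{-1/2}."""
    d = A.sum(dim=1)
    d_inv_sqrt = torch.where(d > 0, d.pow(-0.5), torch.zeros_like(d))
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.W1 = torch.nn.Linear(in_dim, hidden_dim, bias=False)
        self.W2 = torch.nn.Linear(hidden_dim, num_classes, bias=False)

    def forward(self, A_hat, X):
        h = torch.relu(A_hat @ self.W1(X))
        return A_hat @ self.W2(h)          # logits; the softmax is applied inside the loss

# One training step on toy data (shapes and hyperparameters are placeholders):
X = torch.randn(100, 661)                  # 100 instances, 661 features
y = torch.randint(0, 3, (100,))            # 3 classes
train_mask = torch.rand(100) < 0.8         # labelled nodes for this fold

A_hat = gcn_norm(latent_graph(X, theta=0.5))
model = TwoLayerGCN(661, 64, 3)
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

logits = model(A_hat, X)
loss = F.cross_entropy(logits[train_mask], y[train_mask])
opt.zero_grad(); loss.backward(); opt.step()
```

In practice the adjacency is rebuilt for every cross-validation fold, since the node set, and hence the latent graph, changes with the fold.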
\begin{table} \begin{tabular}{l c c c|c c|c c} \hline **Dataset** & \multicolumn{2}{c|}{**Instances Features Classes**} & \multicolumn{4}{c}{**Class Distribution (\%)**} \\ & & & Class 1 & Class 2 & Class 3 & Class 4 & Class 5 \\ \hline \(Multi_{B}\)[10] & 32 & 5565 & 4 & 34.38 & 28.12 & 21.88 & 15.62 & – \\ \(Breat_{B}\)[10] & 49 & 1213 & 4 & 38.78 & 24.49 & 22.45 & 14.29 & – \\ \(DLBCL_{C}\)[10] & 58 & 3795 & 4 & 29.31 & 27.59 & 22.41 & 20.69 & – \\ \(Breat_{A}\)[10] & 98 & 1213 & 3 & 52.04 & 36.73 & 11.22 & – & – \\ \(Multi_{A}\)[10] & 103 & 5565 & 4 & 27.18 & 25.24 & 25.24 & 22.33 & – \\ \(DLBCL_{D}\)[10] & 129 & 3795 & 4 & 37.98 & 28.68 & 18.6 & 14.73 & – \\ \(DLBCL_{A}\)[10] & 141 & 661 & 3 & 35.46 & 34.75 & 29.79 & – & – \\ \(DLBCL_{B}\)[10] & 180 & 661 & 3 & 48.33 & 28.33 & 23.33 & – & – \\ \(TCGA\)[28] & 801 & 20531 & 5 & 37.45 & 18.23 & 17.6 & 16.98 & 9.74 \\ \hline \end{tabular} \end{table} Table 1: Dataset summary. For each dataset, we report the number of instances, features, classes, and the class distribution. ### Experimental Evaluation To assess the performance of our proposed methodology, we adopted a stratified 10-fold cross-validation strategy. This approach ensures that each fold includes a representative distribution of the target classes, reducing potential bias in the evaluation process. The dataset was randomly partitioned into 10 subsets, each containing an approximately equal distribution of samples from every class. We performed training and testing of our model iteratively, with each fold acting as the testing fold while the remaining nine folds were used for training. This process was repeated for all the folds, resulting in a robust evaluation of our approach. #### 4.2.1 Evaluation of our method For each fold of the 10-fold cross-validation, we first generate a graph for each fold. Since the nodes vary with each fold (thus the input to the GCN), the resulting sparsified graph differs across every cross-validation iteration. #### 4.2.2 Baselines In the experiments, we consider various baseline classifiers, ranging from simple linear classifiers such as decision trees (DTs) [4], oblique predictive clustering trees (SpyCTs) [27], and support vector machines (SVMs) [6] to ensemble methods such as random forests (RFs) [3] and XGBoost (XGB) [5]. Next, we explain the methods used to leverage signals from the unlabeled data to aid the model in model training. We use three well-established linear latent space projection methodologies, t-SNE [20], UMAP [22], and SVD [29], to reduce high-dimensional data into lower-dimensional representations. These methodologies convert the problem space from the original to a latent space where we can learn from labeled and unlabeled instances. After applying dimensionality reduction, the methods convert the high-dimensional data into lower-dimensional spaces. In this study, we exclude the comparison with autoencoder networks due to the scarcity of the data. In each cross-validation step, we first learn the shared lower-dimensional space of the whole dataset, learn a classifier (e.g., a DT or an RF) only on the train folds, and apply it to the test fold. ## 5 Results ### Experimental results We extensively evaluated our method compared to the base 5 linear learners and their corresponding combinations for each problem. Table 2 presents the results. 
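To make the baseline protocol above concrete, the following sketch shows one cross-validation run for a semi-supervised SVD baseline, in which the projection is learned on the full dataset and the classifier only on the training folds. The number of components, the choice of classifier, and the macro F1 score are illustrative assumptions, not the exact settings behind Table 2.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.decomposition import TruncatedSVD
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def evaluate_svd_baseline(X, y, n_components=32, n_splits=10, seed=0):
    """Semi-supervised SVD baseline: the projection is fitted on *all* instances
    (labelled and unlabelled/test alike); the classifier sees only the training fold."""
    scores = []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        Z = TruncatedSVD(n_components=n_components, random_state=seed).fit_transform(X)
        clf = RandomForestClassifier(random_state=seed).fit(Z[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(Z[test_idx]), average="macro"))
    return float(np.mean(scores)), float(np.std(scores))
```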
The results are presented in Table 2, demonstrate the competitive performance of our method, outperforming the simple baselines DT, RF, SpyCT, and XGB consistently while achieving comparable results to the semi-supervised methods (where we introduced the unlabeled data and performed dimensionality reduction). The inherent local-structure learning dynamics and random initialization of t-SNE [15] render it less fitting for direct semi-supervised space learning tasks, resulting in lower performance when combined with the baseline learners. The base SVM method was superior to other methods on the \(TCGA\) and \(Multi_{B}\) datasets. Notably, our method exhibited superior performance on the \(DLBCL\)\(A\), \(B\), and \(C\) datasets and came within a 2% margin of the performance on the TCGA dataset. However, our method faced challenges when applied to the \(Breast\) datasets, characterized by limited data availability. Consequently, the performance of our method was suboptimal in this particular scenario. The semi-supervised methods resulted in a substantial performance boost for the simpler methods. This enhancement enabled the SpyCT method to outperform all other methods on the \(Breast\)\(A\) and \(B\) datasets and the \(DLBCL_{B}\) dataset. #### 4.2.2 Statistical tests We employ the Nemenyi test (Figure 1) with post-hoc correction [7] at a significance level of 0.01. Red lines indicate statistically insignificant differences based on average scores. We choose the best-performing method for each model family, whether standalone or combined with a semi-supervised learner. Our method and the combination of SpyCT-SVD exhibit no statisti \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Dataset & \(Breast_{A}\) & \(Breast_{B}\) & \(DLBCL_{A}\) & \(DLBCL_{B}\) & \(DLBCL_{C}\) & \(DLBCL_{D}\) & \(Multi_{A}\) & \(Multi_{B}\) & \(TCGA\) & \(Avg.\) \\ Method & & & & & & & & & \\ \hline ours & 0.939\({}_{0.05}\) & 0.845\({}_{0.22}\) & **0.98\({}_{0.013}\)** & **0.956\({}_{0.005}\)** & **0.88\({}_{0.109}\)** & 0.777\({}_{1.156}\) & 0.951\({}_{0.006}\) & 0.867\({}_{0.221}\) & 0.996\({}_{0.006}\) & **0.910** \\ \hline DT & 0.89\({}_{0.13}\) & 0.665\({}_{0.19}\) & 0.723\({}_{0.121}\) & 0.744\({}_{0.117}\) & 0.57\({}_{0.216}\) & 0.572\({}_{0.121}\) & 0.94\({}_{0.006}\) & 0.75\({}_{0.247}\) & 0.978\({}_{0.023}\) & 0.759 \\ DT-svd & 0.867\({}_{0.102}\) & 0.86\({}_{0.162}\) & 0.937\({}_{0.058}\) & 0.878\({}_{0.054}\) & 0.657\({}_{0.181}\) & 0.635\({}_{0.158}\) & 0.872\({}_{0.134}\) & 0.658\({}_{0.129}\) & 0.938\({}_{0.037}\) & 0.811 \\ DT-tsne & 0.50\({}_{1.356}\) & 0.205\({}_{1.127}\) & 0.503\({}_{0.148}\) & 0.578\({}_{0.158}\) & 0.203\({}_{0.124}\) & 0.210\({}_{0.112}\) & 0.414\({}_{0.114}\) & 0.117\({}_{0.103}\) & 0.966\({}_{0.21}\) & 0.410 \\ DT-umap & 0.927\({}_{0.082}\) & 0.645\({}_{0.205}\) & 0.943\({}_{0.07}\) & 0.928\({}_{0.075}\) & 0.673\({}_{0.171}\) & 0.558\({}_{0.228}\) & 0.931\({}_{0.078}\) & 0.492\({}_{0.27}\) & 0.993\({}_{0.011}\) & 0.78 \\ \hline RF & 0.889\({}_{0.07}\) & 0.75\({}_{0.163}\) & 0.944\({}_{0.076}\) & 0.928\({}_{0.061}\) & 0.757\({}_{0.202}\) & 0.746\({}_{0.113}\) & **0.98\({}_{0.04}\)** & 0.833\({}_{0.224}\) & 0.995\({}_{0.006}\) & 0.869 \\ RF-svd & 0.929\({}_{0.05}\) & 0.82\({}_{0.189}\) & 0.95\({}_{0.046}\) & 0.911\({}_{0.09}\) & 0.783\({}_{0.198}\) & 0.752\({}_{0.132}\) & 0.931\({}_{0.119}\) & 0.733\({}_{0.238}\) & 0.981\({}_{0.01}\) & 0.866 \\ RF-tsne & 0.653\({}_{0.134}\) & 0.41\({}_{0.202}\) & 0.568\({}_{0.14}\) & 0.70\({}_{1.141}\) & 0.22\({}_{1.409}\) & 
0.411\({}_{0.119}\) & 0.497\({}_{0.172}\) & 0.092\({}_{0.142}\) & 0.992\({}_{0.008}\) & 0.505 \\ RF-umap & 0.919\({}_{0.075}\) & 0.77\({}_{0.155}\) & 0.964\({}_{0.048}\) & 0.911\({}_{0.087}\) & 0.807\({}_{1.946}\) & 0.66\({}_{0.122}\) & 0.913\({}_{0.081}\) & 0.558\({}_{0.244}\) & 0.998\({}_{0.005}\) & 0.833 \\ \hline SVM & 0.56\({}_{0.168}\) & 0.73\({}_{0.126}\) & 0.96\({}_{0.055}\) & 0.876\({}_{0.108}\) & 0.847\({}_{1.129}\) & 0.650\({}_{0.155}\) & 0.976\({}_{0.064}\) & **0.91\({}_{0.153}\)** & **0.999\({}_{0.004}\)** & 0.827 \\ SVM-avd & 0.919\({}_{0.075}\) & 0.71\({}_{0.145}\) & 0.910\({}_{0.167}\) & 0.76\({}_{0.048}\) & 0.768\({}_{0.197}\) & 0.765\({}_{0.197}\) & 0.940\({}_{0.112}\) & 0.854 \\ SVM-tsne & 0.507\({}_{0.148}\) & 0.395\({}_{0.21}\) & 0.512\({}_{0.145}\) & 0.633\({}_{0.125}\) & 0.177\({}_{0.188}\) & 0.386\({}_{0.112}\) & 0.34\({}_{0.029}\) & 0.033\({}_{0.1}\) & 0.995\({}_{0.006}\) & 0.442 \\ SVM-umap & 0.929\({}_{0.079}\) & 0.69\({}_{0.114}\) & 0.964\({}_{0.048}\) & 0.90\({}_{0.102}\) & 0.78\({}_{0.183}\) & 0.684\({}_{0.213}\) & 0.922\({}_{0.125}\) & 0.642\({}_{0.183}\) & 0.998\({}_{0.005}\) & 0.834 \\ \hline SpyCT & 0.939\({}_{0.05}\) & 0.675\({}_{0.157}\) & 0.951\({}_{0.063}\) & 0.944\({}_{0.043}\) & 0.57\({}_{0.074}\) & 0.638\({}_{0.137}\) & 0.96\({}_{0.066}\) & 0.342\({}_{0.058}\) & 0.808\({}_{0.081}\) & 0.759 \\ SpyCT-svd & **0.959\({}_{0.05}\)** & **0.88\({}_{0.133}\)** & 0.971\({}_{0.067}\) & **0.956\({}_{0.054}\)** & 0.86\({}_{0.155}\) & **0.7 cally significant difference. The standalone SVM is the third-ranked method, which performs similarly to RF and XGB-SVD. Meanwhile, the Decision Tree, a more straightforward method, failed to beat the other models. More granular comparison can be seen on Figure 1. We compared our method to the Random Forest and the SVM baseline using the _Bayesian t-test_[2]. We conducted 10 experiment runs for both methods, each testing the data on ten cross-validation folds. The probability of our model being better than the Random Forest was 90.57%, while the probability of both models being equal was 6.87%. By 'equal,' we mean they are within a 1% margin of difference. As for the SVM model, it is better than ours with probability 0.55%, equivalent 0.7%, and worst with a probability of 98.73% #### 4.2.2 Time efficiency Next, we compared the time efficiency of our method to the baselines and the semi-supervised feature enrichment method. We measured the time for constructing the representations for each fold, learning on the training data, and predicting the test data. The results of the comparison are shown in Figure 4: Bayesian comparison of selected algorithm pairs. Figure 5. Our method outperformed all of the baselines time-wise on all of the features. Even when we applied lower space projection, our method still showed superior performance compared to other methods, except the application of SVD. ### Abalation study #### 5.2.1 Latent Graph Structure In Table 3, we explore the latent graph structures determined by optimal thresholds for each dataset. The \(TCGA\) dataset, the largest one, has 801 nodes and 23,903 edges, showing that it's a dense network. This dataset has an assortativity of 0.74, meaning that nodes with many connections tend to connect to other nodes with many connections. Also, its transitivity is 0.81, showing a higher chance of triangle-shaped connections in the graph. Our approach achieved the most optimal scores on datasets \(DLBCL_{A}\), \(DLBCL_{B}\), and \(DLBCL_{C}\). 
These datasets share notable transitivity, clustering coefficients, and closeness centrality. Suggesting local well-connected latent graphs where nodes are not only well-connected but can also access other nodes with minimal hops. Only \(DLBCL_{A}\) and \(Multi_{A}\) were connected among all latent graphs. \(DLBCL_{A}\) had a diameter of 3 and an average shortest path of 1.71, while \(Multi_{A}\) had a diameter of 5 and an average shortest path of 2.48. However, datasets \(DLBCL_{D}\) and \(DLBCL_{C}\) stand out with lower clustering coefficients, hinting a reduced tendency towards cliquish behaviors. Our method exhibits versatility as it performs effectively on connected and unconnected components. Figure 5: Time comparison of the logarithm of the average time (in seconds) needed to learn a model, averaged across datasets. To enhance visibility, a logarithmic scale is applied. Our method is colored in blue, the lower the score the better. #### 3.3.2 Latent Graph Similarity Thresholding To assess the thresholding parameter, \(\theta\), we adopted a methodology based on selecting the parameter yielding the lowest training loss through early stopping. For each dataset, we observed that different thresholding parameters appeared optimal. However, the optimal \(\theta\) per dataset rendered graphs that contained 10% to 15% of the initially constructed edges in the full graph. Further insights regarding the thresholding parameter can be found in Figure 6. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline & & & & & Global & Avg. & Avg. & Avg. & Num. \\ Dataset & Nodes & Edges & Homophily & Interrophy & Transitivity & Assortativity & Clustering & Degree & Eigenvector & Closeness & Connected \\ & & & & & & Coefficient & Centrality & Centrality & Centrality & Components \\ \hline \(Multin\) & 32 & 14 & 0.36 & 0.64 & 0.69 & -0.46 & 0.15 & 0.03 & 0.08 & 0.04 & 25 \\ \(Bresal_{2}\) & 49 & 63 & 0.70 & 0.30 & 0.52 & 0.49 & 0.35 & 0.05 & 0.07 & 0.08 & 15 \\ \(DLBCL_{c}\) & 58 & 10 & 0.90 & 0.10 & 0.27 & 0.18 & 0.04 & 0.01 & 0.04 & 0.01 & 49 \\ \(Breast_{A}\) & 98 & 627 & 0.93 & 0.07 & 0.68 & 0.45 & 0.52 & 0.13 & 0.06 & 0.30 & 5 \\ \(Multi_{A}\) & 103 & 1002 & 0.84 & 0.16 & 0.73 & 0.48 & 0.69 & 0.19 & 0.05 & 0.41 & 1 \\ \(DLBCL_{0}\) & 129 & 41 & 0.59 & 0.41 & 0.40 & 0.30 & 0.05 & 0.00 & 0.02 & 0.01 & 100 \\ \(DLBCL_{4}\) & 141 & 3075 & 0.72 & 0.28 & 0.57 & 0.06 & 0.57 & 0.31 & 0.08 & 0.58 & 1 \\ \(DLBCL_{b}\) & 180 & 708 & 0.89 & 0.11 & 0.49 & 0.34 & 0.41 & 0.04 & 0.04 & 0.22 & 21 \\ \(TCGA\) & 801 & 23903 & 0.98 & 0.02 & 0.80 & 0.74 & 0.66 & 0.07 & 0.01 & 0.31 & 10 \\ \hline \hline \end{tabular} \end{table} Table 3: Statistics of extracted latent graph structures, including metrics such as number of nodes, edges, homophily, heterophily, transitivity, assortativity, global clustering coefficient, average degree centrality, eigenvector centrality, closeness centrality, and number of connected components. Figure 6: The distribution of similarities and the threshold selected for constructing the latent graph based on cosine similarity. #### 4.2.3 Qualitative evaluation In Figure 10, the graphs are depicted based on their 2D low-dimensional projection and are color-coded by the macro F1-score. Next, we explore semantic similarities between latent graphs. We embed the graphs using the Graph2Vec [23] method with default parameters and subsequently embed them with UMAP [22] in two dimensions. We observe that graphs with similar foundational structures cluster closely and perform similarly. 
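A sketch of how the structural statistics reported in Table 3 can be obtained from a thresholded adjacency matrix using networkx; the edge-homophily computation shown here is one common convention and is an assumption about the exact definition used in our analysis.

```python
import networkx as nx
import numpy as np

def latent_graph_stats(A, labels):
    """Structural statistics of a thresholded latent graph (cf. Table 3).
    A is a binary adjacency matrix, labels the node classes."""
    A = np.asarray(A).copy()
    np.fill_diagonal(A, 0)                      # drop self-loops before computing metrics
    G = nx.from_numpy_array(A)
    same = sum(1 for u, v in G.edges() if labels[u] == labels[v])
    homophily = same / G.number_of_edges() if G.number_of_edges() else float("nan")
    return {
        "nodes": G.number_of_nodes(),
        "edges": G.number_of_edges(),
        "homophily": homophily,
        "transitivity": nx.transitivity(G),
        "assortativity": nx.degree_assortativity_coefficient(G),
        "avg_clustering": nx.average_clustering(G),
        "avg_degree_centrality": float(np.mean(list(nx.degree_centrality(G).values()))),
        "avg_closeness_centrality": float(np.mean(list(nx.closeness_centrality(G).values()))),
        "connected_components": nx.number_connected_components(G),
    }
```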
Figure 10: Visualization of the embedded graphs of each dataset. ## 6 Conclusions and Further Work In conclusion, our work presents a novel and time-efficient approach that leverages the construction of a latent graph based on instance similarity and utilizes graph convolutional networks. The results obtained highlight the superior performance of our method in scenarios with scarce instances and a large number of classes. Our method achieves this performance while maintaining computational efficiency, making it practical for real-world applications. For future work, we propose using physics-inspired methods to handle heterogeneity better. We also propose exploring approaches that perform automatic graph rewriting and thresholding, rather than the simple thresholding mechanism used in this work. ## Acknowledgements The authors acknowledge the financial support from the Slovenian Research Agency through research core funding (No. P2-0103). Additionally, the first author's work was supported by a Young Researcher Grant PR-12394. ## Availability The code and data to replicate the experiments are available at the following link [https://github.com/bkolosk1/latent_graph_tabular_data](https://github.com/bkolosk1/latent_graph_tabular_data).
2309.03135
Millimeter Wave Thin-Film Bulk Acoustic Resonator in Sputtered Scandium Aluminum Nitride
This work reports a millimeter wave (mmWave) thin-film bulk acoustic resonator (FBAR) in sputtered scandium aluminum nitride (ScAlN). This paper identifies challenges of frequency scaling sputtered ScAlN into mmWave and proposes a stack and new fabrication procedure with a sputtered Sc0.3Al0.7N on Al on Si carrier wafer. The resonator achieves electromechanical coupling (k2) of 7.0% and quality factor (Q) of 62 for the first-order symmetric (S1) mode at 21.4 GHz, along with k2 of 4.0% and Q of 19 for the third-order symmetric (S3) mode at 55.4 GHz, showing higher figures of merit (FoM, k2xQ) than reported AlN/ScAlN-based mmWave acoustic resonators. The ScAlN quality is identified by transmission electron microscopy (TEM) and X-ray diffraction (XRD), identifying the bottlenecks in the existing piezoelectric-metal stack. Further improvement of ScAlN/AlN-based mmWave acoustic resonators calls for better crystalline quality from improved thin-film deposition methods.
Sinwoo Cho, Omar Barrera, Pietro Simeoni, Emily N. Marshall, Jack Kramer, Keisuke Motoki, Tzu-Hsuan Hsu, Vakhtang Chulukhadze, Matteo Rinaldi, W. Alan Doolittle, Ruochen Lu
2023-09-06T16:15:29Z
http://arxiv.org/abs/2309.03135v1
# Millimeter Wave Thin-Film Bulk Acoustic Resonator in Sputtered Scandium Aluminum Nitride ###### Abstract This work reports a millimeter wave (mmWave) thin-film bulk acoustic resonator (FBAR) in sputtered scandium aluminum nitride (ScAlN). This paper identifies challenges of frequency scaling sputtered ScAlN into mmWave and proposes a stack and new fabrication procedure with a sputtered Sc0.3Al0.7N on Al on Si carrier wafer. The resonator achieves electromechanical coupling (\(k^{2}\)) of 7.0% and quality factor (\(Q\)) of 62 for the first-order symmetric (S1) mode at 21.4 GHz, along with \(k^{2}\) of 4.0% and \(Q\) of 19 for the third-order symmetric (S3) mode at 55.4 GHz, showing higher figures of merit (FoM, \(k^{2}\times Q\)) than reported AlN/ScAlN-based mmWave acoustic resonators. The ScAlN quality is identified by transmission electron microscopy (TEM) and X-ray diffraction (XRD), identifying the bottlenecks in the existing piezoelectric-metal stack. Further improvement of ScAlN/AlN-based mmWave acoustic resonators calls for better crystalline quality from improved thin-film deposition methods. acoustic resonators, piezoelectric devices, scandium aluminum nitride (ScAlN), millimeter-wave devices, thin-film bulk acoustic resonator (FBAR), thin-film devices ## I Introduction Radio frequency (RF) acoustic devices are widely used as sub-6 GHz front-end filters [1, 2, 3, 4]. Acoustic resonators, i.e., key building blocks for filters, piezoelectrically convert the electromagnetic (EM) energy to mechanical vibrations and efficiently store energy at resonances. Such transduction offers two key advantages over EM counterparts, namely, miniature footprints and better frequency selectivity [2]. Among different piezoelectric RF acoustic platforms, current commercial FBARs have been dominated by sputtered aluminum nitride (AlN) and scandium aluminum nitride (ScAlN), since they possess good acoustic properties, i.e., high quality factor (\(Q\)) and electromechanical coupling (\(k^{2}\)), and their well-established microfabrication process can be integrated into semiconductor industries effortlessly [2, 5]. With the development of wireless communication into millimeter wave (mmWave, \(>\)30 GHz) bands, it would be great to frequency scale ScAlN/AlN devices while maintaining high performance for future RF front ends [3]. However, prior studies show that frequency scaling ScAlN/AlN FBARs is challenging (Fig. 1), marked by degraded figures of merit (FoM, \(k^{2}\times Q\)) at higher frequencies [6, 7, 8, 9]. First, it is non-trivial to deposit a high-quality piezoelectric and metal stack, as the required thickness is sub-100 nm for mmWave operation, imposing challenges for conventional sputtering techniques while causing excessive acoustic loss from the thin-film structure [10]. Second, as the resonator's lateral dimensions significantly scale down for 50 \(\Omega\) systems, acoustic designs and microfabrication procedures for miniature devices are not well studied. More recently, FBARs using better ScAlN/AlN films synthesized by metal-organic vapor phase epitaxy (MOVPE) and molecular beam epitaxy (MBE) have been reported, but the fundamental thickness extensional FBARs show results comparable to their sputtered counterparts at mmWave (Fig. 1) [7, 8, 11, 12]. It is unclear whether the film quality or design/fabrication is the bottleneck for current ScAlN/AlN mmWave FBARs. In this work, we report a mmWave FBAR using sputtered Sc0.3Al0.7N. 
The resonator achieves \(k^{2}\) of 7.0% and \(Q\) of 62 for the first-order symmetric (S1) mode at 21.4 GHz, along with \(k^{2}\) of 4.0% and \(Q\) of 19 for the third-order symmetric (S3) mode at 55.4 GHz, showing a higher FoM than reported AlN/ScAlN-based mmWave acoustic resonators. The results are enabled by both the acoustic design and a new fabrication procedure. Material-level analysis indicates that the bottleneck for further performance enhancement is the crystalline quality, calling for improved thin-film deposition methods. Fig. 1: Survey of reported resonators above 15 GHz. Fig. 2: (a) Top and (b) cross-sectional view of ScAlN mmWave FBAR. ## II Design and Simulation The FBAR top and cross-sectional views are shown in Fig. 2 (a)-(b). The film stack consists of 85 nm thick Sc\({}_{0.3}\)Al\({}_{0.7}\)N sandwiched between 37 nm thick aluminum (Al) top and bottom electrodes, with signal and ground traces on the top along with a floating bottom electrode. Such thickness is selected to enable S3 mode around 50 GHz. Due to the high capacitance density of the thin film, the lateral dimensions of the resonant body are designed as 7 \(\upmu\)m by 16 \(\upmu\)m. The buslines and probing pads are thickened to 300 nm for less routing resistance. A key differentiator for this work is that we start with uniformly sputtering ScAlN on sputtered Al on a silicon (Si) carrier wafer before passivating the majority of the substrate with silicon dioxide (SiO\({}_{2}\)) except for the active region. First, this allows sputtered films with better quality than those on patterned bottom electrodes, especially near the edge of the patterned bottom electrodes. Second, the SiO\({}_{2}\) reduces the feedthrough-induced parasitic capacitance and resistance, which is more pronounced at mmWave [13]. The fabrication process for such structures will be explained in Section III. The proposed FBAR is simulated [Fig. 3 (a)-(b)] using COMSOL finite element analysis (FEA) with a mechanical \(Q\) value of 50, estimated from earlier mmWave AlN FBARs [14]. In operation, the electric field between the top and bottom electrodes excites first-order symmetric (S1) and third-order symmetric (S3) modes via the piezoelectric coefficient \(e_{33}\). S1 at 15.6 GHz shows \(k^{2}\) of 9.5%, while S3 at 49.3 GHz shows \(k^{2}\) of 6.6%. \(k^{2}\) follows the equation definition in Fig. 7 (e) [15]. The mode shapes are plotted in Fig. 3, confirming that the stack selection maximizes \(k^{2}\) for S3, as the stress nodes lie in the Al-ScAlN interfaces. ## III Material Analysis and Fabrication The fabrication starts with sputtering 37 nm of Al and 85 nm of ScAlN onto a high-resistivity ( \(>\) 10,000 \(\Omega\)-cm) Si \(<\)100\(>\) wafer with an Evatec Clusterline 200 sputtering tool without breaking vacuum. The quantitative material analysis starts with X-ray diffraction (XRD) in Fig. 4 (a). The full width at half maximum (FWHM) of the rocking curve is 7.6\({}^{\circ}\), indicating that the sputtered thin film has non-ideal crystal quality, given that it is sputtered on top of metal, while the overall thickness is sub-100 nm. Fig. 4 (b) shows the atomic force microscopy (AFM) and the surface roughness of the sputtered ScAlN film. The surface is generally flat, with a few spikes caused by the defects formed during Al deposition. The film quality of the stack is validated using transmission electron microscopy (TEM) images shown in Fig. 5. The crystal shows misorientation angles as large as 18\({}^{\circ}\) [Fig. 
5 (a)], which is likely caused by the deformation of the Al layer [Fig. 5 (c)] during sputtering (350 \({}^{\circ}\)C process) and could be overcome in the future with platinum (Pt) electrodes upon further development. Such moderate film quality is a bottleneck for future mmWave ScAlN FBARs, calling for better deposition methods. The fabrication process is shown in Fig. 6 (a). First, the regions outside the active areas composed of ScAlN, Al, and Si layers are etched by AJA Ion Mill. The etched regions are then passivated with a low-temperature (100 \({}^{\circ}\)C) plasma-enhanced chemical vapor deposition (PECVD) deposition of 200 nm of SiO\({}_{2}\), providing electrical isolation while preventing top electrode disconnection due to the height steps. Next, release windows are defined and etched using the AJA Ion Mill. The top 37 nm Al electrodes and 300 nm thickened Al buslines are then deposited using a KJL e-beam evaporator. Structural release with a xenon difluoride (XeF\({}_{2}\)) Si isotropic etch is applied to ensure energy confinement within the FBAR. The optical image of the fabricated FBAR is shown in Fig. 6 (b). Fig. 4: (a) XRD symmetric rocking curve of 85 nm sputtered thin-film ScAlN. (b) ScAlN film surface roughness measurement by AFM. Fig. 5: (a) Cross-sectional TEM images and magnified views of the (b) ScAlN-Al and (c) Al-Si interfaces. Fig. 6: (a) Device fabrication process and (b) microscopic image of the FBAR. ## IV Measurement and Discussion The resonator is measured using a Keysight vector network analyzer (VNA) in room temperature air at \(-15\) dBm power level. Two-port measurement is performed [16]. The measured admittance amplitude and phase are plotted in Fig. 7 (a)-(b), showing S1 at 21.4 GHz and S3 at 55.4 GHz. The minor resonance between S1 and S3 is the second-order antisymmetric (A2) mode due to the slight thickness difference in the top and bottom electrodes. The admittance curves from Fig. 7 (a)-(b) are magnified and plotted in Fig. 7 (c) for S1 and Fig. 7 (d) for S3. To extract the resonator performance, a modified mmWave modified Butterworth Van Dyke (mBVD) model is used [Fig. 7 (e)], adding series routing resistance (\(R_{s}\)) and inductance (\(L_{s}\)) for capturing the EM effects. The EM parameters, i.e., \(R_{s}\), \(L_{s}\), and static capacitance \(C_{0}\), are first fitted from the admittance amplitude and phase [Fig. 7 (a)-(b)], before adding in the motional elements for extracting \(Q\) and \(k^{2}\) in Fig. 7 (e). The fitted curves are plotted in Fig. 7 (a)-(d). The extracted parameters are listed in Fig. 7. The resonator achieves \(k^{2}\) of 7.0% and \(Q\) of 62 for S1, along with \(k^{2}\) of 4.0% and \(Q\) of 19 for S3, leading to a FoM of 4.34 and 0.76 respectively. \(k^{2}\) is extracted via fitting in Fig. 7. Note that \(Q\) here is effectively the anti-resonance quality factor \(Q_{p}\). To further validate, Fig. 7 (f)-(g) display the Bode \(Q\)[17] for S1 and S3, respectively. The maximum Bode \(Q\) after smoothing is 66 for S1 and 17 for S3. To compare with the state of the art (SoA), a survey of FoM for reported resonators above 15 GHz is reported in Fig. 1, including both the AlN/ScAlN [6, 7, 8, 9] and lithium niobate (LiNbO\({}_{3}\)) demonstrations [18, 19, 20, 21, 22]. Despite moderate film quality, our work shows comparable FoM to earlier ScAlN/AlN work for the 21.4 GHz, while the 55.4 GHz devices show higher FoM than earlier ScAlN/AlN works, proving the effectiveness of the new stack. 
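For reference, the admittance of the mBVD circuit of Fig. 7 (e), a motional branch in parallel with the static capacitance \(C_{0}\) behind the series routing elements \(R_{s}\) and \(L_{s}\), can be written down directly. The sketch below is only illustrative: the element values are placeholders chosen to put the series resonance near 21 GHz and are not the fitted parameters reported above.

```python
import numpy as np

def mbvd_admittance(f, C0, Rm, Lm, Cm, Rs=0.0, Ls=0.0):
    """Input admittance of an mBVD circuit: motional R-L-C branch in parallel
    with C0, with series routing resistance Rs and inductance Ls added."""
    w = 2 * np.pi * np.asarray(f)
    Z_motional = Rm + 1j * w * Lm + 1.0 / (1j * w * Cm)
    Y_core = 1j * w * C0 + 1.0 / Z_motional      # resonator without routing parasitics
    Z_total = Rs + 1j * w * Ls + 1.0 / Y_core    # add series routing elements
    return 1.0 / Z_total

# Placeholder element values only (not the fitted values from Fig. 7):
f = np.linspace(15e9, 30e9, 2001)
Y = mbvd_admittance(f, C0=50e-15, Rm=5.0, Lm=19.7e-9, Cm=2.8e-15, Rs=2.0, Ls=20e-12)
f_res = f[np.argmax(np.abs(Y))]                  # admittance peak near the series resonance
```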
However, the FoM is lower than that of transferred thin-film LiNbO\({}_{3}\) based mmWave resonators with much better film quality (\(<\)100 arcsec FWHM) [16], implying that further improvement of ScAlN/AlN-based mmWave resonators requires better crystalline quality from improved thin-film deposition methods. ## V Conclusion We demonstrate a mmWave ScAlN FBAR operating at 21.4 GHz (S1 mode) and 55.4 GHz (S3 mode). The resonator achieves \(k^{2}\) of 7.0% and \(Q\) of 62 for S1 at 21.4 GHz, along with \(k^{2}\) of 4.0% and \(Q\) of 19 for S3 at 55.4 GHz, showing higher FoM than reported AlN/ScAlN-based mmWave acoustic resonators. Material-level analysis and device-level performance indicate that the bottleneck for further performance enhancement lies in better thin-film deposition methods.
2309.13326
SARS-CoV-2 Wastewater Genomic Surveillance: Approaches, Challenges, and Opportunities
During the SARS-CoV-2 pandemic, wastewater-based genomic surveillance (WWGS) emerged as an efficient viral surveillance tool that takes into account asymptomatic cases and can identify known and novel mutations and offers the opportunity to assign known virus lineages based on the detected mutations profiles. WWGS can also hint towards novel or cryptic lineages, but it is difficult to clearly identify and define novel lineages from wastewater (WW) alone. While WWGS has significant advantages in monitoring SARS-CoV-2 viral spread, technical challenges remain, including poor sequencing coverage and quality due to viral RNA degradation. As a result, the viral RNAs in wastewater have low concentrations and are often fragmented, making sequencing difficult. WWGS analysis requires advanced computational tools that are yet to be developed and benchmarked. The existing bioinformatics tools used to analyze wastewater sequencing data are often based on previously developed methods for quantifying the expression of transcripts or viral diversity. Those methods were not developed for wastewater sequencing data specifically, and are not optimized to address unique challenges associated with wastewater. While specialized tools for analysis of wastewater sequencing data have also been developed recently, it remains to be seen how they will perform given the ongoing evolution of SARS-CoV-2 and the decline in testing and patient-based genomic surveillance. Here, we discuss opportunities and challenges associated with WWGS, including sample preparation, sequencing technology, and bioinformatics methods.
Viorel Munteanu, Michael Saldana, Dumitru Ciorba, Viorel Bostan, Justin Maine Su, Nadiia Kasianchuk, Nitesh Kumar Sharma, Sergey Knyazev, Victor Gordeev, Eva Aßmann, Andrei Lobiuc, Mihai Covasa, Keith A. Crandall, Wenhao O. Ouyang, Nicholas C. Wu, Christopher Mason, Braden T Tierney, Alexander G Lucaci, Alex Zelikovsky, Fatemeh Mohebbi, Pavel Skums, Cynthia Gibas, Jessica Schlueter, Piotr Rzymski, Helena Solo-Gabriele, Martin Hölzer, Adam Smith, Serghei Mangul
2023-09-23T10:10:00Z
http://arxiv.org/abs/2309.13326v2
# SARS-CoV-2 Wastewater Genomic Surveillance: Approaches, Challenges, and Opportunities ###### Abstract During the SARS-CoV-2 pandemic, wastewater-based genomic surveillance (WWGS) emerged as an efficient viral surveillance tool that takes into account asymptomatic cases and can identify known and novel mutations and offers the opportunity to assign known virus lineages based on the detected mutations profiles. WWGS can also hint towards novel or cryptic lineages, but it is difficult to clearly identify and define novel lineages from wastewater (WW) alone. While WWGS has significant advantages in monitoring SARS-CoV-2 viral spread, technical challenges remain, including poor sequencing coverage and quality due to viral RNA degradation. As a result, the viral RNAs in wastewater have low concentrations and are often fragmented, making sequencing difficult. WWGS analysis requires advanced computational tools that are yet to be developed and benchmarked. The existing bioinformatics tools used to analyze wastewater sequencing data are often based on previously developed methods for quantifying the expression of transcripts or viral diversity. Those methods were not developed for wastewater sequencing data specifically, and are not optimized to address unique challenges associated with wastewater. While specialized tools for analysis of wastewater sequencing data have also been developed recently, it remains to be seen how they will perform given the ongoing evolution of SARS-CoV-2 and the decline in testing and patient-based genomic surveillance. Here, we discuss opportunities and challenges associated with WWGS, including sample preparation, sequencing technology, and bioinformatics methods. ## Introduction Although many laboratory methods and bioinformatics tools have been rapidly developed in response to the COVID-19 pandemic, ongoing efforts persist in advancing wastewater-based genomic surveillance (WWGS) approaches. These endeavors aim to harness the potential of wastewater analysis for monitoring and detecting viral genetic material, thereby offering valuable insights and enhancing our understanding of the pandemic's spread and dynamics. Wastewater-based monitoring of SARS-CoV-2 epidemiology has demonstrated its efficacy in tracking SARS-CoV-2 viral infection dynamics in numerous countries around the globe [1, 2, 3, 4, 5]. Wastewater became a promising core component of infectious disease monitoring, providing a lineage-specific, community-representative picture of public health trends that captures previously undetected spread and pathogen transmission links. Building on recent laboratory and analytical advances to identify the diverse pathogens present in sewage will be essential for ongoing efforts to understand disease risks and will transform infectious disease surveillance [6]. Importantly, wastewater-based surveillance has been shown to provide balanced estimates of viral prevalence rates and does not require patient interaction, and can monitor entire communities, including underserved and vulnerable populations and asymptomatic cases [7, 8, 9, 10]. SARS-CoV-2 WWGS can detect mutation patterns of virus lineages earlier than clinical monitoring [11, 12, 13]. 
Additionally, it allows for the detection of novel cryptic lineages, including those resistant to naturally acquired or vaccine-induced immunity, those rarely observed in clinical samples, and those from unsampled individuals with COVID-19 infections [5]. In contrast to clinical samples, wastewater sampling allows the development of community-level profiles encompassing positive, non-reporting, and asymptomatic viral loads. This non-invasive technique allows for analyzing a community within a given sewershed and can provide insight into rising mutations and potential lineages of emerging concern[14] (VOC/VOI/VUM). Typical WWGS comprises four steps (Figure 1) after the initial assay design: (i) wastewater sampling, viral particle concentration, and RNA extraction (Figure 1 A); (ii) SARS-CoV-2-targeted quantification (Figure 1 B); (iii) library preparation and sequencing (Figure 1 C,D); (iv) bioinformatics analysis, data sharing and outbreak investigation (Figure 1 D). WWGS involves a multitude of experimental and computational approaches, presenting researchers with a wide array of choices. Despite its seemingly straightforward nature, these approaches have inherent limitations due to potential experimental biases and the intricacies of computational analyses and interpretations. Here, we present a comprehensive overview that delves into best practices, challenges, and opportunities surrounding WWGS for SARS-CoV-2 by providing a thorough examination of the current status of WWGS, shedding light on the obstacles and prospects with both experimental and bioinformatics methodologies. We thoroughly evaluate the available choices and address the common challenges that arise at each step of WWGS. The ultimate goal of this review is to motivate further advances in the field of WWGS, which has the significant potential to guide public health in the context of COVID-19 and other infectious diseases. **Figure 1.** A general WWGS pipeline. (A) Overall workflow of sample collection and preparation for sequencing. Wastewater samples are collected from water reclamation facilities, followed by subsequent concentration and extraction of viral RNA. (B) SARS-CoV-2 quantification using primers targeting different SARS-CoV-2 viral genes (such as N1, N2, and E-gene) to assess SARS-CoV-2 genome copy numbers quantitatively. Positive samples then proceed through library preparation and next-generation sequencing (NGS) technologies, usually via amplicon sequencing. (C) Data analysis pipeline of wastewater sequencing results. NGS reads are mapped to a reference sequence and variant calling is performed. (D) Further, supplementary analysis is done to contribute to both lineage surveillance and outbreak investigation. **Foundations for wastewater genomic surveillance** In March 2020, the World Health Organization declared the outbreak of coronavirus disease 2019 (COVID-19) caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) a global pandemic[15], forcing the public health system to develop efficient methods for SARS-CoV-2 surveillance in real time[16]. Clinical testing emerged as a valuable resource providing an accurate assessment of an individual's diagnosis and offering a means for contact tracing to map and control the spread of SARS-CoV-2[16]. 
Rapidly, it became evident that sustaining government-supported clinical testing is not economically feasible, particularly for developing nations, leading to a shift in the responsibility of testing and reporting onto individuals, as in the case of the United States[17]. This shift has been accompanied by the rise of at-home testing, which has excluded the reporting of positive COVID-19 diagnoses from the mandated requirements of clinical facilities, and consequently, this has led to the generation of inaccurate clinical data[18, 17]. In the meantime, early in the COVID-19 pandemic, the presence of SARS-CoV-2 RNA in the feces of individuals infected with the virus, including those who are asymptomatic or have recovered from respiratory symptoms[19, 20, 21, 22, 23], prompted researchers to explore the use of wastewater networks for community-wide surveillance of SARS-CoV-2 prevalence. From April to July of 2020, several teams submitted proof-of-concept findings to peer-reviewed publications outlining the use of WWGS[1, 24, 25, 26, 27, 28, 1, 4, 12, 13, 24, 13, 2]. The remarkably rapid dissemination of methods and results during that period facilitated the widespread adoption of WWGS as a valuable tool for tracking the pandemic in municipal settings worldwide[29, 30, 2, 2]. These accomplishments have emphasized the potential of wastewater testing for viral surveillance as a method to evaluate disease prevalence within the community and demonstrated that WWGS for SARS-CoV-2 can detect emerging lineages at an earlier stage compared to clinical monitoring[11, 12, 13]. In contrast to clinical samples, wastewater sampling allows the development of community-level profiles for SARS-CoV-2 loads encompassing positive and asymptomatic tested cases, as well as asymptomatic and symptomatic non-tested cases. This approach has also demonstrated its feasibility in monitoring the potential lineages of emerging concern (VOC)[1, 29, 4, 2], and can serve as a valuable warning system for detecting regional spikes for VOC[31, 32, 33]. To facilitate the coordination of SARS-CoV-2 surveillance data from wastewater reclamation facilities (WRFs), the Centers for Disease Control and Prevention (CDC) and the United States Department of Health collaborated to develop the National Wastewater Surveillance System (NWSS)[34]. The NWSS COVID Data Tracker[35] assists public health agencies in detecting outbreaks and making informed decisions about where prevention protocols should be implemented. There are currently two WWGS methodologies in use to track VOC, the genetic diversity of SARS-CoV-2 lineages and subineages, and estimate their prevalence in communities. First, the detection of SARS-CoV-2 is achieved using polymerase chain reaction (PCR)-based methods, namely RT-qPCR and more recent technologies, such as RT-digital droplet PCR (RT-ddPCR). PCR technologies are relatively inexpensive and well-established and allow for the direct quantification of SARS-CoV-2 in wastewater samples, presenting the following advantages: (1) an ability to probe a sample site at high frequency to generate real-time information; (2) ease of implementation by any lab running standard PCR assays; (3) short turn-around time, and (4) lower costs of reagents[36]. As with any PCR assay development, methods and results must be carefully scrutinized to minimize the chance of false positives or over-interpretation. The genomic sequence targets of RT-qPCR/RT-ddPCR methods are also limited by fluorophores and the detection instrument[37]. 
Most critically, these PCR methods lag in discovering the emergence of new lineages because they require a specific primer-probe design according to the details of the genomic information of new lineages[38], usually derived from sequencing and analyzing patient samples. Thus, PCR is not effective for detecting new lineages as they evolve. PCR-based techniques are limited to detecting and quantifying only known lineages circulating in communities[37]. High-throughput sequencing can be employed to overcome the limitation of pre-defined sequence targets and to identify emerging lineages[37, 39]. The use of sequencing technologies coupled with advanced bioinformatics methods for analyzing wastewater sequencing data (WWS data) has provided an unparalleled level of detail in assessing wastewater samples. Sequencing overcomes some of the limitations of PCR-based technologies, allowing for the comprehensive detection of SARS-CoV-2 mutation profiles present in wastewater samples, although the tiling amplicon sequencing methods primarily used in SARS-CoV-2 surveillance are still somewhat vulnerable to unexpected changes in primer binding sequences as new lineages emerge. Sequence data collected at sufficient depth can be deconvoluted to estimate lineage and sublineage proportions. The inclusion of high-throughput sequencing, with appropriate bioinformatics methods, is the foundation of fundamental transformations of environmental genomic surveillance and virology that promise to revolutionize our approaches to epidemiological data analysis and outbreak early detection and prevention[40, 41, 42, 43, 44]. To effectively use the wealth of information provided by WWS data, it is crucial to undertake targeted initiatives to develop robust and accurate bioinformatics algorithms and analytical pipelines. Additionally, comprehensive methodologies must be established to efficiently access SARS-CoV-2 viral genomic material, optimize adaptive sampling strategies, recover viral particles, and select appropriate sequencing technologies. Establishing such efforts is critical for the widespread adoption of WWGS as an all-encompassing approach for monitoring SARS-CoV-2 lineage prevalence and detecting novel cryptic strains. Overall, the true power of real-time SARS-CoV-2 tracking through WWGS comes from combining the two methodologies, qPCR and sequencing. By including sequencing approaches, samples can be explored for novel mutations and emerging lineages. When a concerning mutation profile or a new potential lineage are discovered, primers and probes can be adjusted for these new lineages to provide rapid turnaround monitoring via qPCR. ## Approaches for effective wastewater genomic surveillance Access to SARS-CoV-2 viral genomic material in wastewater infrastructure is provided through a highly variable and complex wastewater collection system rather than direct access to individual clinical specimens. Ambient conditions within the wastewater collection system are harsh to viral material because of changing chemistry and physicochemical conditions outside the human host. Additionally, ambient conditions may include non-ideal and fluctuating temperatures, variable pH, water quality parameters (e.g., presence of DNases and RNases) that promote the degradation of the viral capsid and nucleic acids, and extended time from release from the human host to WRFs[45, 46]. As a result, viral genetic material can be severely degraded and fragmented prior to sample collection. 
Before collection, SARS-CoV-2 viruses may travel through the sewer network for several days; however, in untreated wastewater, the SARS-CoV-2 virus can survive for up to 10 days at room temperature (below 37\({}^{\circ}\)C) and between 30 and 60 days at 4\({}^{\circ}\)C[47]. Several studies have taken different approaches to overcoming the challenges presented by wastewater for WWGS (Supplementary Table 1). Currently, there are over 1000 WRFs that have established wastewater surveillance programs and report their data to NWSS[34]. With this, there is access to current and historical SARS-CoV-2 viral loads from participating WRFs. Another public source of WRFs' viral wastewater tracking is wastewaterSCAN[48]. This public database was established by the collaborative efforts of Stanford University, the University of Michigan, and Emory University. There are three qualifying metrics for WRFs to participate: a sewershed encompassing at least 10,000 people, sampling three times a week for 18 months, and allowing the data to be displayed on wastewaterSCAN. The data that is displayed on wastewaterSCAN is also shared with the NWSS. All data and methods for analysis are open to the public. ### Wastewater Sampling Outside of the host cell, viruses cannot replicate. As a result, monitoring the concentrations excreted into the wastewater collection system over time can accurately represent the population within a sewershed[49]. It is important to consider when and where sampling will occur, for this can dictate the level of RNA degradation of SARS-CoV-2[50]. Sampling techniques include grab samples at peak flow times (typically occurring between 0800-1100)[40, 41, 24] and 24-hour time-weighted composite samples using refrigerated autosamplers[42, 43, 44, 24] (Figure 2). Wastewater sampling frequency varies across WWGS programs, e.g., ranging from once per week to daily. Clinical sampling data early during the pandemic was available daily and transitioned to a weekly basis later. One challenge in establishing relationships between wastewater SARS-CoV-2 RNA levels and disease prevalence is temporally matching the wastewater and clinical data. Data alignment necessitates aggregation so that both wastewater and clinical data are on the same time scale (e.g., weekly). In addition, the development of accurate models will also require an understanding of the progression of the disease and viral shedding for infected individuals. To reduce the short-term variability inherent in wastewater measurements and clinical case counts, moving averages (e.g., 7 days to 3 weeks) are also typically utilized to evaluate overall trends. The placement of sampling locations depends on the scale of SARS-CoV-2 monitoring. Collection at WRFs allows for monitoring SARS-CoV-2 from a potentially large population within the sewershed. Here, two different sample types can be collected: untreated wastewater and primary sludge. It has been demonstrated that primary sludge can provide higher sensitivity and less variance when compared to untreated wastewater[51]; however, primary sludge does not possess the same predictive capabilities as untreated wastewater, providing a much shorter lead-time to clinical diagnosis[12]. SARS-CoV-2 concentrations in untreated wastewater precede clinical data by 4-10 days[52]. It is important to note that the size of the surveyed population when collecting untreated wastewater at a WRF is dictated by the sewershed service area. 
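As an illustration of the temporal alignment and smoothing of wastewater and clinical data discussed earlier in this subsection, the following minimal Python sketch (synthetic data, hypothetical column names, and an illustrative 7-day window; not taken from any specific surveillance program) aggregates both signals to a common weekly scale:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("2021-01-01", periods=56, freq="D")

# Synthetic daily measurements (illustrative only): wastewater SARS-CoV-2
# concentration in gene copies per litre, and reported clinical cases.
ww = pd.DataFrame({
    "date": dates,
    "copies_per_L": 1e4 * np.exp(0.03 * np.arange(56)) * rng.lognormal(0, 0.3, 56),
})
cases = pd.DataFrame({
    "date": dates,
    "new_cases": rng.poisson(50 + np.arange(56), 56),
})

# Smooth day-to-day noise with a 7-day rolling mean (window choice is illustrative).
ww["copies_smoothed"] = ww["copies_per_L"].rolling(7, min_periods=1).mean()

# Aggregate both signals to the same weekly time scale before comparing trends.
weekly = (
    ww.set_index("date")["copies_smoothed"].resample("W").mean().to_frame()
    .join(cases.set_index("date")["new_cases"].resample("W").sum())
)
print(weekly)
```

Lead/lag analyses between the two weekly series can then be layered on top of such an alignment.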
Large sewershed service areas, which are typical of centralized WRFs in many urban areas, can make public health interventions challenging. Sub-sewershed sampling (e.g., from a manhole within the sewer network) or building-scale sampling allows for a more targeted spatio-temporal analysis of SARS-CoV-2 in a community and allows source tracking of outbreaks and VOCs more effectively. Figure 2: An outline of different sampling types and locations for WWGS. For example, several universities have implemented SARS-CoV-2 wastewater surveillance monitoring systems to ensure the health and safety of students and faculty. Typically, these sampling locations are established at sewer cutoffs, allowing access to the wastewater leaving campus living facilities (e.g., dorms and campus apartments) or frequently visited facilities (e.g., student unions, libraries, dining areas) [40, 44, 50]. Being able to specify locations can allow for targeted intervention and mitigation efforts. ### Virus concentration and RNA extraction methods Due to the complexity of wastewater matrices, recovering viral particles can be challenging. Without an effective recovery protocol, downstream quantification may significantly underestimate true SARS-CoV-2 levels. There are several methods to concentrate viral particles from wastewater; however, the most frequently used methods are polyethylene glycol (PEG) precipitation, electronegative membrane filtration, ultrafiltration, and ultracentrifugation [24, 41, 42, 43, 44, 52, 53, 54] (Figure 3). PEG precipitation requires the amendment of wastewater samples with a solution of salt and PEG, resulting in a supernatant that contains concentrated SARS-CoV-2 particles. Recovery rates ranging from 46.6 to 62.2% are typical of this method [55, 56, 57]. This method provides a reliable and inexpensive option for viral particle concentration, but can be a severe bottleneck in the wastewater analysis workflow. PEG precipitation takes 2 to 6 hours for initial mixing, followed by overnight incubation and a lengthy centrifugation step. A rapid PEG approach, without an overnight incubation step, yields drastically lower recovery efficiencies between 18.8% and 35% [56]. Figure 3: Different laboratory methods of concentrating SARS-CoV-2 viral particles from wastewater. Each section describes the most common viral particle concentration and RNA extraction methods employed. Electronegative membrane filtration in conjunction with a cation conditioning solution (e.g., NaCl or MgCl\({}_{2}\)) provides a simple, high-speed method to concentrate SARS-CoV-2 viral particles. Typically, the pore diameter of electronegative membranes is between 0.22 and 0.8 \(\upmu\)m, thereby accumulating larger particles on the membrane surface. Adding a cation conditioning solution results in the formation of salt bridges within the negatively charged membrane, promoting the adsorption of free-floating SARS-CoV-2 virus particles that are significantly smaller than the membrane pore size. This method boasts a high recovery efficiency of SARS-CoV-2, up to 65.7%[47, 58]. Ultrafiltration is a direct virus concentration method without conditioning treatment or a lengthy precipitation process. 
This method differs from electronegative membranes as it concentrates SARS-CoV-2 particles based on size exclusion rather than electrostatic forces, maintaining pore sizes ranging from 5 nm to 0.1 \(\upmu\)m down to 3 kDa. While this does seem promising, the viral particle recovery efficiencies are lower than other methods (28-56%)[58]. This method can only process small volumes of wastewater and is prone to clogging. The complexity of wastewater matrices necessitates multiple ultrafiltration units to overcome this, but the equipment and cartridges are expensive and concentrate potential PCR inhibitors alongside SARS-CoV-2 virus particles[58]. Ultracentrifugation is a long-standing method of concentrating viral material by centrifuging the wastewater sample at upwards of 100,000g to create a pellet[58, 59, 60]. Although this method provides a quick concentration of viral particles, it co-concentrates inhibitors and relies on larger sample volumes to achieve a pellet large enough to extract RNA[59]. Further, ultracentrifugation results in consistently low recovery rates of SARS-CoV-2, as low as 19%[58, 59]. Following sample concentration, it is necessary to lyse the concentrate via mechanical or chemical methods. Mechanical lysis is typically needed for targets with cell walls. Mechanical lysis is not recommended for virus detection due to the release of nucleic acids from cells, potentially interfering with analyses of viral targets. Chemical lysis through commercially available products (such as Zymo's DNA/RNA Shield) generally suffices for lysing the outer protein coat of viruses, releasing the viral genomic material while reducing interferences from cellular genomic material. Once the samples are lysed, they can be stored, if necessary, without considerable degradation. After lysis, samples undergo extraction to purify RNA. Several commercially available kits exist, including New England Biolabs Monarch RNA MiniPrep, Qiagen PowerViral DNA/RNA kit, and Zymo Environ Water RNA Kit. The indicated kits yield >70% extraction efficiency when using spiked concentrations of BCoV as a surrogate in wastewater[60]. However, they are column-based extraction kits that require manual extraction, which can lead to an increase in turn-around time based on the user's experience. Conversely, automated RNA extraction reduces risk of user error and drastically increases throughput. Instruments such as the Maxwell RSC, MagMAX, and KingFisher Flex system offer magnetic bead RNA extraction. Both magnetic bead and column-based extractions have demonstrated equitable numbers of usable sequencing reads[61]. Unfortunately, there is a lack of research investigating different viral concentration methods and sequencing quality. The research that has been conducted is prior to the COVID-19 pandemic, and the methods used in concentrating viral particles are no longer the most frequently used in labs. This knowledge gap can impede future work in WWGS and needs further investigation. ### Quantification methods for wastewater genomic surveillance As the COVID-19 pandemic progressed, several molecular tools were employed to quantify SARS-CoV-2. The gold standard of quantification is genomic-based methods, such as RT-qPCR and ddPCR. These methods focus on the gene-specific identification of targets. The detection of target genes is both accurate and highly sensitive, making PCR-based methods a cornerstone of WWGS projects. 
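The paragraphs below describe these two PCR read-outs in detail; as a minimal numerical illustration of the arithmetic involved (made-up values, not from any particular assay or instrument), SARS-CoV-2 copy numbers can be extrapolated from a standard curve for RT-qPCR and Poisson-corrected from positive-partition counts for RT-ddPCR:

```python
import numpy as np

# --- RT-qPCR: interpolate copies from a standard curve (illustrative values) ---
log10_copies_std = np.array([1, 2, 3, 4, 5, 6], dtype=float)  # known standards
cq_std = np.array([36.1, 32.8, 29.4, 26.0, 22.7, 19.3])       # measured Cq values

slope, intercept = np.polyfit(log10_copies_std, cq_std, 1)  # Cq = slope*log10(N) + intercept
efficiency = 10 ** (-1.0 / slope) - 1.0                     # amplification efficiency
cq_sample = 30.2                                            # hypothetical wastewater sample
copies_qpcr = 10 ** ((cq_sample - intercept) / slope)

# --- RT-ddPCR: absolute quantification from positive-droplet counts ---
n_droplets = 18_000          # accepted partitions (illustrative)
n_positive = 2_400           # droplets with positive fluorescence
droplet_volume_uL = 0.85e-3  # nominal droplet volume (~0.85 nL), illustrative

# Poisson correction: mean copies per droplet lambda = -ln(1 - fraction positive)
lam = -np.log(1.0 - n_positive / n_droplets)
copies_per_uL = lam / droplet_volume_uL

print(f"qPCR estimate: {copies_qpcr:.0f} copies/reaction (efficiency ~{100*efficiency:.0f}%)")
print(f"ddPCR estimate: {copies_per_uL:.0f} copies/uL of reaction")
```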
RT-qPCR emerged as a powerful tool for wastewater surveillance, allowing for the detection and quantification of SARS-CoV-2. With the addition of a fluorescent dye, a qPCR instrument can measure the fluorescence as the thermal cycler progresses and provide a real-time amplification curve with each cycle. This analysis compares the quantification cycle (Cq) value of a sample with an unknown concentration to a standard curve of known concentrations, allowing for the extrapolation of SARS-CoV-2 virus copy numbers; however, this introduces an inherent quantification bias as this method is dependent on the accuracy of the standard curve. Further, due to the complexity of wastewater matrices, amplification and quantification can be affected by inhibitors [62, 63]. RT-ddPCR emerged as a strong alternative to RT-qPCR. Instead of comparing to a standard curve, this technique applies Poisson statistics to determine the absolute concentration of the target [64]. Each PCR reaction consists of an oil-water emulsion that partitions each sample into tens of thousands of droplets. Each droplet will read with either a positive or negative fluorescence, and the reader will detect the number of positive droplets. For wastewater, RT-ddPCR has demonstrated a stronger resilience to inhibitors and a higher sensitivity compared to RT-qPCR [62, 65, 66, 67]. With newer instruments, up to 6 different fluorescent dyes can be detected with ddPCR, enabling an amplitude multiplex of up to 12 targets. A variation of RT-qPCR developed during the pandemic for detecting SARS-CoV-2 is called Volcano 2nd Generation (V2G)-qPCR [68, 69]. The V2G-qPCR method uses a novel polymerase capable of reading both RNA and DNA templates and, therefore, it does not require a separate cDNA synthesis step. Results from V2G-qPCR and RT-qPCR measures are statistically equivalent [70]. Another employed methodology is proteomic quantification detection. Proteomics can provide insight into proteins and their role specific to the target [71]. SARS-CoV-2 encodes for at least 14 proteins and can be identified using several types of mass spectrometry analyses [72, 73]. While mass spectrometry may be less expensive and can provide shorter, cheaper runs than RT-qPCR [72, 74, 75], RT-qPCR has displayed better sensitivity and specificity [76]. ELISA assays can provide semi-quantitative measurements of specific protein indicators of infection and immunity, such as SARS-CoV-2 specific IgA and IgG [77]. Several primer-probe sets are available to identify SARS-CoV-2, typically in the most conserved regions, such as the N gene [35]. Despite being a relatively conserved region, the N gene is not immune from mutations [78]. As VOC emerge, primer/probe sets become less specific and have a degrading ability to detect positive SARS-CoV-2 samples [27]. Compared to the index reference sequence from the Wuhan strain [79], over 1000 N gene nucleotide mutations have been detected, and more than 300 of them are in commonly used primer sets [80]. Omicron contains several deletions in the N gene, which can hinder the ability to accurately detect SARS-CoV-2[78]. Therefore, updating primer sets is an ongoing need to adapt to VOC. Regardless of the specific protocols employed, quality control measures should be incorporated into workflows. This includes integrating non-template controls or blanks, recovery controls, extraction controls, and inhibition controls. Non-template controls are typically prepared as a sample, except the sample is replaced with nuclease-free water. 
These samples should be at below detection limits. Recovery controls are typically added prior to sample concentration and then measured at the end of processing to determine the fraction recovered. The targets chosen for recovery controls should not be those found naturally in the sample. Typical controls include those that correspond to specific animals that would normally not contribute towards wastewater, e.g., Bovine coronaviruses. Extraction controls are similar to recovery controls but added immediately before the extraction step to quantify potential losses during extraction. Inhibition controls are added after extraction and used to determine whether contaminants that co-accumulate during extraction impact the qPCR detection technology used. ### Advancements in wastewater sequencing technologies Sequencing approaches proved highly effective for detecting mutations and subsequently deconvoluting this information to estimate SARS-CoV-2 lineage and sublineage frequencies for WWGS[81, 82, 83]. RNA, extracted from wastewater samples, can be reverse-transcribed into complementary DNA (cDNA) and sequenced using various methodological approaches and sequencing technologies[84] to recover as much as possible of the entire viral genome from the wastewater: 1) metagenomics or -transcriptomics, 2) capture-based sequencing, and 3) amplicon-based sequencing. In metatranscriptomics, the RNA is recovered directly from wastewater samples without any further enrichment for SARS-CoV-2 or depletion of potentially contaminating material from other sources. While metatranscriptomics is a powerful approach for recovering information about whole communities from an environmental sample[85], the downside is that low levels of SARS-CoV-2 are difficult to detect, and much of the sequencing work will go into sequencing other RNA, such as that derived from humans. Previous wastewater metagenomics/-transcriptomics studies showed that genetic material derived from bacteria was more abundant despite additional depletion efforts via size exclusion[85]. As alternatives, capture-based and amplicon-based sequencing, also known as target enrichment approaches, can selectively capture or amplify specific regions of interest from a complex mixture of genetic material[84]. In capture-based sequencing, the target regions of interest (e.g., specific genes or the whole SARS-CoV-2 genome) are selected using capture probes or baits that are complementary to those regions. The capture probes are used to selectively bind and capture the SARS-CoV-2 RNA fragments of interest from a complex wastewater sample. Once the target regions are captured, they can be subjected to library preparation and sequencing. In amplicon-based sequencing, specific regions of interest are selected for amplification and sequencing. Primers are designed to target these specific regions (again, specific genes or the whole SARS-CoV-2 genome), and amplification is carried out using PCR. Such enrichment approaches are particularly useful when the analysis can be focused on specific genomic regions or genes that are known, such as those associated with a particular pathogen like SARS-CoV-2. In the clinical context and genomic surveillance of patient samples, tiled amplicon-based approaches are widely established for sequencing and constructing whole SARS-CoV-2 genomes, e.g., using open-source primer schemes developed and maintained by the ARTIC Network. 
Since similar protocols and primer schemes can also be used directly for sequencing SARS-CoV-2 from wastewater samples, amplicon sequencing has also become the main approach in WWGS. Amplicon sequencing generally provides adequate material for sequencing low-abundance viral RNA out of the wastewater matrix, but amplicon-based methods are vulnerable to primer failure and loss of coverage as new lineages arise, and reagents must be continually monitored and updated, similar to reagents used in qPCR and ddPCR assays. Several sequencing technologies can be used to sequence SARS-CoV-2 RNA from wastewater, each with its own set of advantages and disadvantages. Illumina sequencing is the most widely used sequencing technology for genomic surveillance of SARS-CoV-2 in general[86] and WWGS in particular[87, 88]. Short reads produced by Illumina sequencing have a high accuracy, and the platform can generate a large number of reads in a single run. In situations where genomes are reconstructed _de novo_, or large structural variations need to be detected, the major drawback is the limited read length, but this is not so critical when fragmented RNA from wastewater samples is sequenced anyway, and the main purpose is reference-based variant calling. A second short-read technology, more rarely used but also applied in WWGS, is IonTorrent sequencing[89, 90, 81]. As alternatives, single-molecule real-time sequencing (SMRT) technologies, such as those provided by Pacific Biosciences (PacBio) and Oxford Nanopore Technologies (ONT), can produce longer amplicon reads, e.g., approximately 400 bp reads based on an ARTIC Network Protocol[91, 4], which can be useful for resolving complex regions of the SARS-CoV-2 genome. In the specific context of WWGS, longer reads can help infer synteny information about mutations that belong to the same viral lineage because they are detected on the same read. However, it is challenging to derive long RNA fragments from wastewater samples, and the amplicon approach limits maximum achievable read lengths. Nevertheless, ONT ranks second among the most used sequencing technologies in clinical SARS-CoV-2 genomic surveillance[86] due to its lower initial costs, the putative option to sequence longer amplicons[92], and potential future applications regarding real-time and on-site sequencing. In addition, ONT can also sequence RNA natively without the need for cDNA transcription. SMRT technologies, and in particular ONT sequencing, have had higher error rates than other technologies, which may affect accurate lineage detection. However, the technologies and thus their accuracy are constantly improving, making them more and more suitable also for accurate variant calling[93]. In addition to technology-related biases, the success of each sequencing technology in recovering most parts of the SARS-CoV-2 genome is highly dependent on the primer scheme used. As with sequencing of patient samples, mutations in the SARS-CoV-2 genome can lead to inefficient primer binding and thus reduced or even absent amplification of the target region, also known as amplicon drop-out. Primer schemes need to be constantly evaluated and adjusted, which is mainly done based on clinical genomic surveillance data. With the decreasing availability of SARS-CoV-2 genomes from clinical genomic surveillance, primer designs may become less accurate and lead to more frequent amplicon drop-outs. In WWGS, such failures may go unnoticed due to the mixture of SARS-CoV-2 lineages in the wastewater sample. 
An amplicon of a lineage that can still be sequenced with the used primer scheme could mask the failure of another amplicon of a different lineage that has accumulated one or more mutations in primer sites. Such problems with primer (or bait) designs can be circumvented by metagenomic or -transcriptomic sequencing, but with the other drawbacks already mentioned. ### Robust bioinformatics analysis for wastewater sequencing data #### Data processing The initial stages of a bioinformatics pipeline for wastewater sequencing data usually include quality control and filtering of the reads, error correction, then trimming of adapters, and subsequently mapping reads to a reference SARS-CoV-2 sequence, primer clipping, and calling mutations (Figure 4 A). Conventional error correction tools can be quite challenging when dealing with WWS data reads, as they were primarily optimized for human genome reads and may struggle to handle the subtle variations among viral lineages or sublineages[94]. To address this issue, several error correction methods tailored for viral sequencing have been proposed, such as KEK[95], ET[95], MultiRes[96] or Bayesian probabilistic clustering approach[97]. Quality control and filtering are supported by viral sequencing data bioinformatics pipelines such as V-pipe[98] or COVID-19 Viral Epidemiology Workflow[99] (C-VIEW), or performed with specialized tools such as Trimmomatic[100], fastp[101], often used for short reads, and Filtong[102] for filtering long reads by quality or custom scripts. Read mapping is done using scalable aligners such as BWA-MEM[103], Bowtie[104], or minimap2[105]. Paired-end reads may be merged before alignment using tools such as BBTools[106]. PCR typically amplifies the genetic material in the sample before sequencing. To avoid bias in mutation calling, removing the primers from the alignment is important, which is commonly done using iVar[107], BAMClipper[108], or custom scripts. Mutation calling can be performed by a variety of tools also depending on the used sequencing technology, such as iVar[107], SAMtools[109], ShoRAH[110], LoFreq[111], GATK[112], FreeBayes[113], BCFTools[109], Medaka[114], or custom scripts[90]. Comparative performance of some of these tools when applied to SARS-CoV-2 wastewater surveillance data has been the subject of published studies[115]. All these variant calling tools have different parameters for filtering according to metrics such as sequencing depth, quality, and allele frequency, impacting the final mutation calls. #### Estimation of lineages relative abundances The next step of the pipeline is to identify the lineages that are believed to be present in the sample and to estimate their relative abundances from a read alignment produced from NGS data of an RNA extract derived from a wastewater sample. In wastewater sequencing data, the full phasing information of mutations is lost. This is due to fragmentation of the genetic material in the sample, amplification protocols amplifying genomic regions in separate amplicons, and the length of sequencing reads being much shorter than the genome length. In contrast to clinical samples, where we typically assume low diversity and report a consensus sequence representing the dominant inferred lineage, this approach is unsuitable for environmental samples. Specifically, in wastewater samples, multiple lineages may coexist, stemming from individuals infected with different lineages. 
This sample heterogeneity must be considered, and is further complicated because these lineages often share mutations. A variety of computational tools have been developed for this task, based either on a classification approach, such as COJAC[82], the VLQ pipeline[116], and an expectation-maximization (EM) algorithm for obtaining maximum likelihood estimates of the proportions of different haplotypes in a sample[117], or a deconvolution approach, such as LCS[18], VaQuERo[83], Alcov[119], PIGx[120], Freyja[11], LolliPop[121]. The classification approach works at the level of reads and assigns each read (probabilistically or deterministically) to the different reference lineages with signature mutations according to the mutations they display. Aggregating the counts of reads assigned to different lineages provides an estimation of their relative abundances (Figure 4 B). In contrast, the deconvolution approach takes as input the individual mutation frequencies computed from the alignment. In a mixed sample, the expected proportion of mutated reads at a given locus equals the sum of the relative abundances of lineages harboring this particular mutation. Again, using a reference set of lineages, their relative contributions to the observed distribution of mutation frequencies are then estimated by a constrained regression method (Figure 4 C); a toy numerical sketch of this deconvolution is given below. Some of these methods also allow for considering time dependency in the data by employing different nonparametric smoothing approaches[121, 11, 83]. Some methods additionally provide confidence intervals for the estimates of lineage relative abundances, which is done using bootstrap methods[121, 118, 11] or closed-form expressions[121]. To detect a novel lineage or to specify the lineage, haplotype reconstruction methods are used, in which tools classify the mixed read data using different types of methods, including multiple sequence alignment and clustering-based methods such as ShoRAH[110] and PredictHaplo[122], QuasiRecomb[123], which is based on a hidden Markov model, PEHaplo[124], which uses the longest common substring, FM-index-based search with an overlap graph as in the case of Savage[125], and reference-guided assembly as used by the VirGenA[126] tool. All of these methods are reference-based and rely on precise definitions of the lineages, which can be derived from clinical sequences generated since the beginning of the pandemic. Reference sets of lineage genomes may be constructed from existing databases, such as GISAID[127], Cov-Spectrum[128], UShER[129], or NextClade[130]. The selection of appropriate reference datasets is nontrivial, and the results of some deconvolution methods may vary significantly depending upon the reference dataset or classification scheme used[131]. ### Outbreak investigation One important application of bioinformatics analytics is the investigation of outbreaks, which is necessary to detect virus transmission and trace its evolutionary relation with existing VOCs. The existing methods of clustering and phylogenetic analysis can achieve both tasks. Phylogenetic analysis of genomic sequences can calculate the distance across the closest pairs to trace the evolutionary lineage of viruses. To assess the direction of transmission and detect a superspreader event, a directional network of the viral outbreak is required. This is pivotal to finding transmission clusters and limiting them to the containment zone. 
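As referenced above, here is a toy sketch of the deconvolution idea for lineage relative abundances, using a synthetic signature matrix over four loci (a–d) and three lineages (X, Y, Z) in the spirit of Figure 4, with made-up numbers and not tied to any of the cited tools:

```python
import numpy as np
from scipy.optimize import nnls

# Toy signature matrix: rows = genomic loci, columns = reference lineages.
# Entry (i, j) is 1 if lineage j carries the mutation at locus i (synthetic example).
lineages = ["X", "Y", "Z"]
signatures = np.array([
    [1, 0, 0],   # locus a: only lineage X
    [1, 1, 0],   # locus b: lineages X and Y
    [0, 1, 1],   # locus c: lineages Y and Z
    [0, 0, 1],   # locus d: only lineage Z
], dtype=float)

# Observed per-locus mutation frequencies from the read alignment (synthetic values,
# generated from a 0.5/0.3/0.2 mixture plus a little noise).
observed = np.array([0.52, 0.78, 0.49, 0.21])

# Non-negative least squares: find abundances >= 0 minimizing ||S·a - f||.
abundances, _ = nnls(signatures, observed)
abundances /= abundances.sum()   # renormalize so the mixture sums to 1

for name, a in zip(lineages, abundances):
    print(f"lineage {name}: {a:.2f}")
```

The cited tools build on this basic idea with, for example, temporal smoothing of the estimates and bootstrap confidence intervals.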
QUENTIN[132] and VOICE[133] construct a Markov-type model using the distance between lineages and decide the direction of infection using the minimum evolution principle. Phyloscanner[134] is another tool that uses paraphyletic, monophyletic, and polyphyletic relations along with phylogenetic analysis of samples to detect the direction of transmission. Geographical transmission networks can also be inferred using the TNet tool[135]. ### Applications of wastewater genomic surveillance WWGS offers an additional, independent, non-invasive resource for tracking SARS-CoV-2 evolution, which is crucial for long-term adaptation to co-existence with this pathogen and its continuous control to decrease the COVID-19 health burden in the post-acute pandemic period[136]. This is of particular value during the phase of reduced clinical surveillance, lifted restrictions, and increasing genomic diversity, with different viral sublineages in side-by-side circulation and higher odds for co-infections and recombination events[137]. WWGS can detect the emergence or introduction of novel sublineages in particular regions weeks prior to their identification in clinical samples, subsequent monitoring of their contribution to SARS-CoV-2 infections at the population level, prediction of the reproductive advantage, and further accumulation of novel mutations[138, 11, 88]. This allows viral trees that have evolved over time and among various regions to be recognized and compared. Identifying novel mutation signals and potential sublineages through WWGS may even prompt their increased and targeted clinical surveillance[139], indicating that both approaches are complementary and can strengthen the viral monitoring network. However, WWGS is likely superior in regions with limited genomic surveillance of SARS-CoV-2 due to non-challenging sample collection and the ability to generalize data for a particular area without a need for mass sample sequencing, contrary to clinical surveillance[140]. Figure 4: Estimating the relative abundances of SARS-CoV-2 genomic lineages from wastewater sequencing. **A**: The lineages X, Y and Z each have unique but partially overlapping mutation profiles, situated on loci **a**, **b**, **c**, and **d**. The reads from a wastewater sequencing experiment are aligned to the reference genome, and mutations are called. **B**: In a classification approach, each read is assigned to the lineage that most likely generated it. The counts are then aggregated to estimate the relative abundance of lineages in the sample. **C**: In the deconvolution approach, the proportions of mutated reads at each variable locus are decomposed into the individual contribution of each lineage. Earlier characterization of amino acid substitutions in spike protein and other viral proteins through WWGS offers a more swift initiation of experimental studies on immune escape mutations and drug resistance, pivotal in vaccine-adaptation efforts and predicting the efficiency of authorized direct-acting antivirals. It also enables the initiation of _in vivo_ research on the clinical relevance of novel sublineages and particular mutations, which is of utmost importance considering that the intrinsic severity of future SARS-CoV-2 lineages remains uncertain [141]. Using WWGS to detect more severe viral lineages, e.g., harboring mutations enhancing fusogenicity, would allow for more targeted and rapid public health responses, translating into decreased morbidity and mortality. 
Furthermore, WWGS could be employed to track mutational signatures from exposure to mutagenic antivirals (i.e., molnupiravir authorized in selected world regions) [142], essential to explore the impacts of such treatments on the trajectory of sublineages generation and onward transmission. Moreover, WWGS is a tool to track the cryptic circulation of SARS-CoV-2 lineages that may appear entirely deescalated using clinical surveillance but may otherwise re-emerge or lead to the generation of new lineage, e.g., through recombination events [143]. Last but not least, WWGS can support the early detection of spillback of mutated lineages that could arise during viral circulation in non-human reservoirs that SARS-CoV-2 has already established (e.g., free-ranging white-tailed deer [144]). The clinical consequences of such retransmission to the human population are challenging to predict since mutation-driven adaptations to a new host may lead to decreased adaptation to the human environment but also to improved evasion of acquired immunity, including cellular response, and thus higher susceptibility to severe disease [145, 146]. Therefore, detecting such events as soon as possible can guide other surveillance systems and is necessary to implement effective containment measures. As SARS-CoV-2 is far from eradication and continues to evolve, while the risk of the emergence of novel, clinically relevant viral lineages remains high, implementing WWGS to detect them ahead of their effective spread in the community is essential. Although WWGS is increasingly applied in this regard, the results are primarily made available through peer-reviewed literature. Ultimately, WWGS should be used as an early warning indicator of the rise of novel mutations and associated sublineages. However, considering its value and ongoing transition from the acute phase of the COVID-19 pandemic, it is pivotal to establish a global public repository of SARS-CoV-2 sequences generated with WWGS over time in various world regions, enabling genomic epidemiology and real-time surveillance to monitor the emergence and spread of viral sublineages in a fashion similar to GISAID. This would increase the relevance of WWGS to global COVID-19 research and guidance of public health measures and policy, including recommendations on maintaining or updating COVID-19 vaccine composition for primary or booster doses. Wastewater-based epidemiology is an established early warning tool for viral spread in the community, identifying new outbreaks and monitoring infection trends, with the potential to guide public health actions and policy decisions. Contrary to clinical surveillance, it is not biased toward symptomatic infections and not affected by individual engagement in testing. Instead, it can be applied to estimate the temporal and spatial trends of total (including undiagnosed) infection load at the community level [147]. Since the beginning of the COVID-19 pandemic, wastewater epidemiology, particularly based on quantitative assessment of genomic copies, has been applied to detect SARS-CoV-2 for community-wide surveillance as well as in smaller cathments for more targeted surveillance[148, 149, 150, 151, 152]. The role of such an approach in routine monitoring of infection trends and outbreak identification is even increasing during the transition from the acute phase of the pandemic when clinical surveillance is no longer as extensive, restrictions are lifted, and the public is generally less concerned about the COVID-19 threat. 
Under such conditions, routine quantitative assessment of wastewater should become a primary source of epidemiological information on trends of SARS-CoV-2 circulation in various communities. Forecasting models derived from wastewater-based epidemiology can accurately predict the weekly new hospital admissions due to COVID-19, providing a 1-4 weeks window for introducing mitigation measures[153]. The qualitative assessment offers additional advantages in this regard. Tracking the dynamics of the contribution of particular sublineages in wastewater is a powerful early warning tool to understand viral shifts that occur at the community level. Their spatial and temporal spread can be tracked, real-time or retrospectively, by integrating data derived from various catchment areas, allowing for the identification of hot spots of specific viral sublineages[154, 155, 156]. Foremost, qualitative WWGS can detect them much earlier than clinical testing, ahead by weeks or even months[157, 158, 159, 160], enabling expedition of the effective outbreak response by guiding public health policies regarding face masking, booster vaccinations, and/or decreased social mobility. Ultimately, WWGS, coupled with sublineages-oriented risk assessments, can become a robust tool to decrease infection rates, long-term consequences of COVID-19, hospital admissions, and mortality. Moreover, WWGS has the potential to screen cross-border SARS-CoV-2 spread. Applied to aircraft wastewater samples, it can effectively monitor viral sublineages carried by onboard passengers, enriching data on viral diversity in departure areas and enforcing mitigation strategies in arrival regions. In the past, selected SARS-CoV-2 lineages were detected in clinical samples from returning overseas travelers[89, 161]. Therefore, establishing a global aircraft-based WWGS network is postulated with use in the context of COVID-19 and future viral threats[162]. Such a network could compensate for limited genomic surveillance in various world regions, particularly low- and middle-income countries, which is essential to counter the threat of future viral lineages[140]. In the post-acute pandemic era, COVID-19 vaccination remains an essential and primary public health intervention to decrease SARS-CoV-2 morbidity and mortality. Omicron sublineages are clinically milder, but their infections can lead to severe outcomes in selected patient groups, causing health and economic burdens, management of which requires appropriate preparedness[163, 164]. However, vaccine-induced humoral immunity is short-lived, while the virus accumulates immune escape mutations, justifying booster dose recommendations and vaccine updates. At least one booster dose will likely be recommended annually, particularly for the elderly, patients with comorbidities and immune deficiencies, and healthcare workers[165]. Wastewater-based epidemiology can be employed to assess the effectiveness of vaccinations as successfully demonstrated in the initial phase of mass COVID-19 vaccination, showing a decline in SARS-CoV-2 RNA positivity in response to immunization[166, 167]. Surprisingly, opportunities created by such analyses have not been fully exploited in the context of vaccination. Similar studies following subsequent booster administration, integrating data on vaccination coverage in particular areas, could reinforce confidence in COVID-19 vaccinations, especially when resources for real-time tracking of vaccine effectiveness are available to the public in limited form. 
Such an approach could also be employed in specific settings, e.g., hospitals or nursing homes, before, during, and after booster vaccination campaigns, enabling a better understanding of the effect of immunization on virus spread in the community. WWGS provides further opportunities, as it can offer to track the effect of vaccination on particular sublineages, which are in concurrent circulation but may differ in sensitivity to neutralization antibodies elicited by vaccines as observed currently within the Omicron lineage[168, 169]. By employing WWGS, such data could be obtained earlier than through clinical surveillance and epidemiological analyses. This is of particular use if one considers that even with an mRNA platform, the time needed to develop and authorize an updated vaccine may be enough for SARS-CoV-2 to generate progenitors that diverge from the selected antigen, causing public concern over the vaccine's effectiveness. Therefore, WWGS may be the first to provide an initial assessment of its performance on the population level, which may be valuable in decreasing vaccine hesitancy. Furthermore, WWGS can provide a more accurate assessment of vaccine effectiveness on the population level than analyses based on cases of breakthrough infections with presenting clinical symptoms. Last but not least, since SARS-CoV-2 eradication is highly unlikely with currently available vaccines, WWGS could generate data on which viral sublineages are positively selected under increased immunization levels due to booster administration. In addition, data generated through WWGS need to be integrated into the system of continued monitoring of the evolution of SARS-CoV-2, which is pivotal in guiding antigen selection for updated COVID-19 vaccines. Of note, none of the authorized COVID-19 vaccines is based on attenuated live SARS-CoV-2; thus, shedding of the vaccine-derived virus will not confound WWGS with false positive signals[170], although such a possibility needs to be considered if replication-competent vaccines would become available. Main applications of wastewater genomic surveillance are briefly described in Figure 5. ### Challenges in wastewater genomic surveillance The acknowledged advantages of utilizing wastewater samples in epidemiology arise from its ability to yield near real-time insights, reflecting a comprehensive snapshot of the disease state within a community [171, 172]. It promises a holistic view of the disease prevalence by measuring virus RNA theoretically excreted by all viable shedders within the sewer catchment. However, the actual accuracy and representativeness of measurements acquired from wastewater are contingent upon multiple influencing factors, notably including observable factors like sample dilution by exogenous hydrological flows and partially observable ones like in-network analyte decay/degradation [173, 174, 150]. Variability poses a significant challenge in wastewater-based epidemiology, especially for diseases like COVID-19, where different measured virus RNA concentrations can be observed for the same proportion of infected individuals in the population. This variability is intricately linked to uncertainties between the target analyte, such as RNA, and its representation of disease prevalence or incidence. Complications such as rainfall or snow melt entering a combined sewer network during or after wet weather events can further introduce unwanted variability by diluting the analyte concentrations [150]. 
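One common way to mitigate the dilution effects described above is to report a flow- and population-normalized load rather than a raw concentration; the sketch below uses hypothetical values, and the exact normalization (including whether a faecal indicator measured in the same sample is used instead or in addition) varies between programs:

```python
# Hypothetical single-day measurement for one sewershed (illustrative values only).
concentration_copies_per_L = 2.5e4   # SARS-CoV-2 gene copies per litre of influent
daily_flow_L = 3.0e8                 # influent flow on the sampling day, litres/day
population_served = 150_000          # estimated population in the sewershed

# Per-capita viral load: compensates for rainfall/snowmelt dilution, because a
# larger flow at the same shedding level lowers the measured concentration.
load_copies_per_person_per_day = (
    concentration_copies_per_L * daily_flow_L / population_served
)
print(f"{load_copies_per_person_per_day:.2e} gene copies/person/day")
```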
Comparative analyses between different geographical locations using WWGS measurements also encounter variations, potentially causing disparities in concentration measurements due to differences in hydraulic residence times among catchments. Strategies are imperative to account for the myriad of factors causing unwanted variability and uncertainty, including large-scale processes like transient populations and smaller-scale ones like laboratory-specific methods[174]. Figure 5: Main applications of wastewater genomic surveillance and their impacts on risk assessment, public health guidance, and mitigation strategies. Employing raw wastewater measurements without compensating for influencing factors like wastewater dilution or signal decay can significantly impact decision-making, especially when integrated with other disease prevalence data[150]. Addressing the drivers of variability, including population factors, in-network characteristics, sampling strategies, and sample analysis, is pivotal for the effective management of variability and mitigation of uncertainty. This involves embracing strategies like population normalization, measurement correction, and meticulous design and implementation of sampling[150]. The complex nature of wastewater is also reflected in the WWS data quality. Amplicon dropout due to RNA degradation or outdated primer designs leads to uneven coverage and depth of the sequenced genomic regions. If not corrected by bioinformatic methods, this can bias lineage detection and lineage abundance estimates and might lead to misleading interpretation of the data. A comprehensive benchmarking of bioinformatic WWGS methods and data should also provide requirements for WWS data quality to ensure a certain performance quality. Furthermore, WWGS based on selected genomic regions (e.g., the _spike_ gene) instead of the whole genome might require different WWS data quality standards to keep up the performance of bioinformatic analysis[116, 175]. The robustness of both SNV-based and sequence-based methods for bioinformatic analysis of WWS data heavily relies on the wastewater sample composition. Sequence similarities among related sublineages can cause ambiguity in lineage detection. Thus, the set of reference data, i.e., the considered lineages and selected characteristic mutations/sequences that the WWS data are compared against, impacts which lineages and sub-lineages can be identified and how specific variant calls/reads can be assigned. Current bioinformatic methods already implement various approaches for reference reconstruction. VLQ[116] selects reference lineages based on the spatio-temporal context of the wastewater sample and samples a specific number of genomic sequences for every lineage according to a predefined threshold for the genomic variation that should be captured[116]. Freyja reconstructs a set of characteristic lineage mutations based on the UShER phylogenetic tree[11], while other SNV-based tools like wastewaterSPAdes and SAMRefiner rely on a rule-based selection of characteristic sets of mutations considering lineage-differentiating power[176, 177]. However, the reference bias remains strong and requires continuous awareness and manual review[131]. Because of the fast evolutionary changes of the virus, reference data need to be re-evaluated for every sample and pandemic timeframe. 
Specifically, convergent evolution and novel lineages challenge the current strategies for reference reconstruction: depending on the circulating lineages of interest, it becomes more challenging to represent genomic variation and still guarantee sufficient differentiation power between sub-lineages. Furthermore, most currently applied tools rely on a large amount of clinical sequence data to reconstruct their reference data sets. Decreased clinical sampling poses a challenge for bioinformatic WWGS and should be considered for further research in method development, especially in terms of identifying and quantifying unknown lineages. Early identification of unknown lineages based on novel genomic signals represents one desired benefit and also a great challenge for WWGS. Currently, novel lineage detection is mostly conducted retrospectively, while real-time cryptic lineage detection represents an ongoing bioinformatic research topic where slowly the first approaches are published. Previously, CryKey was developed as one of the first tools for non-retrospective cryptic lineage detection[178]. CryKey identifies cryptic lineages based on sets of mutations that co-occur on the same reads but have not been observed to co-occur before in clinical sequence data. The tool addresses bias and artifacts in WWS data by rule-based filtering of mutations and reconstructs a reference table mapping SNP information and lineage assignments from clinical sequence data. Overall, biases of WWS data and their epidemic context should be continuously monitored and considered during bioinformatic method development. ## Conclusions Genomic sequencing of wastewater samples coupled with effective computational tools can complement clinical or epidemiological methods or even independent means for SARS-CoV-2 surveillance. To make it feasible, bioinformatics methods that can address wastewater-specific genomic data should be developed. There are a plethora of tools developed for similar problems in genomics, but it is imperative to perform comprehensive benchmarking before they can be applied to genome-based wastewater sequencing. Benchmarking will allow not only an understanding of the quality of state-of-the-art methods, but will help to determine the future direction for methods development. Genome-based wastewater surveillance is an excellent supplement to clinical or epidemiological monitoring of pathogens' spread. However, it is not mainstream yet. Currently, only 70 out of 194 countries use wastewater surveillance[179]. For example, in India, with a population of more than 1.3B, five wastewater-based surveillance sites are in effect. Developing countries do not have the resources to sequence several samples of the population to trace emerging lineages of SARS-CoV-2[180]. An appealing alternative to that can be collecting and sequencing viral samples from wastewater, which is significantly more cost-effective and expands the coverage of a surveilled population. A typical COVID-19 wastewater surveillance program is a powerful epidemiological tool that provides quantification of SARS-CoV-2 and acts as an early warning system for community infections[181, 182, 183]. Wastewater genomic surveillance provides the same assurances as a typical surveillance program while generating sequencing data, which can be used for novel lineage or VOC detection. To make it more cost-effective, pooled sequencing and advanced algorithmic processing can be used. Pooling will increase the number of samples sequenced in a single run. 
It should be noted that computational methods for inference of heterogeneous viral populations from pooling data exist[184, 185], but should be benchmarked and adjusted to the specifics of wastewater surveillance data. Novel bioinformatics pipelines specific to wastewater surveillance can be developed to detect novel lineages and their abundance quantification. Currently, universal guidelines are not established to collect wastewater samples, concentrate viral particles, extract RNA, and quantify viral loads. As such, standard operating procedures (SOP) should be defined, and data can be shared on public repositories just like clinical data repositories[186]. That data can further help us detect novel lineages before they appear in a large population, and preventive measures can be taken. Wastewater data can help identify the relative abundance of existing VOC and potentially assemble a novel one. Additionally, wastewater can be used to monitor other viruses without a significant increase in the cost of monitoring, including Influenza A and B, monkeypox, and norovirus[187, 188, 189, 190, 191, 192, 193, 194, 195, 196]. All the current initiatives in exploring possibilities of wastewater-based surveillance indicate its tremendous potential for reliable viral surveillance. Wastewater-based genomic surveillance can be a powerful supplement or even a main methodology for cost-efficient and reliable surveillance of current and future viral pandemics. ## Supporting Information **Supplementary Table 1.** Collection of studies and methods used for the genomic surveillance of SARS-CoV-2. [https://docs.google.com/spreadsheets/d/1UcJEczHSdDYsAKUMliCNZitgeaFB5vL4Vu1mMYSp](https://docs.google.com/spreadsheets/d/1UcJEczHSdDYsAKUMliCNZitgeaFB5vL4Vu1mMYSp) M50/edit#qid=654029053 (XLSX)
2307.16719
Binaries masses and luminosities with Gaia DR3
The recent Gaia third data release (DR3) has brought exciting new data about stellar binaries. It provides new opportunities to fully characterize more stellar systems and helps reinforce our global knowledge of stellar behaviour. By combining the new Gaia non-single stars catalog with double-lined spectroscopic binaries (SB2), one can determine the individual masses and luminosities of the components. To fit an empirical mass-luminosity relation in the Gaia G band, lower mass stars need to be added. Those can be derived using Gaia resolved wide binaries combined with literature data. Using the BINARYS tool, we combine the astrometric non-single star solutions in the Gaia DR3 with SB2 data from two other catalogs: the 9th Catalogue of Spectroscopic Binary orbits (SB9) and APOGEE. We also look for low mass stars resolved in Gaia with direct imaging and Hipparcos data or literature mass fractions. The combination of Gaia astrometric non-single star solutions with double-lined spectroscopic data enabled us to characterize 43 binary systems with SB9 and 13 with APOGEE. We further derive the masses of 6 low mass binaries resolved with Gaia. We then derive an empirical mass-luminosity relation in the Gaia G band down to 0.12 Msun.
S. Chevalier, C. Babusiaux, T. Merle, F. Arenou
2023-07-31T14:39:49Z
http://arxiv.org/abs/2307.16719v1
# Binaries masses and luminosities with Gaia DR3 ###### Abstract Context:The recent Gaia third data release (DR3) has brought some new exciting data about stellar binaries. It provides new opportunities to fully characterize more stellar systems and contribute to enforce our global knowledge of stars behaviour. Aims:By combining the new Gaia non-single stars catalog with double-lined spectroscopic binaries (SB2), one can determine the individual masses and luminosities of the components. To fit an empirical mass-luminosity relation in the Gaia \(G\) band, lower mass stars need to be added. Those can be derived using Gaia resolved wide binaries combined with literature data. Methods:Using the BINARYS tool, we combine the astrometric non-single star solutions in the Gaia DR3 with SB2 data from two other catalogs : the 9th Catalogue of Spectroscopic Binary orbits (SB9) and APOGEE. We also look for low mass stars resolved in Gaia with direct imaging and Hipparcos data or literature mass fraction. Results:The combination of Gaia astrometric non-single star solutions with double-lined spectroscopic data enabled to characterize 43 binary systems with SB9 and 13 with APOGEE. We further derive the masses of 6 low mass binaries resolved with Gaia. We then derive an empirical mass-luminosity relation in the Gaia \(G\) band down to \(0.12\,\mathcal{M}_{\odot}\). Conclusions: ## 1 Introduction The 3\({}^{\rm rd}\) data release (DR3) from the Gaia mission (Gaia Collaboration et al. 2023b) provides for the first time non-single star solutions for hundreds of thousands sources (Gaia Collaboration et al. 2023a). This brings a new exciting dataset to fully characterize new binary systems in particular their dynamical masses and luminosities. The estimation of stellar masses is a fundamental process to improve the understanding of their behaviour (luminosity, evolution, etc.). This can be mainly achieved by characterizing binary systems, which is the aim of this paper. The stars that can be fully characterized like the ones studied in this paper are not that many, but they are crucial since they enable to calibrate fundamental physical relations. These ones will then make possible to estimate the parameters of single stars or less reachable objects. The main example is the mass-luminosity relation. It is set on fully characterized star systems, for which masses and luminosities of both components are known. Knowing the dynamical masses also enables to constrain other characteristics of the stars, such as their age through isochrone fitting. One of the main purpose of this paper is to use these new Gaia DR3 data to provide new dynamical masses and a first mass luminosity relation in the G band. Empirical mass luminosity relations are mostly provided in the near-infrared due to its lower dependency on the metallicity than the visible (e.g. Delfosse et al. 2000; Benedict et al. 2016; Mann et al. 2019). In the visible empirical mass luminosity relations are provided in the V band (e.g. Delfosse et al. 2000; Benedict et al. 2016). Masses of double-lined spectroscopic binaries (SB2) have been obtained so far mainly through eclipsing binaries and a smaller sample of visually resolved binaries, the latter having the advantage of providing also a measure of the parallax of the system (e.g. Pourbaix 2000). Masses can also be estimated together with the luminosity of the stars through the astrometric motion of the photocentre. However this motion was in general too small to be detected by Hipparcos (ESA 1997) (see e.g. 
Jancart et al. 2005), except in a few cases (Arenou et al. 2000). An observing program of SB2 has been initiated since 2010 to allow the determination of masses at the 1% level using future Gaia astrometric orbits (Halbwachs et al. 2020). The new Gaia DR3 astrometric orbits allow the determination of new masses of SB2 systems, as already done between the Gaia SB2 and astrometric orbital solutions in Gaia Collaboration et al. (2023a). However the astrometric motion impacted the epoch radial velocity measures of Gaia (Babusiaux et al. 2023), leading to bad goodness of fit of the solutions. Those will therefore not be considered here. Here we combine Gaia astrometric data with double-lined spectroscopic data from APOGEE (Apache Point Observatory Galactic Evolution Experiment, Kounkel et al. 2021) and the 9th Catalogue of Spectroscopic Binary orbits (SB9, Pourbaix et al. 2004) to derive the dynamical mass of each component as well as their flux in the \(G\) band. Section 2 presents the data used: the double-lined spectroscopic data (Sect. 2.1) and the astrometric solutions from Gaia (Sect. 2.2). Then the method used to determine the binary masses is explained in Sect. 3. The results obtained are discussed in Sect. 4. Section 5 is dedicated to the mass-luminosity relation, presenting first 6 low-mass stars resolved by Gaia with direct imaging data (Sect. 5.1) and then the fit of mass-luminosity relation (Sect. 5.2). ## 2 Data We determine binary system features using astrometric data combined with double-lined spectroscopy. ### The double-lined spectroscopic data Spectroscopic data has been obtained by measuring the Doppler effect of the star system. The stellar motion induces a periodic translation of their spectrum depending on their motion in the line-of-sight from the Earth. For double-lined spectroscopy, the motion of the two sources of the binary are well identified and the ratio of the amplitude of the radial velocity motion provides the mass ratio of the star system. The estimation of the mass requires the knowledge of the inclination which cannot be obtained from the spectroscopic orbit. We used double-lined spectroscopic data originating from two catalogs: the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and the 9th Catalog of Spectroscopic Orbits (SB9). SB9 (Pourbaix et al. 2004) is a huge compilation of spectroscopic orbits from the literature over the past decades. It counts about 5000 orbits together with the input radial velocities used to derive the orbit. From those, 55 have epoch radial velocities available and a Gaia astrometric orbit counterpart. The compatibility between the orbital solutions provided by Gaia astrometry and SB9 spectroscopy has been checked first to remove triple systems. We required a consistency at \(10\sigma\) for the periods \(P\) : \[|P_{Gaia}-P_{SB9}|<10\sqrt{\sigma_{P_{Gaia}}^{2}+\sigma_{P_{SB9}}^{2}}. \tag{1}\] We note that a consistency at \(5\sigma\) would have removed the well behaved solution Gaia DR3 1528045017687961856 (HIP 62935). The compatibility of the eccentricity within 10 \(\sigma\) was also checked (like in Eq. 1) but does not remove any star system, leading to 43 SB9 binaries selected. APOGEE (Majewski et al. 2017) is a survey conducted with two high resolution spectrographs, covering the spectral band between 1.51 and 1.7 \(\upmu\)m. Data used in this paper originates from Kounkel et al. (2021), in which 7273 double-lined spectroscopic systems have been detected. 
183 star systems have been found to have both double-lined SB2 spectroscopy data in APOGEE and an astrometric orbital solution in Gaia non-single star (NSS) catalog, but only 126 that had more than one APOGEE epoch have been kept. Among them, only one star system orbit could be solved through spectroscopy only by Kounkel et al. (2021), Gaia DR3 702393458327135360 (HD 80234). The direct combination of those spectroscopic parameters with Gaia astrometric ones has already been achieved by Gaia Collaboration et al. (2023a) to derive the masses of this system. For the other stars with a Gaia NSS solution counterpart, the constrains from the astrometric orbit can be used to extract the mass ratio from the raw radial velocity curves. SB9 radial velocity curves have a much larger observation time range than APOGEE. There are enough data from various epochs to perform an independent fit of the orbit with spectroscopy only. The orbital parameters are directly given with their associated errors in the catalog. This is not the case for our APOGEE sample, except for Gaia DR3 702393458327135360 (HD 80234). ### The Gaia DR3 astrometric orbits Astrometric data is obtained by observing the corkscrew-like motion of the photocentre i.e. the apparent light source of the binary system. It provides the needed inclination but also the parallax and, combined with an SB2 solution, the flux ratio. Here the astrometric data is given by the orbital solutions from the Gaia 3\({}^{\rm rd}\) data release non-single star solutions catalog. This new catalog enabled to increase significantly the number and the precision of binary system solutions (Gaia Collaboration et al. 2023a). The catalog provides several types of solutions depending on the collected data and the detection method or instrument used, that is Eclipsing, Spectroscopic and Astrometric solutions and potential combination of those. In this paper, we only use the astrometric solutions. Among our final sample, 21 have a combined AstroSpectroSB1 solution for which only the astrometric part is taken into account. In Gaia DR3, the orbit is not described by the Campbell elements (semi-major axis, inclination, node angle, periastron angle) but by the Thiele-Innes coefficients (Halbwachs et al. 2023). ## 3 Data processing While for SB9 the orbital parameters are known and the computation of the masses can be derived directly (Annex A), we go back here to the raw spectroscopic data to improve how the correlations between the parameters are considered. The combination of spectroscopy and astrometry is achieved using BINARYS (orBIt determiNAtion with Absolute and Relative astrometRY and Spectroscopy, Leclerc et al. 2023). BINARYS can combine Hipparrocs and/or Gaia absolute astrometric data with relative astrometry and/or radial velocity data. It has been updated to handle Gaia NSS solutions and its heart which computes the likelihoods is available online1. It needs initial values and uses the automatic differentiation code TMB (Template Model Builder, Kristensen et al. 2016) to find the maximum likelihood. It gives in output the estimated orbital parameters with the associated covariance matrix together with a convergence flag. Due to the fact that Monte-Carlo techniques cannot be used with the astrometric Thiele Innes coefficients of Gaia DR3 (see Section 6.1 of Babusiaux et al. 2023), the MCMC (Markov Chain Monte Carlo) option of BINARYS cannot be used in this study while the TMB automatic differentiation is consistent with the local linear approximation result. 
Footnote 1: [https://gricad-gitlab.univ-grenoble-alpes.fr/ipag-public/gaia/binarys](https://gricad-gitlab.univ-grenoble-alpes.fr/ipag-public/gaia/binarys) BINARYS provides among all the orbital parameters the primary semi-major axis \(a_{1}\), the mass ratio \(q=\mathcal{M}_{2}/\mathcal{M}_{1}\) and the period \(P\) with their associated covariance matrix. This enables to deduce the primary and secondary masses (see Eq. 2 and 3) with the associated errors (Annex B). \[\mathcal{M}_{1}=\frac{a_{1}^{3}\left(1+q\right)^{2}}{P^{2}\ q^{3}} \tag{2}\] \[\mathcal{M}_{2}=\frac{a_{1}^{3}\left(1+q\right)^{2}}{P^{2}\ q^{2}} \tag{3}\] with the period \(P\) in years, \(a_{1}\) in au and the masses \(\mathcal{M}\) in solar masses \(\mathcal{M}_{\odot}\). It also gives the flux fraction of the secondary \(\beta\) in the G spectral band: \[\beta=\frac{F_{2}}{F_{1}+F_{2}}=\frac{q}{1+q}\left(1-\frac{a_{0}}{a_{1}}\right) \tag{4}\] with \(a_{0}\) the semi-major axis of the photocentre in the same unit as \(a_{1}\) (see Annex A). ### The 9th catalog of spectroscopic orbit - SB9 BINARYS uses here as input the radial velocity epoch data for each component, the orbital astrometric solution from Gaia NSS and initial parameters, chosen for SB9 to be the result of the direct calculation process (Annex A). Inflation of the raw radial velocities uncertainties is quite often needed, either due to an under-estimation of the formal errors, template mismatch or stellar variability effects. We therefore apply a procedure similar to Halbwachs et al. (2020) to correct the uncertainties. We first apply the variance weight factors \(w\) provided in the SB9 database. Those have been provided by some studies combining different observations and gives their relative weights. We therefore start from the weighted uncertainties \(\sigma=\frac{\sigma_{0}}{\sqrt{w}}\). Then those uncertainties are adjusted using the goodness of fit estimator F2 (Wilson & Hilferty, 1931): \[\mathrm{F2}=\left(\frac{9\nu}{2}\right)^{1/2}\left[\left(\frac{\chi^{2}}{\nu }\right)^{1/3}+\frac{2}{9\nu}-1\right] \tag{5}\] with \(\nu\) the number of degrees of freedom and \(\chi^{2}\) the weighted sum of the squares of the differences between the predicted and the observed values. The radial velocity uncertainties are scaled to obtain F2 = \(0\)_i.e._\(\chi^{2}=\chi_{0}^{2}\): \[\chi_{0}^{2}=\nu\left(1-\frac{2}{9\nu}\right)^{3} \tag{6}\] The corrected uncertainties are then \[\sigma_{\mathrm{corr}}=\sqrt{\frac{\chi^{2}}{\chi_{0}^{2}}}\times\sigma. \tag{7}\] This correction factor is applied 3 times: the uncertainties are adjusted once independently for each component with a SB1 correction, and then adjusted again together with a SB2 correction over the whole system. The process requires the number of degree of freedom to be positive _i.e._ to have more epochs than parameters to fit. This is always the case except for Gaia DR3 1480959875337657088 (HIP 69885), which has only 2 epochs for the primary and the secondary. No uncertainty-correction at all is applied for this one. In the literature, an orbit fit could have been achieved using additional blended radial velocity epochs which could not be used here. For this star the orbital parameters are mainly driven by the Gaia NSS solution. 
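As a worked illustration of Eqs. (2)-(4), the following sketch turns a set of fitted parameters (\(a_{1}\) in au, \(P\) in years, the mass ratio \(q\), and the ratio \(a_{0}/a_{1}\)) into the component masses and the secondary flux fraction. The numerical values are invented and purely illustrative, and the error propagation through the covariance matrix (Annex B) is not shown.

```python
# Sketch of Eqs. (2)-(4): component masses (solar masses) and secondary flux
# fraction from fitted orbital parameters. The input values are invented examples.

def masses_and_flux_fraction(a1_au, period_yr, q, a0_over_a1):
    """a1 in au, P in years, q = M2/M1, a0/a1 = photocentre-to-primary axis ratio."""
    m1 = a1_au**3 * (1.0 + q) ** 2 / (period_yr**2 * q**3)   # Eq. (2)
    m2 = a1_au**3 * (1.0 + q) ** 2 / (period_yr**2 * q**2)   # Eq. (3)
    beta = q / (1.0 + q) * (1.0 - a0_over_a1)                # Eq. (4)
    return m1, m2, beta

m1, m2, beta = masses_and_flux_fraction(a1_au=1.2, period_yr=2.5, q=0.8, a0_over_a1=0.3)
print(f"M1 = {m1:.2f} Msun, M2 = {m2:.2f} Msun, beta = {beta:.2f}")
```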
Four other star systems do not have enough radial velocity epochs for the secondary to have a SB1 solution necessary to apply the correction process: Gaia DR3 1441993625629660800, 1517927895803742080, 4145362250759997952 and 4354357901908595456 (respectively HIP 66511, HIP 61436, HD 163336 B and HIP 81170). For them, the two other error correction factors, SB1 primary and SB2, are still applied and the \(\chi^{2}\) of the final solution on the secondary has been checked to be small. Around 10 star systems have had a significant SB1 correction factor over the primary and/or the secondary, with \(\sqrt{\frac{\chi^{2}}{\chi_{0}^{2}}}>1.3\). ### The Apache Point Observatory Galactic Evolution Experiment - APOGEE Similarly, BINARYS is provided with the APOGEE radial velocity epoch data for each component and the orbital astrometric solution from Gaia NSS. But since for APOGEE the spectroscopic orbit is not known, the BINARYS code must be initialized with various initial parameters. We used sampled initial values of \(\mathcal{M}_{1}\) from 0.6 to 1.4 solar mass with a 0.2 step, \(\mathcal{M}_{2}\) from 0.6 to 1.4 solar mass with a 0.2 step (keeping \(\mathcal{M}_{2}\leq\mathcal{M}_{1}\)), and \(\beta\) from 0 to 0.5 with a 0.1 step. Since the direction of motion is set by the spectroscopy, we also try different configurations for the node angle, adding or not a \(\pi\) angle to both the node angle \(\Omega\) and the argument of periastron \(\omega\) of the Gaia astrometric orbit values. There are then \(15\times 6\times 2=180\) initial configurations tested for each star system. The convergence of TMB towards a good solution is not expected for every system: many will be triple systems with the short period binary seen by APOGEE and the longer period one seen by Gaia. Each TMB output corresponding to an initial configuration of a given star system must then fit the following criteria to be kept: it must converge, with a goodness-of-fit estimator \(\mathrm{F2}<5\), and the flux fraction of the secondary should be within the interval [0; 0.5] at 3 \(\sigma\). The star system is kept only if those conditions are met by at least 10% of the 180 initial configurations tested. Then, for each star system, only the solution obtained for more than 80% of the cases where TMB converged is kept. If no such solution exists, the star system is rejected. From the 126 star systems studied, 35 remain at this point. Due to the small number of radial velocity epochs, the solution may have too low a precision on the masses to be of interest despite a good convergence. For the final selection, we keep only stars with \(\frac{\sigma_{q}}{q}<0.5\), \(\frac{\sigma_{\mathcal{M}_{1}}}{\mathcal{M}_{1}}<0.5\) and \(\sigma_{\mathcal{M}_{1}}<1\,\mathcal{M}_{\odot}\), leading to 13 systems. This selection step is only applied for APOGEE, since the selection over SB9 star systems has been performed through the compatibility of periods. ## 4 Results We have obtained the dynamical masses and flux fractions of the individual stars of 56 binary systems through the combination of Gaia DR3 astrometric solutions with SB2 solutions from SB9 (43 systems) and APOGEE (13 systems). Figure 1 provides the position of the binaries we have characterized in the HR diagram. The results obtained through the orbit fitting process are given in Table 1 for SB9. The uncertainties on the masses of the binary system Gaia DR3 3954536956780305792 (HIP 61816) are extremely large, with \(\frac{\sigma_{\mathcal{M}_{1}}}{\mathcal{M}_{1}}\approx 1\).
These results are therefore unusable. We expect this result to be a consequence of the lack of constraints over the inclination for this system, which is \(i=27.5\pm 12.7^{\circ}\). Being compatible with 0 at less than \(3\sigma\), the masses are much less constrained too, leading to the large uncertainties. The characterization of the binaries from APOGEE is detailed in Table 2. A particular case to be considered is Gaia DR3 \(839401128363085568\) (LP 129-155). The results lead to \(\mathcal{M}_{1}=1.49\pm 0.44\,\mathcal{M}_{\odot}\), \(\mathcal{M}_{2}=1.34\pm 0.37\,\mathcal{M}_{\odot}\) and \(\beta=0.40\pm 0.03\), with \(\mathrm{F2}=2.5\). These parameters make the system an outlier in the mass-luminosity relation. Although a MCMC is not adapted to the Thiele-Innes handling, we tested a short MCMC on this star that went to lower mass values, indicating the presence of another solution. This binary system is the only one of our sample with a low eccentricity consistent with zero at 1 \(\sigma\). The system has then been additionally tested with an eccentricity fixed to 0, corresponding to a circular orbit.

Table 1: Orbit-fitting solutions for the SB9 sample (columns: Gaia DR3 ID, \(q\), \(\sigma_{q}\), \(\mathcal{M}_{1}\), \(\sigma_{\mathcal{M}_{1}}\), \(\mathcal{M}_{2}\), \(\sigma_{\mathcal{M}_{2}}\), \(\beta\), \(\sigma_{\beta}\), F2, Ref).

The result obtained fits much better the mass-luminosity relation despite its slightly larger F2, and is the solution kept in Table 2. Since Gaia provides the \(G\) magnitude of the binary system, the individual absolute magnitude in the \(G\)-band can be deduced for each star using \(A_{G}\), the extinction in the \(G\) band derived from the 3D extinction map of Lallement et al. (2022), and using the Gaia DR3 extinction law2: Footnote 2: [https://www.cosmos.esa.int/web/Gaia/edr3-extinction-law](https://www.cosmos.esa.int/web/Gaia/edr3-extinction-law) \[M_{G_{1}}=G+2.5\log_{10}\left(\frac{1}{1-\beta}\right)+5+5\log_{10}\left(\frac{\varpi}{1000}\right)-A_{G} \tag{8}\] \[M_{G_{2}}=G+2.5\log_{10}\left(\frac{1}{\beta}\right)+5+5\log_{10}\left(\frac{\varpi}{1000}\right)-A_{G} \tag{9}\] where \(\varpi\) is the parallax in mas. To estimate the uncertainties of those absolute magnitudes (see Appendix B), a 10% relative error on the extinction with a minimum error of 0.01 mag is assumed and a 0.01 mag error is quadratically added to the \(G\) formal magnitude errors. The extinction term \(A_{G}\) remains negligible for 90% of our sample, with a maximum \(A_{G}\) of 0.03 for APOGEE and 0.15 for SB9. Figure 2 for SB9 and Figure 3 for APOGEE give the position of all the individual stars we have characterized in the mass-luminosity diagram. They are overplotted on top of the PARSEC solar-metallicity isochrones (Padova and TRieste Stellar Evolution Code, Bressan et al. 2012). Almost all the stars are compatible with the isochrones at \(3\sigma\). For SB9, one star can be considered as an outlier, Gaia DR3 \(3427930123268526720\) (GJ 220), which is the only system of the SB9 sample with F2 \(>5\). For APOGEE, one main outlier exists, Gaia DR3 \(5285071954833306368\) (HD 50199), still compatible with the isochrones at less than \(5\sigma\). Nothing specific has been found about this system to justify its surprising position in the diagram.
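Eqs. (8) and (9) translate directly into a short helper; the sketch below uses invented values for \(G\), \(\beta\), the parallax and \(A_{G}\), and does not include the uncertainty propagation of Appendix B.

```python
# Sketch of Eqs. (8)-(9): absolute G magnitudes of both components from the
# total G magnitude, the secondary flux fraction beta, the parallax (mas) and A_G.
# Input values are invented examples.
import math

def component_abs_magnitudes(g_mag, beta, parallax_mas, a_g=0.0):
    dist_term = 5.0 + 5.0 * math.log10(parallax_mas / 1000.0)
    mg1 = g_mag + 2.5 * math.log10(1.0 / (1.0 - beta)) + dist_term - a_g  # Eq. (8)
    mg2 = g_mag + 2.5 * math.log10(1.0 / beta) + dist_term - a_g          # Eq. (9)
    return mg1, mg2

mg1, mg2 = component_abs_magnitudes(g_mag=9.4, beta=0.25, parallax_mas=21.0, a_g=0.02)
print(f"M_G1 = {mg1:.2f}, M_G2 = {mg2:.2f}")
```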
11 systems have a goodness-of-fit of their Gaia DR3 astrometric solution higher than 10, but none are outliers in the mass-luminosity relation nor in the F2 of the combined fit nor in the relation between the flux fraction and the mass ratio. We therefore decided to keep those systems in our sample for the Section 5 mass-luminosity relation study. Figure 1: H-R diagram of the characterized binaries. The absolute magnitude of the unresolved binary in the \(G\)-band \(M_{G}\) is plotted as a function of the colour \(G_{BP}-G_{BP}\). The blue dots correspond to the APOGEE binaries, and the red dots correspond to the SB9 binaries. They are overplotted on the low extinction Gaia DR3 HR diagram (in grey). Figure 3: Mass-luminosity diagram of the characterized stars from the combination of Gaia with APOGEE. The error bars at \(1\,\sigma\) are given in grey. The absolute magnitude of the individual stars in the \(G\)-band \(M_{G}\) is plotted as a function of the star mass (in \(\,\mathcal{M}_{\odot}\)). The black dots represent the stars, with the associated error bars at \(1\sigma\) in grey. The red triangles represent the outlier Gaia DR3 ID 5285071954833306368. They are overplotted on the isochrones (in green). Figure 2: Mass-luminosity diagram of the characterized stars from the combination of Gaia with SB9. The error bars at \(1\sigma\) are given in grey. The absolute magnitude of the individual stars in the \(G\)-band \(M_{G}\) is plotted as a function of the star mass (in \(\,\mathcal{M}_{\odot}\)). The black dots represent the stars, with the associated error bars at \(1\sigma\) in grey. The red triangles represent Gaia DR3 \(3954536956780305792\), for which the uncertainties are really large and not represented here. The blue diamonds represent Gaia DR3 \(3427930123268526720\) for which F2 \(>5\). They are overplotted on the solar-metallicity PARSEC isochrones (in green). ### Comparison with the direct calculation method for SB9 As a sanity check, we compare the masses obtained with the orbit fitting process (Table 1) to those obtained through direct calculation, that is using directly the orbital parameters provided by SB9 to derive the mass functions without going back to the raw data, as detailed in Annex A (Table A.1). Going back to the raw spectroscopic data allows to take into account the correlations between the spectroscopic parameters and then have a better estimation of the orbital parameters and their uncertainties. Moreover, a correction process is applied to the uncertainties of the radial velocity epochs making them more realistic. Figure 4 presents the distribution of the compatibility (in \(\sigma\)) between the masses obtained for the orbit fitting process and the direct calculation. The compatibility is defined as: \[\mathrm{compatibility}=\frac{\mathcal{M}_{OF}-\mathcal{M}_{DC}}{\mathrm{max}( \sigma_{OF},\sigma_{DC})} \tag{10}\] where \(\mathcal{M}_{OF}\) is the mass obtained through orbit fitting and \(\mathcal{M}_{DC}\) the mass obtained through direct calculation. It shows that as expected the results are nicely compatible, but with a difference that is not fully negligible in some cases. Figure 5 provides the uncertainties over the masses obtained with one method with respect to the other. The majority of the mass uncertainties are under the identity line, meaning that the uncertainties coming from the direct calculation process are generally higher than the ones from orbit fitting. 
It confirms that the uncertainties over the orbital parameters are often reduced when accounting for the correlations between them. A few points have slightly bigger uncertainties through the orbit fitting process. This is the result of the correction process applied to the uncertainties (see Eq. 7). One noteworthy case is Gaia DR3 414536225075997952 (HD 163336 B), for which the difference between the uncertainties is quite large. The radial velocity data contains only 3 epochs for the secondary. Thus, we can expect a strong correlation between the SB9 orbital parameters. We performed a MCMC over the raw spectroscopic data only and confirm that strong correlations appear and that the distribution is strongly asymmetric, explaining the strong improvement obtained by including the knowledge of the Gaia orbit in the spectroscopic fit. ### Reference comparison Three star systems from our SB9 sample are identified as SB2 in the Gaia DR3 catalogue with direct masses derived by Gaia Collaboration et al. (2023a). While our mass estimates are consistent with Gaia DR3 1067685718250692352 (HIP 45794) and Gaia DR3 2035577729682322176 (HIP 97640), Gaia DR3 595390807776621824 HIP 42418) is a 5\(\sigma\) outlier. It has a photocentre semi-major axis \(a_{0}\) of 3.4 mas while the other two stars have a smaller \(a_{0}\sim 0.8\) mas, suggesting that the astrometric motion impacted the spectroscopic measure, as suggested by Babusiaux et al. (2023). This is due to the fact that the expected position of the spectra on the Gaia Radial Velocity Spectrometer (RVS) detectors is predicted by the standard 5-parameter astrometric motion instead of the epoch astrometric one which would not be precise enough. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline Gaia DR3 ID & \(q\) & \(\sigma_{q}\) & \(\mathcal{M}_{1}\) & \(\sigma_{\mathcal{M}_{1}}\) & \(\mathcal{M}_{2}\) & \(\sigma_{\mathcal{M}_{3}}\) & \(\beta\) & \(\sigma_{B}\) & F2 \\ \hline \hline 839401128363085568\({}^{\rm o}\) & 0.9143 & 0.0993 & 0.6225 & 0.1162 & 0.5692 & 0.0918 & 0.3865 & 0.0266 & 4.03 \\ 683525873153063680 & 0.9376 & 0.0496 & 0.5124 & 0.1037 & 0.4804 & 0.0951 & 0.3985 & 0.0167 & 1.72 \\ 702393458327135360 & 0.9398 & 0.0252 & 1.4875 & 0.5001 & 1.3980 & 0.4663 & 0.3871 & 0.0186 & 4.53 \\ 790545256897189760 & 0.8549 & 0.0846 & 0.4236 & 0.0870 & 0.3621 & 0.0618 & 0.2850 & 0.0263 & 1.20 \\ 794359875050204544 & 0.8824 & 0.1294 & 1.1746 & 0.3294 & 1.0365 & 0.2566 & 0.2676 & 0.0360 & 0.99 \\ 82431548253160592 & 0.9035 & 0.3744 & 0.9467 & 0.2152 & 0.8554 & 0.2087 & 0.3104 & 0.1036 & 3.92 \\ 9011702414164592 & 0.8396 & 0.1134 & 0.9309 & 0.2506 & 0.7815 & 0.1951 & 0.2330 & 0.0404 & 0.46 \\ 126790706306737344 & 0.6633 & 0.0586 & 1.1378 & 0.2014 & 0.7547 & 0.0912 & 0.0703 & 0.0157 & 0.61 \\ 16361320061580415488 & 0.7176 & 0.1342 & 1.1046 & 0.4127 & 0.7927 & 0.2574 & 0.1488 & 0.0446 & 3.24 \\ 213482954477683280 & 0.7721 & 0.0680 & 0.8751 & 0.1245 & 0.6757 & 0.0738 & 0.1505 & 0.0251 & 0.73 \\ 2705239237909520128 & 0.5293 & 0.1035 & 0.2336 & 0.0683 & 0.1236 & 0.0326 & 0.1993 & 0.0427 & 0.53 \\ 3847995791877023104 & 0.8387 & 0.0788 & 1.1446 & 0.2029 & 0.9600 & 0.1550 & 0.3167 & 0.0244 & 1.05 \\ 5285071954833306368\({}^{\rm o}\) & 0.7905 & 0.1019 & 0.4609 & 0.1058 & 0.3643 & 0.0853 & 0.1499 & 0.0416 & 1.58 \\ \hline \end{tabular} 1 \end{table} Table 2: Solutions from the combination of Gaia NSS astrometric solutions with APOGEE double-lined spectroscopy. 
\(q\) is the mass ratio, \(\mathcal{M}_{1}\) is the mass of the primary, \(\mathcal{M}_{2}\) is the mass of the secondary (in \(\mathcal{M}_{\odot}\)) and \(\beta\) is the flux fraction of the secondary. \(\sigma_{\varpi}\), \(\sigma_{\mathcal{M}_{1}}\), \(\sigma_{\mathcal{M}_{\rm b}}\) and \(\sigma_{\beta}\) are their associated uncertainties. F2 is the goodness-of-fit estimator. Figure 4: Compatibility density of the masses obtained by direct calculation and by orbit fitting for SB9 binaries. The compatibility is in \(\sigma\). It is given in orange for primary masses and in purple for secondary masses. Masses were obtained combining a visual orbit with an SB2 orbit for Gaia DR3 2129771310248902016 (HIP 95575) by Picotti et al. (2020) (\(\mathcal{M}_{\rm 1}=0.670\pm 0.069\), \(\mathcal{M}_{\rm 2}=0.602\pm 0.061\), compatible with our results within \(1\sigma\)), for Gaia DR3 2067948245320365184 (HIP 101382) by Kiefer et al. (2018) (\(\mathcal{M}_{\rm 1}=0.8420\pm 0.0014\), \(\mathcal{M}_{\rm 2}=0.66201\pm 0.00076\), compatible with our results within \(2\sigma\)) and for Gaia DR3 23832387685219328 (HIP 20601) by Halbwachs et al. (2020) (\(\mathcal{M}_{\rm 1}=0.9798\pm 0.0019\)\(\mathcal{M}_{\rm 0}\), \(\mathcal{M}_{\rm 2}=0.72697\pm 0.00094\)\(\mathcal{M}_{\rm 0}\), compatible with our results within \(1\sigma\)). Halbwachs et al. (2020) combined the raw relative astrometry and spectroscopic data. They obtained a parallax at \(4.4\sigma\) from the Gaia NSS one. We tested a combined fit with the radial velocity, interferometry from Halbwachs et al. (2020) and Gaia NSS solution and obtained a goodness of fit of F2 = 1.5, a parallax of \(\varpi=16.573\pm 0.017\) and masses \(\mathcal{M}_{\rm 1}=0.9816\pm 0.0014\)\(\mathcal{M}_{\odot}\) and \(\mathcal{M}_{\rm 2}=0.72808\pm 0.00076\)\(\mathcal{M}_{\odot}\). This new parallax is at \(3.4\sigma\) from the Halbwachs et al. (2020) one and reduced to \(2.6\sigma\) if we take into account the Gaia DR3 parallax zero point (Lindegren et al. 2021), highlighting that SB2 stars with direct imaging will be excellent test cases for Gaia DR4 epoch data validation. The orbit fit for this binary is given on the Figure 6. For APOGEE, the only star system for which masses have been obtained in the literature is the binary Gaia DR3 702393458327135360 which has been discussed in Section 2.1. This star has been solved through spectroscopy only (Kounkel et al. 2021) and then combined with Gaia astrometry through a direct calculation process by Gaia Collaboration et al. (2023a) to obtain \(\mathcal{M}_{\rm 1}=1.14\pm 0.38\)\(\mathcal{M}_{\odot}\), \(\mathcal{M}_{\rm 2}=1.06\pm 0.35\)\(\mathcal{M}_{\odot}\) and \(F_{\rm 2}/F_{\rm 1}=0.567\pm 0.071\), fully compatible with our results. The uncertainties reported here are larger than the one of Gaia Collaboration et al. (2023a). This may be due to a slight discrepancy between the orbital parameters of APOGEE and Gaia, specifically on the eccentricities which are at \(4\sigma\) from each other. This discrepancy leads to a high F2 of 4.6 in our solution and to higher uncertainties than what is obtained by a direct calculation. The orbit fit for the binary Gaia DR3 702393458327135360 (HD 80234) is given on the Figure 7. The difference of observation time clearly appears on the Figures 6 and 7, where the lack of radial velocity epochs for APOGEE is rather obvious. 
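For reference, the compatibility statistic of Eq. (10) used throughout the comparison above is simply the signed mass difference in units of the larger of the two uncertainties; a minimal sketch with invented numbers is given below.

```python
# Sketch of the compatibility statistic of Eq. (10); the values are invented examples.
def compatibility(m_orbit_fit, sig_orbit_fit, m_direct_calc, sig_direct_calc):
    return (m_orbit_fit - m_direct_calc) / max(sig_orbit_fit, sig_direct_calc)

print(compatibility(1.02, 0.03, 0.98, 0.05))  # 0.8 sigma
```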
Figure 5: Comparison of the uncertainties on the masses obtained by direct calculation (\(\sigma_{DC}\)) or by orbit fitting (\(\sigma_{OF}\)) for binaries from SB9. The primaries are given in orange, and secondaries in purple. The outlier star system Gaia DR3 4145362250759997952 is represented in green for its primary and secondary. The grey line is the identity line \(y=x\). Figure 6: Radial velocity fit for the binary system Gaia DR3 3283823387685219328 (HIP 20601) from SB9. The radial velocity (km/s) is plotted as a function of the phase. The orange dots correspond to the radial velocity epochs of the primary, and the purple dots give the radial velocity epochs for the secondary. The black curves represent the corresponding fits obtained combining the SB9 epoch radial velocity with the Gaia DR3 NSS astrometric orbital solution with BINARYS. Figure 7: Radial velocity fit for the binary system Gaia DR3 702393458327135360 (HD 80234) from APOGEE. The radial velocity (km/s) is plotted as a function of the phase. The orange dots correspond to the radial velocity epochs of the primary, and the purple dots give the radial velocity epochs for the secondary. The black curves represent the corresponding fits obtained combining the APOGEE epoch radial velocity with the Gaia DR3 NSS astrometric orbital solution with BINARYS. ## 5 The Mass-Luminosity relation The mass calculations presented above make it possible to perform an empirical fit of the mass-luminosity relation in the G-band using the Gaia photometry. However, the masses calculated do not provide satisfying constraints in the interesting region of the relation for low-mass stars (\(\mathcal{M}<0.7\,\mathcal{M}_{\odot}\)). To fill in this part of the H-R diagram, we looked for low mass stars resolved by Gaia and direct imaging data from the literature, following the work of Leclerc et al. (2023) on HIP 88745. ### Low-mass systems resolved by Gaia with direct imaging data We found three spatially resolved star systems studied with direct imaging in Mann et al. (2019) which also have Hipparcos (van Leeuwen 2007) transit data (TD) and Gaia resolved observations consistent with the direct imaging data: Gl 330, Gl 860 and Gl 277. Gl 568 is not in the Mann et al. (2019) sample but has direct imaging data from McAlister et al. (1989) and Mason et al. (2018) and has been added to our sample. Those four stars are analysed by BINARYS by combining the direct imaging data with Hipparcos TD and Gaia astrometric parameters of both components following the methodology detailed in Leclerc et al. (2023). All Gaia solutions have a 5-parameter solution except Gl 330, for which the secondary component is a 2-parameter solution, and Gl 860, for which both components are 2-parameter solutions only. One Hipparcos TD outlier at 5\(\sigma\) had to be removed for both Gl 568 and Gl 277. Two stars from the Mann et al. (2019) sample, Gl 65 and Gl 473, are resolved by Gaia with separations consistent with the visual orbit, but no Hipparcos data exist for them. However those two stars have a literature mass fraction \(B=\frac{\mathcal{M}_{2}}{\mathcal{M}_{1}+\mathcal{M}_{2}}\): for Gl 65 \(B=0.494\pm 0.04\) from Geyer et al. (1988) and for Gl 473 \(B=0.477\pm 0.008\) from Torres et al. (1999). We incorporated this information within BINARYS for those stars. As our sample contains very nearby stars, we added the perspective acceleration terms in BINARYS following the description detailed in ESA (1997) and Halbwachs et al. (2023).
We first compute the radial proper motion, that is the relative change in distance per year, \(\mu_{r}=V_{r}\varpi/A_{Z}\) in yr\({}^{-1}\) with \(A_{Z}=9.7779222\times 10^{8}\) mas yr km s\({}^{-1}\). The perspective acceleration changes the along-scan abscissa \(\nu\) (in mas) by adding : \[\Delta\nu=-\mu_{r}\Delta T\left(\frac{\partial\nu}{\partial\varpi}\varpi+ \frac{\partial\nu}{\partial\mu_{\alpha}}\mu_{\alpha^{\prime}}+\frac{\partial \nu}{\partial\mu_{\delta}}\mu_{\delta}\right) \tag{11}\] with \(\Delta T\) the epoch in years relative to the reference epoch for the astrometric parameters (that is 1991.25 for Hipparcos and 2016.0 for Gaia DR3). This \(\Delta\nu\) is subtracted to the Hipparcos abscissa residuals and added to the Gaia simulated abscissa. However one has to take into account that the perspective acceleration has been taken into account for DR3 for stars with a Gaia DR2 radial velocity or in the table of nearby Hipparcos stars with radial velocity used for DR23. Here only the radial velocity for the A component of Gl 860 was applied for the DR3 processing, using the same \(V_{r}=-33.94\) km s\({}^{-1}\) as we used. For Gl 65 we used \(V_{r}=39.04\) km s\({}^{-1}\) (Kervella et al. 2016) while for Gl 473 all literature \(V_{r}\) are consistent with zero. We checked that taking into account the perspective acceleration for our stars only change marginally the \(\chi^{2}\). Footnote 3: [https://gea.esac.esa.int/archive/documentation/GDRZ/Data_processing/chap_cu3ast/sec_cu3ast_cali/ssec_cu3ast_cali_source.html#Ch3.T3](https://gea.esac.esa.int/archive/documentation/GDRZ/Data_processing/chap_cu3ast/sec_cu3ast_cali/ssec_cu3ast_cali_source.html#Ch3.T3) The results obtained for these six stars are given in Table 3. Our mass estimates are consistent with Kervella et al. (2016), Delfosse et al. (2000) and Benedict et al. (2016) for Gl 65, but we all use the same literature mass fraction. For Gl 473, using again the same literature mass fraction, our masses are consistent with Delfosse et al. (2000) but are at 2.5\(\sigma\) from the values of Benedict et al. (2016) which is driven by the difference with the RECONS parallax they used. Our mass estimate for Gl 860 is consistent with Delfosse et al. (2000). We derive the dynamical masses for the first time, to our knowledge, of the components of Gl 277, Gl 330 and Gl 568. ### Fitting the Mass-Luminosity relation To fit the mass-luminosity relation, we implemented a TMB function that allows to take into account the uncertainties on both the mass and the magnitude, and more importantly, the correlations between the parameters of the two components of the same system (the calculation of the covariance is detailed in Anne B). The true magnitudes of the stars are used as a random parameter: they are marginalized, that is integrated out of the likelihood. The initial value of TMB is provided by a classical polynomial fit (R 1m function). We selected only components with \(M_{G}>5\) to be used as input for the fit as for fainter magnitudes the age dependency is well known to be too large. We tested several degree for the polynomial and fitting the logarithm of the mass instead of the mass itself and used the Bayesian Information Criterion (BIC) to compare the models. The BIC favoured a polynomial of degree 4 fitting the log of the mass. The coefficients are given in Table 4. 
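A stripped-down version of this fit, ignoring the mass and magnitude uncertainties, their correlations, and the marginalized true magnitudes handled by the TMB function, can be sketched as an ordinary degree-4 polynomial fit of the logarithm of the mass against \(M_{G}\); the data points below are invented and the resulting coefficients are not those of Table 4.

```python
# Stripped-down sketch of the mass-luminosity fit: degree-4 polynomial in M_G
# fitting log10(M/Msun), for components with M_G > 5. Data points are invented;
# the real fit (TMB) also propagates uncertainties and correlations.
import numpy as np

m_g  = np.array([5.5, 6.2, 7.0, 8.1, 9.3, 10.4, 11.6, 12.8, 14.0])       # absolute G mag
mass = np.array([0.85, 0.75, 0.64, 0.52, 0.40, 0.30, 0.22, 0.16, 0.12])  # Msun

keep = m_g > 5.0                      # same faint-end selection as in the text
coeffs = np.polyfit(m_g[keep], np.log10(mass[keep]), deg=4)

predict_mass = lambda mg: 10.0 ** np.polyval(coeffs, mg)
print(f"predicted mass at M_G = 10: {predict_mass(10.0):.2f} Msun")
```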
The fit uncertainties have been estimated through a bootstrap, leading to uncertainties smaller than \(0.015\,\mathcal{M}_{\odot}\) for magnitudes higher than \(M_{G}>6\) corresponding to masses \(\mathcal{M}<0.77\) \(\mathcal{M}_{\odot}\). The fit is displayed on Fig. 8 together with the PARSEC, the Baraffe et al. (2015)4 and the BASTI (A BAg of Stellar Tracks and Isochrones, Hidalgo et al. 2018)5 isochrones. The age dependency of the mass-luminosity relation starts to be significant for all isochrones at \(\gtrsim 0.6\) \(\mathcal{M}_{\odot}\) and our fit follows the oldest isochrones. For masses \(<0.5\,\mathcal{M}_{\odot}\) our empirical relation indicate lower masses for a given luminosity than the PARSEC isochrones. Our results are consistent with the Baraffe et al. (2015) isochrones except in the low mass region where we find slightly higher masses for a given luminosity. Footnote 4: [http://perso.ens-lyon.fr/isabelle.baraffe/BHAC15dir/](http://perso.ens-lyon.fr/isabelle.baraffe/BHAC15dir/) Footnote 5: [http://basti-iac.oa-teramo.inaf.it/](http://basti-iac.oa-teramo.inaf.it/) ## 6 Conclusion We have estimated the masses of binary systems by combining the astrometric orbits from Gaia DR3 with spectroscopy, 43 star systems from SB9 and 13 from APOGEE. While the spectroscopic orbit was already known for the SB9 stars, it was the case for only one APOGEE star. We tested on SB9 the difference between a direct calculation of the masses using the orbital parameters and a combined fit using the raw radial velocity measures, the later estimating better the parameters, their uncertainties and their correlations. We also estimated the masses of 6 stars resolved by Gaia DR3 with literature direct imaging and either Hipparcos data or literature mass fraction. Three of those stars have dynamical masses derived for the first time. The BINARYS tool have been used to perform the combined fits. BINARYS was extended for this study to handle Gaia DR3 NSS solutions and perspective acceleration within the Hipparcos and Gaia observation time. Using the derived masses and the \(G\) magnitudes we derived a first empirical mass-luminosity relation in the \(G\) band taking into account all the correlations between the component masses and magnitudes. This empirical relation is found to be in better agreement with the Baraffe et al. (2015) isochrones than with the PARSEC ones. We expect Gaia DR4 to significantly increase the sample of stars that could be used in such a study. Moreover Gaia DR4 will provide access to the epoch astrometry. It will enable to make a full combined fit on the raw data of both spectroscopic or direct imaging data with Gaia astrometry. It will also enable to dig into systems with a too low astrometric signal to have a full Gaia DR4 orbital solution but with a good spectroscopic or direct imaging one. We can therefore expect a much more in depth study of the mass-luminosity relation with Gaia DR4, in particular a study of the metallicity dependency should be conducted with a larger sample. ###### Acknowledgements. We thank J.B. Le Bouquin for helping debugging our test on HIP 20601 and X. Deffosse for providing lists of close-by M-dwarfs to dig into. We thank S. Cassisi for his prompt feedback on the BASTI isochrones. T.M. is granted by the BELSPO Belgian federal research program FED-WIN under the research profile Pfr-2020-033_BISTRO. 
This work has made use of data from the European Space Agency (ESA) space mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
2309.09395
Formalizing two-level type theory with cofibrant exo-nat
This study provides some results about two-level type-theoretic notions in a way that the proofs are fully formalizable in a proof assistant implementing two-level type theory such as Agda. The difference from prior works is that these proofs do not assume any abuse of notation, providing us with more direct formalization. Moreover, some new notions, such as function extensionality for cofibrant types, are introduced. The necessity of such notions arises during the task of formalization. In addition, we provide some novel results about inductive types using cofibrant exo-nat, the natural number type at the non-fibrant level. While emphasizing the necessity of this axiom by citing new applications as justifications, we also touch upon the semantic aspect of the theory by presenting various models that satisfy this axiom.
Elif Uskuplu
2023-09-17T23:19:46Z
http://arxiv.org/abs/2309.09395v1
# Formalizing two-level type theory ###### Abstract This study provides some results about two-level type-theoretic notions in a way that the proofs are fully formalizable in a proof assistant implementing two-level type theory such as Agda. The difference from prior works is that these proofs do not assume any abuse of notation, providing us with more direct formalization. Moreover, some new notions, such as function extensionality for cofibrant types, are introduced. The necessity of such notions arises during the task of formalization. In addition, we provide some novel results about inductive types using cofibrant exo-nat, the natural number type at the non-fibrant level. While emphasizing the necessity of this axiom by citing new applications as justifications, we also touch upon the semantic aspect of the theory by presenting various models that satisfy this axiom. **Keywords.** two-level type theory, homotopy type theory, proof assistant, Agda, category with families. ###### Contents * 1 Introduction * 2 Review about two-level type theory * 2.1 Types & exo-types * 2.2 Isomorphisms & Equivalences * 2.3 Fibrant exo-types * 2.4 Cofibrant exo-types * 2.5 Sharp exo-types * 3 Lifting cofibrancy from exo-nat to other types * 3.1 List exo-types * 3.2 Exo-type of binary trees * 4 Semantics of two-level type theory * 4.1 Category with families * 4.2 Type formers in CwFs * 4.3 Two-level CwFs * 5 Models with cofibrant exo-nat Future directions ###### Abstract We consider the 2LTT case of a 2LTT model with a about 2LTT and some of its applications. This was one of the first attempts to use these features of Agda. Although the initial goal was to formalize the content of the paper _The Univalence Principle_[1], the basics of 2LTT had to be built first because the study in the mentioned paper is based on 2LTT. Within this experience, some modifications to the definitions and some additional tools were needed. One of our goals is to emphasize these changes and additions that make 2LTT applicable in Agda easily. During our Agda project, we encountered situations where certain proofs required the formalization of a new auxiliary tool, which we refer to as _function extensionality for cofibrant types_. Function extensionality is a fundamental property of dependent functions, asserting that two functions are equal if and only if they produce equal results for every input. The specific notion of equality may vary depending on different contexts and levels, but in the case of traditional function extensionality, the equality notion remains consistent both in the domain and the range. However, when dealing with cofibrant types, the situation is different. Here, the equality notions for the input terms and the output terms may differ. Therefore, in our study, we introduce a novel function extensionality property tailored for such cases, and we rigorously establish its validity. Furthermore, our project led us to uncover novel results related to certain inductive types, notably _List_ and _Binary-Trees_, which had not been explored within the context of 2LTT before. What initially started as a foundation for another study has opened up exciting new directions for further research. One of the original motivations for 2LTT was to define semisimplicial types. 
However, although plain 2LTT allows defining the type of \(n\)-truncated semisimplicial types for any exo-natural number \(n\), a term of \(\mathbb{N}\) in the second level, it does not seem possible to assemble these into a type of untruncated semisimplicial types. Voevodsky's solution [15] was to assume that exo-nat, \(\mathbb{N}\) in the second level, is fibrant (isomorphic to a type in the first level), which works for simplicial sets but may not hold in all infinity-toposes. However, assuming cofibrancy, a weaker notion than fibrancy, of exo-nat also allows for defining a fibrant type of untruncated semisimplicial types with a broader syntax, including models for all infinity-toposes. After giving the overview of the models of 2LTT in Section 4, we provide such models in Section 5. **Structure of this work.** In Section 2, we begin with giving the basics of 2LTT. Our basic objects, _types_ and _exo-types_ are explained. We then give the three classifications about exo-types, which are _fibrancy_, _cofibrancy_, and _sharpness_. Note that these concepts are the basic building blocks of the mentioned study [1]. We also provide new results about the cofibrancy and sharpness of some inductive types. Proposition 2.14 and the entire Section 3 are new in this field. Throughout the paper, we point to the relevant codes in the Agda library and talk about how, if any, things that differ from previous works contribute to Agda formalization. In Section 4, in order to present the complete picture, we also explore the semantic aspect of the study and introduce the meaning of 2LTT's model, providing results about the general models of the theory we are concerned with. As far as we know, there have been no previous studies on non-trivial models of 2LTT with cofibrant exo-nat. By _non-trivial_, we mean the proposed model indeed satisfies cofibrant exo-nat but does not satisfy fibrant exo-nat. Theorem 5.3 proves the existence of models we desired. **Drawback and limitations.** Although the proofs in the paper are logically valid and complete, the formalization of 2LTT heavily depends on new, experimental, and undocumented features of Agda. As such, there are some bugs emerging from the previously untested interactions of these features, and there might be more than we encountered. There are some efforts by Agda developers to fix these bugs in the Agda source code. We expect the study with these experimental features to produce documentation on what we need to avoid bugs. **Acknowledgements**. We would like to thank Michael Shulman and Nicolai Kraus for many interesting discussions and insightful comments. The work is partially supported by NSF grant DMS-1902092, the Army Research Office W911NF-20-1-0075, and the Simons Foundation. The work is also based upon work supported by the Air Force Office of Scientific Research under award number FA9550-21-1-0009. ## 2 Review about two-level type theory ### Types & exo-types The primitive objects of a type theory are **types** and **terms**. These are similar to sets and elements in set theory. For 2LTT, there are two different _kinds_ of types: one kind in HoTT and other kind in meta level. We reserve the word "types" for ones in HoTT (as usual) while we use the word "exo-type2" for ones in meta level, as in [1]. According to this distinction, we should define each type and type formers twice: one for types, one for exo-types. Footnote 2: This term was originally suggested by Ulrik Buchholtz. In type theory, we define **universe** as a type of types. 
In order to avoid paradoxes a la Russell, we assume a universe hierarchy. Thus, a universe is again a type, but in a different sense than its terms. In our setting, we have a hierarchy of universes of types, denoted by \(\mathcal{U}\), and exo-universes of exo-types, denoted by \(\mathcal{U}^{e}\). We always make the distinction between types and exo-types using the superscript \(-^{e}\). After having universes and exo-universes, it is easy to define types and exo-types. We are assuming all definitions in HoTT Book [13], and hence we have basic type and type formers. Exo-type and exo-type formers are defined exactly in the same way, but these are defined in the exo-universe. **Definition 2.1**.: * For a type \(A:\mathcal{U}\) and a type family \(B:A\to\mathcal{U}\), we define the **dependent function type** (briefly \(\prod\)-type) \[\prod_{a:A}B(a)\] as usual. If \(B\) is a constant family, then the dependent function type is the ordinary function type: \[\prod_{a:A}B:=A\to B\,.\] For an exo-type \(A:\mathcal{U}^{e}\) and an exo-type family \(B:A\to\mathcal{U}^{e}\), we have the **dependent function exo-type** (briefly \(\prod\)-exo-type) \[\prod_{a:A}^{e}B(a)\] in a similar way. If \(B\) is constant, then we have the ordinary function exo-type \[\prod_{a:A}^{e}\!B:=A\to^{e}B\,.\] It should be noted that the notation for maps between exo-types "\(\to^{e}\)" can be used throughout this paper to emphasize distinction. However, we omit the notation and use usual arrows for any cases since the domain and the codomain can be derived from the context, or we can specify whether we have type or exo-type. * For a type \(A:\mathcal{U}\) and a type family \(B:A\to\mathcal{U}\), we define the **dependent sum type** (briefly \(\sum\)-type) \[\sum_{a:A}B(a)\] as usual, and its terms are of the form \(\mathsf{pair}(a,b)\) for \(a:A\) and \(b:B(a)\). The projection maps are \(\pi_{1}:\sum_{a:A}B(a)\to A\) and \(\pi_{2}:\sum_{a:A}B(a)\to B(a)\). When \(B\) is a constant family, we call it the **product type** and denote it by \(A\times B\). For an exo-type \(A:\mathcal{U}^{e}\) and an exo-type family \(B:A\to\mathcal{U}^{e}\), we have the **dependent sum exo-type** (briefly \(\sum\)-exo-type) \[\sum_{a:A}^{e}\!B(a)\] in a similar way, and its terms are of the form \(\mathsf{pair}^{e}(a,b)\) for \(a:A\) and \(b:B(a)\). The projection maps are \(\pi_{1}{}^{e}:\sum_{a:A}{}^{e}\!B(a)\to A\) and \(\pi_{2}{}^{e}:\sum_{a:A}{}^{e}\!B(a)\to B(a)\). When \(B\) is a constant family, we have the **product exo-type**\(A\times^{e}B\). Note that we will use the notation \((\mathbf{a},\mathbf{b})\) for \(\mathsf{pair}(a,b)\) or \(\mathsf{pair}^{e}(a,b)\) when the context is clear. We prefer the comma notation in this paper due to easier reading. _This choice and the choice for arrows may seem to be contradictory with our claim of "no abuse of notation". However, the choices are only for aesthetic purposes, and the notation difference is precise in the formalization._ * For a pair of types \(A,B:\mathcal{U}\), we define the **coproduct type**\(A+B:\mathcal{U}\) as usual, constructed by the maps \(\mathsf{inl}:A\to A+B\) and \(\mathsf{inr}:B\to A+B\). For a pair of exo-types \(A,B:\mathcal{U}^{e}\), we define the **coproduct exo-type**\(A+^{e}B:\mathcal{U}^{e}\) similarly, constructed by the maps \(\mathsf{inl}^{e}:A\to A+^{e}B\) and \(\mathsf{inr}^{e}:B\to A+^{e}B\). 
* While the **unit type**, denoted by \(\mathbf{1}:\mathcal{U}\), is constructed by a single term \(\star:\mathbf{1}\), the **unit exo-type**, denoted by \(\mathbf{1}^{e}:\mathcal{U}^{e}\), is constructed by a single exo-term \(\star^{e}:\mathbf{1}^{e}\). * We have both the **empty type**, denoted by \(\mathbf{0}:\mathcal{U}\), and the **empty exo-type**, denoted by \(\mathbf{0}^{e}:\mathcal{U}^{e}\). Both have no constructors, and hence no term by definition. * The **natural number type**, denoted by \(\mathbb{N}:\mathcal{U}\), is constructed by a term \(\mathbf{0}:\mathbb{N}\) and a function term \(\mathsf{succ}:\mathbb{N}\to\mathbb{N}\). The **natural number exo-type** (briefly exo-natural or exo-nat), denoted by \(\mathbb{N}^{e}:\mathcal{U}^{e}\), is constructed by \(\mathbf{0}^{e}:\mathbb{N}^{e}\) and \(\mathsf{succ}^{e}:\mathbb{N}^{e}\to\mathbb{N}^{e}\). * The **finite type** having \(n\) terms, denoted by \(\mathbb{N}_{<n}\), defined inductively (on \(n:\mathbb{N}\)) as \[\mathbb{N}_{<0}:=\mathbf{0}\quad\text{ and }\quad\mathbb{N}_{<n+1}:=\mathbb{N}_{<n}+ \mathbf{1}\,.\] Similarly **exo-finite exo-type** having \(n\) terms, denoted by \(\mathbb{N}_{<n}^{e}\), is defined inductively (on \(n:\mathbb{N}^{e}\)) as \[\mathbb{N}_{<0}^{e}:=\mathbf{0}^{e}\quad\text{ and }\quad\mathbb{N}_{<n+1}^{e}:= \mathbb{N}_{<n}^{e}+^{e}\,\mathbf{1}^{e}\,.\] * For a type \(A:\mathcal{U}\) and \(a,b:A\), we define the **identity type** (or **path type**) \(a=b:\mathcal{U}\) as usual, its constructor is \(\mathsf{refl}:a=a\). For an exo-type \(A:\mathcal{U}^{e}\) and \(a,b:A\), we have the **exo-equality**\(a\!=\!^{e}\!b:\mathcal{U}^{e}\) in a similar way; its constructor is \(\mathsf{refl}^{e}:a\!=\!^{e}a\). Note that these type/exo-type pairs may not coincide in the cases of \(+^{e}\), \(\mathbb{N}^{e}\), \(\mathbf{0}^{e}\), and \(=\!^{e}\). For example, even if \(A,B:\mathcal{U}\), we may not have \(A+^{e}B:\mathcal{U}\), namely, this is always an exo-type, but not generally a type. The difference in these cases is that the elimination/induction rules of fibrant types cannot be used unless the target is fibrant. For example, we can define functions into any exo-type by recursion on \(\mathbb{N}^{e}\), but if we want to define a function \(f:\mathbb{N}\to A\) by recursion, we must have \(A:\mathcal{U}\). **Remark 2.2**.: We assume the univalence axiom (UA) only for the identity type. Thus, we also have the function extensionality (\(\mathsf{funext}\)) for it because UA implies \(\mathsf{funext}\) (Theorems 4.9.4 & 4.9.5 in [13]). For the exo-equality, we assume the \(\mathsf{funext}^{e}\) and the axiom called Uniqueness of Identity Proofs (UIP). In other words, we have the following * \(\mathsf{UA}:\prod_{A,B:\mathcal{U}}(A\simeq B)\to(A=B)\) * \(\mathsf{funext}:(f,g:\prod_{A}B(a))\to(\prod_{a:A}f(a)=g(a)\to(f=g))\) * \(\mathsf{funext}^{e}:(f,g:\prod_{A}^{e}B(a))\to(\prod_{a:A}^{e}f(a)\!=\!^{e}g(a )\to(f\!=\!^{e}g))\) * \(\mathsf{UIP}:\prod_{a,b:A}^{e}\left(\prod_{(p,q:a\!=\!^{e}b)}^{e}p\!=\!^{e}q\right)\) In the applications or examples, we often make use of both versions of function extensionality. Note also that UIP says that for any terms \(a,b\) in an exo-type \(A\), if they are exo-equal, namely, there is an exo-equality between them, then the equality term is unique. 
**Remark 2.3**.: Just as there is a type hierarchy in terms of path types such as **contractible** types, **propositions**, and **sets**, we can define **exo-contractible** exo-type, **exo-propositions**, and **exo-sets** similarly with respect to \(=\!^{e}\). Since we assume UIP for exo-equality, this yields that all exo-types are exo-sets. As another note, any property of \(=\) can be defined for \(=\!^{e}\) similarly by its elimination rule. For example, we have both transport (tr) and exo-transport (tr\({}^{e}\)), we have both path-type homotopies (\(\sim\)) of functions between types and exo-equality homotopies (\(\sim^{e}\)) of functions between exo-types, and so on. For this kind of properties, not defined here, we refer to the HoTT Book [13]. **Agda Side.** The folders Types and Exo-types in our Agda library [14] contain all the definitions above. The main file Primitive.agda (Figure 1) has the definition of the universe and the exo-universe. The flag --two-level enables a new sort called SSet. This provides two distinct universes for us. Note that while it is common to assume typical ambiguity3 in papers, the formalization works with polymorphic universes. Footnote 3: HoTT Book, Section 1.3 ### Isomorphisms & Equivalences Considering these twin definitions in the previous section, it's natural to ask whether there is a correspondence between them. We obtain such a correspondence according to the relation between types and exo-types. In [4], it is assumed that there is a coercion map \(c\) from types to exo-types, for any type \(A:\mathcal{U}\) we have \(c(A):\mathcal{U}^{e}\). Another approach, as in [1], is taking \(c\) as an inclusion, in other words, assuming every type is an exo-type. In this work, the second approach is assumed. Therefore, we can apply exo-type formers to types. For example, both \(\mathbb{N}+\mathbb{N}\) and \(\mathbb{N}+^{e}\mathbb{N}\) make sense, but both are still exo-types. We will later prove some isomorphisms related to such correspondences. However, what an isomorphism between exo-types means should be defined beforehand. **Definition 2.4**.: * A function \(f:A\to B\) between exo-types is called an **isomorphism** (or **exo-isomorphism**) if there is a function \(g:B\to A\) such that \(g\circ^{e}f=^{e}\mathsf{id}_{A}\) and \(f\circ^{e}g=^{e}\mathsf{id}_{B}\) where \(\mathsf{id}_{A}:A\to A\) is the identity map. We define the exo-type of exo-isomorphisms as \[A\cong B:=\sum\nolimits_{f:A\to B}^{e}\sum\nolimits_{g:B\to A}^{e}(f\circ^{e}g= ^{e}\mathsf{id}_{B})\times^{e}(g\circ^{e}f=^{e}\mathsf{id}_{A}).\] It can be read as \(A\cong B\) consists of exo-quadruples \((f,g,p,q)\) such that \(f:A\to B\) and \(g:B\to A\) are functions, and \(p,q\) are witnesses for the relevant identities. Note that \(\circ^{e}\) means that the composition is between two functions of exo-types. * A function \(f:A\to B\) between types is called an **equivalence** if its fibers are contractible. We define the type of equivalences as \[A\simeq B:=\sum_{f:A\to B}\left(\prod_{b:B}\mathsf{is-Contr}\left(\sum_{a:A}f( a)=b\right)\right).\] Figure 1: Agda code for two kinds of universes. It can be read as \(A\simeq B\) consists of pairs \((f,p)\) where \(f\) is a function and \(p\) is a witness of that all fibers of \(f\) is contractible. In other words, the preimage of each term in \(B\) is unique up to the identity type. 
* A function \(f:A\to B\) between types is called **quasi-invertible** if there is a function \(g:B\to A\) such that \(g\circ f=\mathsf{id}_{A}\) and \(f\circ g=\mathsf{id}_{B}\) where \(\mathsf{id}_{A}:A\to A\) is the identity map. **Remark 2.5**.: In these definitions, one can use \(\mathsf{funext}^{e}\) or \(\mathsf{funext}\), and instead of showing, for example, \(g\circ^{e}f=^{e}\mathsf{id}_{A}\), it can be showed that \(g(f(a))=a\) for any \(a\in A\). Moreover, a map is an equivalence if and only if it is quasi-invertible. Therefore, we can use both interchangebly. For practical purposes, when we need to show f is an equivalence, we generally do it by showing that it is quasi-invertible. Assuming that each type is an exo-type, and considering all definitions so far, the correspondence between exo-type formers and type formers can be characterized4 as follows: Footnote 4: This is the same as Lemma 2.11 in [4]. **Theorem 2.6**.: _If \(A,C:\mathcal{U}\) are types and \(B:A\to\mathcal{U}\) is a type family, we have the following maps. The first three maps are exo-isomorphisms._ 1. \(\mathbf{1}^{e}\to\mathbf{1}\)_,_ 2. \(\sum_{a:A}^{e}B(a)\to\sum_{a:A}B(a)\)_,_ 3. \(\prod_{a:A}^{e}B(a)\to\prod_{a:A}B(a)\)_,_ 4. \(A+^{e}C\to A+C\)_,_ 5. \(\mathbf{0}^{e}\to\mathbf{0}\)_,_ 6. \(\mathbb{N}^{e}\to\mathbb{N}\)_,_ 7. _For any_ \(a,b:A\)_, we have_ \((a=^{e}b)\to(a=b)\)_._ Proof.: For each one, the definition follows from the elimination rule of the corresponding exo-types. \(i\). The map \(x\mapsto\star\) is an isomorphism with the inverse \(x\mapsto\star^{e}\). _ii_. The map \(\mathsf{pair}^{e}(a,b)\mapsto\mathsf{pair}(a,b)\) is an isomorphism with the inverse \(\mathsf{pair}(a,b)\mapsto\mathsf{pair}^{e}(a,b)\). _iii_. The map \(f\mapsto(a\mapsto f(a))\) is an isomorphism with the inverse as denoted the same. Note that we do not take the identity map because domain and codomain are not the same. _iv_. The map is defined as \(\mathsf{nil}^{e}\,a\mapsto\mathsf{nil}\,a\) and \(\mathsf{inr}^{e}\,b\mapsto\mathsf{inr}\,b\). \(v\). The map is the usual null map that corresponds to the principle _ex falso quodlibet_. _vi_. The map, say \(f\), is defined as \(\mathsf{0}^{e}\mapsto\mathbf{0}\) and \(\mathsf{succ}^{e}(n)\mapsto\mathsf{succ}(f(n))\). _vii_. The map is defined as \(\mathsf{refl}^{e}\mapsto\mathsf{refl}\). **Remark 2.7**.: It is worth emphasizing that the inverses of the maps _iv_, \(v\), and _vi_ can be assumed to exist. There are some models where these hold (for the details, see the discussion below Lemma 2.11 [4]). However, a possible inverse for the map _vii_ would yield a contradiction because the univalence axiom is inconsistent with the uniqueness of identity proofs. This conversion from the exo-equality (\(=^{e}\)) to the identity (\(=\)) has still an importance in many proofs later. Thus we denote it by \[\mathsf{eqtoid}:\prod_{a,b:A}{}^{e}(a\!=^{e}\!b)\to(a=b)\,.\] One of its useful corollaries is the following lemma. **Lemma 2.8**.: _Let \(A,B:\mathcal{U}\) be two types. If \(A\cong B\), then \(A\simeq B\)._ Proof.: Let \(f:A\to B\) and \(g:B\to A\) be such that \[p:f\circ^{e}g=^{e}\mathsf{id}_{B}\quad\text{ and }\quad q:g\circ^{e}f=^{e} \mathsf{id}_{A}\,.\] Since, \(A\to B\) and \(B\to A\) are also (function) types, we get \[\mathsf{eqtoid}(p):f\circ^{e}g=\mathsf{id}_{B}\quad\text{ and }\quad\mathsf{eqtoid}(q):g\circ^{e}f=\mathsf{id}_{A}\,.\] Since \(f\circ^{e}g=f\circ g\) and \(g\circ^{e}f=g\circ f\) hold by definition, we are done. 
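In particular, combining Lemma 2.8 with the univalence axiom from Remark 2.2, exo-isomorphic types are already identified: \[A\cong B\ \xrightarrow{\ \text{Lemma 2.8}\ }\ A\simeq B\ \xrightarrow{\ \mathsf{UA}\ }\ A=B\,.\] This chain is used repeatedly below, for instance in the proof of Proposition 2.10(v).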
**Agda Side.** The file C.agda contains the complete proof of Lemma 2.6. In addition to the flag --two-level, we should also use another flag, which is --cumulativity. This enables the subtyping rule \(\mathsf{Set}\ \mathsf{i}\leq\mathsf{SSet}\ \mathsf{i}\) for any level \(\mathsf{i}\). Thanks to the flag, we can take types as arguments for the operations/formations that are originally defined for exo-types, as we discussed at the beginning of this section. There is a problem with the usage of these two flags together, and Figure 2 gives an example of this. As in the example, Agda has a feature that prevents elimination from a fibrant type to a non-fibrant type. However, when we lift \(\mathbb{N}\) to a term in \(\mathcal{U}^{e}\), which is obtained by cumulativity, it is not a fibrant type anymore, and Agda allows the elimination of its constructors. As noted in Remark 2.7, such maps are not entirely wrong and can be added to our theory. However, there are essential models where they do not exist, so they should not be definable in an implementation. Also, there are other problems emerging due to the cumulativity itself5. Figure 2: An example of the interaction between the flags —two-level and —cumulativity. ### Fibrant exo-types **Definition 2.9**.: An exo-type \(A:\mathcal{U}^{e}\) is called a **fibrant exo-type** if there is a type \(RA:\mathcal{U}\) such that \(A\) and \(RA\) are exo-isomorphic. In other words, \(A\) is fibrant when the following exo-type is inhabited \[\mathsf{isFibrant}(A):=\sideset{}{{}^{e}}{\sum}_{RA:\mathcal{U}}{}^{e}(A\cong RA )\,.\] **Proposition 2.10** ([4]).: _The following are true:_ 1. _Any type_ \(A:\mathcal{U}\) _is a fibrant exo-type._ 2. _The unit exo-type_ \(\mathbf{1}^{e}\) _is fibrant._ 3. _Let_ \(A:\mathcal{U}^{e}\) _and_ \(B:A\to\mathcal{U}^{e}\)_, if_ \(A\) _is fibrant, and each_ \(B(a)\) _is fibrant, then both_ \(\sum_{a:A}^{e}B(a)\) _and_ \(\prod_{a:A}^{e}B(a)\) _are fibrant._ 4. _If_ \(A,B:\mathcal{U}^{e}\) _are exo-isomorphic types, then_ \(A\) _is fibrant if and only if_ \(B\) _is fibrant._ 5. _If_ \(A:\mathcal{U}^{e}\) _is fibrant, and there are two types_ \(B,C:\mathcal{U}\) _such that_ \(A\cong B\) _and_ \(A\cong C\)_, then_ \(B=C\)_._ Proof.: 1. This is trivial because we can take \((A,\mathsf{id}_{A}):\mathsf{isFibrant}(A)\). 2. By Theorem 2.6, we know that there is an exo-isomorphism \(e:\mathbf{1}^{e}\cong\mathbf{1}\). 3. Let \(RA:\mathcal{U}\) and \(RB:A\to\mathcal{U}\) such that \(A\cong RA\) and \(B(a)\cong RB(a)\) for each \(a:A\). Take \(r_{A}:A\to RA\) with inverse \(s_{A}:RA\to A\). Using the functoriality of \(\sum\)-exo-types and the map _ii_ in Theorem 2.6, we have \[\sideset{}{{}^{e}}{\sum}_{a:A}{}^{e}B(a)\cong\sideset{}{{}^{e}}{\sum}_{c:RA} RB(s_{A}(c))\cong\sideset{}{{}^{e}}{\sum}_{c:RA}RB(s_{A}(c)).\] Similarly, the functoriality of \(\prod\)-exo-types and the map _iii_ in Theorem 2.6 imply that \[\sideset{}{{}^{e}}{\prod}_{a:A}{}^{e}B(a)\cong\sideset{}{{}^{e}}{\prod}_{c:RA} RB(s_{A}(c))\cong\sideset{}{{}^{e}}{\prod}_{c:RA}RB(s_{A}(c)).\] 4. If \(A\) is fibrant, namely, \(A\cong RA\) for a type \(RA:\mathcal{U}\), since \(\cong\) is transitive, we get \(B\cong A\cong RA\). The reverse is the same. 5. By transitivity, we have \(B\cong C\). Lemma 2.8 implies \(A\simeq B\), and the result follows from the univalence. Just as we have fibrant exo-types, we can consider the maps between fibrant exo-types as the maps between types. **Definition 2.11**.: Let \(A,B:\mathcal{U}^{e}\) be two fibrant exo-types, and \(f:A\to B\). 
Let \(RA,RB:\mathcal{U}\) be such that \(A\cong RA\) and \(B\cong RB\), and take \(s_{A}:RA\to A\) and \(r_{B}:B\to RB\) from these isomorphisms, so that we have the composite \[RA\xrightarrow{\ s_{A}\ }A\xrightarrow{\ f\ }B\xrightarrow{\ r_{B}\ }RB\,.\] We call \(f\) a **fibrant-equivalence** if the map \(r_{B}\circ^{e}f\circ^{e}s_{A}:RA\to RB\) is an equivalence.

**Proposition 2.12**.: _Let \(A\), \(B\), \(C\) be fibrant exo-types. The following are true:_ 1. _Every exo-isomorphism \(f:A\to B\) is a fibrant-equivalence._ 2. _If \(f:A\to B\) and \(g:B\to C\) are fibrant-equivalences, then so is the composite \(g\circ^{e}f\)._ 3. _If \(f:A\to B\) is a fibrant-equivalence and \(f^{\prime}:A\to B\) satisfies \(f(x)\!=^{e}f^{\prime}(x)\) for all \(x:A\), then \(f^{\prime}\) is a fibrant-equivalence._

Proof.: i. The map \(r_{B}\circ^{e}f\circ^{e}s_{A}\) is an exo-isomorphism between the types \(RA\) and \(RB\), so it is an equivalence by Lemma 2.8.

ii. We need to prove that \(r_{C}\circ^{e}(g\circ^{e}f)\circ^{e}s_{A}\) is an equivalence. This map is path-homotopic to \((r_{C}\circ^{e}g\circ^{e}s_{B})\circ(r_{B}\circ^{e}f\circ^{e}s_{A})\). Since compositions and homotopies preserve equivalences, we are done.

iii. By assumption \(r_{B}\circ^{e}f\circ^{e}s_{A}\) is an equivalence. For all \(c:RA\), we have \[r_{B}(f(s_{A}(c)))\!=^{e}r_{B}(f^{\prime}(s_{A}(c)))\] because \(f(x)\!=^{e}f^{\prime}(x)\) for all \(x:A\). Since homotopies preserve equivalences, we get that \(r_{B}\circ^{e}f^{\prime}\circ^{e}s_{A}\) is an equivalence.

We also have other useful properties of fibrant-equivalences. For example, given maps \(f:A\to B\) and \(g:B\to C\) between fibrant exo-types, the **2-out-of-3 property** says that if two of the three maps \(f\), \(g\), and the composite \(g\circ^{e}f\) are fibrant-equivalences, then so is the third. For another example, given a commutative square of maps \(f\), \(f^{\prime}\), \(g\), \(g^{\prime}\) between fibrant exo-types, the **3-out-of-4 property** says that if three of the four maps are fibrant-equivalences, then so is the fourth.

**Agda Side.** The folder Coercion in the library [14] contains the formalizations of all definitions and propositions in this section. Note that in [1], when an exo-type \(A\) is fibrant, its fibrant match is also assumed to be \(A\), because that paper uses \(\mathsf{isFibrant}(A):=\sum_{RA:\mathcal{U}}^{e}(A\!=^{e}RA)\) as the definition of fibrancy. If we assume axiom T3, these two are logically equivalent. The main purpose of our choices becomes clearer in the formalization.
In the case \(A\!=^{e}\!B\) where \(A\) is an exo-type, and \(B\) is a type, we can still define an isomorphism between them by transporting their terms under this equality, but if we have \(A\cong B\), we get the isomorphism directly. In this way, we gain practical advantages. For example, the notion of fibrant-equivalence is new, and it is designed for this practical purposes. Also, with this approach, all proofs provided here become the same as in their formalizations. Thus, there is no gap between language and symbolism. Footnote 6: Axiom T3 says that if an exotype \(A\) is isomorphic to a type \(B\), then \(A\) is itself a (fibrant) type [4]. ### Cofibrant exo-types In this section, a weaker definition than fibrancy is given. We also provide a new (but logically equivalent) characterization of it. Note that if a property, that is defined for types initially, is attributed to a fibrant exo-type, we emphasize that the property belongs to the fibrant match of the exo-type. **Definition 2.13** ([4] Corollary 3.19(i)).: Let \(A:\mathcal{U}^{e}\) be an exo-type. We call it **cofibrant** if the following holds * For any type family \(Y:A\to\mathcal{U}\) over \(A\), the exo-type \(\prod_{a:A}^{e}Y(a)\) is fibrant, * In the case above, if \(Y(a)\) is contractible for each \(a:A\), then so is the fibrant match of \(\prod_{a:A}^{e}Y(a)\). The following gives a logically equivalent definition of cofibrancy. Attention should be paid to the use of \(=\) and \(=^{e}\) in order to indicate whether the terms belong to the type or the exo-type. **Proposition 2.14**.: _Let \(A:\mathcal{U}^{e}\) be an exo-type such that for any type family \(Y:A\to\mathcal{U}\) over \(A\), the exo-type \(\prod_{a:A}^{e}Y(a)\) is fibrant. Then the following are equivalent:_ 1. _In the case above, if_ \(Y(a)\) _is contractible for each_ \(a:A\)_, then so is its fibrant match of_ \(\prod_{a:A}^{e}Y(a)\)_, namely_ \(A\) _is cofibrant._ 2. _(Funext for cofibrant types)._ _In the case above, for any_ \(f,g:\prod_{a:A}^{e}Y(a)\) _if_ \(f(a)=g(a)\) _for each_ \(a:A\)_, then_ \(r(f)=r(g)\) _where_ \(FM:\mathcal{U}\) _and_ Proof.: (i \(\Rightarrow\) ii) Let \(f,g:\prod_{a:A}^{e}Y(a)\) be such that \(t_{a}:f(a)=g(a)\) for each \(a:A\). Consider another type family \(Y^{\prime}:A\to\mathcal{U}\) defined as \[Y^{\prime}(a):=\sum_{b:B(a)}b=f(a).\] Then both \(f^{\prime}:=\lambda a.(f(a),\mathsf{refl})\) and \(g^{\prime}:=\lambda a.(g(a),t_{a}^{-1})\) are terms in \(\prod_{a:A}^{e}Y^{\prime}(a)\). By our assumptions, there is a \(FM^{\prime}:\mathcal{U}\) such that \[\prod_{a:A}^{e}Y^{\prime}(a)\cong\raisebox{-14.226378pt}{\includegraphics[]{ ff-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f-f--f-f-f-f-f-f-f-f-f-f-f-f-f-f- Since the type of paths at a point is contractible (Lemma 3.11.8 in [13]), we have each \(Y^{\prime}(a)\) is contractible. By the assumption (i), we get \(FM^{\prime}\) is contractible, and hence \[r^{\prime}(f)=r^{\prime}(g). 
\tag{1}\] Using this, we have the following chain of identities: \[r(f)=r(\pi_{1}(s^{\prime}(r^{\prime}(f^{\prime}))))=r(\pi_{1}(s^{\prime}(r^{ \prime}(g^{\prime}))))=r(g).\] The first (and the third, by symmetry) identity is obtained as follows: Because \(r^{\prime}\) and \(s^{\prime}\) are exo-inverses of each other, we have \[(a:A)\to\pi_{1}(f^{\prime}(a))\!=^{e}\!\pi_{1}((s^{\prime}(r^{\prime}(f^{ \prime})))(a)).\] Thus, by \(\mathsf{funext}^{e}\), we get \(\pi_{1}(f^{\prime})\!=^{e}\!\pi_{1}((s^{\prime}(r^{\prime}(f^{\prime}))))\). Then we apply7\(r\) to the equality, and make it an identity via eqtoid because the terms are in \(FM^{\prime}\) which is a type. Footnote 7: We mean the usual ap operation in HoTT by saying “applying a function to an identity (or exo-equality)”. The second identity is obtained by the applying to the function \[\lambda x.r(\lambda a.\pi_{1}((s^{\prime}x)(a))):FM^{\prime}\to FM\] to the identity 1, so we are done. Note that even if \(r\) has exo-type domain and \(s^{\prime}\) has exo-type codomain, we can compose these in a way that the resulting map is from a type to a type. Thus, we can apply it to an identity. (ii \(\Rightarrow\) i) Suppose \(Y(a)\) is contractible for each \(a:A\). We want to show that \(FM\) is contractible where Let \(b_{a}:Y(a)\) be the center of contraction for each \(a:A\). Then this gives a function \(f:=\lambda a.b_{a}:\prod_{a:A}^{e}Y(a)\). For any \(x:FM\), since \(f(a)=b_{a}=s(x)(a)\) for each \(a:A\) by contractibility asumption, we get \(r(f)=r(s(x))\). Also, applying eqtoid to the exo-equality \(r(s(x))\!=^{e}\!x\), we get \(r(s(x))=x\), and transitivity of \(=\) yields \(r(f)=x\). Therefore, \(FM\) is contractible with the center of contraction \(r(f)\). Cofibrant exo-types have the following properties. **Proposition 2.15** ([4]).: 1. _All fibrant exo-types are cofibrant._ 2. _If_ \(A\) _and_ \(B\) _are exo-types such that_ \(A\cong B\)_, and if_ \(A\) _is cofibrant, then_ \(B\) _is cofibrant._ 3. \(\mathbf{0}^{e}\) _is cofibrant, and if_ \(A,B:\mathcal{U}^{e}\) _are cofibrant, then so are_ \(A+^{e}B\) _and_ \(A\times^{e}B\)_. In particular, all exo-finite exo-types are cofibrant._ 4. _If_ \(A:\mathcal{U}^{e}\) _is cofibrant and_ \(B:A\to\mathcal{U}^{e}\) _is such that each_ \(B(a)\) _is cofibrant, then_ \(\sum_{a:A}^{e}B(a)\) _is cofibrant._ Proof.: i. Let \(A\) be a fibrant exo-type, say \(A\xrightleftharpoons[s]{r}RA\). Then for any \(Y:A\to\mathcal{U}\), by Proposition 2.10, we know that \(\prod_{a:A}^{e}Y(a)\) is fibrant with the fibrant match \(\prod_{c:RA}Y(s(c))\). If each \(Y(a)\) is contractible, then so is \(\prod_{cRA}Y(s(c))\) because this holds for ordinary types. ii. Let \(f:A\to B\) be an exo-isomorphism with the inverse \(g:B\to A\). Let \(Y:B\to\mathcal{U}\). Consider the following diagram. The maps \(u,v\) form isomorphisms by the \(\prod\)-functoriality property. The isomorphisms \(r_{A},s_{A}\) are obtained by the cofibrancy of \(A\). Therefore, we get \(\prod_{b:B}^{e}Y(b)\) is fibrant. Also, if each \(Y(b)\) is contractible, then in particular \(Y(f(a))\) is contractible for each \(a:A\). So \(FM_{A}\) is contractible since \(A\) is cofibrant. This proves that \(B\) is cofibrant. iii. In the case of \(\mathbf{0}^{e}\), for any \(Y:\mathbf{0}^{e}\to\mathcal{U}\), it's easy to show that \(\prod_{c\mathbf{0}^{e}}^{e}Y(c)\cong\mathbf{1}\). This satisfies two conditions necessary for being cofibrant simultaneously. 
For an exo-coproduct, if \(A,B:\mathcal{U}^{e}\) are cofibrant, and \(Y:A+^{e}B\to\mathcal{U}\), we have the following diagram. Here, the maps \(u,v\) have their own obvious definitions, and they are isomorphisms. (Actually, the analogous statement is true for types8, and the exo-type version can be proven similarly in terms of \(=^{e}\).) The exo-isomorphisms \(r_{A}\) and \(r_{B}\) come from cofibrancy of \(A\) and \(B\), respectively. It is easy to see that \(r_{A}\times r_{B}\) is an exo-isomorphism. Therefore, \((r_{A}\times r_{B})\circ^{e}u\) is the exo-isomorphism we searched for. If each \(Y(c)\) is contractible, in particular, both \(Y(\mathsf{inl}(a))\) and \(Y(\mathsf{inr}(b))\) are contractible. By the assumption, this means that both \(FM_{A}\) and \(FM_{B}\) are contractible, and so is \(FM_{A}\times FM_{B}\). This proves that \(A+^{e}B\) is cofibrant. Footnote 8: See Exercise 2.9 in the HoTT Book [13] The case of an exo-product is a particular case of \(\sum^{e}\)-exo-types. For exo-finite exo-types, using induction on \(\mathbb{N}^{e}\), it follows from that both \(\mathbf{0}^{e}\) and \(\mathbf{1}^{e}\) are cofibrant, and cofibrancy is preserved under exo-coproducts. iv. Suppose \(A\) is cofibrant and each \(B(a)\) is cofibrant. Let \(Y:\sum_{a:A}^{e}B(a)\to\mathcal{U}\) and consider the following diagram. The maps \(u,v\) form the usual isomorphism by the expansion of \(\sum^{e}\)-exo-type. (Actually, the same is true for types, and the exo-type version can also be proven similarly in terms of \(=^{e}\).) The maps \(r_{B(a)}\) and \(s_{B(a)}\) form an isomorphism by the cofibrancy of \(B(a)\) and the \(\coprod\)-functoriality property. The last pair of maps \(r_{A},s_{A}\) form an isomorphism by the cofibrancy of \(A\). Therefore, the exo-type in the top left corner is fibrant. Furthermore, if each \(Y(c)\) is contractible, in particular \(Y(a,b)\) is contractible, then each \(FM_{B(a)}\) is contractible by the cofibrancy of \(B(a)\). This proves that \(FM_{A}\) is contractible by the cofibrancy of \(A\). This proves that \(\sum_{a:A}^{e}B(a)\) is cofibrant. **Agda Side.** The folder Cofibration in the library [14] is about the material in this section. The file Funext_for_cofibrant_types.agda is Proposition 2.14. This is one of the major contributions of Agda formalization to the theory. We need it while we try to formalize some proofs in the next section. We already know some functorial properties of dependent function types in the usual HoTT. When we have dependent exo-types \(A:B\to\mathcal{U}\) for \(A:\mathcal{U}^{e}\), the exo-type \(\prod_{a:A}B(a)\) can be fibrant or not. However, we still need the functoriality rules for it, which are very useful in the applications9. Then Proposition 2.14 addresses this need by giving another version of funext, and it works well enough. Footnote 9: For example, see Figure 4. ### Sharp exo-types Another class of exo-types is the class of sharp ones. This was given in [1] for the first time. As in the previous section, we'll give the proofs without any abuse of notation, and so their formalization is the same as in this paper. **Definition 2.16** (Def. 2.2 [1]).: An exo-type \(A\) is **sharp** if it is cofibrant, and and it has a "fibrant replacement", meaning that there is a fibrant type \(RA\) and a map \(r:A\to RA\) such that for any family of types \(Y:RA\to\mathcal{U}\), the precomposition map below is a fibrant-equivalence (recall Definition 2.11). 
\[(-\circ^{e}r):\prod_{c:RA}Y(c)\to\prod_{a:A}^{e}Y(r(a)) \tag{2}\] The following lemma gives another definition for sharp exo-types. First, we need an auxiliary definition. **Definition 2.17**.: Let \(A,B:\mathcal{U}^{e}\) be two fibrant exo-types, and \(f:A\to B\) be a map. Let \(RA,RB:\mathcal{U}\) be such that \(A\cong RA\) and \(B\cong RB\). Take \(s_{A}:RA\to A\) and \(r_{B}:B\to RB\) as these isomorphisms. Then \(f\) has a **fibrant-section** if \[r_{B}\circ^{e}f\circ^{e}s_{A}:RA\to RB\] has a section, namely, there is \(g:RB\to RA\) such that \((r_{B}\circ^{e}f\circ^{e}s_{A})\circ^{e}g=\mathsf{id}_{RB}\). **Lemma 2.18** ([1]).: _Let \(A\) be a cofibrant exo-type, \(RA\) a type, and \(r:A\to RA\) a map. The following are equivalent:_ 1. _The map (_2_) is a fibrant-equivalence for any_ \(Y:RA\to\mathcal{U}\)_, so that_ \(A\) _is sharp._ 2. _The map (_2_) has a fibrant-section for any_ \(Y:RA\to\mathcal{U}\)_._ 3. _The map (_2_) is a fibrant-equivalence whenever_ \(Y:=\lambda x.Z\) _for a constant type_ \(Z:\mathcal{U}\)_, hence_ \(RA\to Z\) _is equivalent to the fibrant match of_ \(A\to Z\)_._ Proof.: (\(1\Rightarrow 2\)) follows from the fact that an equivalence has a section. (\(1\Rightarrow 3\)) is trivial because \(3\) is a particular case of \(1\). (\(2\Rightarrow 1\)) Let \(Y:RA\to\mathcal{U}\) be a family of types over \(A\) and consider the relevant diagram. By the assumption, we know that \(\alpha\) has a section, say \(\beta:FM\to\prod_{c:RA}Y(c)\), so that we have \(\alpha\circ\beta=id_{FM}\). It is enough to show that \(\beta\circ\alpha=\mathsf{id}_{\prod_{RA}Y}\) We claim that for any \(f,g:\prod_{c:RA}Y(c)\), we have \[\alpha(f)=\alpha(g)\to f=g.\] If this is true, for any \(h:\prod_{c:RA}Y(c)\), taking \(f:=\beta(\alpha(h))\) and \(g:=h\) we are done by \(\mathsf{funext}\). Now, for the proof of the claim, consider another type family over \(RA\) defined as \(Y^{\prime}(c):=(f(c)=g(c))\), so that we have another diagram and again \(\alpha^{\prime}\) has a section \(\beta^{\prime}\). If we assume \(\alpha(f)=\alpha(g)\) for \(f,g:\prod_{c:RA}Y(c)\), this is \[q:r_{A}(f\circ^{e}r)=r_{A}(g\circ^{e}r)\] by the definition. Define \(T:=\lambda(a:A).p_{a}\) where \(p_{a}:f(r(a))=g(r(a))\) is obtained by \[f(r(a)) = (s_{A}(r_{A}(f\circ^{e}r)))(a)\] \[= (s_{A}(r_{A}(g\circ^{e}r)))(a)=g(r(a)).\] The first and the last ones follow from the fact that \(r_{A}\) and \(s_{A}\) are inverses of each other. The middle identity is obtained by the applying \(\lambda u.(s_{A}(u))(a)\) to the path \(q\). Finally, we are done by using \(\mathsf{funext}\) on \(\beta^{\prime}(s^{\prime}_{A}(T)):\prod_{c:RA}^{e}f(c)=g(c)\). (\(3\Rightarrow 2\)) Let \(Y:RA\to\mathcal{U}\) be a family of types over \(RA\) and define \(Z:=\sum_{c:RA}Y(c)\). By our assumptions, we have the following diagrams. Our aim is to show that \(\alpha\) has a section. \(RA\to Z\)\(\stackrel{{\alpha_{Z}:=r_{Z}\circ^{e}(-\circ^{e}r)}}{{\simeq}}\)\ To finish the proof, we should show \(\alpha\circ\beta=\mathsf{id}_{FM}\). Let \(x:FM\), then for all \(a:A\), we have \[\beta(x)(r(a))=s_{A}(x)(a). 
\tag{3}\] This comes from applying \(\pi_{2}\) to the identity \((r(a),\beta(x)(r(a)))=(s_{A}(x))^{\prime}(a)\) which is obtained by \[(r(a),\beta(x)(r(a))) = \beta_{Z}(r_{Z}(s_{A}(x)^{\prime}))\left(r(a)\right)\] \[= s_{Z}(\alpha_{Z}(\beta_{Z}(r_{Z}(s_{A}(x^{\prime}))))\left(a\right)\] \[= s_{Z}(r_{Z}(s_{A}(x)^{\prime}))(a)\] \[= s_{A}(x)^{\prime}(a)\] Here, the first identity is followed by the lifting property11 of the transport map, the others are obtained by exo-isomorphisms or equivalences. Now, by funext for cofibrant types, the identity (3) proves that \(r_{A}(\beta(x)\circ^{e}r)=r_{A}(s_{A}(x))\). However, the left side is equal to \((\alpha\circ\beta)(x)\) by definition, and the right side is equal to \(x\) due to the exo-isomorphism. Therefore, \(\beta\) is a section for \(\alpha\), which proves the statement 2. Footnote 11: Lemma 2.3.2 in the HoTT Book [13] As in the cofibrant exo-types, the notion of sharpness has its own preserve rules. The following proposition gives these rules. **Proposition 2.19** ([1]).: _The following are true:_ 1. _All fibrant exo-types are sharp._ 2. _If_ \(A\) _and_ \(B\) _are exo-types such that_ \(A\cong B\)_, and if_ \(A\) _is sharp, then_ \(B\) _is sharp._ 3. \(\mathbf{0}^{e}\) _is sharp, and if_ \(A\) _and_ \(B\) _are sharp exo-types, then so are_ \(A+^{e}B\) _and_ \(A\times^{e}B\)_._ 4. _If_ \(A\) _is a sharp exo-type,_ \(B:A\to\mathcal{U}\) _is such that each_ \(B(a)\) _is sharp, then_ \(\sum_{a:A}^{e}B(a)\) _is sharp._ 5. _Each finite exo-type_ \(\mathbb{N}^{e}_{<n}\) _is sharp._ 6. _If_ \(\mathbb{N}^{e}\) _is cofibrant, then it is sharp._ Proof.: For (i) if \(A\) is a fibrant exo-type, and \(RA:\mathcal{U}\) is such that \(A\cong RA\), then we can take \(RA\) as the fibrant replacement. By Proposition 2.15(i) \(A\) is cofibrant, and the map (2) is trivally a fibrant-equivalence. By Proposition 2.15(iii), \(\mathbf{0}^{e}\) is cofibrant, and we can take \(\mathbf{0}\) as fibrant replacement. Also, the \(\prod\)-type and \(\prod^{e}\)-exo-type in the map (2) are contractible and exo-contractible, respectively. Thus, it is trivially a fibrant-equivalence. The statement (ii) can be shown as in Proposition 2.15(ii). Since \(\mathbf{1}^{e}\) is fibrant, and hence sharp, the sharpness of finite types follows from that sharpness is preserved under an exo-coproduct. The case of an exo-product is a particular case of \(\sum^{e}\)-exo-types. Thus, it remains to show the exo-coproduct case, (iv) and (vi). For an exo-coproduct, let \(A\) and \(B\) be two sharp exo-types. By Proposition 2.15(iii), \(A+^{e}B\) is cofibrant. Let \(RA,RB:\mathcal{U}\) be the fibrant replacements of \(A,B\), respectively. Let \(r_{A}:A\to RA\) and \(r_{B}:B\to RB\) be the relevant maps. We claim that \(RA+RB\) is a fibrant replacement of \(A+^{e}B\) with the map \(r:A+^{e}B\to RA+RB\) defined by \[r(x):\equiv\begin{cases}\mathsf{inl}\,r_{A}(a)&\text{ if }x=\mathsf{inl}^{e}\,a, \\ \mathsf{inr}\,r_{B}(b)&\text{ if }x=\mathsf{inr}^{e}\,b.\end{cases}\] For \(Y:RA+RB\to\mathcal{U}\) consider the commutative diagram in Figure 3. In the diagram, the equivalences \(\simeq_{1}\) and \(\simeq_{2}\), and the isomorphism \(\cong_{1}\) follow from the universal property of (exo)coproducts [13]. The three pairs of maps \((u_{A},v_{A})\), \((u_{B},v_{B})\), and \((u,v)\) are obtained by cofibrancy of \(A\), \(B\), and \(A+^{e}B\), and hence these are all exo-isomorphisms. Since \(A\) and \(B\) are sharp, the map \(\alpha\) and \(\beta\) are equivalences. 
It is then easy to see that \(\alpha\times\beta\) is also an equivalence. The compostion of three arrows on the left is the precomposition map \((-\circ^{e}r)\). Observe that the composition of three arrows on the right is an equivalence. Indeed, the first and second are already equivalences. Since we have the isomorphism \(\cong_{2}\) between types \(FA\times FB\) and \(FM\), this is also an equivalence by Proposition 2.12(i). Therefore, the right composition of arrows is an equivalence, so \((-\circ^{e}r)\) is a fibrant-equivalence. For a dependent sum, let \(A\) be a sharp exo-type, \(B:A\to\mathcal{U}\) be such that each \(B(a)\) is sharp. By Proposition 2.15(iv), we have \(\sum_{a:A}^{e}B(a)\) is cofibrant. It remains to find a fibrant replacement. Let \(r_{A}:A\to RA\) be the fibrant replacement of \(A\), and \(r_{a}:B(a)\to RB(a)\) be the fibrant replacement of each \(B(a)\) for \(a:A\). We have \(RB:A\to\mathcal{U}\). Consider the diagram obtained by the sharpness of \(A\). Figure 3: The diagram about sharpness of exo-coproduct. \(RA\to\mathcal{U}\)\(\xrightarrow[\alpha]{\simeq}_{\beta}\)\(\xrightarrow[u]{\simeq}_{v}\) Define \(\widetilde{RB}:=\beta(u(RB)):RA\to\mathcal{U}\), and we will make \(\sum_{c:RA}\widetilde{RB}(a)\) the fibrant replacement of \(\sum_{a:A}^{e}B(a)\). Define \(r:\sum_{a:A}^{e}B(a)\to\sum_{c:RA}\widetilde{RB}(a)\) by \[r(a,b):=(\,r_{A}(a)\,,\,e_{a}(r_{a}(b))\,)\] where \(e:\prod_{a:A}^{e}RB(a)\simeq\widetilde{RB}(r_{A}(a))\). The family of equivalences \(e\) is obtained by the identity, for all \(a:A\) \[\widetilde{RB}(r_{A}(a))=v(u(\widetilde{RB}\circ^{e}r_{A}))(a) = v(\alpha(\widetilde{RB}))(a)\] \[= v(\alpha(\beta(u(RB))))(a)=v(u(RB))(a)=RB(a).\] It remains to show that \((-\circ^{e}r)\) is a fibrant-equivalence. Consider the commutative diagram in Figure 4 for the dependent type \(Y:\sum_{c:RA}\widetilde{RB}(a)\to\mathcal{U}\). In Figure 4, the first and the second rows contain two types, and there is a canonical equivalence \(\phi_{0}\) between them obtained by a universal property. Similarly, the map \(\phi_{4}\) is a canonical isomorphism. Also, the maps \(\phi_{1}\), \(\phi_{2}\), and \(\phi_{3}\) have their own obvious definitions. The types from \(FM_{1}\) to \(FM_{4}\) are obtained by the cofibrancy of \(A\), \(B(a)\) and \(\sum_{a:A}^{e}B(a)\). Therefore, the pairs of maps from \((u_{1},v_{1})\) to \((u_{4},v_{4})\) are isomorphisms. The pair \((\alpha_{1},\beta_{1})\) is an equivalence since \(A\) is sharp. The pair \((\alpha_{2},\beta_{2})\) is an equivalence by a functoriality rule12 since \(RB(a)\simeq\widetilde{RB}(r_{A}(a))\) for any \(a:A\). The pair \((\alpha_{3},\beta_{3})\) is an equivalence by a similar functoriality rule and the fact that \(B(a)\) is sharp for each \(a:A\). The pair \((\alpha_{4},\beta_{4})\) is an isomorphism since the other sides of the last square are all isomorphisms, and hence it is an equivalence. Footnote 12: If \(A\simeq B\) and \(P:A\to\mathcal{U}\), \(Q:B\to\mathcal{U}\) such that \(P\simeq Q\), then \(\prod_{A}P\simeq\prod_{B}Q\). We have similar rule for exo-types with \(\cong\). Now, while the composition of the maps on the left is the precomposition \((-\circ^{e}r)\), the composition of the maps on the right is an equivalence. Thus, it finishes the proof that \(\sum_{a:A}^{e}B(a)\) is sharp. As for the statement (vi), suppose \(\mathbb{N}^{e}\) is cofibrant. 
We take \(\mathbb{N}\) as a fibrant replacement, and the transfer map \(r:\mathbb{N}^{e}\to\mathbb{N}\) defined by \(r(\mathcal{0}^{e}):=\mathcal{0}\) and \(r(\mathsf{succ}^{e}n)=\mathsf{succ}\,r(n)\). For any \(Y:\mathbb{N}\to\mathcal{U}\), consider the diagram: \(\prod_{n:\mathbb{N}}Y(n)\)\(\xrightarrow[u_{Y}]{\simeq}\)\(\prod_{n \(\prod_{z:\sum_{c:RA}\widetilde{RB}(c)}Y(z)\)\(\prod_{z:\sum_{c:RA}\widetilde{RB}(c)}Y(z)\)\(\prod_{c:RA}\prod_{y:\widetilde{RB}(c)}Y(c,y)\)\(\prod_{a:A}\prod_{x:RB(a)}Y(r_{A}(a),e_{a}(x))\)\(\prod_{a:A}\prod_{x:RB(a)}Y(r_{A}(a),e_{a}(x))\)\(\prod_{a:A}\prod_{b:B(a)}Y(r_{A}(a),e_{a}(x))\)\(\prod Using Lemma 2.18, we will show that \((-\circ^{e}r)\) has a fibrant-section, namely, \(\alpha_{Y}=u_{Y}\circ^{e}(-\circ^{e}r)\) has a section. First, we define an auxiliary type \[S:\prod_{n:\mathbb{N}}\ \prod_{Y:\mathbb{N}\to\mathcal{U}}\ \prod_{x:FM_{Y}}Y(n)\] by \(S(\mathbf{0},Y,x):=v_{Y}(x)(\mathbf{0}^{e})\) and \(S(\mathsf{succ}(n),Y,x):=S(n,Y^{\prime},x^{\prime})\) where \[Y^{\prime}(n) := Y(\mathsf{succ}(n)),\] \[x^{\prime} := u_{Y^{\prime}}(\lambda a.\,v_{Y}(x)\left(\mathsf{succ}^{e}(a) \right)).\] We then define the section map \(\beta_{Y}:FM_{Y}\to\prod_{n:\mathbb{N}}Y(n)\) as \[\beta_{Y}(x)(n):=S(n,Y,x).\] To finish the proof, it remains to show that \(\alpha_{Y}\circ\beta_{Y}=\mathsf{id}\). By \(\mathsf{funext}\), it suffices to show that for any \(x:FM_{Y}\), we have \(\alpha_{Y}(\beta_{Y}(x))=x\). Using \(\mathsf{funext}\) for cofibrant types, it is enough to show that for any \(m:\mathbb{N}^{e}\), we have \[\left(\beta_{Y}(x)\right)(r(m))=v_{Y}(x)\,(m). \tag{4}\] Indeed, if we have this equality, then by \(\mathsf{funext}\) for cofibrant types (used at the second equality), we get \[\alpha_{Y}(\beta_{Y}(x))=u_{Y}(\beta_{Y}(x)\circ^{e}r)=u_{Y}(v_{Y}(x))=x.\] We will prove the identity (4) by induction on \(m:\mathbb{N}^{e}\). For \(m=\mathbf{0}^{e}\), we get \[\beta_{Y}(x)(r(\mathbf{0}^{e}))=\beta_{Y}(x)(\mathbf{0})=S(\mathbf{0},Y,x)=v_ {Y}(x)(\mathbf{0}).\] For \(m=\mathsf{succ}^{e}\,m^{\prime}\), we get by induction \[\beta_{Y^{\prime}}(x^{\prime})(r(m^{\prime}))=v_{Y^{\prime}}(x^{\prime})(m^{ \prime}).\] Using this, we obtain \[\beta_{Y}(x)(r(m)) = \beta_{Y}(x)(\mathsf{succ}(r(m^{\prime})))\] \[= S(\mathsf{succ}(r(m^{\prime})),Y,x)\] \[= S(r(m^{\prime}),Y^{\prime},x^{\prime})\] \[= \beta_{Y^{\prime}}(x^{\prime})(r(m^{\prime}))\] \[= v_{Y^{\prime}}(x^{\prime})(m^{\prime})\] \[= v_{Y^{\prime}}(u_{Y^{\prime}}(\lambda a.\,v_{Y}(x)\left( \mathsf{succ}^{e}(a)\right)))\,(m^{\prime})\] \[= \left(\lambda a.\,v_{Y}(x)\left(\mathsf{succ}^{e}(a)\right)\right) (m^{\prime})=v_{Y}(x)\,(m)\quad\Box\] **Agda Side.** The folder Sharpness in the library [14] contains all the definitions and proofs in this section. ## 3 Lifting cofibrancy from exo-nat to other types In this section, we give a new result about other inductive types. Using cofibrant exo-nat, we will show that some other inductive types preserve cofibrancy. ### List exo-types **Definition 3.1**.: For an exo-type \(A:\mathcal{U}^{e}\) we define the exo-type \(\mathsf{List}^{e}(A):\mathcal{U}^{e}\) of **finite exo-lists** of terms of \(A\), which has constructors * \(\llbracket e\rrbracket^{e}:\mathsf{List}^{e}(A)\) * \(::^{e}:A\to\mathsf{List}^{e}(A)\to\mathsf{List}^{e}(A)\) Similarly, if \(A:\mathcal{U}\) is a type, the type \(\mathsf{List}(A)\) of **finite lists** of \(A\) has constructors \(\llbracket\) and \(::\). 
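For later use, let us record the elimination rule generated by these constructors (stated only as a sketch; it is the evident list analogue of the rules for \(\mathbb{N}^{e}\) and \(\mathbb{N}\)). Given an exo-type family \(P:\mathsf{List}^{e}(A)\to\mathcal{U}^{e}\) together with \[p_{[\,]}:P([\,]^{e})\qquad\text{ and }\qquad p_{::}:\prod_{a:A}^{e}\prod_{l:\mathsf{List}^{e}(A)}^{e}\big(P(l)\to P(a::^{e}l)\big)\,,\] we obtain a term of \(\prod_{l:\mathsf{List}^{e}(A)}^{e}P(l)\) computing as expected on \([\,]^{e}\) and \(::^{e}\). As with \(\mathbb{N}\) and \(\mathbb{N}^{e}\), the eliminator of \(\mathsf{List}(A)\) may only target fibrant families.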
As in Theorem 2.6, we have an obvious map \(f:\mathsf{List}^{e}(A)\to\mathsf{List}(A)\) for a type \(A\), defined as \(f([\,]^{e}):=[\,]\) and \(f(a::^{e}l):=a::f(l)\). Assuming that \(\mathbb{N}^{e}\) is cofibrant, cofibrancy and sharpness lift from \(A\) to \(\mathsf{List}^{e}(A)\).

**Proposition 3.3**.: _If \(\mathbb{N}^{e}\) is cofibrant and \(A:\mathcal{U}^{e}\) is a cofibrant exo-type, then \(\mathsf{List}^{e}(A)\) is cofibrant._

**Proposition 3.4**.: _If \(\mathbb{N}^{e}\) is cofibrant and \(A:\mathcal{U}^{e}\) is a sharp exo-type, then \(\mathsf{List}^{e}(A)\) is sharp._

Proof.: Since \(A\) is sharp, it is cofibrant. By Proposition 3.3, we have \(\mathsf{List}^{e}(A)\) is cofibrant, so it remains to find the fibrant replacement of it. Since \(A\) is sharp, we have a type \(RA:\mathcal{U}\) and a map \(r_{A}:A\to RA\) such that the map (2) is a fibrant-equivalence for any \(Y:RA\to\mathcal{U}\). We claim that \(\mathsf{List}(RA)\) is a fibrant replacement of \(\mathsf{List}^{e}(A)\). Define \(r:\mathsf{List}^{e}(A)\to\mathsf{List}(RA)\) as \[r([\,]^{e}):=[\,]\qquad\text{ and }\qquad r(a::^{e}l):=r_{A}(a)::r(l)\,.\] Consider the following commutative diagram for any \(Y:\mathsf{List}(RA)\to\mathcal{U}\), where \(\alpha_{Y}:=u_{Y}\circ^{e}(-\circ^{e}r)\). The type \(FM_{Y}\) and the isomorphism \(u_{Y}\) (with inverse \(v_{Y}\)) are obtained by the cofibrancy of \(\mathsf{List}^{e}(A)\). We want to show that \(\alpha_{Y}\) is an equivalence. First, we define an auxiliary type \[S:\prod_{t:\mathsf{List}(RA)}\left(\prod_{Y:\mathsf{List}(RA)\to\mathcal{U}}\left(\prod_{x:FM_{Y}}Y(t)\right)\right)\] by \(S([\,],Y,x):=v_{Y}(x)([\,]^{e})\) and \(S(c::l,Y,x):=S(l,Y^{\prime},x^{\prime})\) where \[Y^{\prime}(l):=Y(c::l),\qquad x^{\prime}:=u_{Y^{\prime}}(T)\] for a \(T:\prod_{s:\mathsf{List}^{e}(A)}^{e}Y^{\prime}(r(s))\) defined as follows: For \(s:\mathsf{List}^{e}(A)\), consider the following diagram: The equivalence \(\alpha\) is obtained by the sharpness of \(A\). Now we define \[T(s):=\beta\big(u(\lambda\,a.\,v_{Y}(x)(a::^{e}s))\big)(c)\ :\ Y(c::r(s))=Y^{\prime}(r(s)).\] We also claim that for any \(s:\mathsf{List}^{e}(A)\), \(Y:\mathsf{List}(RA)\to\mathcal{U}\), and \(x:FM_{Y}\) we have \[S(r(s),Y,x)=v_{Y}(x)(s). \tag{5}\] It follows by induction on \(\mathsf{List}^{e}(A)\). If \(s=[\,]^{e}\), the \(\mathsf{refl}\) term satisfies the identity (5).
If \(s=b\,{:}^{e}\,s^{\prime}\) for \(s^{\prime}:\mathsf{List}^{e}(A)\), then we have the following chain of identities: \[S(r(b\,{:}^{e}\,s^{\prime}),Y,x) = S(r_{A}(b)\,{:}\,r(s^{\prime}),Y,x)\] \[= S(r(s^{\prime}),Y^{\prime},x^{\prime})\] \[= v_{Y^{\prime}}(u_{Y^{\prime}}(T))(s^{\prime})\] \[= T(s^{\prime})\] \[= \beta(u(\lambda\,a.v_{Y}(x)(a\,{:}^{e}\,s^{\prime})))\,(r_{A}(b))\] \[= (\lambda\,a.v_{Y}(x)(a\,{:}^{e}\,s^{\prime}))\,(b)\] \[= v_{Y}(x)(b\,{:}^{e}\,s^{\prime})\] These are obtained by, respectively, the definition of \(r\), the definition of \(S\), the induction hypothesis, the fact that \(v^{\prime}_{Y}\) is the inverse of \(u^{\prime}_{Y}\), the definition of \(T\), the fact that \(\beta\) is the inverse of \(\alpha=u\,\circ^{e}(-\,\circ^{e}\,r)\), and the definition of the given function. Note that when we have exo-equalities of terms in types, we can use \(\mathsf{eqtoid}\) to make them identities. Now, define \(\beta_{Y}:FM_{Y}\to\prod_{t:\mathsf{List}(RA)}Y(s)\) as \(\beta_{Y}(x)(t):=S(t,Y,x)\). Then we obtain \[\alpha_{Y}(\beta_{Y}(x))=u_{Y}(\beta_{Y}(x)\,\circ^{e}\,r)=u_{Y}(v_{Y}(x))=x.\] These are obtained by, respectively, the definition of \(\alpha_{Y}\), the fact that \(\beta_{Y}(x)\,\circ^{e}\,r=v_{Y}(x)\) since we can use \(\mathsf{funext}^{e}\) for cofibrant exo-types and Equation 5, and the fact that \(v_{Y}\) is the inverse of \(u_{Y}\). This proves that \(\alpha_{Y}\) has a section for any \(Y:\mathsf{List}(RA)\to\mathcal{U}\). By Lemma 2.18, we conclude that \(\mathsf{List}^{e}(A)\) is sharp. **Agda Side.** The file Cofibrancy_of_List in our Agda library [14] is the formalization of Lemma 3.2 and Proposition 3.3. The file On_Sharpness_of_List is the formalization of Proposition 3.4. ### Exo-type of binary trees **Definition 3.5**.: For exo-type \(N,L:\mathcal{U}^{e}\) we define the exo-type \(\mathsf{BinTree}^{e}(N,L):\mathcal{U}^{e}\) of **binary exo-trees** with node values of exo-type \(N\) and leaf values of exo-type \(L\), which has constructors * \(\mathsf{leaf}^{e}:L\to\mathsf{BinTree}^{e}(N,L)\) * \(\mathsf{node}^{e}:\mathsf{BinTree}^{e}(N,L)\to N\to\mathsf{BinTree}^{e}(N,L)\to \mathsf{BinTree}^{e}(N,L)\) Similarly, if \(N,L:\mathcal{U}\) is a type, the type \(\mathsf{BinTree}(N,L)\) of **binary trees** with node values of type \(N\), leaf values of type \(L\), and constructors \(\mathsf{leaf}\) and \(\mathsf{node}\). We also have a definition for unlabeled binary (exo)trees. **Definition 3.6**.: The exo-type \(\mathsf{UnLBinTree}^{e}:\mathcal{U}^{e}\) of **unlabeled binary exo-trees** is constructed by * \(\mathsf{u-leaf}^{e}:\mathsf{UnLBinTree}^{e}\) * \(\mathsf{u}\mbox{-}\mathsf{node}^{e}:\mathsf{UnLBinTree}^{e}\to\mathsf{UnLBinTree}^{e} \to\mathsf{UnLBinTree}^{e}\) Similarly, the type \(\mathsf{UnLBinTree}:\mathcal{U}\) of **unlabeled binary trees** is constructed by \(\mathsf{u}\mbox{-}\mathsf{leaf}\) and \(\mathsf{u}\mbox{-}\mathsf{node}\). It is easy to see that if we take \(N=L=\mathbf{1}^{e}\), then \(\mathsf{BinTree}^{e}(N,L)\) is isomorphic to \(\mathsf{UnLBinTree}^{e}\). However, we have a more general relation between them. For any \(N,L:\mathcal{U}^{e}\), we can show \[\mathsf{BinTree}^{e}(N,L)\cong\sum\nolimits_{t:\mathsf{UnLBinTree}^{e}}^{e} \left(N^{\#\text{ of nodes of }t}\times^{e}L^{\#\text{ of leaves of }t}\right). \tag{6}\] Thanks to this isomorphism, we can determine the cofibrancy or the sharpness of \(\mathsf{BinTree}^{e}(N,L)\) using the cofibrancy or the sharpness of \(\mathsf{UnLBinTree}^{e}\). 
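As a small concrete instance of the isomorphism (6): the labelled tree with one node and two leaves is sent to its underlying unlabeled shape together with its labels (up to the evident ordering of the labels), \[\mathsf{node}^{e}\big(\mathsf{leaf}^{e}(l_{1}),\,n,\,\mathsf{leaf}^{e}(l_{2})\big)\ \longmapsto\ \Big(\,\mathsf{u\text{-}node}^{e}(\mathsf{u\text{-}leaf}^{e},\mathsf{u\text{-}leaf}^{e})\,,\ \big(n,(l_{1},l_{2})\big)\,\Big),\] where the second component lives in \(N^{1}\times^{e}L^{2}\), since the underlying tree has one node and two leaves. In this way the question about \(\mathsf{BinTree}^{e}(N,L)\) reduces to the unlabeled trees.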
Indeed, we will show that if \(\mathbb{N}^{e}\) is cofibrant, then \(\mathsf{UnLBinTree}^{e}\) is not only cofibrant but also sharp. Since any finite product of cofibrant (sharp) exo-types is cofibrant (sharp), we can use isomorphism 6 to show \(\mathsf{BinTree}^{e}(N,L)\) is cofibrant (sharp) under some conditions. Thus, the main goal is to get cofibrant (sharp) \(\mathsf{UnLBinTree}^{e}\). We will construct another type that is easily shown to be cofibrant (sharp) to achieve this goal. Let \(\mathsf{Parens}:\mathcal{U}^{e}\) be the exo-type of parentheses constructed by \(\mathsf{popen}:\mathsf{Parens}\) and \(\mathsf{pclose}:\mathsf{Parens}\). In other words, it is an exo-type with two terms. Define an exo-type family \[\mathsf{isbalanced}:\mathsf{List}^{e}(\mathsf{Parens})\to\mathbb{N}^{e}\to \mathcal{U}^{e}\] where \(\mathsf{isbalanced}(l,n):=\mathbf{1}^{e}\) if the list of parentheses \(l\) needs \(n\) many opening parentheses to be a balanced parenthesization, and \(\mathsf{isbalanced}(l,n):=\mathbf{0}^{e}\) otherwise. For example, we have \[\mathsf{isbalanced}(\mathsf{popen}:^{e}\mathsf{pclose}:^{e}\|^{e}, \mathbf{0}^{e})=\mathbf{1}^{e}\] \[\mathsf{isbalanced}(\mathsf{popen}:^{e}\mathsf{pclose}:^{e}\mathsf{ pclose}:^{e}\mathsf{pclose}:^{e}\mathsf{pclose}:^{e}\llbracket^{e},\mathsf{succ}^{e}( \mathbf{0}^{e}))=\mathbf{1}^{e}\] \[\mathsf{isbalanced}(\mathsf{popen}:^{e}\mathsf{pclose}:^{e}\mathsf{ popen}:^{e}\llbracket^{e},\mathsf{succ}^{e}(\mathbf{0}^{e}))=\mathbf{0}^{e}\,.\] In other words, the first says that "()" is a balanced parenthesization, the second says that "()" needs one opening parenthesis, and the third says that "()" is not balanced if we add one more opening parenthesis. Since \(\mathbf{0}^{e}\) and \(\mathbf{1}^{e}\) are cofibrant (also sharp), we get \(\mathsf{isbalanced}(l,n)\) is cofibrant (sharp) for any \(l:\mathsf{List}^{e}(\mathsf{Parens})\) and \(n:\mathbb{N}^{e}\). Finally, for any \(n:\mathbb{N}^{e}\) define \[\mathsf{Balanced}(n):=\sum\nolimits_{l:\mathsf{List}^{e}(\mathsf{Parens})}^{e} \mathsf{isbalanced}(l,n).\] **Lemma 3.7**.: _If \(\mathbb{N}^{e}\) is cofibrant, then for any \(n:\mathbb{N}^{e}\), the exo-type \(\mathsf{Balanced}(n)\) is both cofibrant and sharp._ Proof.: By definition, \(\mathsf{Parens}\) is a finite exo-type. Therefore, it is both cofibrant (Proposition 2.15(iii)) and sharp (Proposition 2.19(v)). Since we assume \(\mathbb{N}^{e}\) is cofibrant, Proposition 3.3 shows that \(\mathsf{List}^{e}(\mathsf{Parens})\) is cofibrant, and Proposition 3.4 shows that \(\mathsf{List}^{e}(\mathsf{Parens})\) is sharp. Since \(\mathsf{isbalanced}(l,n)\) is both cofibrant and sharp for any \(l:\mathsf{List}^{e}(\mathsf{Parens})\) and \(n:\mathbb{N}^{e}\), the exo-type \(\mathsf{Balanced}(n)\) is cofibrant by Proposition 2.15(iv) and sharp by Proposition 2.19(iv). The exo-type that we use to show \(\mathsf{UnLBinTree}^{e}\) is both cofibrant and sharp, is \(\mathsf{Balanced}(\mathbf{0}^{e})\). The following result will be analogous to the combinatorial result that there is a one-to-one correspondence between full binary trees and balanced parenthesizations [6]. **Proposition 3.8**.: _There is an isomorphism \(\mathsf{UnLBinTree}^{e}\cong\mathsf{Balanced}(0^{e})\)._ Proof.: We define the desired map by explain its construction. For the proof that it is indeed an isomorphism, we refer to its formalization in our library. 
Define first \[\phi:\mathsf{UnLBinTree}^{e}\rightarrow(n:\mathbb{N}^{e})\rightarrow\mathsf{ Balanced}(n)\rightarrow\mathsf{Balanced}(n)\] as follows: \[\phi(\mathsf{u}\text{-}\mathsf{leaf}^{e},\,n,\,b) := b\] \[\phi(\mathsf{u}\text{-}\mathsf{node}^{e}(t_{1},t_{2}),\,n,\,b) := \phi(t_{2},\,n,\,b^{\prime})\] where \[b^{\prime}:=(\mathsf{open}::^{e}(\pi_{1}(\phi\,(t_{1},\, \mathsf{succ}^{e}(n),\,(\mathsf{close}::^{e}\pi_{1}(b),\pi_{2}(b)))))\;,\] \[\pi_{2}(\phi\,(t_{1},\,\mathsf{succ}^{e}(n),\,(\mathsf{close}::^ {e}\pi_{1}(b),\pi_{2}(b))))).\] Using this, the main map \(\Phi:\mathsf{UnLBinTree}^{e}\rightarrow\mathsf{Balanced}(0^{e})\) is defined as \[\Phi(t):=\phi(t,0^{e},(\left\lfloor{}^{e},\star^{e})).\] The construction basically maps each tree to a balanced parenthesization in the following way. A leaf is represented by the empty list of parentheses. If trees \(t_{1}\) and \(t_{2}\) have representations \(l_{1}\) and \(l_{2}\), then the tree \(\mathsf{u}\text{-}\mathsf{node}^{e}(t_{1},t_{2})\) is represented by the list \(l_{2}\,(\,l_{1}\,)\). Figure 5 provides some examples of this conversion. The inverse of \(\Phi\) is defined precisely by reversing this process, but it needs some auxiliary definitions. One can see the formalization for the details. This isomorphism provides the results we wanted. **Corollary 3.9**.: _If \(\mathbb{N}^{e}\) is cofibrant, then \(\mathsf{UnLBinTree}^{e}\) is both cofibrant and sharp._ Proof.: It follows from Lemma 3.7 and Proposition 3.8. **Corollary 3.10**.: _If \(\mathbb{N}^{e}\) is cofibrant, and \(N,L:\mathcal{U}^{e}\) are cofibrant (sharp) exo-types, then \(\mathsf{BinTree}^{e}(N,L)\) is cofibrant (sharp)._ Proof.: It follows from the isomorphism 6 and Corollary 3.9. **Agda Side.** The file BinTree in our Agda library [14] include the proof of the isomorphism 6. The file Cofibrancy_of_BinTree includes the Proposition 3.8 and the cofibrancy results about Figure 5: Examples of the conversion between binary trees and parenthesization. (unlabeled) binary trees. The file On_Sharpness_of_BinTree includes the sharpness results about (unlabeled) binary trees. ## 4 Semantics of two-level type theory In this section, we will examine the semantic aspect of the theory discussed in the previous sections. In order for the axiom we accept about natural numbers to have meaning, we will investigate its models, preferably a large number of them. To do this, we will first provide the necessary background information about the models of the two-level type theory and then introduce the additional conditions required for the fulfillment of the aforementioned axiom. During this section, we follow the conventions below. **Variables.**\(\Gamma\), \(\Delta\), \(\ldots\) stand for _contexts_, \(\sigma\), \(\theta\), \(\tau\), \(\ldots\) for _context morphisms_, \(P\), \(Q\), \(R\), \(\ldots\) for _presheaves_, \(A\), \(B\), \(C\), \(Y\), \(\ldots\) for _types_ and _type families_, and \(a\), \(b\), \(c\), \(\ldots\) for _terms_. **Substitution.** Whenever \(P:C^{\mathrm{op}}\to\mathsf{Set}\) is a presheaf, \(\sigma:\Delta\to\Gamma\) a morphism, and \(A:P(\Gamma)\), we write \(A[\sigma]\) instead of \(P(\sigma)(A)\). **Equality signs.** Recall 2LTT has two different equality signs: "\(=\)" and "\(=\)". Now, another equality comes forward, that is the equality in _metatheory_. We reserve "\(=\)" for the metatheory's equality, and use "\(\mathsf{Id}\)" for the identity type, "\(\mathsf{Eq}\)" for the exo-equality. 
### Category with families **Definition 4.1**.: A category with families (CwF) consists of the following: * A category \(\mathcal{C}\) with a terminal object \(1_{\mathcal{C}}:\mathcal{C}\). Its objects are called _contexts_, and \(1_{\mathcal{C}}\) is called the _empty context_. * A presheaf \(\mathtt{Ty}:\mathcal{C}^{\mathrm{op}}\to\mathsf{Set}\). If \(A:\mathtt{Ty}(\Gamma)\), then we say \(A\)_is a type over \(\Gamma\)_. * A presheaf \(\mathtt{Tm}:(\int\mathtt{Ty})^{\mathrm{op}}\to\mathsf{Set}\). If \(a:\mathtt{Tm}(\Gamma,A)\), then we say \(a\)_is a term of \(A\)_. * For any \(\Gamma:\mathcal{C}\) and \(A:\mathtt{Ty}(\Gamma)\), there is an object \(\Gamma.A:\mathcal{C}\), a morphism \(p_{A}:\Gamma.A\to\Gamma\), and a term \(q_{A}:\mathtt{Tm}(\Gamma.A,A[p_{A}])\) with the universal property: for any object \(\Delta:\mathcal{C}\), a morphism \(\sigma:\Delta\to\Gamma\), and a term \(a:\mathtt{Tm}(\Delta,A[\sigma])\), there is a unique morphism \(\theta:\Delta\to\Gamma.A\) such that \(p_{A}\circ\theta=\sigma\) and \(q_{A}[\theta]=a\). This operation is called the _context extension_. Note that for all contexts \(\Gamma:\mathcal{C}\) and types \(A:\mathtt{Ty}(\Gamma)\), there is a natural isomorphism \[\mathtt{Tm}(\Gamma,A)\cong\mathcal{C}/\Gamma((\Gamma,\mathsf{id}_{\Gamma}),( \Gamma.A,p_{A})).\] Indeed, this follows from the universal property of the context extension by taking \(\Delta:=\Gamma\) and \(\sigma:=\mathsf{id}_{\Gamma}\). This observation says that the terms of \(A\) over \(\Gamma\) can be regarded as the sections of \(p_{A}:\Gamma.A\to\Gamma\). The proposition below is a useful fact for the rest of the section. **Proposition 4.2**.: _Let \(\sigma:\Delta\to\Gamma\) be a context morphism and \(A:\mathtt{Ty}(\Gamma)\). There exists a morphism \(\sigma^{+}:\Delta.A[\sigma]\to\Gamma.A\) that makes the following diagram into a pullback square:_ Proof.: The existence of a morphism \(\sigma^{+}\) follows from the universal property for the extension \(\Gamma.A\), using the morphism \(\sigma\circ p_{A[\sigma]}:\Delta.A[\sigma]\to\Gamma\). Consider another commutative diagram of the form Universal property for the extension \(\Delta.A[\sigma]\) gives a unique morphism \(\theta:\Theta\to\Delta.A[\sigma]\) such that \(p_{A[\sigma]}\circ\theta=\tau\). Since we have both \[p_{A}\circ\eta=\sigma\circ\tau\] and \[p_{A}\circ\sigma^{+}\circ\theta=\sigma\circ p_{A[\sigma]}\circ\theta=\sigma \circ\tau,\] by the universal property for the extension \(\Gamma.A\), we have \(\sigma^{+}\circ\theta=\eta\). Rather than presenting a specific instance of a CwF, we will offer a more extensive range of CwF examples in the subsequent section. #### Presheaf CwFs The category of presheaves is an archetypal example of a CwF [8]. Let \(\mathcal{C}\) be a (small) category, and \(\widehat{\mathcal{C}}\) be its category of presheaves. The CwF structure on \(\widehat{\mathcal{C}}\), denoted by \((\widehat{\mathtt{Ty}},\widehat{\mathtt{Th}})\), is defined in the following manner: * Contexts are presheaves \(\mathcal{C}^{\mathrm{op}}\to\mathsf{Set}\). * The constant presheaf that takes the value \(\star\) in the category of sets can be characterized as the terminal object \(1_{\widehat{\mathcal{C}}}\). * Recall that \(\widehat{\mathtt{Ty}}\) is a presheaf on \(\widehat{\mathcal{C}}\). If \(P:\widehat{\mathcal{C}}\), then \(\widehat{\mathtt{Ty}}(P)\) is the underlying set of the category of presheaves \(\widehat{\int P}\) over _the category of elements_\(\int P\). 
In other words, a type \(A:\widehat{\mathtt{Ty}}(P)\) is a functor \(\left(\int P\right)^{\mathrm{op}}\to\mathsf{Set}\). If \(\phi:Q\to P\) is a morphism in \(\widehat{\mathcal{C}}\) and \(A:\widehat{\mathtt{Ty}}(P)\), we define the type substitution \(A[\phi]:\widehat{\mathtt{Ty}}(Q)\) as \[A[\phi](\Gamma,x):=A(\Gamma,\phi_{\Gamma}(x))\] where \(x:Q_{\Gamma}\). * Recall that \(\widehat{\mathtt{Tm}}\) is a presheaf on \(\int\widehat{\mathtt{Ty}}\). For \(P:\widehat{C}\) and \(A:\widehat{\mathtt{Ty}}(P)\), we define \[\widehat{\mathtt{Tm}}(P,A):=\left\{a:\prod_{\Gamma:\mathcal{C},\,x:P_{\Gamma} }A(\Gamma,x)\mid\text{ if }\sigma:\Delta\to\Gamma,\,x:P_{\Gamma},\,\text{ then }a(\Gamma,x)[\sigma]=a(\Delta,x[\sigma]) \right\}.\] If \(\phi:Q\to P\) is a morphism in \(\widehat{C}\), \(A:\widehat{\mathtt{Ty}}(P)\), and \(a:\widehat{\mathtt{Tm}}(P,A)\), we define the term substitution \(a[\phi]:\widehat{\mathtt{Tm}}(Q,A[\phi])\) as \[a[\phi](\Gamma,x):=a(\Gamma,\phi_{\Gamma}(x))\] where \(x:Q_{\Gamma}\). * For \(P:\widehat{C}\) and \(A:\widehat{\mathtt{Ty}}(P)\), the context \(P.A\) is again a presheaf over \(\mathcal{C}\) defined by \[P.A(\Gamma):=\prod_{x:P(\Gamma)}A(\Gamma,x).\] If \(\sigma:\Delta\to\Gamma\) is a morphism in \(\mathcal{C}\) and \((x,a):P.A(\Gamma)\), then we define \[(x,a)[\sigma]=P.A(\sigma)(x,a):=(x[\sigma],a[\sigma]).\] The morphism \(p_{A}:P.A\to P\) is defined by the first projection. In other words, for \(\Gamma:\mathcal{C}\) and \((x,a):P.A(\Gamma)\), we have \((p_{A})_{\Gamma}(x,a)=x\). The term \(q_{A}:\widehat{\mathtt{Tm}}(P.A,A[p_{A}])\) is given by the second projection. In other words, for \(\Gamma:\mathcal{C}\) and \((x,a):P.A(\Gamma)\), we have \(q_{A}(\Gamma,(x,a))=a\). Note that \(A[p_{A}](\Gamma,(x,a))=A(\Gamma,p_{A}(x,a))=A(\Gamma,x)\). It remains to verify the universal property for the context extension. Let \(Q:\widehat{C}\), \(\tau:Q\to P\), and \(b:\widehat{\mathtt{Tm}}(Q,A[\tau])\). Define \(\theta:Q\to P.A\) as follows: for \(\Gamma:\mathcal{C}\) and \(x:Q_{\Gamma}\), we have \[\theta_{\Gamma}(x):=(\tau_{\Gamma}(x),b(\Gamma,x)).\] It is straightforward to verify the defining rules: \(\underline{p_{A}\circ\theta}=\tau\) because \[(p_{A}\circ\theta)_{\Gamma}(x)=p_{A}(\tau_{\Gamma}(x),b(\Gamma,x))=\tau_{ \Gamma}(x),\] and \(\underline{q_{A}[\theta]}=b\) because \[q_{A}[\theta](\Gamma,x)=q_{A}(\Gamma,\theta_{\Gamma}(x))=q_{A}(\Gamma,(\tau_{ \Gamma}(x),b(\Gamma,x)))=b(\Gamma,x).\] Since the defining properties of \(p_{A}\) and \(q_{A}\) determines the map \(\theta\), it is uniquely determined. Therefore \((\widehat{C},\widehat{\mathtt{Ty}},\widehat{\mathtt{Tm}})\) satisfies the conditions in Definition 4.1. **Simplicial set CwF** Let \(\mathcal{C}=\triangle\) be the simplex category. The presheaf category \(\widehat{\triangle}\) is called _the category of simplicial sets_, denoted by \(\mathsf{SSet}\). Like any other presheaf category, \(\mathsf{SSet}\) has a CwF structure \((\widehat{\mathsf{Ty}},\widehat{\mathsf{Tm}})\), but it has another CwF structure. We only define a new type presheaf \(\mathsf{Ty}:\mathsf{SSet}^{\mathsf{op}}\to\mathsf{Set}\) as follows: \(\mathsf{Ty}(P)\) is a subset of \(\widehat{\mathsf{Ty}}(P)\) such that \(A:\mathsf{Ty}(P)\) if the display map \(P.A\to P\) is a _Kan fibration_. With the induced term presheaf \(\mathsf{Tm}\) obtained by \(\widehat{\mathsf{Tm}}\), we obtain a CwF that is \((\mathsf{SSet},\mathsf{Ty},\mathsf{Tm})\). It is easy to prove that this structure satisfies all axioms of being a CwF [5]. 
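As an informal illustration of the presheaf CwF structure above, consider the degenerate case where \(\mathcal{C}\) is the one-object category with only the identity arrow: presheaves are then plain sets, a type over \(P\) is a set-valued family on \(P\), and the context extension \(P.A\) collapses to the set of dependent pairs \((x,a)\). The following Python sketch (all names are ours, and functoriality is ignored) spells out the extension and its universal property in this special case.

```python
# Degenerate presheaf CwF: the base category has one object and only the identity,
# so a "context" P is a set and a "type" A over P is a family of sets indexed by P.

def extend(P, A):
    """Context extension P.A = {(x, a) | x in P, a in A(x)}."""
    return {(x, a) for x in P for a in A(x)}

def p_A(pair):  # display map P.A -> P (first projection)
    return pair[0]

def q_A(pair):  # generic term of A over P.A (second projection)
    return pair[1]

def pair_morphism(sigma, b):
    """Universal property: given sigma : Q -> P and a term b with b(y) in A(sigma(y)),
    return the unique theta : Q -> P.A with p_A . theta = sigma and q_A . theta = b."""
    return lambda y: (sigma(y), b(y))

# Example: P = {0, 1, 2} and A(x) = {0, ..., x - 1}.
P = {0, 1, 2}
A = lambda x: set(range(x))
PA = extend(P, A)                # {(1, 0), (2, 0), (2, 1)}
sigma = lambda y: 2              # a morphism from Q = {"*"} to P
b = lambda y: 1                  # a term of A[sigma], since 1 is in A(2)
theta = pair_morphism(sigma, b)
assert theta("*") in PA
assert p_A(theta("*")) == sigma("*") and q_A(theta("*")) == b("*")
```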
We always refer to this new CwF structure on \(\mathsf{SSet}\) unless stated otherwise. \(\mathsf{SSet}\) is not a unique example of a presheaf category having two CwF structures. There are many of them, and it will be helpful to talk about two-level structures in the later sections. ### Type formers in CwFs The objective of this section is to establish the meanings of particular type formers within a CwF and to examine the requirements that must be fulfilled for the CwF to possess these type formers. Although this analysis could be applied to various standard type formers, our focus will be solely on those indispensable for the subsequent sections. **Dependent function types** We say that a CwF \((\mathcal{C},\mathsf{Ty},\mathsf{Tm})\)_supports \(\Pi\)-types_[8] if * for any two types \(A:\mathsf{Ty}(\Gamma)\) and \(B:\mathsf{Ty}(\Gamma.A)\), there is a type \(\Pi(A,B):\mathsf{Ty}(\Gamma)\), * for each \(b:\mathsf{Tm}(\Gamma.A,B)\), there is a term \(\lambda(b):\mathsf{Tm}(\Gamma,\Pi(A,B))\), and * for each \(f:\mathsf{Tm}(\Gamma,\Pi(A,B))\) and \(a:\mathsf{Tm}(\Gamma,A)\), there is a term \(\mathsf{app}(f,a):\mathsf{Tm}(\Gamma,B[a])\) such that the following equations (with appropriate quantifiers) hold: \[\mathsf{app}(\lambda(b),a)=b[a]\] \[\lambda(\mathsf{app}(f,a),q_{A})=f\] \[\Pi(A,B)[\tau]=\Pi(A[\tau],B[\tau^{+}])\] \[\lambda(b)[\tau]=\lambda(b[\tau])\] \[\mathsf{app}(f,a)[\tau]=\mathsf{app}(f[\tau],a[\tau]).\] Note that using dependent function types, one can define simple function types. Indeed, if \(A,B:\mathsf{Ty}(\Gamma)\), then _the type of functions_ from \(A\) to \(B\) over \(\Gamma\), denoted by \(B^{A}:\mathsf{Ty}(\Gamma)\) is defined via \(\Pi(A,B[p_{A}])\). We will show that a presheaf CwF supports \(\Pi\)-types. **Proposition 4.3**.: _For any (small) category \(\mathcal{C}\), the presheaf CwF \((\widehat{\mathcal{C}},\widehat{\mathsf{Ty}},\widehat{\mathsf{Tm}})\) supports \(\Pi\)-types._ Proof.: Let \(A:\widehat{\mathsf{Ty}}(P)\) and \(B:\widehat{\mathsf{Ty}}(P.A)\). First, we need to define \(\Pi(A,B):\widehat{\mathsf{Ty}}(P)\). Recall that \(\Pi(A,B)\) should be a presheaf over \(\int P\). For each \(\Gamma:\mathcal{C}\) and \(x:P_{\Gamma}\), the type \(\Pi(A,B)(\Gamma,x)\) consists of the elements \(f\) in the categorical product \[f:\prod\nolimits_{\Delta:\mathcal{C},\,\sigma:\Delta\to\Gamma,\,a:A(\Delta,x[ \sigma])}B(\Delta,(x[\sigma],a))\] such that if \(\Theta:\mathcal{C}\) and \(\tau:\Theta\to\Delta\), then \[f(\Delta,\sigma,a)[\tau]=f(\Theta,\sigma\circ\tau,a[\tau]). \tag{7}\] If \(\tau:(\Upsilon,y)\to(\Gamma,x)\) is a morphism in \(\int P\), namely, \(\tau:\Upsilon\to\Gamma\) is a morphism in \(\mathcal{C}\) such that \(x[\tau]=y\), for each \(f:\Pi(A,B)(\Gamma,x)\), \(\Delta:\mathcal{C}\), \(\sigma:\Delta\to\Upsilon\), and \(a:A(\Delta,y[\sigma])\), we define \(f[\tau](\Delta,\sigma,a):=f(\Delta,\tau\circ\sigma,a)\). Using the compatibility condition 7, we indeed obtain a presheaf \(\Pi(A,B)\) on \(\int P\). Second, for each \(b:\widehat{\mathtt{Tm}}(P.A,B)\), we need to define a term \(\lambda(b):\widehat{\mathtt{Tm}}(P,\Pi(A,B))\). 
Recall that \(\lambda(b)\) should be an element in \[\prod\nolimits_{\Gamma:\mathcal{C},x:P_{\Gamma}}\Pi(A,B)(\Gamma,x).\] Now, for \(\Gamma:\mathcal{C}\), \(x:P_{\Gamma}\), \(\Delta:\mathcal{C}\), \(\sigma:\Delta\to\Gamma\), and \(a:A(\Delta,x[\sigma])\) we define \[\lambda(b)(\Gamma,x)(\Delta,\sigma,a):=b(\Delta,x[\sigma],a).\] This definition makes sense because the term \(b\) is in \(\prod\nolimits_{\Gamma:\mathcal{C},x:P.A_{\Gamma}}B(\Gamma,z)\). Third, for each \(f:\widehat{\mathtt{Tm}}(P,\Pi(A,B))\) and \(a:\widehat{\mathtt{Tm}}(P,A)\), we need to define a term \(\mathsf{app}(f,a):\widehat{\mathtt{Tm}}(P,B[a])\). Recall that it should be in \[\prod\nolimits_{\Gamma:\mathcal{C},x:P_{\Gamma}}B[a](\Gamma,x).\] Now, for \(\Gamma:\mathcal{C}\), \(x:P_{\Gamma}\), we define \[\mathsf{app}(f,a)(\Gamma,x):=f(\Gamma,x)(\Gamma,\mathsf{id},a).\] It is easy but straightforward to prove coherence rules [8]. **Dependent pair types** We say that a CwF \((\mathcal{C},\mathtt{Ty},\mathtt{Tm})\)_supports \(\Sigma\)-types_[8] if * for any two types \(A:\mathtt{Ty}(\Gamma)\) and \(B:\mathtt{Ty}(\Gamma.A)\), there is a type \(\Sigma(A,B):\mathtt{Ty}(\Gamma)\), * for each \(a:\mathtt{Tm}(\Gamma,A)\) and \(b:\mathtt{Tm}(\Gamma,B[a])\), there is a term \(\langle a,b\rangle:\mathtt{Tm}(\Gamma,\Sigma(A,B))\), and * for each \(z:\mathtt{Tm}(\Gamma,\Sigma(A,B))\), there are terms \(\pi_{1}(z):\mathtt{Tm}(\Gamma,A)\) and \(\pi_{2}(z):\mathtt{Tm}(\Gamma,B[\pi_{1}(z)])\) such that the following equations (with appropriate quantifiers) hold: \[\pi_{1}(\langle a,b\rangle)=a\] \[\pi_{2}(\langle a,b\rangle)=b\] \[\langle\pi_{1}(z),\pi_{2}(z)\rangle=z\] \[\Sigma(A,B)[\tau]=\Sigma(A[\tau],B[\tau^{+}])\] \[\langle a,b\rangle[\tau]=\langle a[\tau],b[\tau]\rangle\] \[\pi_{1}(z)[\tau]=\pi_{1}(z[\tau])\] \[\pi_{2}(z)[\tau]=\pi_{2}(z[\tau]).\] We will show that a presheaf CwF supports \(\Sigma\)-types. **Proposition 4.4**.: _For any (small) category \(\mathcal{C}\), the presheaf CwF \((\widehat{\mathcal{C}},\widehat{\mathtt{Ty}},\widehat{\mathtt{Tm}})\) supports \(\Sigma\)-types._ Proof.: Let \(A:\widehat{\mathtt{Ty}}(P)\) and \(B:\widehat{\mathtt{Ty}}(P.A)\). First, we need to define \(\Sigma(A,B):\widehat{\mathtt{Ty}}(P)\). Recall that \(\Sigma(A,B)\) should be a presheaf over \(\int P\). For each \(\Gamma:\mathcal{C}\) and \(x:P_{\Gamma}\), we define \[\Sigma(A,B)(\Gamma,x):=\{(a,b)|\,a:A(\Gamma,x),\,b:B(\Gamma,(x,a))\}.\] For a morphism \(\sigma:(\Delta,y)\to(\Gamma,x)\) in \(\int P\) and \((a,b):\Sigma(A,B)(\Gamma,x)\), we define \[(a,b)[\sigma]:=(a[\sigma],b[\sigma]).\] One can define the operations \(\langle\_,\_,\rangle\), \(\pi_{1}\), and \(\pi_{2}\) in an obvious way, and it is easy to prove the coherence rules. For presheaf CwFs, there is a relation between \(\Pi(A,B)\) and \(\Sigma(A,B)\). **Proposition 4.5**.: _In the presheaf CwF \((\widehat{\mathcal{C}},\widehat{\mathtt{Ty}},\widehat{\mathtt{Tm}})\), if \(A:\widehat{\mathtt{Ty}}(P)\) and \(B:\widehat{\mathtt{Ty}}(P.A)\), then the type \(\Pi(A,B)\) is a pullback for the diagram_ _where \(1\) is the constant presheaf over \(\int P\) and \(\phi\) is given by the first projection._ Proof.: Note that all objects in the diagram are preheaves over \(\int P\). Define a natural transformation \(\psi:\Pi(A,B)\to\Sigma(A,B)^{A}\) as follows: for each \(\Gamma:\mathcal{C}\) and \(x:P_{\Gamma}\), we have \(\psi_{\Gamma,x}(f):=\tilde{f}\) where \(\tilde{f}(a):=(a,f(a))\). It is easy to see that \(\phi\circ\psi=\mathsf{id}\). 
Consider another commutative diagram: (8) Define a natural transformation \(\tau:D\to\Pi(A,B)\) as follows: for each \(\Gamma:\mathcal{C}\) and \(x:P_{\Gamma}\), we have \(\tau_{\Gamma,x}(d)(a):=\pi_{2}(u_{\Gamma,x}(d)(a))\). It is easy to see that \(\psi\circ\tau=u\), and \(\tau\) is unique. This finishes our claim. #### Extensional identity type We say that a CwF \((\mathcal{C},\mathtt{Ty},\mathtt{Tm})\)_supports extensional identity types_[8] if * for any type \(A:\mathtt{Ty}(\Gamma)\), there is a type \(\mathsf{Eq}_{A}:\mathtt{Ty}(\Gamma.A.A[p_{A}])\), * a morphism \(\mathsf{refl}_{A}^{e}:\Gamma.A\to\Gamma.A.A[p_{A}]\). \(\mathsf{Eq}_{A}\) such that \(p_{\mathsf{Eq}_{A}}\circ\mathsf{refl}_{A}^{e}\) equals the diagonal morphism \(\Gamma.A\to\Gamma.A.A[p_{A}]\), and * for each \(B:\mathtt{Ty}(\Gamma.A.A[p_{A}].\,\mathsf{Eq}_{A})\), a function \[J^{e}:\mathtt{Tm}(\Gamma.A,B[\mathsf{refl}_{A}^{e}])\to\mathtt{Tm}(\Gamma.A. A[p_{A}].\,\mathsf{Eq}_{A},B)\] such that these data are stable under substitution with respect to context morphisms and such that * if \(h:\mathtt{Tm}(\Gamma.A,B[\mathsf{refl}_{A}^{e}])\), then \(J^{e}(h)[\mathsf{refl}_{A}^{e}]=h\), and * if \(h:\mathtt{Tm}(\Gamma.A.A[p_{A}].\,\mathsf{Eq}_{A},B)\), then \(J^{e}(h[\mathsf{refl}_{A}^{e}])=h\). The last equality can be thought of as an \(\eta\) rule. Moreover, this rule holds if and only if \(\mathsf{refl}_{A}^{e}\) is an isomorphism with the inverse \(p_{A[p_{A}]}\circ p_{\mathsf{Eq}_{A}}\). We will show that a presheaf CwF supports extensional identity types. **Proposition 4.6**.: _For any (small) category \(\mathcal{C}\), the presheaf CwF \((\widehat{\mathcal{C}},\widehat{\mathtt{Ty}},\widehat{\mathtt{Tm}})\) supports extensional identity types._ Proof.: Let \(A:\widehat{\mathtt{Ty}}(P)\). Recall that \(\mathsf{Eq}_{A}\) should be a presheaf over \(\int\Gamma.A.A[p_{A}]\). For each \(\Gamma:\mathcal{C}\), \(x:P_{\Gamma}\), and \(a,b:A(\Gamma,x)\), we define \[\mathsf{Eq}_{A}(\Gamma,x,a,b):=\begin{cases}\{\star\}&\text{if }a=b\\ \emptyset&\text{if }a\neq b\end{cases}.\] The morphism \(\mathsf{refl}_{A}^{e}:P.A\to P.A.A[p_{A}].\,\mathsf{Eq}_{A}\) is defined for each \(\Gamma:\mathcal{C}\) as \((x,a)\mapsto(x,a,a,\star)\). Since \(p_{\mathsf{Eq}}:P.A.A[p_{A}].\,\mathsf{Eq}_{A}\to P.A.A[p_{A}]\) is the first projection, we indeed have \(p_{\mathsf{Eq}}\circ\mathsf{refl}_{A}^{e}\) is the diagonal map. If \(B:\widehat{\mathtt{Ty}}(P.A.A[p_{A}].\,\mathsf{Eq}_{A})\), the function \(J^{e}:\widehat{\mathtt{Tm}}(P.A,B[\mathsf{refl}_{A}^{e}])\to\widehat{\mathtt{Tm }}(P.A.A[p_{A}].\,\mathsf{Eq}_{A},B)\) is defined as follows: If \(\alpha:\widehat{\mathtt{Tm}}(P.A,B[\mathsf{refl}_{A}^{e}])\), this means we have \[\alpha:\prod_{\begin{subarray}{c}\Gamma:\mathcal{C}\\ (x,a):P.A_{\Gamma}\end{subarray}}B(\Gamma,(x,a,a,\star))\quad\text{ and }\quad J^{e}(\alpha):\prod_{ \begin{subarray}{c}\Gamma:\mathcal{C}\\ (x,a,b,p):P.A.A[p_{A}].\,\mathsf{Eq}_{\Gamma}\end{subarray}}B(\Gamma,(x,a,b,p)).\] Thus, we define \(J^{e}(\alpha)(\Gamma,(x,a,b,p)):=\alpha(\Gamma,(x,a))\) which makes sense because \(p:\mathsf{Eq}(a,b)\) means \(a=b\) and \(p=\star\). Now, it is easy to prove the coherence rules. 
**Intensional identity type** We say that a CwF \((\mathcal{C},\mathtt{Ty},\mathtt{Tm})\)_supports intensional identity types_[8] if * for any type \(A:\mathtt{Ty}(\Gamma)\), there is a type \(\mathsf{Id}_{A}:\mathtt{Ty}(\Gamma.A.A[p_{A}])\), * a morphism \(\mathsf{refl}_{A}:\Gamma.A\to\Gamma.A.A[p_{A}].\,\mathsf{Id}_{A}\) such that \(p_{\mathsf{Id}_{A}}\circ\mathsf{refl}_{A}\) equals the diagonal morphism \(\Gamma.A\to\Gamma.A.A[p_{A}]\), and * for each \(B:\mathtt{Ty}(\Gamma.A.A[p_{A}].\,\mathsf{Id}_{A})\), a function \[J:\mathtt{Tm}(\Gamma.A,B[\mathtt{refl}_{A}])\to\mathtt{Tm}(\Gamma.A.A[p_{A}]. \,\mathsf{Id}_{A},B)\] such that these data are stable under substitution with respect to context morphisms, and such that if \(h:\mathtt{Tm}(\Gamma.A,B[\mathtt{refl}_{A}])\), then \(J(h)[\mathtt{refl}_{A}]=h\). The last equality can be thought of as a \(\beta\) rule. Since \(\mathsf{Id}\) is a particular case of \(\mathsf{Eq}\), we can say that every presheaf CwF supports intensional identity types. If we also assume _Univalence_ for intensional identities, it is not true in general that every presheaf CwF supports such an identity. For example, the presheaf CwF on \(\mathsf{Set}\) supports the identity type and the uniqueness of identity proof (UIP), but UIP contradicts with univalence [13]. Nonetheless, it has been established that the simplicial set CwF, denoted as \(\mathsf{SSet}\), does provide support for univalent identity types [10]. Indeed, this category stands as one of the widely recognized models not only for Martin-Lof Type Theory but also for Homotopy Type Theory. In the case of intensional identity types, we can also establish a definition for what constitutes a "contractible" type. This notion serves as a crucial component in the overall definition of cofibrancy. **Definition 4.7**.: Let \(A:\mathtt{Ty}(\Gamma)\) be a type. We call \(A\) a _contractible type_ if there is a term in the following type over \(\Gamma\): \[\mathsf{isContr}(A):=\Sigma(A,\Pi(A[p_{A}],\mathsf{Id}_{A})).\] Having such a term means that there is a term \(c:\mathtt{Tm}(\Gamma,A)\) called _center of contraction_, such that for any term \(a:\mathtt{Tm}(\Gamma.A,A[p_{A}])\) there is a term \(p:\mathtt{Tm}(\Gamma,\mathsf{Id}_{A}[c^{+},a^{+}])\). 
In other words, we have a section map \(c:\Gamma\to\Gamma.A\) to \(p_{A}\) such that the following diagram, where \(h\) is the contracting homotopy, commutes: **Natural number type** We say that a CwF \((C,\mathtt{Ty},\mathtt{Tm})\)_supports a natural number type_ if * there is a type \(\mathbb{N}:\mathtt{Ty}(1_{C})\) where \(1_{C}\) is the terminal object of \(C\), * there is a term \(\mathtt{0}:\mathtt{Tm}(1_{C},\mathbb{N})\) which can be thought as a context morphism \(1_{C}\to 1_{C}.\mathbb{N}\), * there is a morphism \(\mathtt{succ}:\mathtt{Tm}(1_{C},\mathbb{N})\to\mathtt{Tm}(1_{C},\mathbb{N})\) which can be thought as a context morphism \(1_{C}.\mathbb{N}\to 1_{C}.\mathbb{N}\), and * for each \(\Gamma:C\), the unique morphism \(\sigma:\Gamma\to 1_{C}\), and \(B:\mathtt{Ty}(\Gamma.\mathbb{N}[\sigma])\) with two context morphisms \(b_{0}:\Gamma\to\Gamma.\mathbb{N}[\sigma].B\) and \(b_{s}:\Gamma.\mathbb{N}\to\Gamma.\mathbb{N}[\sigma].B\to\Gamma.\mathbb{N}[ \sigma].B\), there is a morphism \(J_{B}^{\mathbb{N}}:\Gamma.\mathbb{N}[\sigma]\to\Gamma.\mathbb{N}[\sigma].B\) such that \(J_{B}^{\mathbb{N}}\circ\mathbf{0}[\sigma]=b_{0}\) and \(J_{B}^{\mathbb{N}}\circ\mathsf{succ}[\sigma](\alpha)=b_{s}(\alpha,\,J_{B}^{ \mathbb{N}}(\alpha))\), and these data are stable under substitution with respect to context morphisms. The last morphism can be thought of as a usual induction rule on \(\mathbb{N}\). **Proposition 4.8**.: _For any (small) category \(\mathcal{C}\), the presheaf CwF \((\widehat{\mathcal{C}},\widehat{\mathsf{Ty}},\widehat{\mathsf{Tm}})\) supports a natural number type._ Proof.: Since \(\mathbb{N}\) should be a presheaf over \(\int 1_{\widehat{\mathcal{C}}}\), for each \(\Gamma:\mathcal{C}\) and \(x:(1_{\widehat{\mathcal{C}}})_{\Gamma}\) we define \(\mathbb{N}(\Gamma,x):=\mathbf{N}\), the external set of natural numbers. The term \(\mathbf{0}:\widehat{\mathsf{Tm}}(1_{\widehat{\mathcal{C}}},\mathbb{N})\) is obtained by the morphism \(1_{\widehat{\mathcal{C}}}\to 1_{\widehat{\mathcal{C}}}.\mathbb{N}\) which we define \(\mathbf{0}_{\Gamma}(x):=(x,0)\) for any \(\Gamma:\mathcal{C}\) and \(x:(1_{\widehat{\mathcal{C}}})_{\Gamma}\). The morphism \(\mathsf{succ}:1_{\widehat{\mathcal{C}}}.\mathbb{N}\to 1_{\widehat{ \mathcal{C}}}.\mathbb{N}\) is defined as \(\mathsf{succ}_{\Gamma}(x,k):=(x,k+1)\) for any \(\Gamma:\mathcal{C}\) and \((x,k):(1_{\widehat{\mathcal{C}}}.\mathbb{N})_{\Gamma}\). For any \(P:\widehat{\mathcal{C}}\) and \(\sigma:P\to 1_{\widehat{\mathcal{C}}}\), if \(B:\widehat{\mathsf{Ty}}(P.\mathbb{N}[\sigma])\) with \(b_{0}\) and \(b_{s}\), the function \(J_{B}^{\mathbb{N}}:P.\mathbb{N}[\sigma]\to P.\mathbb{N}[\sigma].B\) is defined as, for each \(\Gamma:\mathcal{C}\), and \((x,k):(P.\mathbb{N}[\sigma])_{\Gamma}\) \[(J_{B}^{\mathbb{N}})_{\Gamma}(x,k):=\begin{cases}b_{0}(x)&\text{if }k=0\\ b_{s}\big{(}(x,k^{\prime}),\,(J_{B}^{\mathbb{N}})_{\Gamma}(x,k^{\prime})\big{)} &\text{if }k=k^{\prime}+1\end{cases}.\] Clearly, \(J_{B}^{\mathbb{N}}\circ\mathbf{0}[\sigma]=b_{0}\) holds. For the other equality, we have \[(J_{B}^{\mathbb{N}}\circ\mathsf{succ}[\sigma])_{\Gamma}(x,k)=(J_{B}^{ \mathbb{N}})_{\Gamma}(x,k+1)=(b_{s})_{\Gamma}\big{(}(x,k),\,(J_{B}^{\mathbb{ N}})_{\Gamma}(x,k)\big{)}.\] Also, it is easy to prove the remaining coherence rules. We can talk about the unit type, the empty type, coproducts, and the others and give the conditions for a CwF to support them. 
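Returning to Proposition 4.8: stripped of the presheaf bookkeeping, the eliminator \(J_{B}^{\mathbb{N}}\) acts pointwise as primitive recursion on the external natural number \(k\). The following Python sketch (names are ours) makes this reading explicit.

```python
# Pointwise form of the natural-number eliminator: given a base case b0(x) and a
# step bs((x, k), prev), define J(x, k) by primitive recursion on the external k.

def J(b0, bs):
    def rec(x, k):
        if k == 0:
            return b0(x)
        return bs((x, k - 1), rec(x, k - 1))
    return rec

# Example: over a trivial context x, compute the factorial of k by recursion.
fact = J(lambda x: 1, lambda xk, prev: (xk[1] + 1) * prev)
assert fact(None, 5) == 120
```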
In general, if we have a collection \(T\) of type formers, we say a _CwF supports_\(T\) if the CwF supports each type formers in \(T\). **Example 4.9**.: Let \(T:=\{\prod,\sum,\mathbf{1},\mathbf{0},\mathbb{N},\mathsf{Eq}\}\). Then for any (small) category \(\mathcal{C}\), the presheaf CwF \((\widehat{\mathcal{C}},\widehat{\mathsf{Ty}},\widehat{\mathsf{Tm}})\) supports \(T\).13 Footnote 13: We’ve not given the proofs for \(\mathbf{1}\) and \(\mathbf{0}\), but these are trivial facts. ### CwFs as a model Based on the example of presheaf CwF we discussed earlier, it becomes clear that CwFs can be used to model dependent type theory. A CwF provides a structure that covers contexts, types, terms, and substitutions. Moreover, if a CwF has enough type formers, it allows us to work with dependent products, dependent sums, and other inductive types. A reader who is interested in delving into the details and exploring similar constructions related to CwFs can refer to [5]. Our primary focus here is to establish the necessary background to discuss models where exo-nat is cofibrant. **Definition 4.10** ([4]).: A _model of Martin-Lof type theory with type formers_\(T\) is a CwF that supports \(T\). We already know that a presheaf CwF is a model of Martin-Lof type theory with usual type formers and extensional identity types. The simplicial set CwF \(\mathsf{SSet}\) is a model of Martin-Lof type theory with usual type formers and intensional identity types. ### Two-level CwFs Let us recall that the aim of this study on semantics is to gain insight into the models of 2LTT with a cofibrant exo-nat. Once we have acquired models of various type theories, a natural question arises: can we merge these models to obtain a comprehensive model of 2LTT? The answer to this question is affirmative, as it is indeed possible to combine two CwF structures in the same category in a manner that ensures their compatibility and coherence. **Definition 4.11** ([5]).: A _two-level CwF_ is a CwF \((C,\mathtt{Ty},\mathtt{Tm})\), equipped with another type presheaf \(\mathtt{Ty}^{f}:\mathrm{C}^{\mathrm{op}}\to\mathsf{Set}\), and a natural transformation \(c:\mathtt{Ty}^{f}\to\mathtt{Ty}\). **Remark 4.12**.: Given a two-level CwF \(C\), we define a second CwF structure on \(C\) using \(\mathtt{Ty}^{f}\) as the type functor and the term functor is obtained as \(\mathtt{Tm}^{f}(\Gamma,A):=\mathtt{Tm}(\Gamma,c_{\Gamma}(A))\). The context extension holds from \(\Gamma.A:=\Gamma.c_{\Gamma}(A)\). In order to emphasize the difference, we use the superscripts \(\_^{e},\_^{f}\) and write \(\mathtt{Ty}^{e},\mathtt{Tm}^{e}\) for the original CwF structure, write \(\mathtt{Ty}^{f},\mathtt{Tm}^{f}\) for the one obtained by the coercion transformation. It is not surprising that this choice is intentional to be consistent with the first section. The original CwF will model the "exo" level of 2LTT, while the other model is the usual "HoTT" level of 2LTT. **Example 4.13**.: Recall \(\widehat{\mathtt{Ty}}\) denotes the presheaf CwF structure. The simplicial set presheaf \(\mathsf{SSet}\) originally have already a presheaf CwF structure. Recall that \[\mathtt{Ty}(P):=\{A:\widehat{\mathtt{Ty}}(P)\mid p_{A}:P.A\to P\text{ is a Kan fibration}\}\] gives another type functor. Taking \(\mathtt{Ty}^{f}=\mathtt{Ty}\) and \(c:\mathtt{Ty}^{f}\to\widehat{\mathtt{Ty}}\) as the inclusion, we obtain \(\mathsf{SSet}\) as a two-level CwF. 
In a similar vein to how we can build a presheaf CwF from any arbitrary (small) category when the category itself is a CwF, we can proceed to construct a two-level CwF. This particular construction, which we refer to as the _presheaf two-level CwF_, will serve as our primary focus and model of interest. **Definition 4.14**.: Let \(C\) be a (small) category with CwF structure \(\mathtt{Ty}\), \(\mathtt{Tm}\). There is a two-level CwF structure on \(\widehat{C}\) called _presheaf two-level CwF_, denoted by \((\widehat{C},\widehat{\mathtt{Ty}},\widehat{\mathtt{Tm}},\mathtt{Ty}^{f}, \mathtt{Tm}^{f})\) defined as follows: * \((\widehat{C},\widehat{\mathtt{Ty}},\widehat{\mathtt{Tm}})\) is the presheaf CwF defined in Section 4.1, * given \(P\) in \(\widehat{C}\), the type functor \(\mathtt{Ty}^{f}\) is given by \(\mathtt{Ty}^{f}(P):=\widehat{C}(P,\mathtt{Ty})\), and * for \(\Gamma\) in \(C\) and \(B\) in \(P(\Gamma)\), we define \(c_{P}(A)(\Gamma,B):=\mathtt{Tm}(\Gamma,A_{\Gamma}(B))\). As before, given \(A\) in \(\mathtt{Ty}^{f}(P)\), we define \(\mathtt{Tm}^{f}(P,A):=\widehat{\mathtt{Tm}}(P,c_{P}(A))\). #### Two-level CwFs as a model Similar to how a type theory can be interpreted within the framework of a category with families, two-level type theories can be interpreted using a two-level category with families. Below, we provide the precise definition for such an interpretation. **Definition 4.15**.: A _two-level model_ of a type theory with _type formers_\(T^{f}\) and _exo-type formers_\(T^{e}\) is a two-level CwF on a category \(C\) such that * the structure \(\mathtt{Ty}^{f},\mathtt{Tm}^{f}\) is a model of type theory with \(T^{f}\), * the structure \(\mathtt{Ty}^{e},\mathtt{Tm}^{e}\) is a model of type theory with \(T^{e}\). **Remark 4.16**.: In the subsequent sections, when we say _a two-level model with enough type formers_ or _a model of 2LTT_, we mean the two-level model with \(T^{f}\) and \(T^{e}\) where the collections are the types and exo-types we defined in Section 2.1. Our assumption concerning the coercion morphism \(\mathtt{Ty}^{f}\to\mathtt{Ty}^{e}\) is only that it is a natural transformation. However, in the context of a two-level model with enough type formers, we have the semantic counterpart of Theorem 2.6. This theorem furnishes excellent inversion rules from types to exo-types, and its proof heavily relies on the preservation of context extension and the elimination rules associated with the type formers [4]. With the foundational knowledge established thus far, we are now equipped to delve into the discussion of potential models of 2LTT that satisfy the condition of having a cofibrant exo-nat. However, before proceeding to the subsequent section, where this discussion takes place, let us first provide the semantic definition of "cofibrancy" for an exo-type. **Definition 4.17**.: Let \((C,\mathtt{Ty}^{e},\mathtt{Tm}^{e},\mathtt{Ty}^{f},\mathtt{Tm}^{f})\) be a model of 2LTT with conversion \(c:\mathtt{Ty}^{f}\to\mathtt{Ty}^{e}\). We say an exo-type \(A:\mathtt{Ty}^{e}(\Gamma)\) is _cofibrant_ if for any \(\Delta:\mathcal{C}\) and \(\sigma:\Delta\to\Gamma\) 1. there is a map, natural in \(\Delta\), \[\Theta^{\mathtt{Ty}}_{\Delta}:\mathtt{Ty}^{f}(\Delta.A[\sigma])\to\mathtt{Ty }^{f}(\Delta)\] such that for any \(Y:\mathtt{Ty}^{f}(\Delta.A[\sigma])\) we have the following isomorphism natural in \(\Delta\): \[c_{\Delta}(\Theta^{\mathtt{Ty}}_{\Delta}(Y))\cong\prod\nolimits^{e}(A,c_{ \Delta.A[\sigma]}(Y)),\] 2. 
and there is a map, natural in \(\Delta\),
\[\Theta^{\mathtt{Tm}}_{\Delta}:\mathtt{Tm}^{f}\left(\Delta.A[\sigma],\mathsf{isContr}(Y)\right)\to\mathtt{Tm}^{f}\left(\Delta,\mathsf{isContr}(\Theta^{\mathtt{Ty}}_{\Delta}(Y))\right).\]
In other words, if \(Y\) is contractible, then so is \(\Theta^{\mathtt{Ty}}_{\Delta}(Y)\).

**Remark 4.18**.: This does not represent a _direct_ translation of the internal definition; rather, it can be seen as a _universe-free_ adaptation of it. In Definition 2.13, the quantification is over specific types within a particular universe. Externally, we can express it in terms of _all types_. Consequently, the external version is slightly stronger.

The naturality conditions amount to the following. In Figure 6, all vertical arrows are context substitutions; when the commuting faces are composed appropriately, the top and bottom isomorphisms are equal, which can be expressed as the cube "commuting". In Figure 7, the vertical arrows are context substitutions, and the square is commutative.

Figure 6: Naturality condition for \(\Theta^{\mathtt{Ty}}\), where \(\sigma:\Delta\to\Gamma\) and \(\tau:\Upsilon\to\Delta\) in \(\mathcal{C}\).

## 5 Models with cofibrant exo-nat

In the following definition, recall that all products exist in the category of sets.

**Definition 5.1**.: Let \((\mathcal{C},\mathtt{Ty},\mathtt{Tm})\) be a CwF with enough type formers. We say \(\mathcal{C}\) has _exo-nat products_ if there is a map \(\Omega_{\Gamma}:\prod_{\mathbf{N}}\mathtt{Ty}(\Gamma)\to\mathtt{Ty}(\Gamma)\), where \(\mathbf{N}\) is the external set of natural numbers, such that for any \(Y:\prod_{\mathbf{N}}\mathtt{Ty}(\Gamma)\) we have

1. the set \(\mathtt{Tm}(\Gamma,\Omega_{\Gamma}(Y))\) is isomorphic to the categorical product of the sets \(\mathtt{Tm}(\Gamma,Y_{a})\) for each \(a:\mathbf{N}\), namely, we have \[\psi:\prod_{a:\mathbf{N}}\mathtt{Tm}(\Gamma,Y_{a})\;\cong\;\mathtt{Tm}(\Gamma,\Omega_{\Gamma}(Y)),\]
2. if \(d,c:\prod_{a:\mathbf{N}}\mathtt{Tm}(\Gamma,Y_{a})\) are such that for each \(a:\mathbf{N}\) there is a term in the type \(\mathsf{Id}(d_{a},c_{a})\), as terms of \(Y_{a}:\mathtt{Ty}(\Gamma)\), then there is a term in the type \(\mathsf{Id}(\psi(d),\psi(c))\), as terms of \(\Omega_{\Gamma}(Y)\),

and all these are natural in \(\Gamma\).

In simpler terms, the first requirement in Definition 5.1 ensures that \(\prod_{a:\mathbb{N}^{e}}^{e}Y(a)\) has a fibrant match, while the second requirement ensures that funext holds for cofibrant exo-types.

**Example 5.2**.: Let \(\mathcal{C}\) be a good model category [11]. Define \(\mathtt{Ty}(\Gamma)\) as the set of fibrations over \(\Gamma\) (with suitable coherence conditions [12]). For \(A:\mathtt{Ty}(\Gamma)\), define the set \(\mathtt{Tm}(\Gamma,A)\) as the hom-set \(\mathcal{C}/\Gamma\,[\Gamma,\Gamma.A]\). Since \(\mathtt{Ty}(\Gamma)\) is closed under countable products, we can take \(\Omega_{\Gamma}(Y):=\prod_{a:\mathbf{N}}Y_{a}\); as there is a clear bijection between \(\mathcal{C}/\Gamma\,[\Gamma,\Omega_{\Gamma}(Y)]\) and \(\prod_{a:\mathbf{N}}\mathcal{C}/\Gamma\,[\Gamma,Y_{a}]\), the first requirement in Definition 5.1 holds. Suppose \(d,c:\prod_{a:\mathbf{N}}\mathtt{Tm}(\Gamma,Y_{a})\) are such that there is a term in the type \(\mathsf{Id}(d_{a},c_{a})\), as terms of \(Y_{a}:\mathtt{Ty}(\Gamma)\). In this model, this means that \(d_{a}\) and \(c_{a}\), viewed as maps \(\Gamma\to Y_{a}\), are right homotopic.
That is, there are maps \(p_{a}:\Gamma\to Y_{a}{}^{I}\) such that the following diagram commutes: Since \(Y_{a}\) is fibrant (as being in the slice category) and \(\Gamma\) is cofibrant (as being an object of a good model category), by a standard lemma (Corollary 1.2.6 in [9]), we have \(d_{a}\) and \(c_{a}\) are also left homotopic. Namely, the following diagram commutes: where \(\Gamma^{\prime}\) is a cylinder object for \(\Gamma\) fixed for all \(a:\mathbf{N}\). This induces a left homotopy between \(d\) and \(c\), namely, we have: Now by the same lemma, we have \(d\) and \(c\) are right homotopic, that is, we have \(p:\Gamma\to(\prod_{a:\mathbf{N}}Y_{a})^{I}\) such that the following diagram commutes: This means that there is a term in the type \(\mathsf{Id}(d,c)\) as being terms of \(\prod_{a:\mathbf{N}}Y_{a}\). So the second requirement in Definition 5.1 holds. We omit the details, but the naturality conditions follow from the coherence conditions on \(\mathtt{Ty}\). In Theorem 5.3, we provide a class of two-level CwFs that satisfy the axiom that \(\mathbb{N}^{e}\) is a cofibrant exo-type. This is the main result of this section. **Theorem 5.3**.: _If \((C,\mathtt{Ty},\mathtt{Tm})\) is CwF (with sufficient type formers) has exo-nat products, the corresponding two-level CwF obtained by Definition 4.14 satisfies the axiom that \(\mathbb{N}^{e}\) is a cofibrant exo-type._ Proof.: Recall that \(\mathbb{N}^{e}:\widehat{\mathtt{Ty}}(1_{\widehat{C}})\). For any \(P:\widehat{C}\), the context morphism \(\sigma:P\to 1_{\widehat{C}}\) is unique, so we omit substitutions over such morphisms and write \(P.\mathbb{N}^{e}\) instead of \(P.\mathbb{N}^{e}[\sigma]\). First, we define the map \(\Theta^{\mathtt{Ty}}_{P}:\mathtt{Ty}^{f}(P.\mathbb{N}^{e})\to\mathtt{Ty}^{f}(P)\). For any \(Y:\mathtt{Ty}^{f}(P.\mathbb{N}^{e})=\widehat{C}(P.\mathbb{N}^{e},\mathtt{Ty})\), we need \(\Theta^{\mathtt{Ty}}_{P}(Y):\mathtt{Ty}^{f}(P)=\widehat{C}(P,\mathtt{Ty})\). We denote \(\Theta^{\mathtt{Ty}}_{P}(Y)\) by \(\tilde{Y}\) for easier reading. Now, for \(\Gamma:\mathcal{C}\) and \(x:P_{\Gamma}\), we define \[\tilde{Y}_{\Gamma}(x):=\Omega_{\Gamma}\left((Y_{\Gamma}(x,n))_{n}\right)\] where the map \(\Omega_{\Gamma}:\prod_{\mathbf{N}}\mathtt{Ty}(\Gamma)\to\mathtt{Ty}(\Gamma)\) is obtained by the assumption of having exo-nat products, Definition 5.1. Since we have \(Y_{\Gamma}(x,n):\mathtt{Ty}(\Gamma)\), the definition makes sense. We want to show \[c_{P}(\tilde{Y})\cong\prod^{e}(\mathbb{N}^{e},c_{P.\mathbb{N}^{e}}(Y)).\] Both are elements in \(\widehat{\mathtt{Ty}}(P)\), namely, presheaves over \(\int P\). Thus, it is enough to define a natural transformation \(G:c_{P}(\tilde{Y})\to\prod^{e}(\mathbb{N}^{e},c_{P.\mathbb{N}^{e}}(Y))\) such that for any \(\Gamma:\mathcal{C}\) and \(x:P_{\Gamma}\), the map \[G_{\Gamma,x}:c_{P}(\tilde{Y})(\Gamma,x)\to\prod^{e}(\mathbb{N}^{e},c_{P. \mathbb{N}^{e}}(Y))(\Gamma,x)\] is an isomorphism of sets, namely, a bijection. If we elaborate on these further, we obtain the following. By definition, \(c_{P}(\tilde{Y})(\Gamma,x)=\mathtt{Tm}(\Gamma,\tilde{Y}_{\Gamma}(x))\). 
Also, \(\prod^{e}(\mathbb{N}^{e},c_{P.\mathbb{N}^{e}}(Y))(\Gamma,x)\) consists of the elements \[f:\prod_{\begin{subarray}{c}\Delta:C\\ \sigma:\Delta\to\Gamma\\ n:\mathbb{N}^{e}(\Delta,x[\sigma])\end{subarray}}c_{P.\mathbb{N}^{e}}(Y)( \Delta,x[\sigma],n)\Big{(}=\mathtt{Tm}(\Delta,Y_{\Delta}(x[\sigma],n))\Big{)}\] such that if \(\Upsilon:\mathcal{C}\) and \(\tau:\Upsilon\to\Delta\), then \(f(\Delta,\sigma,n)[\tau]=f(\Upsilon,\sigma\circ\tau,n[\tau])\). By the definition of \(\mathbb{N}^{e}\), we have \(\mathbb{N}^{e}(\Delta,x[\sigma])=\mathbf{N}\), external natural number set, and having exo-nat products provides us \[\prod_{n:\mathbf{N}}\mathtt{Tm}(\Delta,Y_{\Delta}(x[\sigma],n))\cong\mathtt{ Tm}(\Delta,\tilde{Y}(x[\sigma])).\] Thus, the range of \(G_{\Gamma,x}\) can be written as \[f:\prod_{\begin{subarray}{c}\Delta:\mathcal{C}\\ \sigma:\Delta\to\Gamma\end{subarray}}\mathtt{Tm}(\Delta,\tilde{Y}_{\Delta}(x[ \sigma]))\] such that if \(\Upsilon:\mathcal{C}\) and \(\tau:\Upsilon\to\Delta\), then \(f(\Delta,\sigma)[\tau]=f(\Upsilon,\sigma\circ\tau)\). This elaboration allows us to easily perceive that this function is a bijection because it is a standard application of the Yoneda Lemma. The naturality condition of this operation is also easily satisfied because the substitution is functorial. Thus, we have confirmed the first stage of our claim. It remains to handle the contractibility part. Basically, we need to show that for the center of contraction \(c:\mathtt{Tm}^{f}(P.\mathbb{N}^{e},Y)\) and the identity terms in \(\mathsf{Id}(c,d)\) for any other terms \(d:\mathtt{Tm}^{f}(P.\mathbb{N}^{e},Y)\), we can find (naturally) a center of contraction \(\tilde{c}:\mathtt{Tm}^{f}(P,\tilde{Y})\) and an identity term \(\mathsf{Id}(\tilde{c},\tilde{d})\) for any other terms \(\tilde{d}:\mathtt{Tm}^{f}(P,\tilde{Y})\). With a similar elaboration on terms, we have \(\mathtt{Tm}^{f}(P.\mathbb{N}^{e},Y)=\widehat{\mathtt{Tm}}(P.\mathbb{N}^{e},c _{P.\mathbb{N}^{e}}(Y))\) and \(\mathtt{Tm}^{f}(P,\tilde{Y})=\widehat{\mathtt{Tm}}(P,c_{P}(\tilde{Y}))\). Therefore, we know \[c:\prod_{\begin{subarray}{c}\Gamma:\mathcal{C}\\ x:\Gamma\\ n:\mathbb{N}^{e}(\Gamma,x)\end{subarray}}c_{P.\mathbb{N}^{e}}(Y)(\Gamma,x,n) \Big{(}=\mathtt{Tm}(\Gamma,Y_{\Gamma}(x,n))\Big{)}\] is a center of contraction, for any such term \(d\), we have a term in \(\mathsf{Id}(c,d)\), and we need \[\tilde{c}:\prod_{\begin{subarray}{c}\Gamma:\mathcal{C}\\ x:\Gamma\\ x:\Gamma\end{subarray}}c_{P}(\tilde{Y})(\Gamma,x)\left(=\mathtt{Tm}(\Gamma, \tilde{Y}_{\Gamma}(x))\right)\] as a center of contraction, and related contracting terms. However, this is exactly the second criterion in Definition 5.1, and we have already assumed it. Therefore, \(\mathbb{N}^{e}\) is a cofibrant exo-type in the presheaf two-level \(\mathrm{CwF}\). **Remark 5.4**.: Example 5.2 also enables us to construct a two-level \(\mathrm{CwF}\) out of the class in the theorem that satisfies the axiom. Indeed, we can take \(\mathtt{Ty}^{e}(\Gamma)\) as the set of all morphisms over \(\Gamma\), and \(\mathtt{Tm}^{e}\) as the same as \(\mathtt{Tm}\), and obtain a two-level \(\mathrm{CwF}\)\((C,\mathtt{Ty}^{e},\mathtt{Tm}^{e},\mathtt{Ty},\mathtt{Tm})\) with the conversion \(c:\mathtt{Ty}\to\mathtt{Ty}^{e}\) as being inclusion. It is then enough to take the map \(\Theta_{\Gamma}(Y)\) in Definition 4.17 as equal to \(\prod_{a:\mathbf{N}}Y_{a}\). ## 6 Future directions As previously mentioned, this formalisation project aims to move a study about 2LTT [1] to Agda. 
In addition to the definitions and results presented here, we have also formalised the _exo-categories_ and _diagram signatures_ of that study. More will be added in the future. We also plan to generalize the results about cofibrancy and sharpness. Natural numbers, lists, and binary trees are all inductive types; the general class of such inductive types is called _W-types_. In analogy with the cofibrant exo-nat axiom, we have been studying possible conditions (or axioms) on W-types that yield criteria for cofibrant and sharp W-types. To our knowledge, W-types in 2LTT have not yet been studied, so we plan to work on this open problem. We are also working to improve the Agda library. The experimental Agda feature we used also revealed some bugs; we plan to resolve these issues so that consistency is not compromised. Furthermore, it is reasonable to expect that this study will offer new ideas about concepts specific to 2LTT.
2305.19535
Low-rank extended Kalman filtering for online learning of neural networks from streaming data
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream. The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix, which gives a cost per step which is linear in the number of model parameters. In contrast to methods based on stochastic variational inference, our method is fully deterministic, and does not require step-size tuning. We show experimentally that this results in much faster (more sample efficient) learning, which results in more rapid adaptation to changing distributions, and faster accumulation of reward when used as part of a contextual bandit algorithm.
Peter G. Chang, Gerardo Durán-Martín, Alexander Y Shestopaloff, Matt Jones, Kevin Murphy
2023-05-31T03:48:49Z
http://arxiv.org/abs/2305.19535v3
# Low-rank extended Kalman filtering for online learning of neural networks from streaming data ###### Abstract We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream. The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix, which gives a cost per step which is linear in the number of model parameters. In contrast to methods based on stochastic variational inference, our method is fully deterministic, and does not require step-size tuning. We show experimentally that this results in much faster (more sample efficient) learning, which results in more rapid adaptation to changing distributions, and faster accumulation of reward when used as part of a contextual bandit algorithm. ## 1 Introduction Suppose we observe a stream of labeled observations, \(\mathcal{D}_{t}=\{(\mathbf{x}_{t}^{n},\mathbf{y}_{t}^{n})\sim p_{t}(\mathbf{x},\mathbf{y}):n=1 {:}N_{t}\}\), where \(\mathbf{x}_{t}^{n}\in\mathcal{X}=\mathbb{R}^{D}\), \(\mathbf{y}_{t}^{n}\in\mathcal{Y}=\mathbb{R}^{C}\), and \(N_{t}\) is the number of examples at step \(t\). (In this paper, we assume \(N_{t}=1\), since we are interested in rapid learning from individual data samples.) Our goal is to fit a prediction model \(\mathbf{y}_{t}=h(\mathbf{x}_{t},\mathbf{\theta})\) in an online fashion, where \(\mathbf{\theta}\in\mathbb{R}^{P}\) are the parameters of the model. (We focus on the case where \(h\) is a deep neural network (DNN), although in principle our methods can also be applied to other (differentiable) parametric models.) In particular, we want to recursively estimate the posterior over the parameters \[p(\mathbf{\theta}|\mathcal{D}_{1:t})\propto p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta})p (\mathbf{\theta}|\mathcal{D}_{1:t-1}) \tag{1}\] without having to store all the past data. Here \(p(\mathbf{\theta}|\mathcal{D}_{1:t-1})\) is the posterior belief state from the previous step, and \(p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta})\) is the likelihood function given by \[p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta})=\begin{cases}\mathcal{N}(\mathbf{y}_{t}|h(\bm {x}_{t},\mathbf{\theta}),\mathbf{R}_{t})&\text{regression}\\ \text{Cat}(\mathbf{y}_{t}|h(\mathbf{x}_{t},\mathbf{\theta}))&\text{classification}\end{cases} \tag{2}\] For regression, we assume \(h(\mathbf{x}_{t},\mathbf{\theta})\in\mathbb{R}^{C}\) returns the mean of the output, and \(\mathbf{R}_{t}=R\mathbf{I}_{C}\) is the observation covariance, which we view as a hyper-parameter. For classification, \(h(\mathbf{x}_{t},\mathbf{\theta})\) returns a \(C\)-dimensional vector of class probabilities, which is the mean parameter of the categorical distribution. In many problem settings (e.g., recommender systems (Huang et al., 2015), robotics (Wolczyk et al., 2021; Lesort et al., 2020), and sensor networks (Ditzler et al., 2015)), the data distribution \(p_{t}(\mathbf{x},\mathbf{y})\) may change over time (Gomes et al., 2019). Hence we allow the model parameters \(\mathbf{\theta}_{t}\) to change over time, according to a simple Gaussian dynamics model:1 Footnote 1: We do not assume access to any information about if and when the distribution shifts (sometimes called a “task boundary”), since such information is not usually available. Furthermore, the shifts may be gradual, which makes the concept of task boundary ill-defined. 
\[p_{t}(\mathbf{\theta}_{t}|\mathbf{\theta}_{t-1})=\mathcal{N}(\mathbf{\theta}_{t}|\gamma_{ t}\mathbf{\theta}_{t-1},\mathbf{Q}_{t}). \tag{3}\] where we usually take \(\mathbf{Q}_{t}=q\mathbf{I}\) and \(\gamma_{t}=\gamma\), where \(q\geq 0\) and \(0\leq\gamma\leq 1\). Using \(q>0\) injects some noise at each time step, and ensures that the model does not lose "plasticity", so it can continue to adapt to changes (cf. Kurle et al., 2020; Ash and Adams, 2020; Dohare et al., 2021), and using \(\gamma<1\) ensures the variance of the unconditional stochastic process does not blow up. If we set \(q=0\) and \(\gamma=1\), this corresponds to a deterministic model in which the parameters do not change, i.e., \[p_{t}(\mathbf{\theta}_{t}|\mathbf{\theta}_{t-1})=\delta(\mathbf{\theta}_{t}-\mathbf{\theta}_{t- 1}) \tag{4}\] This is a useful special case for when we want to estimate the parameters from a stream of data coming from a static distribution. (In practice we find this approach can also work well for the non-stationary setting.) Recursively computing eq. (1) corresponds to Bayesian inference (filtering) in a state space model, where the dynamics model in eq. (3) is linear Gaussian, but the observation model in eq. (2) is non-linear and possibly non-Gaussian. Many approximate algorithms have been proposed for this task (see e.g. Sarkka, 2013; Murphy, 2023), but in this paper, we focus on Gaussian approximations to the posterior, \(q(\mathbf{\theta}_{t}|\mathcal{D}_{1:t})=\mathcal{N}(\mathbf{\theta}_{t}|\mathbf{\mu}_{t},\mathbf{\Sigma}_{t})\), since they strike a good balance between efficiency and expressivity. In particular, we build on the extended Kalman filter (EKF), which linearizes the observation model at each step, and then computes a closed form Gaussian update. The EKF has been used for online training of neural networks in many papers (see e.g., Singhal & Wu, 1989; Watanabe & Tzafestas, 1990; Puskorius & Feldkamp, 1991; Iguni et al., 1992; Ruck et al., 1992; Haykin, 2001). It can be thought of as an approximate Bayesian inference method, or as a natural gradient method for MAP parameter estimation (Ollivier, 2018), which leverages the posterior covariance as a preconditioning matrix for fast Newton-like updates (Alessandri et al., 2007). The EKF was extended to exponential family likelihoods in (Ollivier, 2018; Tronarp et al., 2018), which is necessary when fitting classification models. The main drawback of the EKF is that it takes \(O(P^{3})\) time per step, where \(P=|\mathbf{\theta}_{t}|\) is the number of parameters in the hidden state vector, because we need to invert the posterior covariance matrix. It is possible to derive diagonal approximations to the posterior covariance or precision, by either minimizing \(D_{\text{KL}}\left(p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t})\parallel q(\mathbf{\theta} _{t})\right)\) or \(D_{\text{KL}}\left(q(\mathbf{\theta}_{t})\parallel p(\mathbf{\theta}_{1:t})\right)\), as discussed in (Puskorius & Feldkamp, 1991; Chang et al., 2022; Jones et al., 2023). These methods take \(O(P)\) time per step, but can be much less statistically efficient than full-covariance methods, since they ignore joint uncertainty between the parameters. This makes the method slower to learn, and slower to adapt to changes in the data distribution, as we show in section 4. 
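For reference, the following NumPy sketch spells out one step of the full-covariance EKF recursion for the model above: a linear-Gaussian predict step following eq. (3), then a linearized Gaussian measurement update. Maintaining the dense \(P\times P\) covariance and solving against the innovation covariance is what makes the cost cubic in \(P\). The function names are ours and the Gaussian regression case of eq. (2) is assumed.

```python
import numpy as np

def ekf_step(mu, Sigma, x, y, h, jac_h, gamma, q, R):
    """One EKF predict/update step for theta_t ~ N(gamma * theta_{t-1}, q * I) and
    y_t ~ N(h(x_t, theta_t), R). Costs O(P^3) because of the dense P x P covariance."""
    P = mu.size
    # Predict step (linear-Gaussian dynamics).
    mu_pred = gamma * mu
    Sigma_pred = gamma**2 * Sigma + q * np.eye(P)
    # Update step: linearize h around the predicted mean.
    y_hat = h(x, mu_pred)                       # predicted observation, shape (C,)
    H = jac_h(x, mu_pred)                       # Jacobian of h, shape (C, P)
    S = H @ Sigma_pred @ H.T + R                # innovation covariance, (C, C)
    K = Sigma_pred @ H.T @ np.linalg.inv(S)     # Kalman gain, (P, C)
    mu_new = mu_pred + K @ (y - y_hat)
    Sigma_new = Sigma_pred - K @ S @ K.T
    return mu_new, Sigma_new
```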
In this paper, we propose an efficient and deterministic method to recursively minimize \(D_{\text{KL}}\left(\mathcal{N}(\mathbf{\theta}_{t}|\mathbf{\mu}_{t},\mathbf{\Sigma}_{t}) \parallel p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t})\right)\), where we assume that the precision matrix is diagonal plus low-rank, \(\mathbf{\Sigma}_{t}^{-1}=\mathbf{\Upsilon}_{t}+\mathbf{W}_{t}\mathbf{W}_{t}^{T}\), where \(\mathbf{\Upsilon}_{t}\) is diagonal and \(\mathbf{W}_{t}\in\mathbb{R}^{P\times L}\) for some memory limit \(L\). The key insight is that, if we linearize the observation model at each step, as in the EKF, we can use the resulting gradient vector or Jacobian as "pseudo-observation(s)" that we append to \(\mathbf{W}_{t-1}\), and then we can perform an efficient online SVD approximation to obtain \(\mathbf{W}_{t}\). We therefore call our method LO-FI, which is short for low-rank extended Kalman filter. Our code is available at [https://github.com/probml/rebayes](https://github.com/probml/rebayes). We use the posterior approximation \(p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t})\) in two ways. First, under Bayesian updating the covariance matrix \(\mathbf{\Sigma}_{t}\) acts as a preconditioning matrix to yield a deterministic second-order Newton-like update for the posterior mean (MAP estimate). This update does not have any step-size hyperparameters, in contrast to SGD. Second, the posterior uncertainty in the parameters can be propagated into the uncertainty of the predictive distribution for observations, which is crucial for online decision-making tasks, such as active learning (Holzmuller et al., 2022), Bayesian optimization (Garnett, 2023), contextual bandits (Duran-Martin et al., 2022), and reinforcement learning (Khetarpal et al., 2022; Wang et al., 2021). In summary, our main contribution is a novel algorithm for efficiently (and deterministically) recursively updating a diagonal plus low-rank (DLR) approximation to the precision matrix of a Gaussian posterior for a special kind of state space model, namely an SSM with an arbitrary non-linear (and possibly non-Gaussian) observation model, but with a simple linear Gaussian dynamics. This model family is ideally suited to online parameter learning for DNNs in potentially non-stationary environments (but the restricted form of the dynamics model excludes some other applications of SSMs). We show experimentally that our approach works better (in terms of accuracy for a given compute budget) than a variety of baseline algorithms -- including online gradient descent, online Laplace, diagonal approximations to the EKF, and a stochastic DLR VI method called L-RVGA -- on a variety of stationary and non-stationary classification and regression problems, as well as a simple contextual bandit problem. ## 2 Related work Since exact Bayesian inference is intractable in our model family, it is natural to compute an approximate posterior at step \(t\) using recursive variational inference (VI), in which the prior for step \(t\) is the approximate posterior from step \(t-1\)(Opper, 1998; Broderick et al., 2013). 
That is, at each step we minimize the ELBO (evidence lower bound), which is equal (up to a constant) to the reverse KL, given by \[\mathcal{L}(\mathbf{\mu}_{t},\mathbf{\Sigma}_{t})=D_{\text{KL}}\left(\mathcal{N}(\mathbf{ \theta}_{t}|\mathbf{\mu}_{t},\mathbf{\Sigma}_{t})\parallel Z_{t}p(\mathbf{y}_{t}|\mathbf{x}_{t },\mathbf{\theta}_{t})q_{t|t-1}(\mathbf{\theta}_{t}|\mathcal{D}_{1:t-1})\right) \tag{5}\] where \(Z_{t}\) is a normalization constant and \(q_{t}=\mathcal{N}(\mathbf{\theta}_{t}|\mathbf{\mu}_{t},\mathbf{\Sigma}_{t})\) is the variational posterior which results from minimizing this expression. The main challenge is how to efficiently optimize this objective. One common approach is to assume the variational family consists of a diagonal Gaussian. By linearizing the likelihood, we can solve the VI objective in closed form, as shown in (Chang et al., 2022); this is called the "variational diagonal EKF" (VD-EKF). They also propose a diagonal approximation which minimizes the forwards KL, \(D_{\mathrm{KL}}\left(p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t})\parallel q(\mathbf{ \theta}_{t})\right)\), and show that this is equivalent to the "fully decoupled EKF" (FD-EKF) method of (Puskorius and Feldkamp, 1991). Both of these methods are fully deterministic, which avoids the high variance that often plagues stochastic VI methods (Wu et al., 2019; Haussmann et al., 2020). It is also possible to derive diagonal approximations without linearizing the observation model. In (Kurle et al., 2020; Zeno et al., 2018) they propose a diagonal approximation to minimize the reverse KL, \(D_{\mathrm{KL}}\left(q(\mathbf{\theta}_{t})\parallel p(\mathbf{\theta}_{t}|\mathcal{D }_{1:t})\right)\); this requires a Monte Carlo approximation to the ELBO. In (Ghosh et al., 2016; Wagner et al., 2022), they propose a diagonal approximation to minimize the forwards KL, \(D_{\mathrm{KL}}\left(p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t})\parallel q(\mathbf{ \theta}_{t})\right)\); this requires approximating the first and second moments of the hidden units at every layer of the model using numerical integration. (Farquhar et al., 2020) claims that, if one makes the model deep enough, one can get good performance using a diagonal approximation; however, this has not been our experience. This motivates the need to go beyond a diagonal approximation. One approach is to combine diagonal Gaussian approximations with memory buffers, such as the variational continual learning method of (Nguyen et al., 2018) and other works (see e.g., (Kurle et al., 2020; Khan and Swaroop, 2021)). However, we seek to find a richer approximation to the posterior that does not rely on memory buffers, which can be problematic in the non-stationary setting. (Zeno et al., 2021) proposes the FOO-VB method, which uses a Kronecker block structured approximation to the posterior covariance. However, this method requires 2 SVD decompositions of the Kronecker factors for every layer of the model, in addition to a large number of Monte Carlo samples, at each time step. In (Ong et al., 2018) they compute a diagonal plus low-rank (DLR) approximation to the posterior covariance matrix using stochastic gradient applied to the ELBO. In (Tomczak et al., 2020) they develop a version of the local reparameterization trick for the DLR posterior covariance, to reduce the variance of the stochastic gradient estimate. In this paper we use a diagonal plus low-rank (DLR) approximation to the posterior precision. 
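To fix ideas, the following NumPy sketch (ours, not taken from any of the cited papers) shows the main computational benefit of a DLR precision \(\mathbf{\Sigma}^{-1}=\mathbf{\Upsilon}+\mathbf{W}\mathbf{W}^{\top}\): covariance-vector products can be computed with the Woodbury identity in \(O(PL^{2})\) time, without ever forming a \(P\times P\) matrix.

```python
import numpy as np

def dlr_covariance_matvec(ups, W, v):
    """Compute Sigma @ v where Sigma^{-1} = diag(ups) + W @ W.T, using the Woodbury
    identity Sigma = D - D W (I + W^T D W)^{-1} W^T D with D = diag(1 / ups)."""
    Dv = v / ups                               # D @ v
    DW = W / ups[:, None]                      # D @ W, shape (P, L)
    small = np.eye(W.shape[1]) + W.T @ DW      # (L, L) system instead of (P, P)
    return Dv - DW @ np.linalg.solve(small, DW.T @ v)

# Quick check against the dense computation for a small problem.
rng = np.random.default_rng(0)
P, L = 50, 5
ups = rng.uniform(0.5, 2.0, size=P)
W = rng.normal(size=(P, L))
v = rng.normal(size=P)
Sigma = np.linalg.inv(np.diag(ups) + W @ W.T)
assert np.allclose(Sigma @ v, dlr_covariance_matvec(ups, W, v))
```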
The same form of approximation has been used in several prior papers. In (Mishkin et al., 2018) they propose a technique called "SLANG" (stochastic low-rank approximate natural-gradient), which uses a stochastic estimate of the natural gradient of the ELBO to update the posterior precision, combined with a randomized eigenvalue solver to compute a DLR approximation. Their NGD approximation enables the variational updates to be calculated solely from the loss gradients, whereas our approach requires the network Jacobian. On the other hand, our EKF approach allows the posterior precision and the DLR approximation to be efficiently computed in closed form. In (Lambert et al., 2021), they propose a technique called "L-RVGA" (low-rank recursive variational Gaussian approximation), which uses stochastic EM to optimize the ELBO using a DLR approximation to the posterior precision. Their method is a one-pass online method, like ours, and also avoids the need to tune the learning rate. However, it is much slower, since it involves generating multiple samples from the posterior and multiple iterations of the EM algorithm (see fig. 7 for an experimental comparison of running time). The GGT method of (Agarwal et al., 2019) also computes a DLR approximation to the posterior precision, which they use as a preconditioner for computing the MAP estimate. However, they bound the rank by simply using the most recent \(L\) observations, whereas LO-FI uses SVD to combine the past data in a more efficient way. The ORFit method of (Min et al., 2022) is also an online low-rank MAP estimation method. They use orthogonal projection to efficiently compute a low rank representation of the precision at each step. However, it is restricted to regression problems with 1d, noiseless outputs (i.e., they assume the likelihood has the degenerate form \(p(y_{t}|\mathbf{x}_{t},\mathbf{\theta}_{t})=\mathcal{N}(h(\mathbf{x}_{t},\mathbf{\theta}_{t}),0)\).) The online Laplace method of (Ritter et al., 2018; Daxberger et al., 2021) also computes a Gaussian approximation to the posterior, but makes different approximations. In particular, for "task" \(t\), it computes the MAP estimate \(\mathbf{\theta}_{t}=\operatorname*{argmax}_{\mathbf{\theta}}\log p(\mathcal{D}_{t}| \mathbf{\theta})+\log\mathcal{N}(\mathbf{\theta}|\mathbf{\mu}_{t-1},\mathbf{\Sigma}_{t-1})\), where \(\mathbf{\Sigma}_{t-1}=\mathbf{\Lambda}_{t-1}^{-1}\) is the approximate posterior covariance from the previous task. (This optimization problem is solved using SGD applied to a replay buffer.) This precision matrix is usually approximated as a block diagonal matrix, with one block per layer, and the terms within each block may be additionally approximated by a Kronecker product form, as in KFAC (Martens and Grosse, 2015). By contrast, LO-FI computes a posterior, not just a point estimate, and approximates the precision as diagonal plus low rank. In the appendix, we show experimentally that LO-FI outperforms online Laplace in terms of NLPD on various classification and regression tasks. It is possible to go beyond Gaussian approximations by using particle filtering (see e.g., (Yang et al., 2023)). However, we focus on faster deterministic inference methods, since speed is important for many real time online decision making tasks (Ghunaim et al., 2023). There are many papers on continual learning, which is related to online learning. 
However the CL literature usually assumes the task boundaries, corresponding to times when the distribution shifts, are given to the learner (see e.g., (Delange et al., 2021; De Lange and Tuytelaars, 2021; Wang et al., 2022; Mai et al., 2022; Mundt et al., 2023; Wang et al., 2023).) By contrast, we are interested in the continual learning setting where the distribution may change at unknown times, in a continuous or discontinuous manner (c.f., (Gama et al., 2013)); this is sometimes called the "task agnostic" or "streaming" setting. Furthermore, our goal is accurate forecasting of the future (which can be approximated by our estimate of the "current" distribution), so we are less concerned with performance on "past" distributions that the agent may not encounter again; thus "catastrophic forgetting" (see e.g., (Parisi et al., 2019)) is not a focus of this work (c.f., (Dohare et al., 2021)). ## 3 Methods In LO-FI, we approximate the belief state by a Gaussian, \(p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t})=\mathcal{N}(\mathbf{\mu}_{t},\mathbf{\Sigma}_{t})\), where the posterior precision is diagonal plus low rank, i.e., it has the form \(\mathbf{\Sigma}_{t}^{-1}=\mathbf{\Upsilon}_{t}+\mathbf{W}_{t}\mathbf{W}_{t}^{\top}\), where \(\mathbf{\Upsilon}_{t}\) is diagonal and \(\mathbf{W}_{t}\) is a \(P\times L\) matrix. We denote this class of models by \(\text{DLR}(L)\), where \(L\) is the rank. Below we show how to efficiently update this belief state in a recursive (online) fashion. This has two main steps -- predict (see algorithm 2) and update (see algorithm 3) -- which are called repeatedly, as shown in algorithm 1. The predict step takes \(O(PL^{2}+L^{3})\) time, and the update step takes \(O(P(L+C)^{2})\) time, where \(C\) is the number of outputs. ``` 1def \(\text{ofi}(\mathbf{\mu}_{0},\mathbf{\Upsilon}_{0},\mathbf{x}_{1:T},\mathbf{y}_{1:T},\gamma_{1: T},q_{1:T},L,h)\) 2\(\mathbf{W}_{0}=\mathbf{0}\) 3foreach\(t=1:T\)do 4\((\mathbf{\mu}_{t|t-1},\mathbf{\Upsilon}_{t|t-1},\mathbf{W}_{t|t-1},\hat{\mathbf{y}}_{t})= \text{predict}(\mathbf{\mu}_{t-1},\mathbf{\Upsilon}_{t-1},\mathbf{W}_{t-1},\mathbf{x}_{t}, \gamma_{t},q_{t},h)\) 5\((\mathbf{\mu}_{t},\mathbf{\Upsilon}_{t},\mathbf{W}_{t})=\text{update}(\mathbf{\mu}_{t|t-1},\mathbf{\Upsilon}_{t|t-1},\mathbf{W}_{t|t-1},\mathbf{x}_{t},\mathbf{y}_{t},\hat{\mathbf{y}}_ {t},h,L)\) 6\(\text{callback}(\hat{\mathbf{y}}_{t},\mathbf{y}_{t})\) ``` **Algorithm 1**LOFI main loop. ### Predict step ``` 1def \(\text{predict}(\mathbf{\mu}_{t-1},\mathbf{\Upsilon}_{t-1},\mathbf{W}_{t-1},\mathbf{x}_{t}, \gamma_{t},q_{t},h)\): 2\(\mathbf{\mu}_{t|t-1}=\gamma_{t}\mathbf{\mu}_{t-1}\)// Predict the mean of the next state 3\(\mathbf{\Upsilon}_{t|t-1}=\left(\gamma_{t}^{2}\mathbf{\Upsilon}_{t-1}^{-1}+q_{t}\mathbf{ I}_{P}\right)^{-1}\)// Predict the diagonal precision 4\(\mathbf{C}_{t}=\left(\mathbf{I}_{L}+q_{t}\mathbf{W}_{t-1}^{\top}\mathbf{\Upsilon}_{t |t-1}\mathbf{\Upsilon}_{t-1}^{-1}\mathbf{W}_{t-1}\right)^{-1}\) 5\(\mathbf{W}_{t|t-1}=\gamma_{t}\mathbf{\Upsilon}_{t|t-1}\mathbf{\Upsilon}_{t-1}^{-1} \mathbf{W}_{t-1}\text{chol}(\mathbf{C}_{t})\)// Predict the low-rank precision 6\(\hat{\mathbf{y}}_{t}=h\left(\mathbf{x}_{t},\mathbf{\mu}_{t|t-1}\right)\)// Predict the mean of the output Return \((\mathbf{\mu}_{t|t-1},\mathbf{\Upsilon}_{t|t-1},\mathbf{W}_{t|t-1},\hat{\mathbf{y}}_{t})\) ``` **Algorithm 2**LO-FI predict step. 
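For concreteness, Algorithm 2 can be transcribed into NumPy as follows. This is a sketch in which the diagonal precision \(\mathbf{\Upsilon}\) is stored as the length-\(P\) vector of its diagonal entries and the function names are ours.

```python
import numpy as np

def lofi_predict(mu, ups, W, x, h, gamma, q):
    """LO-FI predict step (Algorithm 2). ups is the diagonal precision stored as a
    length-P vector; W is the P x L low-rank factor of the precision."""
    L = W.shape[1]
    mu_pred = gamma * mu                             # predicted mean
    ups_pred = 1.0 / (gamma**2 / ups + q)            # predicted diagonal precision
    ratio = ups_pred / ups                           # Upsilon_{t|t-1} Upsilon_{t-1}^{-1} (diagonal)
    C = np.linalg.inv(np.eye(L) + q * W.T @ (ratio[:, None] * W))
    W_pred = gamma * (ratio[:, None] * W) @ np.linalg.cholesky(C)
    y_hat = h(x, mu_pred)                            # predicted mean of the output
    return mu_pred, ups_pred, W_pred, y_hat
```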
In the predict step, we go from the previous posterior, \(p(\mathbf{\theta}_{t-1}|\mathcal{D}_{1:t-1})=\mathcal{N}(\mathbf{\theta}_{t-1}|\mathbf{\mu}_ {t-1},\mathbf{\Sigma}_{t-1})\), to the one-step-ahead predictive distribution, \(p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t-1})=\mathcal{N}(\mathbf{\theta}_{t}|\mathbf{\mu}_{t| t-1},\mathbf{\Sigma}_{t|t-1})\). To compute this predictive distribution, we apply the dynamics in eq. (3) with \(\mathbf{Q}_{t}=q_{t}\mathbf{I}\) to get \(\mathbf{\mu}_{t|t-1}=\gamma_{t}\mathbf{\mu}_{t-1}\) and \(\mathbf{\Sigma}_{t|t-1}=\gamma_{t}^{2}\mathbf{\Sigma}_{t-1}+q_{t}\mathbf{I}_{P}\). However, this recursion is in terms of the covariance matrix, whereas we need the corresponding result for a DLR precision matrix in order to be computationally efficient. In appendix A.1 we show how to use the matrix inversion lemma to efficiently compute \(\mathbf{\Sigma}_{t|t-1}^{-1}=\mathbf{\Upsilon}_{t|t-1}+\mathbf{W}_{t|t-1}\mathbf{W}_{t|t-1}^{\top}\). The result is shown in the pseudocode in algorithm 2, where \(\mathbf{A}=\text{chol}(\mathbf{B})\) denotes Cholesky decomposition (i.e., \(\mathbf{A}\mathbf{A}^{\top}=\mathbf{B}\)). The cost of computing \(\mathbf{\Upsilon}_{t|t-1}\) is \(O(P)\) since it is diagonal. The cost of computing \(\mathbf{W}_{t|t-1}\) is \(O(PL^{2}+L^{3})\). If we use a full-rank approximation, \(L=P\), we recover the standard EKF predict step.

### Update step

```
def update(\(\mathbf{\mu}_{t|t-1},\mathbf{\Upsilon}_{t|t-1},\mathbf{W}_{t|t-1},\mathbf{x}_{t},\mathbf{y}_{t},\hat{\mathbf{y}}_{t},h,L\)):
    \(\mathbf{R}_{t}=h_{V}(\mathbf{x}_{t},\mathbf{\mu}_{t|t-1})\)  // Covariance of predicted output
    \(\mathbf{L}_{t}=\text{chol}(\mathbf{R}_{t})\)
    \(\mathbf{A}_{t}=\mathbf{L}_{t}^{-1}\)
    \(\mathbf{H}_{t}=\text{jac}(h(\mathbf{x}_{t},\cdot))(\mathbf{\mu}_{t|t-1})\)  // Jacobian of observation model
    \(\hat{\mathbf{W}}_{t}=\left[\begin{array}{cc}\mathbf{W}_{t|t-1}&\mathbf{H}_{t}^{\top}\mathbf{A}_{t}^{\top}\end{array}\right]\)  // Expand low-rank with new observation
    \(\mathbf{G}_{t}=\left(\mathbf{I}_{L}+\hat{\mathbf{W}}_{t}^{\top}\mathbf{\Upsilon}_{t|t-1}^{-1}\hat{\mathbf{W}}_{t}\right)^{-1}\)
    \(\mathbf{C}_{t}=\mathbf{H}_{t}^{\top}\mathbf{A}_{t}^{\top}\mathbf{A}_{t}\)
    \(\mathbf{K}_{t}=\mathbf{\Upsilon}_{t|t-1}^{-1}\mathbf{C}_{t}-\mathbf{\Upsilon}_{t|t-1}^{-1}\hat{\mathbf{W}}_{t}\mathbf{G}_{t}\hat{\mathbf{W}}_{t}^{\top}\mathbf{\Upsilon}_{t|t-1}^{-1}\mathbf{C}_{t}\)  // Kalman gain matrix
    \(\mathbf{\mu}_{t}=\mathbf{\mu}_{t|t-1}+\mathbf{K}_{t}(\mathbf{y}_{t}-\hat{\mathbf{y}}_{t})\)  // Mean update
    \((\hat{\mathbf{\Lambda}}_{t},\hat{\mathbf{U}}_{t})=\mathrm{SVD}(\hat{\mathbf{W}}_{t})\)  // Take SVD of the expanded low-rank
    \((\mathbf{\Lambda}_{t},\mathbf{U}_{t})=(\hat{\mathbf{\Lambda}}_{t},\hat{\mathbf{U}}_{t})[:,1:L]\)  // Keep top \(L\) most important terms
    \(\mathbf{W}_{t}=\mathbf{U}_{t}\mathbf{\Lambda}_{t}\)  // New low-rank approximation
    \((\mathbf{\Lambda}_{t}^{\times},\mathbf{U}_{t}^{\times})=(\hat{\mathbf{\Lambda}}_{t},\hat{\mathbf{U}}_{t})[:,(L+1):\hat{L}]\)  // Extract remaining least important terms
    \(\mathbf{W}_{t}^{\times}=\mathbf{U}_{t}^{\times}\mathbf{\Lambda}_{t}^{\times}\)  // The low-rank part that is dropped
    \(\mathbf{\Upsilon}_{t}=\mathbf{\Upsilon}_{t|t-1}+\mathrm{diag}\left(\mathbf{W}_{t}^{\times}(\mathbf{W}_{t}^{\times})^{\top}\right)\)  // Update diagonal to capture variance due to dropped terms
    Return \((\mathbf{\mu}_{t},\mathbf{\Upsilon}_{t},\mathbf{W}_{t})\)
```
**Algorithm 3** LO-FI update step.

In the update step, we go from the prior predictive distribution, \(p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t-1})=\mathcal{N}(\mathbf{\theta}_{t}|\mathbf{\mu}_{t|t-1},\mathbf{\Sigma}_{t|t-1})\), to the posterior distribution, \(p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t})=\mathcal{N}(\mathbf{\theta}_{t}|\mathbf{\mu}_{t},\mathbf{\Sigma}_{t})\). Unlike the predict step, this cannot be computed exactly. Instead we will compute an approximate posterior \(q_{t}\) by minimizing the KL objective in eq. (5). One can show (see e.g., Opper and Archambeau, 2009; Kurle et al., 2020; Lambert et al., 2021b) that the optimum must satisfy the following fixed-point equations: \[\mathbf{\mu}_{t}=\mathbf{\mu}_{t|t-1}+\mathbf{\Sigma}_{t|t-1}\nabla_{\mathbf{\mu}_{t}}\mathbb{E}_{q_{t}}\left[\log p(\mathbf{y}_{t}|\mathbf{\theta}_{t})\right]=\mathbf{\mu}_{t|t-1}+\mathbf{\Sigma}_{t|t-1}\mathbb{E}_{q_{t}}\left[\nabla_{\mathbf{\theta}_{t}}\log p(\mathbf{y}_{t}|\mathbf{\theta}_{t})\right] \tag{6}\] \[\mathbf{\Sigma}_{t}^{-1}=\mathbf{\Sigma}_{t|t-1}^{-1}-2\nabla_{\mathbf{\Sigma}_{t}}\mathbb{E}_{q_{t}}\left[\log p(\mathbf{y}_{t}|\mathbf{\theta}_{t})\right]=\mathbf{\Sigma}_{t|t-1}^{-1}-\mathbb{E}_{q_{t}}\left[\nabla_{\mathbf{\theta}_{t}}^{2}\log p(\mathbf{y}_{t}|\mathbf{\theta}_{t})\right] \tag{7}\] Note that these are implicit equations, since \(q_{t}\) occurs on both the left and right hand sides. A common approach to solving this optimization problem (e.g., used in (Mishkin et al., 2018; Kurle et al., 2020; Lambert et al., 2021b)) is to approximate the expectation with samples from the prior predictive, \(q_{t|t-1}\). In addition, it is common to approximate the Hessian matrix with the generalized Gauss Newton (GGN) matrix, which is derived from the Jacobian, as we explain below. In this paper we replace the Monte Carlo expectations with analytic methods, by leveraging the same GGN approximation. We then generalize to the low-rank setting to make the method efficient. In more detail, we compute a linear-Gaussian approximation to the likelihood function, after which the KL optimization problem can be solved exactly by performing conjugate Bayesian updating. To approximate the likelihood, we first linearize the observation model about the prior predictive mean: \[\hat{h}_{t}(\mathbf{\theta}_{t})=h(\mathbf{x}_{t},\mathbf{\mu}_{t|t-1})+\mathbf{H}_{t}(\mathbf{\theta}_{t}-\mathbf{\mu}_{t|t-1}) \tag{8}\] where \(\mathbf{H}_{t}\) is the \(C\times P\) Jacobian of \(h(\mathbf{x}_{t},\cdot)\) evaluated at \(\mathbf{\mu}_{t|t-1}\). To handle non-Gaussian outputs, we follow Ollivier (2018) and Tronarp et al. (2018), and approximate the output distribution using a Gaussian, whose conditional moments are given by \[\hat{\mathbf{y}}_{t}=\mathbb{E}\left[\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{t}=\mathbf{\mu}_{t|t-1}\right]=h(\mathbf{x}_{t},\mathbf{\mu}_{t|t-1}) \tag{9}\] \[\mathbf{R}_{t}=\mathrm{Cov}\left[\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{t}=\mathbf{\mu}_{t|t-1}\right]=h_{V}(\mathbf{x}_{t},\mathbf{\mu}_{t|t-1})=\begin{cases}R_{t}\,\mathbf{I}_{C}&\text{regression}\\ \mathrm{diag}(\hat{\mathbf{y}}_{t})-\hat{\mathbf{y}}_{t}\hat{\mathbf{y}}_{t}^{\top}&\text{classification}\end{cases} \tag{10}\] where \(\hat{\mathbf{y}}_{t}\) is a vector of \(C\) probabilities in the case of classification.2 Under the above assumptions, we can use the standard EKF update equations (see e.g., Sarkka, 2013).
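As a companion to algorithm 3, the following is a minimal NumPy sketch of the DLR update for the regression case (invertible \(\mathbf{R}_{t}\)). It is illustrative only: the inputs are assumed to be the outputs of the predict step, the diagonal \(\mathbf{\Upsilon}\) is stored as a vector, `jac_h` is assumed to return the \(C\times P\) Jacobian of \(h(\mathbf{x}_{t},\cdot)\), and the names are our own rather than those used in the released code.

```
import numpy as np

def lofi_update(mu, ups, W, x, y, h, jac_h, R, L):
    """Update step of algorithm 3: expand the low-rank factor with the whitened
    Jacobian, condition the mean, then project back to rank L with an SVD."""
    H = jac_h(x, mu)                                    # C x P Jacobian at the predicted mean
    A = np.linalg.inv(np.linalg.cholesky(R))            # A = chol(R)^{-1}, so R^{-1} = A.T @ A
    W_ext = np.hstack([W, H.T @ A.T])                   # expand low rank with the new observation
    G = np.linalg.inv(np.eye(W_ext.shape[1]) + W_ext.T @ (W_ext / ups[:, None]))
    C = H.T @ A.T @ A
    K = C / ups[:, None] - (W_ext / ups[:, None]) @ (G @ (W_ext.T @ (C / ups[:, None])))
    mu_new = mu + K @ (y - h(x, mu))                     # mean update
    U, s, _ = np.linalg.svd(W_ext, full_matrices=False)  # SVD of the expanded factor
    W_new = U[:, :L] * s[:L]                             # keep the top-L terms
    W_drop = U[:, L:] * s[L:]                            # least important terms, to be dropped
    ups_new = ups + np.sum(W_drop**2, axis=1)            # fold their variance into the diagonal
    return mu_new, ups_new, W_new
```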
In appendix A.2 we extend these equations to the case where the precision matrix is DLR; this forms the core of our LO-FI method. The basic idea is to compute the exact update to get \(\mathbf{\Sigma}_{t}^{*-1}=\mathbf{\Upsilon}_{t|t-1}+\tilde{\mathbf{W}}_{t}\tilde{\mathbf{W}}_{t}^{\intercal}\), where \(\tilde{\mathbf{W}}_{t}\) extends \(\mathbf{W}_{t|t-1}\) with \(C\) additional columns coming from the Jacobian of the observation model, and then to project \(\tilde{\mathbf{W}}_{t}\) back to rank \(L\) using SVD to get \(\mathbf{\Sigma}_{t}^{-1}=\mathbf{\Upsilon}_{t}+\mathbf{W}_{t}\mathbf{W}_{t}^{\intercal}\), where \(\mathbf{\Upsilon}_{t}\) is chosen so as to satisfy \(\mathrm{diag}(\mathbf{\Sigma}_{t}^{-1})=\mathrm{diag}(\mathbf{\Sigma}_{t}^{*-1})\). See algorithm 3 for the resulting pseudocode. The cost is dominated by the \(O(P\tilde{L}^{2})\) time needed for the SVD, where \(\tilde{L}=L+C\).3 Footnote 3: Computing the SVD takes \(O(P(L+C)^{2})\) time in the update step (for both spherical and diagonal approximations), which may be too expensive. In appendix F.5.2 we derive a modified update step which takes \(O(PLC)\) time, but which is less accurate. The approach is based on the ORFit method (Min et al., 2022), which uses orthogonal projections to make the SVD fast to compute. However, we have found its performance to be quite poor (no better than diagonal approximations), so we have omitted its results. To gain some intuition for the method, suppose the output is scalar, with variance \(R=1\). Then we have \(A_{t}=1\) and \(\mathbf{H}_{t}^{\intercal}=\nabla_{\mathbf{\theta}_{t}}h(\mathbf{x}_{t},\mathbf{\theta}_{t})=\mathbf{g}_{t}\) as the approximate linear observation matrix. (Note that, for a linear model, we have \(\mathbf{g}_{t}=\mathbf{x}_{t}\).) In this case, we have \(\tilde{\mathbf{W}}_{t}=\left[\begin{array}{cc}\mathbf{W}_{t|t-1}&\mathbf{g}_{t}\end{array}\right]\). Thus \(\tilde{\mathbf{W}}_{t}\) acts like a generalized memory buffer that stores data using a gradient embedding. This allows an interpretation of our method in terms of the neural tangent kernel (Jacot et al., 2018), although we leave the details to future work.

### Predicting the observations

So far we have just described how to recursively update the belief state for the parameters. To predict the output \(\mathbf{y}_{t}\) given a test input \(\mathbf{x}_{t}\), we need to compute the one-step-ahead predictive distribution \[p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathcal{D}_{1:t-1})=\int p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{t})p(\mathbf{\theta}_{t}|\mathcal{D}_{1:t-1})d\mathbf{\theta}_{t} \tag{11}\] The negative log of this, \(-\log p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathcal{D}_{1:t-1})\), is called the negative log predictive density or NLPD. If we ignore the posterior uncertainty, this integral reduces to the following plugin approximation: \[p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathcal{D}_{1:t-1})\approx\int p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\theta}_{t})\mathcal{N}(\mathbf{\theta}_{t}|\mathbf{\mu}_{t|t-1},0\mathbf{I})d\mathbf{\theta}_{t}=p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\mu}_{t|t-1}) \tag{12}\] The negative log of this, \(-\log p(\mathbf{y}_{t}|\mathbf{x}_{t},\mathbf{\mu}_{t|t-1})\), is called the negative log likelihood or NLL. We report NLL results in the main paper, since they are easy to compute. However, we can get better performance by using more accurate approximations to the integral.
The simplest approach is to use Monte Carlo sampling; alternatively we can use deterministic approximations, as discussed in appendix B. We find that naively passing posterior samples through the model can result in worse performance than using the plugin approximation, which just uses the posterior mode. However, if we pass the samples through the linearized observation model, as proposed in (Immer et al., 2021), we find that the NLPD can outperform the NLL, as shown in appendices D.3 and D.6.

### Initialization and hyper-parameter tuning

The natural way to initialize the belief state is to use a vague Gaussian prior of the form \(p(\mathbf{\theta}_{0})=\mathcal{N}(\mathbf{0},\mathbf{\Upsilon}_{0})\), where \(\mathbf{\Upsilon}_{0}=\eta_{0}\mathbf{I}_{P}\) and \(\eta_{0}\) is a hyper-parameter that controls the strength of the prior. However, plugging in all 0s for the weights will result in a prediction of 0, which will result in a zero gradient, and so no learning will take place. (With \(\mathbf{\mu}_{0}=0\), no deterministic algorithm can ever break the network's inherent symmetry under permutation of the hidden units.) So in practice we sample the initial mean weights using a standard neural network initialization procedure, such as "LeCun-Normal", which has the form \(\mathbf{\mu}_{0}\sim\mathcal{N}(\mathbf{0},\mathbf{\mathrm{S}}_{0})\), where \(\mathbf{\mathrm{S}}_{0}\) is diagonal with \(S_{0}[j,j]=1/F_{j}\), and \(F_{j}\) is the fan-in of weight \(j\). (The bias terms are initialized to 0.) We then set \(\mathbf{\Upsilon}_{0}=\eta_{0}\mathbf{I}_{P}\) and \(\mathbf{\mathrm{W}}_{0}=[0]^{P\times L}\).4 Footnote 4: To make the prior accord with the non-spherical distribution from which we sample \(\mathbf{\mu}_{0}\), we can scale the parameters by the fan-in, to convert to a standardized coordinate frame. However we found this did not seem to make any difference in practice, at least for our classification experiments. The hyper-parameters of our method are the initial prior precision \(\eta_{0}\), the dynamics noise \(q\), the dynamics scaling factor \(\gamma\), and (for regression problems) the observation variance \(R\). These play a role similar to the hyper-parameters of a standard neural network, such as the degree of regularization and the learning rate. We optimize these hyper-parameters using Bayesian optimization, where the objective is the validation set NLL for stationary problems, or the average one-step-ahead NLL (aka prequential loss) for non-stationary problems. For details, see appendix C.

## 4 Experiments

In this section, we report experimental results on various classification and regression datasets, using the following approximate inference techniques: LO-FI (this paper); FDEKF (fully decoupled diagonal EKF) (Puskorius & Feldkamp, 2003); VDEKF (variational diagonal EKF) (Chang et al., 2022); SGD-RB (stochastic gradient descent with FIFO replay buffer), with memory buffer of size \(B\), using either sgd or adam as the optimizer; online gradient descent (OGD), which corresponds to SGD-RB with \(B=1\); the LRVGA method of (Lambert et al., 2021) (for the NLPD results in appendix D.1); and the online Laplace approximation of (Ritter et al., 2018) (for the NLPD results in appendices D.3 and D.6). For additional results, see appendix D. For the source code to reproduce these results, see [https://github.com/probml/rebayes](https://github.com/probml/rebayes).

### Classification

In this section, we report results on various image classification datasets.
We use a 2-layer MLP (with 500 hidden units each), which has \(648,010\) parameters. (For results using a CNN, see appendix D.3.)

**Stationary distribution.** We start by considering the fashion-MNIST image classification dataset (Xiao et al., 2017). For replay-SGD, we use a replay buffer of size \(10\) and tune the learning rate. In fig. 1a we plot the misclassification rate on the test set vs number of training samples using the MLP. (We show the mean and standard error over 100 random trials.) We see that LO-FI (with \(L=10\)) is the most sample efficient learner, then replay SGD (with \(B=10\)), then replay Adam; the diagonal EKF versions and OGD are the least sample efficient learners.

Figure 1: Test set misclassification rate vs number of observations on (a) the static fashion-MNIST dataset. Figure generated by generate_stationary_clf_plots.ipynb (b) Gradually rotating fashion-MNIST. Figure generated by generate_rotated_clf_plots.ipynb (c) Piecewise stationary permuted fashion-MNIST. The task boundaries are denoted by vertical lines. We show performance on the current task. Figure generated by generate_permuted_clf_plots.ipynb

In the appendix we show the following additional results. In fig. 10a we show the results using NLL as the evaluation metric; in this case, the gap between LO-FI and the other methods is similarly noticeable. In fig. 10b we show the results using NLPD under the generalized probit approximation; the performance gap reduces but LO-FI is still the best method (see appendix B for discussion on analytical approximations to the NLPD). In fig. 11 we show results using a CNN (a LeNet-style architecture with 3 hidden layers and 421,641 parameters); trends are similar to the MLP case. In fig. 12 we show how changing the rank \(L\) of LO-FI affects performance within the range 1 to 50. We see that for both NLL and misclassification rate, larger \(L\) is better, with gains plateauing at around \(L\approx 10\). We also show that a spherical approximation to LO-FI, discussed in appendix F, gives worse results.

**Piecewise stationary distribution.** To evaluate model performance in the non-stationary classification setting, we perform inference under the incremental domain learning scenario using the permuted-fashion-MNIST dataset (Hsu et al., 2018). After every \(300\) training examples, the images are permuted randomly and we compare performances across \(10\) consecutive tasks. In fig. 1c we plot the performance over the current test set for each task (each test set has size \(500\)) as a function of the number of training samples. (We show mean and standard error across \(20\) random initializations of the dataset.) The task boundaries are denoted by vertical dotted lines (this boundary information is not available to the learning agents, and is only used for evaluation). We see that LO-FI rapidly adapts to each new distribution and outperforms all other methods. In the appendix we show the following additional results. In fig. 13 we show the results using NLL as the evaluation metric; in this case, the gap between LO-FI and the other methods is even larger. In fig. 14, we show misclassification for the current task as a function of LO-FI rank; as before, performance increases with rank, and plateaus at \(L=10\). In fig. 17, we show results on _split_ fashion MNIST (Hsu et al., 2018), in which each task corresponds to a new pair of classes. However, this is such an easy task that all methods are effectively indistinguishable.
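For reference, the piecewise-stationary stream used above can be reproduced with a few lines of code. The sketch below is illustrative only: it assumes the fashion-MNIST images have already been loaded and flattened into an array of shape \((N,784)\), and the function name is ours.

```
import numpy as np

def permuted_fashion_mnist_stream(images, labels, n_tasks=10, task_len=300, seed=0):
    """Yield (x, y) pairs whose pixel permutation changes every task_len examples."""
    rng = np.random.default_rng(seed)
    for task in range(n_tasks):
        perm = rng.permutation(images.shape[1])        # new random permutation per task
        idx = rng.choice(len(images), size=task_len, replace=False)
        for i in idx:
            yield images[i][perm], labels[i]
```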
**Slowly changing distribution.** The above experiments simulate an unusual form of non-stationarity, corresponding to a sudden change in the task. In this section, we consider a slowly changing distribution, where the task is to classify the images as they slowly rotate. The angle of rotation \(\alpha_{t}\) gradually drifts according to an Ornstein-Uhlenbeck process, so \(d\alpha_{t}=\theta(\mu-\alpha_{t})dt+\sigma dW_{t}\), where \(W_{t}\) is a Wiener process, \(\mu=45\), \(\sigma=15\), \(\theta=10\) and \(dt=1/N\), where \(N=2000\) is the number of examples. The test-set is modified using the same rotation at each step, perturbed by a Gaussian noise with standard deviation of \(5\) degrees. To evaluate performance we use a sliding window of size \(200\) around the current time point. The misclassification results are shown in fig. 1b. LO-FI adapts to the continuously changing environment quickly and outperforms the other methods. In fig. 18 in the appendix we show the NLL and NLPD, which show a similar trend.

### Regression

In this section, we consider regression tasks using variants of the fashion-MNIST dataset (images from class 2), where we artificially rotate the images, and seek to predict the angle of rotation. As in the classification setting, we use a 2-hidden layer MLP with 500 units per layer.

**Stationary distribution.** We start by sampling an iid dataset of images, where the angle of rotation at time \(t\) is sampled from a uniform \(\mathcal{U}[0,180]\) distribution. In fig. 2a, we show the RMSE over the test set as a function of the number of training examples; we see that LO-FI outperforms the other methods by a healthy margin. (The NLL and NLPD results in fig. 19 show a similar trend.)

**Piecewise stationary distribution.** We introduce nonstationarity through discrete task changes: we randomly permute the fashion-MNIST dataset after every \(300\) training examples, for a total of \(10\) tasks. This is similar to the classification setting of section 4.1, except the prediction target is the angle, which is randomly sampled from \((0,180)\) degrees. The goal is to predict the rotation angle of test-set images with the same permutation as the current task. The results are shown in fig. 2c. We see that LO-FI outperforms all other methods.

**Slowly changing distribution.** To simulate an arguably more realistic kind of change, we consider the case where the rotation angle slowly changes, generated via an Ornstein-Uhlenbeck process as in section 4.1, except with parameters \(\mu=90,\sigma=30\). To evaluate performance we use a sliding window of size \(200\), applied to a test set whose rotations follow the same process as the training set, perturbed by a Gaussian noise with standard deviation of \(5\) degrees. We show the results in fig. 2b. We see that LO-FI outperforms the baseline methods.

**Results on stationary UCI regression benchmark.** In this section, we evaluate various methods on the UCI tabular regression benchmarks used in several other BNN papers (e.g., (Hernandez-Lobato and Adams, 2015; Gal and Ghahramani, 2016; Mishkin et al., 2018)). We use the same splits as in (Gal and Ghahramani, 2016). As in these prior works, we consider an MLP with 1 hidden layer of \(H=50\) units using RELU activation, so the number of parameters is \(P=(D+2)H+1\), where \(D\) is the number of input features. In Table 1 in the appendix, we show the number of features in each dataset, as well as the number of training and testing examples in each of the 20 partitions.
We use these small datasets to compare LO-FI with LRVGA, as well as the other baselines. We show the RMSE vs number of training examples for the Energy dataset in fig. 3a. In this case, we see that LO-FI (rank 10) outperforms LRVGA (rank 10), and both outperform diagonal EKF and SGD-RB (buffer size 10). However, full covariance EKF is the most sample efficient learner. On other UCI datasets, LRVGA can slightly outperform LO-FI (see appendix D.1 for details). However, it is about 20 times slower than LO-FI. This is visualized in fig. 3b, which shows RMSE vs compute time, averaged over the 8 UCI datasets listed in table 1. This shows that, controlling for compute costs, LO-FI is a more efficient estimator, and both outperform replay SGD.

Figure 2: Test set regression error (measured using RMSE), computed using plugin approximation on various datasets. (a) Static iid distribution of rotated MNIST images. Figure generated by generate_iid_reg_plots.ipynb (b) Slowly changing version of rotated MNIST. Figure generated by generate_rw_reg_plots.ipynb (c) Piecewise stationary permuted rotated MNIST. The task boundaries are denoted by vertical lines. We show performance on the current task. Figure generated by generate_permuted_reg_plots.ipynb

### Contextual bandits

In this section, we illustrate the utility of an online Bayesian inference method by applying it to a contextual bandit problem. Following prior work (e.g., (Duran-Martin et al., 2022)), we convert the MNIST classification problem into a bandit problem by defining the action space as a label from 0 to 9, and defining the reward to be 1 if the correct label is predicted, and 0 otherwise. For simplicity, we model this using a nonlinear Gaussian regression model rather than a nonlinear Bernoulli classification model. To tackle the exploration-exploitation tradeoff, we either use Thompson sampling (TS) or the simpler \(\epsilon\)-greedy baseline. In TS, we sample a parameter from the posterior, \(\tilde{\mathbf{\theta}}_{t}\sim p(\mathbf{\theta}_{t}|a_{1:t-1},\mathbf{x}_{1:t-1},r_{1:t-1})\), and then take the greedy action with this value plugged in, \(a_{t}=\operatorname*{argmax}_{a}\mathbb{E}[r|\mathbf{x}_{t},a,\tilde{\mathbf{\theta}}_{t}]\). This method is known to obtain optimal regret (Russo et al., 2018), although the guarantees are weaker when using approximate inference (Phan et al., 2019). Of course, TS requires access to a posterior distribution to sample from. To compare to methods (such as SGD) that just compute a point estimate, we also use \(\epsilon\)-greedy; in this approach, with probability \(\epsilon=0.1\) we try a random action (to encourage exploration), and with probability \(1-\epsilon\) we pick the best action, as predicted by plugging the MAP parameters into the reward model. In fig. 4, we compare these algorithms on the MNIST bandit problem, where the regression model is a simple MLP with the same architecture as shown in Figure 1b of (Duran-Martin et al., 2022). For the \(\epsilon\)-greedy exploration policy we use \(\epsilon=0.1\), where the MAP parameter estimate is either computed using LO-FI (where the rank is on the \(x\)-axis) or using SGD with replay buffer (where the buffer size is on the \(x\)-axis). We see that TS is much better than \(\epsilon\)-greedy with the LO-FI MAP estimate, which in turn is better than \(\epsilon\)-greedy with the SGD MAP estimate. In fig. 22 in the appendix, we plot reward vs time for these methods.
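Since Thompson sampling only requires one posterior draw per step, it can exploit the DLR structure directly. The sketch below shows one way (among several) to draw a sample from a Gaussian whose precision is \(\mathrm{diag}(\mathbf{\Upsilon})+\mathbf{W}\mathbf{W}^{\top}\) in \(O(PL^{2})\) time; the helper `expected_reward` is a hypothetical plug-in reward predictor \(\mathbb{E}[r|\mathbf{x},a,\mathbf{\theta}]\), not part of the released code.

```
import numpy as np

def sample_dlr_gaussian(mu, ups, W, rng):
    """Draw one sample from N(mu, Sigma) with Sigma^{-1} = diag(ups) + W @ W.T,
    using a thin SVD of the whitened low-rank factor."""
    P, L = W.shape
    V = W / np.sqrt(ups)[:, None]                       # V = Upsilon^{-1/2} W
    U, s, _ = np.linalg.svd(V, full_matrices=False)     # thin SVD, U is P x L
    d = 1.0 / np.sqrt(1.0 + s**2) - 1.0                 # (I + V V^T)^{-1/2} = I + U diag(d) U^T
    z = rng.standard_normal(P)
    return mu + (z + U @ (d * (U.T @ z))) / np.sqrt(ups)

def thompson_action(mu, ups, W, x, actions, expected_reward, rng):
    """Thompson sampling: act greedily w.r.t. one posterior sample of the parameters."""
    theta = sample_dlr_gaussian(mu, ups, W, rng)
    return max(actions, key=lambda a: expected_reward(x, a, theta))
```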
## 5 Conclusion and future work

We have presented an efficient new method of fitting neural networks online to streaming datasets, using a diagonal plus low-rank Gaussian approximation. In the future, we are interested in developing online methods for estimating the hyper-parameters, perhaps by extending the variational Bayes approach of (Huang et al., 2020; de Vilmarest and Wintenberger, 2021), or the gradient based method of (Greenberg et al., 2021). We would also like to further explore the predictive uncertainty created by our posterior approximation, to see if it can be used for sequential decision making tasks, such as Bayesian optimization or active learning. This may require the use of (online) deep Bayesian ensembles, to capture functional as well as parametric uncertainty.

Figure 4: Total reward on MNIST bandit problem after 8000 steps vs memory of the posterior approximation. We show results (averaged over 5 trials) using Thompson sampling or \(\epsilon\)-greedy with \(\epsilon=0.1\). See text for details. Figure generated by bandit-vs-memory.ipynb

Figure 3: (a) RMSE vs number of examples on the UCI energy dataset. We show the mean and standard error across 20 partitions. Figure generated by plots-xval.ipynb (b) RMSE vs log running time per data point averaged over multiple UCI regression datasets. The speedup of LO-FI compared to LRVGA is about \(e^{3}\approx 20\). Figure generated by time-analysis.ipynb
2308.16822
Latent Variable Multi-output Gaussian Processes for Hierarchical Datasets
Multi-output Gaussian processes (MOGPs) have been introduced to deal with multiple tasks by exploiting the correlations between different outputs. Generally, MOGPs models assume a flat correlation structure between the outputs. However, such a formulation does not account for more elaborate relationships, for instance, if several replicates were observed for each output (which is a typical setting in biological experiments). This paper proposes an extension of MOGPs for hierarchical datasets (i.e. datasets for which the relationships between observations can be represented within a tree structure). Our model defines a tailored kernel function accounting for hierarchical structures in the data to capture different levels of correlations while leveraging the introduction of latent variables to express the underlying dependencies between outputs through a dedicated kernel. This latter feature is expected to significantly improve scalability as the number of tasks increases. An extensive experimental study involving both synthetic and real-world data from genomics and motion capture is proposed to support our claims.
Chunchao Ma, Arthur Leroy, Mauricio Alvarez
2023-08-31T15:52:35Z
http://arxiv.org/abs/2308.16822v1
# Latent Variable Multi-output Gaussian Processes for Hierarchical Datasets ###### Abstract Multi-output Gaussian processes (MOGPs) have been introduced to deal with multiple tasks by exploiting the correlations between different outputs. Generally, MOGPs models assume a flat correlation structure between the outputs. However, such a formulation does not account for more elaborate relationships, for instance, if several replicates were observed for each output (which is a typical setting in biological experiments). This paper proposes an extension of MOGPs for hierarchical datasets (i.e. datasets for which the relationships between observations can be represented within a tree structure). Our model defines a tailored kernel function accounting for hierarchical structures in the data to capture different levels of correlations while leveraging the introduction of latent variables to express the underlying dependencies between outputs through a dedicated kernel. This latter feature is expected to significantly improve scalability as the number of tasks increases. An extensive experimental study involving both synthetic and real-world data from genomics and motion capture is proposed to support our claims. Multi-output Gaussian processes Latent variables Hierarchical data Variational inference ## 1 Introduction In Bayesian statistics, hierarchical designs are a way to represent generative models that take multi-level structures of correlation into consideration. A hierarchical dataset can generally be represented as a top-down tree-like architecture. We refer to all leaf nodes of the same level as replicas since they inherit from the same parent node. The authors of Kalinka et al. (2010) proposed a dataset, which we used in our experiments, where gene expression is observed through eight replicas. Gene expression is a biological process indicating how the information of a particular gene can affect the phenotype, and many practitioners aim to understand this phenomenon better. In real-world applications, many datasets present a hierarchical structure, such as the one observed in this gene expression dataset. In a hierarchical model, prior distributions of the parameters of interest generally depend upon other parameters (often called hyper-parameters) that also have their own prior distribution (Gelman et al., 2013). Standard _flat_ (i.e. non-hierarchical) modelling strategies often struggle to fit hierarchical datasets adequately with a reasonable number of parameters. Conversely, they can be prone to overfitting as the number of parameters increases (Gelman et al., 2013). However, those issues can be avoided when properly designing the hierarchical structure in modelling assumptions. In our previous example, a model designed with a hierarchical structure appears as a natural choice to account for correlations between leaf nodes (or replicas). In the Gaussian processes (GP) literature, the topic of hierarchical modelling has quickly emerged as a promising approach to tackle a wide range of problems. More specifically, Lawrence and Moore (2007) first introduced a hierarchical Gaussian process model for dimensionality reduction. Then, the two-layer hierarchical approximation proposed in Park and Choi (2010) helped to reduce the computational complexity of standard GP regression. Later, Hensman et al. 
(2013) derived a novel hierarchical kernel to handle gene expression data, while Damianou and Lawrence (2013) established a deep-layer model where each layer was based on a Gaussian processes mapping. The paper Flaxman et al. (2015) also developed a hierarchical model through a prior distribution over kernel hyperparameters and used MCMC for inference. More recently, Li and Chen (2018) proposed a hierarchical formulation extracting latent features from the input dataset through the GP latent variable model and derived a Bayesian inference procedure to generate outputs based on those latent features. None of the aforementioned models is yet adapted to the case of multiple-output GPs, where each output presents an underlying hierarchical structure. In this sense, previous models would generally fail to capture the correlation existing between each replica. Moreover, to the best of our knowledge, no method is currently able to predict entirely missing replicas. This paper aims to fill this gap by providing an extension of the latent variable multi-output Gaussian process (LVMOGP) model (Dai et al., 2017) that can cope with hierarchical datasets and naturally predict missing replicas. Interestingly, our model could also be viewed as a generalisation of hierarchical GPs (HGP) (Hensman et al., 2013), as it somewhat combines the two approaches. Therefore, we named this method _hierarchical multi-output Gaussian processes with latent variables_ (HMOGP-LV). More specifically, **HMOGP-LV** controls the correlation between outputs through latent variables and captures the structure of data using a hierarchical kernel. Using inducing variables that share information of all replicas across the outputs, our model can predict missing points and entirely missing replicas. In this sense, our model tackles a more general problem, which the standard HGP model did not handle. When predicting a missing replica from one output, the inducing variables can use information from the corresponding replicas in other outputs. We derived an analytical approximation scheme for **HMOGP-LV** in two different settings: all outputs having the same input data; all outputs having specific input data. ## 2 Model and assumptions In this section, let us formally derive the hierarchical multi-output Gaussian processes with latent variables (HMOGP-LV). We first present HMOGP-LV in a setting where all outputs are observed on the same input set. Further, the model is extended to deal with cases where each output has its own input set. ### Hierarchical Multi-output Gaussian Processes with Latent Variables Assume that we observe a \(D\)-dimensional output vector \(\mathbf{y}(\mathbf{x})=\left[\mathbf{y}_{1}^{\top}(\mathbf{x}),\,\mathbf{y}_{2 }^{\top}(\mathbf{x}),\,\cdots,\,\mathbf{y}_{D}^{\top}(\mathbf{x})\right]^{\top}\), where \(\mathbf{x}\in\mathbf{R}^{v}\) is the input vector (of an arbitrary dimension \(v\)). To encode the hierarchical structure of the data, we assume that \(R\) replicas are observed for each output. Therefore, for all \(d=1,\ldots,D\), each component can be decomposed as \(\mathbf{y}_{d}(\mathbf{x})=\left[y_{d}^{1}(\mathbf{x}),y_{d}^{2}(\mathbf{x}), \cdots,y_{d}^{R}(\mathbf{x})\right]^{\top}\), where \(y_{d}^{r}(\mathbf{x})\) is the \(r\)-th replica of the \(d\)-th output evaluated at \(\mathbf{x}\). For the sake of simplicity, we assume that each replica presents the same number \(N\) of data points (although the following would still hold otherwise, up to minor technical adjustments). 
Formally, each replica \(y_{d}^{r}(\mathbf{x})\) could be modelled as a latent random function \(f_{d}^{r}(\mathbf{x})\) corrupted by a Gaussian white noise \(\epsilon_{d}\) with \(\sigma_{d}^{2}\) variance: \[y_{d}^{r}\left(\mathbf{x}\right) =f_{d}^{r}\left(\mathbf{x}\right)+\epsilon_{d} \tag{1}\] \[f_{d}^{r}(\mathbf{x}) \sim\mathcal{GP}\left(0,k_{f}\left(\mathbf{x},\mathbf{x}^{\prime }\right)\right)\] (2) \[\epsilon_{d} \sim\mathcal{N}\left(0,\sigma_{d}^{2}\right). \tag{3}\] We refer to the collection of the \(r\)-th observed input data points as \(\mathbf{X}_{r}=[\mathbf{x}_{r}^{(1)},\cdots,\,\mathbf{x}_{r}^{(N)}]^{\top}\in \mathbf{R}^{N\times v}\), and to the associated outputs as \(\mathbf{y}_{d}^{r}=[y_{d}^{r}\left(\mathbf{x}_{r}^{(1)}\right),\cdots,y_{d}^ {r}\left(\mathbf{x}_{r}^{(N)}\right)]^{\top}\in\mathbf{R}^{N}\) for the \(r\)-th replica of the \(d\)-th output. The \(d\)-th input and output sets are denoted \(\mathbf{X}=\left\{\mathbf{X}_{r}\right\}_{r=1}^{R}\) and \(\mathbf{y}_{d}=\left\{\mathbf{y}_{d}^{r}\right\}_{r=1}^{R}\), respectively. Finally, the vector \(\mathbf{y}=[\mathbf{y}_{1}^{\top},\cdots,\mathbf{y}_{D}^{\top}]^{\top}\) refers to all observed outputs. To cope with the assumed hierarchical structure, we still need to define an additional layer of correlation in the generative model. Therefore, suppose that an underlying function controls the mean parameter of the prior distribution from which the replicas are drawn. Let us denote this function as \(g(\cdot)\), a zero mean GP with covariance \(k_{g}\left(\cdot,\cdot\right)\) such as \(g(\mathbf{x})\sim\mathcal{GP}\left(0,k_{g}\left(\mathbf{x},\mathbf{x}^{\prime }\right)\right)\). Similarly to the hierarchical structure proposed in Hensman et al. (2013), all latent functions are assumed to be drawn from a Gaussian process with a \(g(\mathbf{x})\) mean and a \(k_{f}\left(\mathbf{x},\mathbf{x}^{\prime}\right)\) covariance. Overall, we obtain: \[g(\mathbf{x}) \sim\mathcal{GP}\left(0,k_{g}\left(\mathbf{x},\mathbf{x}^{\prime} \right)\right), \tag{4}\] \[f_{d}^{r}(\mathbf{x}) \sim\mathcal{GP}\left(g(\mathbf{x}),k_{f}\left(\mathbf{x},\mathbf{ x}^{\prime}\right)\right),\] (5) \[y_{d}^{\prime}\left(\mathbf{x}\right) =f_{d}^{r}\left(\mathbf{x}\right)+\epsilon_{d}. \tag{6}\] Intuitively, the above generative model indicates that all outputs share information both through kernel functions \(k_{g}\left(\cdot,\cdot\right)\) and \(k_{f}\left(\cdot,\cdot\right)\). In order to replace the fixed coregionalisation matrix with a kernel matrix, we now assume there exists a continuous latent vector \(\mathbf{h}_{d}\in\mathbf{R}^{Q_{H}}\) associated with each output \(\mathbf{y}_{d}\). \(Q_{H}\) is set in advance by the modeller. From a learning point of view, the latent variables are ultimately extracted from observations by maximising the marginal likelihood. Latent variables of all outputs are stacked into \(\mathbf{H}=\left[\mathbf{h}_{1}^{\top},\ldots,\mathbf{h}_{D}^{\top}\right]^{\top}\) and each of them follows the same prior distribution (e.g. a normal distribution). Therefore, we now obtain the following: \[g(\mathbf{x}) \sim\mathcal{GP}\left(0,k_{g}\left(\mathbf{x},\mathbf{x}^{\prime }\right)\right), \tag{7}\] \[f_{d}^{r}(\mathbf{x}) \sim\mathcal{GP}\left(g(\mathbf{x}),k_{f}\left(\mathbf{x}, \mathbf{x}^{\prime}\right)\right),\] (8) \[y_{d}^{\prime}(\mathbf{x}) =f_{d}^{r}\left(\mathbf{x},\mathbf{h}_{d}\right)+\epsilon_{d},\ \mathbf{h}_{d}\sim\mathcal{N}(\mathbf{0},\mathbf{I}). 
\tag{9}\] There are many ways to build our kernel based on Eq. (9). The overall kernel matrix is built through a Kronecker product to account for all correlations between inputs and outputs, as illustrated in Figure 1. We first build a kernel matrix for the outputs: \[\mathbf{K}_{\mathbf{ff}}^{H}=\left(\begin{array}{ccc}K_{1,1}^{H}&\ldots&K_{1,D}^{H}\\ \vdots&\ddots&\vdots\\ K_{D,1}^{H}&\ldots&K_{D,D}^{H}\end{array}\right), \tag{10}\] where \(K_{i,j}^{H}=k_{H}\left(\mathbf{h}_{i},\mathbf{h}_{j}\right)\) describes the correlation between the \(i\)-th and \(j\)-th outputs and \(k_{H}(\cdot,\cdot)\) is a kernel function. Compared with a fixed coregionalisation matrix, \(k_{H}\) is still able to produce flexible matrices while dramatically reducing computational complexity in high dimensional applications. By leveraging \(k_{H}\) and latent variables \(\mathbf{H}\), this approach has previously demonstrated efficiency in avoiding over-fitting (Dai et al., 2017) and dealing with scarce data sets. Let us now derive a kernel matrix over the inputs. Since there exists a linear hierarchical structure for our latent functions, if two input points are associated with the same output and \(r\)-th replica (e.g., \(\mathbf{x}_{r}^{(i)}\) and \(\mathbf{x}_{r}^{(j)}\)), the corresponding GP distribution is characterised by a compound covariance function \(k_{g}^{f}\left(\mathbf{x}_{r}^{(i)},\mathbf{x}_{r}^{(j)}\right)=k_{f}\left(\mathbf{x}_{r}^{(i)},\mathbf{x}_{r}^{(j)}\right)+k_{g}\left(\mathbf{x}_{r}^{(i)},\mathbf{x}_{r}^{(j)}\right)\). Conversely, for input points coming from different replicas, the covariance structure becomes \(k_{g}\left(\mathbf{x}_{r}^{(i)},\mathbf{x}_{r^{\prime}}^{(j)}\right)\). We denote \(k_{\text{h}}\left(\cdot,\cdot\right)\) (where the index \(h\) stands for _hierarchy_) the kernel function defined as: \[k_{\text{h}}\left(\mathbf{x}_{r}^{(i)},\mathbf{x}_{r^{\prime}}^{(j)}\right)=\begin{cases}k_{g}^{f}\left(\mathbf{x}_{r}^{(i)},\mathbf{x}_{r^{\prime}}^{(j)}\right),\ r=r^{\prime}\\ k_{g}\left(\mathbf{x}_{r}^{(i)},\mathbf{x}_{r^{\prime}}^{(j)}\right),\ r\neq r^{\prime}\end{cases} \tag{11}\] where \(\mathbf{x}_{r}^{(i)},\mathbf{x}_{r}^{(j)}\in\mathbf{X}\). The covariance matrix \(\mathbf{K}_{\mathbf{ff}}^{X}\) obtained by evaluating this hierarchical kernel on input points can be expressed as: \[\mathbf{K}_{\mathbf{ff}}^{X}=\left(\begin{array}{ccc}k_{g}^{f}\left(\mathbf{X}_{1},\mathbf{X}_{1}\right)&\ldots&k_{g}\left(\mathbf{X}_{1},\mathbf{X}_{R}\right)\\ \vdots&\ddots&\vdots\\ k_{g}\left(\mathbf{X}_{R},\mathbf{X}_{1}\right)&\ldots&k_{g}^{f}\left(\mathbf{X}_{R},\mathbf{X}_{R}\right)\end{array}\right). \tag{12}\] Finally, the covariance matrix of our proposed model is defined as \[\mathbf{K}_{\mathbf{ff}}=\mathbf{K}_{\mathbf{ff}}^{H}\otimes\mathbf{K}_{\mathbf{ff}}^{X}, \tag{13}\] where \(\otimes\) denotes the Kronecker product between matrices.

Figure 1: Summary of the generative procedure used to derive the overall covariance structure. \(\mathbf{K}_{\mathbf{ff}}^{X}\) contains the hierarchical structure of our model; \(\mathbf{K}_{\mathbf{ff}}^{H}\) contains the correlation between each output.

Based on Eq.
(13), we can derive the prior distribution of \(\mathbf{f}=\left[\mathbf{f}_{1}^{\top},\ldots,\mathbf{f}_{D}^{\top}\right]^{\top}\) and the conditional likelihood: \[p\left(\mathbf{f}\mid\mathbf{X},\mathbf{H}\right)=\mathcal{N} \left(\mathbf{f}\mid\mathbf{0},\mathbf{K}_{\mathbf{ff}}\right), \tag{14}\] \[p\left(\mathbf{y}\mid\mathbf{X},\mathbf{f},\mathbf{H}\right)= \mathcal{N}\left(\mathbf{y}\mid\mathbf{f},\mathbf{\Sigma}\right), \tag{15}\] where \(\mathbf{\Sigma}\in\mathbf{R}^{NRD\times NRD}\) is a diagonal matrix with a noise variance that can depend on both the particular output \(d\) and the particular replica \(r\). Thus, the corresponding marginal likelihood can be expressed as (while omitting conditioning on \(\mathbf{X}\) for clarity): \[p\left(\mathbf{y}\right)=\int p\left(\mathbf{y}\mid\mathbf{f},\mathbf{H} \right)p\left(\mathbf{f}\mid\mathbf{H}\right)p\left(\mathbf{H}\right)\text{ d}\mathbf{f}\text{d}\mathbf{H}. \tag{16}\] ### Extension for Different Sets of Inputs In the above section, we derived a model that deals with multiple outputs sharing the same input set. However, in real-world applications, each output may often be observed at different locations. In this context, the \(d\)-th input with replicated data is expressed as \(\mathbf{X}_{d}=\left\{\mathbf{X}_{d,r}\right\}_{r=1}^{R}\), where \(\mathbf{X}_{d,r}=[\mathbf{x}_{d,r}^{(1)},\cdots,\mathbf{x}_{d,r}^{(N_{d})}]^{\top}\). Although the general model formulation described in Section 2.1 is preserved, we now need to take extra care when dealing with missing data. The specific equations associated with the learning procedure in this framework are detailed in Section 3.2. ## 3 Inference In general, the integral in the marginal likelihood expression (16) is intractable. Therefore, we must resort to a variational approximation scheme by deriving a lower bound of the log marginal likelihood. Our method can also deal with large-scale datasets based on similar ideas and notation, as in Dai et al. (2017). ### Scalable Variational Inference Let us first introduce inducing variables \(\mathbf{U}\in\mathbf{R}^{M_{\mathbf{X}}\times M_{\mathbf{H}}}\) associated with our previous outputs and \(\mathbf{U}_{:}=\text{vec}(\mathbf{U})\), where "\(:\)" denotes the vectorisation of a matrix. We assume that the prior distribution of \(\mathbf{U}\), can be expressed as \(p\left(\mathbf{U}_{:}\right)=\mathcal{N}\left(\mathbf{U}_{:}\mid\mathbf{0}, \mathbf{K}_{\mathbf{UU}}\right)\). In particular, \(\mathbf{K}_{\mathbf{UU}}\) is supposed to have a similar format as Eq. (13): \(\mathbf{K}_{\mathbf{UU}}=\mathbf{K}_{\mathbf{UU}}^{H}\otimes\mathbf{K}_{ \mathbf{UU}}^{X}\). The matrix \(\mathbf{K}_{\mathbf{UU}}^{H}\) is obtained by evaluating \(k_{H}(\cdot,\cdot)\) on the inducing outputs \(\mathbf{Z}^{H}=\left[\mathbf{z}_{1}^{H},\ldots,\mathbf{z}_{M_{\mathbf{H}_{ \mathbf{H}_{\mathbf{H}_{\mathbf{H}}}}}}^{H}\right]^{\top}\), \(\mathbf{z}_{m}^{H}\in\mathbf{R}^{Q_{H}}\). Similarly, \(\mathbf{K}_{\mathbf{UU}}^{X}\) can be computed with the kernel function \(k_{\text{h}}(\cdot,\cdot)\) evaluated on inducing input locations \(\mathbf{Z}^{X}\) where \(\mathbf{Z}^{X}=\{\mathbf{Z}_{r}^{X}\}_{r=1}^{R}\). \(\mathbf{Z}_{r}^{X}\) corresponds with the \(r\)-th replica and \(\mathbf{Z}_{r}^{X}=\left[\mathbf{z}_{r,1}^{X},\ldots,\mathbf{z}_{r,M_{r}}^{X} \right]^{\top}\) in which \(\mathbf{z}_{r,m}^{X}\in\mathbf{R}^{v}\), and \(M_{r}\) is the number of inducing input points in the \(r\)-th replica and \(M_{\mathbf{X}}=M_{r}\times R\). 
Similar to the inducing variables framework in Titsias (2009), the conditional distribution of \(\mathbf{f}\) can be expressed as (the inputs \(\mathbf{Z}^{X},\mathbf{Z}^{H},\mathbf{X}\) and \(\mathbf{H}\) are omitted in conditioning for clarity): \[p\left(\mathbf{f}\mid\mathbf{U}\right)=\mathcal{N}\left(\mathbf{f}\mid\mathbf{ K}_{\mathbf{f}\mathbf{U}}\mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{U}_{:},\mathbf{K}_{ \mathbf{ff}}-\mathbf{K}_{\mathbf{f}\mathbf{U}}\mathbf{K}_{\mathbf{UU}}^{-1} \mathbf{K}_{\mathbf{f}\mathbf{U}}^{\top}\right), \tag{17}\] where \(\mathbf{K}_{\mathbf{U}}=\mathbf{K}_{\mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{U}}^{X}\). \(\mathbf{K}_{\mathbf{U}}^{X}\) denotes the cross-covariance matrix computed by evaluating \(k_{\text{h}}(\cdot,\cdot)\) between \(\mathbf{X}\) and \(\mathbf{Z}^{X}\); \(\mathbf{K}_{\mathbf{U}}^{H}\) is the cross-covariance computed between \(\mathbf{H}\) and \(\mathbf{Z}^{H}\) with \(k_{H}(\cdot,\cdot)\). The underlying graphical models summarising the different assumptions on the kernel structures are displayed in Figure 10 of the Appendix. As for covariance matrix (12), we can define: \[\mathbf{K}_{\mathbf{U}}^{X}=\left(\begin{array}{ccc}k_{g}^{f}\left(\mathbf{ Z}_{1}^{X},\mathbf{Z}_{1}^{X}\right)&\ldots&k_{g}\left(\mathbf{Z}_{1}^{X}, \mathbf{Z}_{R}^{X}\right)\\ \vdots&\ddots&\vdots\\ k_{g}\left(\mathbf{Z}_{R}^{\tilde{X}},\mathbf{Z}_{1}^{X}\right)&\ldots&k_{g}^{ f}\left(\mathbf{Z}_{R}^{\tilde{X}},\mathbf{Z}_{R}^{X}\right)\end{array}\right), \tag{18}\] and \[\mathbf{K}_{\mathbf{U}}^{X}=\left(\begin{array}{ccc}k_{g}^{f}\left(\mathbf{ X}_{1},\mathbf{Z}_{1}^{X}\right)&\ldots&k_{g}\left(\mathbf{X}_{1},\mathbf{Z}_{R}^{X} \right)\\ \vdots&\ddots&\vdots\\ k_{g}\left(\mathbf{X}_{R},\mathbf{Z}_{1}^{X}\right)&\ldots&k_{g}^{f}\left( \mathbf{X}_{R},\mathbf{Z}_{R}^{X}\right)\end{array}\right). \tag{19}\] To approximate posteriors over \(\mathbf{f}\) and \(\mathbf{H}\), we derive a variational distribution \(q(\mathbf{f},\mathbf{U}_{:},\mathbf{H})=p(\mathbf{f}\mid\mathbf{U}_{:}, \mathbf{H})q(\mathbf{U}_{:})q(\mathbf{H})\). To compute optimal parameters and hyperparameters for our model, we can maximise the associated lower bound of \(\log p(\mathbf{y})\) (see Sections A.1 and A.2 of the Appendix for technical details): \[\mathcal{L}=\mathcal{F}-\mathrm{KL}(q(\mathbf{U}_{:})\|p(\mathbf{U}_{:}))- \mathrm{KL}(q(\mathbf{H})\|p(\mathbf{H})), \tag{20}\] where we assume \(q(\mathbf{U}_{:})=\mathcal{N}\left(\mathbf{U}_{:}\mid\mathbf{M}_{:},\mathbf{ \Sigma}^{\mathbf{U}_{:}}\right)\) with \(\mathbf{M}_{:}\) and \(\mathbf{\Sigma}^{\mathbf{U}_{:}}\) being variational parameters, and \[\mathcal{F}= -\frac{DRN}{2}\log 2\pi\sigma^{2}-\frac{1}{2\sigma^{2}}\mathbf{y}^{ \top}\mathbf{y}+\frac{1}{\sigma^{2}}\mathbf{y}^{\top}\Psi\mathbf{K}_{\mathbf{ U}}^{-1}\mathbf{M}_{:}\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left(\mathbf{K}_{\mathbf{U}\mathbf{ U}}^{-1}\Phi\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\left(\mathbf{M}_{:}\mathbf{M}_{:}^{ \top}+\mathbf{\Sigma}^{\mathbf{U}_{:}}\right)\right)\] \[-\frac{1}{2\sigma^{2}}\left(\text{Tr}\left\langle\mathbf{K}_{ \mathbf{H}}\right\rangle_{q(\mathbf{H})}-\text{Tr}\left(\mathbf{K}_{\mathbf{ U}\mathbf{U}}^{-1}\Phi\right)\right), \tag{21}\] where \(\Phi=\left\langle\mathbf{K}_{\mathbf{U}}^{\top}\mathbf{K}_{\mathbf{U}\mathbf{ U}}\right\rangle_{q(\mathbf{H})}\) and \(\Psi=\left\langle\mathbf{K}_{\mathbf{f}\mathbf{U}}\right\rangle_{q(\mathbf{H})}\). 
Notice that the computational complexity of the lower bound is dominated by the product \(\mathbf{K}_{\mathbf{U}}^{\top}\mathbf{K}_{\mathbf{f}\mathbf{U}}\) that is \(\mathcal{O}\left(NDRM_{\mathbf{X}}^{2}M_{\mathbf{H}}^{2}\right).\) ### Lower Bound for Different Sets of Inputs When the input locations differ among outputs, the expression in (20) still holds for the lower bound of the log-marginal likelihood. However, the term \(\mathcal{F}\) needs to be reformulated as (see Section A.3 of the Appendix for technical details): \[\mathcal{F}= \sum_{d=1}^{D}-\frac{N_{d}R}{2}\log 2\pi\sigma_{d}^{2}-\frac{1}{2 \sigma_{d}^{2}}\mathbf{y}_{d}^{\top}\mathbf{y}_{d}\] \[+\frac{1}{\sigma_{d}^{2}}\mathbf{y}_{d}^{\top}\Psi_{d}\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{:}-\frac{1}{2\sigma_{d}^{2}}\left(\psi_{ d}-\text{Tr}\left[\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\Phi_{d}\right]\right)\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\mathbf{K}_{\mathbf{U} \mathbf{U}}^{-1}\Phi_{d}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\left(\mathbf{M} _{:}\mathbf{M}_{:}^{\top}+\mathbf{\Sigma}^{\mathbf{U}_{:}}\right)\right], \tag{22}\] where \(\Phi_{d}=\left\langle\mathbf{K}_{\mathbf{f}_{d}\mathbf{U}}^{\top}\mathbf{K}_{ \mathbf{f}_{d}\mathbf{U}}\right\rangle_{q(\mathbf{h}_{d})}\), \(\Psi_{d}=\left\langle\mathbf{K}_{\mathbf{f}_{d}\mathbf{U}}\right\rangle_{q( \mathbf{h}_{d})}\) and \(\psi_{d}=\text{Tr}\left\langle\mathbf{K}_{\mathbf{f}_{d}\mathbf{f}_{d}} \right\rangle_{q(\mathbf{h}_{d})}\). Interestingly, the two KL divergence terms in (20) remain identical in both cases, as they do not depend on the data. The product \(\mathbf{K}_{\mathbf{f}_{d}\mathbf{U}}^{\top}\mathbf{K}_{\mathbf{f}_{d}\mathbf{U}}\) now drives the \(\mathcal{O}(N_{d}RM_{\mathbf{X}}^{2}M_{\mathbf{H}}^{2})\) computational complexity of the lower bound. While Eq. (22) allows us to define different noise variances for each output and handle datasets observed at irregular input locations, it is also computationally more expensive to evaluate than Eq. (21) as in practice we need to calculate the expectations \(\Phi_{d}\), \(\Psi_{d}\), \(\psi_{d}\) for each output. ## 4 Prediction In this section, we derive the predictive distribution of HMOGP-LV. For existing outputs and a test set of inputs \(\mathbf{X}^{*}\), we have: \[q\left(\mathbf{f}^{*}\mid\mathbf{X}^{*}\right)=\int q\left(\mathbf{f}^{*}\mid \mathbf{X}^{*},\mathbf{H}\right)q(\mathbf{H})\mathrm{d}\mathbf{H}. \tag{23}\] Recalling Eq. (17), the variational distribution in the integral can be analytically derived as: \[q\left(\mathbf{f}^{*}\mid\mathbf{X}^{*},\mathbf{H}\right)=\int p\left( \mathbf{f}^{*}\mid\mathbf{U},\mathbf{X}^{*},\mathbf{H}\right)q\left(\mathbf{ U}_{:}\right)\mathrm{d}\mathbf{U}_{:}=\mathcal{N}\left(\mathbf{f}^{*}\mid\tilde{ \mathbf{m}}_{*},\tilde{\mathbf{K}}_{*}\right), \tag{24}\] where \(\tilde{\mathbf{m}}_{*}\) is \(\mathbf{K}_{\mathbf{f}^{*}\mathbf{U}}\mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{M}\). 
and \(\tilde{\mathbf{K}}_{*}\) is equal to \(\mathbf{K}_{\mathbf{f}^{*}\mathbf{f}^{*}}-\mathbf{K}_{\mathbf{f}^{*}\mathbf{ U}}\mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{K}_{\mathbf{f}^{*}\mathbf{U}}^{+}\)\(\mathbf{K}_{\mathbf{f}^{*}\mathbf{U}}\mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{K}_{\mathbf{UU}}^{-1} \mathbf{K}_{\mathbf{UU}}^{\top}\) with \(\mathbf{K}_{\mathbf{f}^{*}\mathbf{f}^{*}}=\mathbf{K}_{\mathbf{f}^{*}\mathbf{ f}^{*}}^{H}\otimes\mathbf{K}_{\mathbf{f}^{*}\mathbf{f}^{*}}^{\mathbf{K}_{ \mathbf{f}^{*}}}\) and \(\mathbf{K}_{\mathbf{f}^{*}\mathbf{U}}=\mathbf{K}_{\mathbf{f}^{*}\mathbf{U}}^{ H}\otimes\mathbf{K}_{\mathbf{U}^{*}\mathbf{U}}^{X}\). Although Eq. (23) is intractable, we are still able to obtain the first and second moments of \(\mathbf{f}^{*}\) in \(q\left(\mathbf{f}^{*}\mid\mathbf{X}^{*}\right)\)(Titsias and Lawrence, 2010). ## 5 Experiments In this section, we evaluate HMOGP-LV on both synthetic and real-world datasets and compare its performance against alternative methods. The evaluation between competing approaches is performed regarding two performance metrics for regression problems: normalised mean square error (NMSE) and negative log predictive density (NLPD). Both for NMSE and NLPD, the smaller the values, the better. Baselines:In terms of structure assumptions, we compare our method with three GP models involving hierarchical kernel matrices as introduced in Hensman et al. (2013), namely, **HGP** the original approach, **HGPInd** a modified version using inducing variables, and **DHGP** that presents a deep hierarchical structure. Two multi-output GPs approaches are also considered: a standard linear model of coregionalisation (**LMC**) (Goovaerts et al., 1997), and the latent variables multi-output GPs model (**LVMOGP**) (Dai et al., 2017). We also compared our method to a Neural Network (NN), with 2 layers of 200 units and a ReLU activation, to handle a single output. Both **HGP** and **HGPInd** can only handle a single output with its own replicas. **DHGP**, however, is able to deal with multiple outputs having their own replicas. **LMC** and **LVMOGP** can manage multiple outputs, but to deal with the multiple replicas per output, we stack them in concatenated vectors per output. The Adam optimiser (Kingma and Ba, 2014) is used for maximising the lower bound of the log marginal likelihood (i.e., \(\mathcal{L}\) in Eq. (20)) with a 0.01 learning rate over 10,000 iterations. The Adam optimiser is also used with identical settings to train **LMC** and **NN**. The other models have been trained thanks to the L-BFGS-B algorithm implemented in SciPy (Virtanen et al., 2020) over 10,000 iterations as well. We assume that each output has its own noise variance for all the models. Computational Complexity:Let us provide a quick discussion about the computational complexity of those different frameworks. For the sake of simplicity, we assume here that all outputs are observed over the same input set, so the total number of data points is \(N\times R\). Since **HMOGP-LV** is derived from **LVMOGP** with no extra computational burden, both methods present the same complexity, specifically, \(\mathcal{O}\left(\text{max}\left(NR,M_{\mathbf{H}}\right)\text{max}\left(D,M_ {\mathbf{X}}\right)\text{max}\left(M_{\mathbf{H}},M_{\mathbf{X}}\right)\right)\)(Dai et al., 2017). Regarding **LMC**, the computational complexity is \(\mathcal{O}\left(QM^{3}+DNRQM^{2}\right)\). 
The complexity of **HGP** and **HGPInd** is \(\mathcal{O}\left((NR)^{3}\right)\) and \(\mathcal{O}\left(NR(M_{\mathbf{H}}M_{\mathbf{X}})^{2}\right)\), respectively, whereas **DHGP** can generally be computed in \(\mathcal{O}\left((DNR)^{3}\right)\) or reduced to \(\mathcal{O}\left((ND)^{3}\right)\) in specific cases (see Hensman et al. (2013) for details). All experiments were performed on a Dell PowerEdge C6320 with an Intel Xeon E5-2630 v3 at 2.40 GHz and 64GB of RAM1. Each experiment is repeated three times. Regarding the experiments with no missing replica, 50% of the data points are dedicated to training in each replica and the other 50% are used for testing purposes. Neither **HGP** nor **DHGP** make use of inducing variables. The value of \(Q_{H}\) is set to 2 for **HMOGP-LV** and **LVMOGP** in all experiments. Footnote 1: Our code is publicly available in the repository [https://github.com/ChunchaoPeter/HMOGP-LV](https://github.com/ChunchaoPeter/HMOGP-LV). **DHGP:** We use the same code as in the previous section.

### Simulation Study: Predicting Missing Time Points

To exhibit the ability of our model to exploit correlations from hierarchical structures and between outputs simultaneously, we generated synthetic datasets by sampling from a Gaussian process with zero mean and covariance as in Eq. (13). This covariance function is a combination of two kernels: \(k_{H}(\cdot,\cdot)\) for outputs (two-dimensional space) Kronecker-times a hierarchical kernel. Two kernels are also involved in the hierarchical kernel design: \(k_{g}(\cdot,\cdot)\), which is assumed to be Matérn(3/2) with 1.0 lengthscale and 0.1 variance; and \(k_{f}(\cdot,\cdot)\) defined as another Matérn(3/2) kernel with 1.0 lengthscale and 1.0 variance. Each output is generated from a specific input set. In addition, a Gaussian noise term with a 0.02 variance is added to each data sample. One synthetic dataset consists of 50 outputs with three replicas each, while each replica comprises 10 data points.

Figure 3: Prediction performances (mean \(\pm\) standard deviation) for the first synthetic dataset. For both NMSE and NLPD values, the lower the better.

Figure 2: Mean predictive curves associated with their 95% credible intervals for the third output (top row) and seventh output (bottom row) with three replicas each, coming from the synthetic dataset. Locations of training points (in black) and testing points (in red) are specific to each output.

As an illustrative example, we display in Figure 2 the prediction results for each replica in the third output (top row) and the seventh output (bottom row). One can notice in Figure 2 that **HMOGP-LV** can offer remarkable predictions even from a handful of training points. Our method provides both a mean prediction that closely fits testing points and an accurate uncertainty quantification encompassing relatively narrow regions around this curve. This desirable behaviour can be explained by the ability of **HMOGP-LV** to share information at different levels by leveraging intra- and inter-output correlations and capturing the adequate hierarchical structure present in the data. Sharing knowledge across different outputs allows for accurate predictions on unobserved regions for a specific replica while maintaining a relatively high level of confidence over all the input space considering such a sparse setting. To pursue this simulation study, we provide in Figure 3 a comparative evaluation of predictive performances for all competing methods.
Figure 3 shows that **HMOGP-LV** outperforms the single-output GP models (**HGP**, **HGPInd**), **NN**, and the multi-output ones (**LMC**, **LVMOGP**) in terms of both NMSE and NLPD. The best-performing method among the alternatives is **DHGP**, as its deeper structure may approach the ability of our model to capture complex relationships at different levels. In particular, its top layer can capture correlations between different outputs, while the remaining two layers are likely to capture correlations among replicas. Neither **LVMOGP** nor **LMC** offers satisfying results, since they rely on a flat structure, preventing them from capturing the hierarchical structure of the dataset. In the meantime, single-output GP methods remain limited, as they cannot take advantage of other outputs to boost performance. Regarding **NN**, it yields weaker NMSE results with noticeably high variability. Moreover, **NN** does not provide uncertainty quantification and therefore cannot be evaluated in terms of NLPD. The ability of **HMOGP-LV** to exploit inter-output correlations and the hierarchical structure simultaneously makes our model a sensible choice for this kind of highly nested dataset.

### Simulation Study: Predicting an Entirely Missing Replica

To demonstrate the unique ability of **HMOGP-LV** to predict an entirely missing replica, an additional experiment is provided with the following setting. We generate 50 outputs with four replicas each, where each replica contains 10 data points. In each output, we assume that one replica is missing. Therefore, three replicas are used for training, and the remaining one is kept aside for testing purposes. As an illustration, we display in Figure 4 the **HMOGP-LV** predictions for three different outputs, where training points are in black and testing points in red. For instance, for the \(14^{\text{th}}\) output (top row), the first, third and fourth replicas are observed, whereas the second replica is missing. One can observe in each case the excellent predictions for the missing replica. In this example, this can probably be explained by the strong correlations among replicas in all outputs. Nonetheless, it confirms that our model adequately captures these correlations and can transfer them through the inducing variables to predict the missing replica accurately.

Figure 4: Top row: the result of the \(14^{\text{th}}\) output with four replicas; Middle row: the result of the \(24^{\text{th}}\) output with four replicas; Bottom row: the result of the \(40^{\text{th}}\) output with four replicas. The black and red colour represents the train and test data points, respectively.

In Figure 5, we compare our model against competitors for both evaluation metrics. Once again, **HMOGP-LV** offers superior performance compared to the alternatives. Let us note that **HGP** cannot make predictions for missing replicas, as it is not originally designed to be trained in such settings. **HGPInd** uses the other replicas in the same output, together with the information kept in the inducing points, to obtain information about the missing replica. However, it cannot share knowledge across outputs, whereas our model can fully leverage this information. Both **LVMOGP** and **LMC** can predict missing replicas since they do not distinguish replicas within each output, despite the hierarchical structure. Nevertheless, **HMOGP-LV** can keep information from all replicas in the inducing variables to improve predictive performance.
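Before moving to real data, let us sketch how the synthetic datasets of the two studies above can be generated. This is a minimal NumPy illustration written under our own simplifying assumptions: \(k_{H}\) is taken to be an RBF kernel over random two-dimensional latent coordinates (the text only specifies the dimension of its input space), the hierarchical kernel is taken to be the additive shared-plus-replica construction of Hensman et al. (2013), and a common time grid is used for all replicas, whereas the experiments draw a specific input set per output.

```python
import numpy as np

def matern32(x1, x2, lengthscale=1.0, variance=1.0):
    # Matérn(3/2) kernel on one-dimensional inputs.
    r = np.abs(x1[:, None] - x2[None, :]) / lengthscale
    return variance * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def rbf(H, variance=1.0, lengthscale=1.0):
    # RBF kernel between rows of H (our stand-in for k_H over the latent space).
    sq = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

rng = np.random.default_rng(0)
D, R, N = 50, 3, 10                      # outputs, replicas per output, points per replica
t = np.linspace(0.0, 1.0, N)             # common time grid (simplification)

# Hierarchical kernel over the R*N points of one output: a shared component k_g
# plus a replica-specific (block-diagonal) component k_f.
K_g = matern32(np.tile(t, R), np.tile(t, R), lengthscale=1.0, variance=0.1)
K_f = np.kron(np.eye(R), matern32(t, t, lengthscale=1.0, variance=1.0))
K_hier = K_g + K_f

H = rng.normal(size=(D, 2))              # latent output coordinates (two-dimensional)
K_H = rbf(H)

# Full covariance through the Kronecker structure, one joint sample, plus 0.02 noise.
K = np.kron(K_H, K_hier)
y = rng.multivariate_normal(np.zeros(D * R * N), K + 0.02 * np.eye(D * R * N))
Y = y.reshape(D, R, N)                   # Y[d, r] holds replica r of output d
```

Sampling jointly from the Kronecker-structured covariance is what couples outputs (through \(K_{H}\)) and replicas (through the shared component of the hierarchical kernel), which is precisely the structure that the flat competitors cannot represent.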
### Real Datasets In this subsection, we compare the performance of **HMOGP-LV** against other GP models and **NN** on two real datasets, related to genomics and motion capture applications for multi-output regression problems. Figure 5: Prediction performances (mean \(\pm\) standard deviation) for the second synthetic data with one missing replica in each output. For both NMSE and NLPD values, the lower the better. #### 5.3.1 Gene Dataset The first problem we aim at tackling consists in predicting temporal gene expression of Drosophila development based on a dataset originally proposed by Kalinka et al. (2010). For each of the six observed Drosophila species, the expression of 3695 genes has been measured in eight replicas at different time points. Following Hensman et al. (2013), this paper focuses on one of these six species (_melanogaster_) and the following genes considered as outputs in our model: 'CG12723', 'CG13196', 'CG13627', 'Osi15'. For those outputs, each of the eight replicas is partially observed on a grid of 10 distinct time points (i.e. each replica has a specific set of inputs, which is a sub-sample of a 10-point common grid). When considering such relatively small datasets, setting the value of \(M_{\mathbf{X}}\) to 14 for **HMOGP-LV**, **HGPInd**, **LVMOGP**, and **LMC** appeared as a sensible choice. In the case of **HMOGP-LV** and **LVMOGP**, we additionally defined \(M_{\mathbf{H}}=2\). As previously mentioned, the goal of this experiment consists in predicting 50% of the data points that have been randomly removed in each replica to be used as testing points. To illustrate the behaviour of our method to tackle such a task, we display in Figure 6 the GP predictions obtained by applying **HMOGP-LV** on all outputs and replicas. It can be noticed that in all cases, the mean curve sticks close to the true test points while maintaining narrow credible intervals on the studied domain, though uncertainty significantly increases when moving towards 0 as the number of observed data is low for all replicas. While this visual inspection is promising, the comparison with competing methods provided in Figure 7 highlights that **HMOGP-LV** also outperforms the alternatives. Let us mention that **DHGP** offers once again performances that are noticeably better than other approaches, confirming our first insights from the synthetic data experiments. As previously mentioned during modelling developments, our method also allows the prediction of an entirely missing replica, by sharing information across outputs and replicas to reconstruct the signal. We propose this additional experiment applied to the gene dataset in supplementary materials, and demonstrate the remarkable ability of **HMOGP-LV** to provide predictions that remain accurate even in the absence of data points for a whole replica. #### 5.3.2 Motion Capture Database Let us pursue by presenting another application of **HMOGP-LV** involving observations from the CMU motion capture database (MOCAP) 2. In this dataset, four different categories of movement are identified and distinguished: walking, running, golf swing and jumping. According to the experimental setting, only specific parts of the body are tracked by the motion capture devices. Regarding walking, the data of interest consists of trials number 2, 3, 8 and 9, for the 8-th subject, where we consider each trial as a replica. Our study focuses on right-hand movements (humerus, radius wrist, femur and tibia) for which we consider 16 positions in total. 
Additionally, the input and output data points are both scaled to have a zero mean and unit variance. Each position is identified as an output, though we only retained outputs with a signal-to-noise ratio over 20 dB. This results in 16 outputs, each of them containing four replicas, a setting designated as _MOCAP-8_. For the case of running, data for the 9-th subject were extracted for trials number 1, 2, 3, 5, 6, and 11. Head and foot movements (lower-neck, upper-neck, head, femur, tibia and foot) were tracked, for a total of 16 outputs with six replicas each (_MOCAP-9_). The golf swing case is studied through trials number 3, 4, 5, 7, 8 and 9 of the \(64\)-th individual. We consider left and right-hand movements (humerus, radius and wrist) by modelling nine outputs with six replicas each (_MOCAP-64_). Finally, jumping is analysed through trials number 3, 4, 11 and 17 of the \(118\)-th individual. We chose to focus on foot movements (femur, tibia and foot) to collect 12 outputs with four replicas each (_MOCAP-118_). The overall parameter settings are summarised within a table in the supplementary materials. In all settings, each replica is observed over 200 time points, except for MOCAP-9, where each replica is observed over 100 time points since its replicas only contain around 140 time points. In this experiment, we aim to predict unobserved replicas. More precisely, for each output, one of its replicas is entirely missing, while all the others are fully observed. As highlighted in Figure 8, **HMOGP-LV** outperforms the other methods in most situations, except for MOCAP-64 and MOCAP-118 in terms of NMSE, where **DHGP** and **LVMOGP** present comparable results. In particular, the results of the MOCAP-9 experiment, for which the improvement provided by our method is the most prominent, are illustrated in Figure 9. One can notice how our model adequately recovers the overall pattern of the missing replica at no cost in terms of uncertainty. As displayed, it seems that sharing information at different levels, both among outputs and among replicas, allows the prediction to remain accurate regardless of the sub-sample of data that is removed. It is worth mentioning that both multi-output methods (**LMC** and **LVMOGP**) also exhibit excellent performance in this task, although **HMOGP-LV** seems to remain the most sensible choice overall.

Footnote 2: The CMU Graphics Lab Motion Capture Database was created with funding from NSF EIA-0196217 and is available at [http://mocap.cs.cmu.edu](http://mocap.cs.cmu.edu).

Figure 6: Mean predictive curves associated with their 95% credible intervals for all outputs and replicas of the gene dataset. Locations of training points (in black) and testing points (in red) are specific to each output.

## 6 Conclusion

In this paper, we introduced HMOGP-LV, an extended framework of multi-output Gaussian processes to deal with multiple regression problems for hierarchically structured datasets. HMOGP-LV uses latent variables to capture the correlation between multiple outputs and a hierarchical kernel matrix to capture the dependency between replicas for each output. Even in the presence of missing replicas, HMOGP-LV remains able to make predictions by using information shared through inducing variables. We experimentally demonstrated that HMOGP-LV offers enhanced performance in terms of NMSE and NLPD compared to natural competitors on both synthetic and real datasets. In terms of limitations, HMOGP-LV only addresses regression problems so far, since the likelihood considered is Gaussian.
Moreover, our model is also limited to two layers of hierarchy when accounting for correlations. Therefore, several extensions of the present framework would be valuable, such as enabling heterogeneous multi-output prediction (Moreno-Munoz et al., 2018) or defining additional layers to build a deeper hierarchical structure (Hensman et al., 2013). Figure 7: Prediction performances (mean \(\pm\) standard deviation) for the gene dataset. For both NMSE and NLPD values, the lower the better. Figure 8: Prediction performances (mean \(\pm\) standard deviation) for the MOCAP-8, MOCAP-9, MOCAP-64 and MOCAP-118 datasets. For both NMSE and NLPD values, the lower the better. Figure 9: Mean predictive curves associated with their 95% credible intervals for all outputs and replicas of the MOCAP-9 dataset. Locations of training points (in black) and testing points (in red) are specific to each output. ### CRediT authorship contribution statement **Chunchao Ma**: Methodology, Software, Writing - original draft. **Arthur Leroy**: Investigation, Formal analysis, Writing - review & editing. **Mauricio Alvarez**: Conceptualization, Writing - review & editing, Supervision. ### Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ### Data availability The CMU Graphics Lab Motion Capture (MOCAP) Database was created with funding from NSF EIA-0196217 and is available at [http://mocap.cs.cmu.edu](http://mocap.cs.cmu.edu). The gene dataset is available in this repository: [https://github.com/ChunchaoPeter/HMOGP-LV/tree/main/Gene_data_set](https://github.com/ChunchaoPeter/HMOGP-LV/tree/main/Gene_data_set). ### Code availability The Python implementation of **HMOGP-LV** is freely available in the following repository: [https://github.com/ChunchaoPeter/HMOGP-LV](https://github.com/ChunchaoPeter/HMOGP-LV). ## Acknowledgements Chunchao Ma would like to thank Zhenwen Dai for the helpful conversations. Arthur Leroy and Mauricio Alvarez have been financed by the Wellcome Trust project 217068/Z/19/Z ## Appendix A Proofs In this section, we present technical details for deriving the lower bound of the log marginal likelihood as well as computationally efficient formulations by exploiting Kronecker product decomposition for \(\mathcal{F}\). Before diving into the mathematical details, let us also provide in Figure 10 an illustrative recall of the modelling assumptions. 
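To make the structure sketched in Figure 10 concrete, the following sketch (our own notation, toy dimensions, and a generic RBF kernel as a stand-in for \(k_{H}\) and \(k_{X}\)) builds the Kronecker-factorised matrices \(\mathbf{K}_{\mathbf{UU}}\) and \(\mathbf{K}_{\mathbf{fU}}\) from inducing locations \(\mathbf{Z}^{H}\) and \(\mathbf{Z}^{X}\), and checks the factorised inverse that underpins the efficient formulations derived in Appendix B.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    # Generic RBF kernel between rows of A and rows of B.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

rng = np.random.default_rng(1)
D, N, Q_H = 5, 20, 2            # outputs, inputs per output, latent dimensionality
M_H, M_X = 3, 6                 # inducing points in the latent space and in the input space

H = rng.normal(size=(D, Q_H))   # latent variables, one row per output
X = rng.uniform(size=(N, 1))    # inputs
Z_H = rng.normal(size=(M_H, Q_H))
Z_X = rng.uniform(size=(M_X, 1))

# Kronecker-structured kernel matrices, mirroring panels (a) and (b) of Figure 10.
K_H_uu = rbf(Z_H, Z_H) + 1e-6 * np.eye(M_H)
K_X_uu = rbf(Z_X, Z_X) + 1e-6 * np.eye(M_X)
K_UU = np.kron(K_H_uu, K_X_uu)               # (M_H*M_X, M_H*M_X)
K_fU = np.kron(rbf(H, Z_H), rbf(X, Z_X))     # (D*N, M_H*M_X)

# The key computational property: the inverse (and log-determinant) factorise,
# so the dense (M_H*M_X) x (M_H*M_X) matrix never has to be inverted directly.
K_UU_inv = np.kron(np.linalg.inv(K_H_uu), np.linalg.inv(K_X_uu))
assert np.allclose(K_UU_inv @ K_UU, np.eye(M_H * M_X), atol=1e-6)
```

The same factorisation is what reduces the dominant cost of the bound to products such as \(\left(\mathbf{K}_{\mathbf{fU}}^{X}\right)^{\top}\mathbf{K}_{\mathbf{fU}}^{X}\), as discussed at the end of Appendix B.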
### Derivation of the Log-marginal Likelihood Lower Bound To obtain the lower bound of the log marginal likelihood of our model, we assume that the variational posterior distributions are \(q(\mathbf{H})\), \(q(\mathbf{U}_{:})\) and \(q(\mathbf{f}\mid\mathbf{U}_{:},\mathbf{H})=p(\mathbf{f}\mid\mathbf{U}_{:}, \mathbf{H})\), such as: \[\begin{split}\text{log }p\left(\mathbf{y}\right)&=\text{ log}\int\int\int p\left(\mathbf{y},\mathbf{f},\mathbf{H},\mathbf{U}_{:} \right)\text{df}\mathbf{dH}\mathbf{U}_{:}\\ &=\text{log}\int\int\int\frac{p\left(\mathbf{y},\mathbf{f}, \mathbf{H},\mathbf{U}_{:}\right)q\left(\mathbf{f},\mathbf{H},\mathbf{U}_{:} \right)}{q\left(\mathbf{f},\mathbf{H},\mathbf{U}_{:}\right)}\text{df}\mathbf{dH }\mathbf{U}_{:}\\ &\geq\int\int\int q\left(\mathbf{f},\mathbf{H},\mathbf{U}_{:} \right)\text{log}\frac{p\left(\mathbf{y},\mathbf{f},\mathbf{H},\mathbf{U}_{: }\right)}{q\left(\mathbf{f},\mathbf{H},\mathbf{U}_{:}\right)}\text{df}\mathbf{dH }\mathbf{U}_{:}\\ &=\mathcal{L}.\end{split} \tag{25}\] \[\mathcal{L}= \left\langle\log\frac{p\left(\mathbf{y},\mathbf{f},\mathbf{H}, \mathbf{U}_{:}\right)}{q\left(\mathbf{f},\mathbf{H},\mathbf{U}_{:}\right)} \right\rangle_{q\left(\mathbf{f},\mathbf{H},\mathbf{U}_{:}\right)}\] \[= \int\int\int p\left(\mathbf{f}\mid\mathbf{U}_{:},\mathbf{H} \right)q(\mathbf{U}_{:})q(\mathbf{H})\] \[\text{log}\frac{p(\mathbf{y}|\mathbf{f},\mathbf{H},\mathbf{U}_{: })p\left(\mathbf{f}\mid\mathbf{U}_{:},\mathbf{H}\right)p(\mathbf{U}_{:})p( \mathbf{H})}{p\left(\mathbf{f}\mid\mathbf{U}_{:},\mathbf{H}\right)q(\mathbf{ U}_{:})q(\mathbf{H})}\text{df}\mathbf{dH}\] \[= \int\int\int p\left(\mathbf{f}\mid\mathbf{U}_{:},\mathbf{H} \right)q(\mathbf{U}_{:})q(\mathbf{H})\text{log}\frac{p(\mathbf{y}|\mathbf{f},\mathbf{H},\mathbf{U}_{:})p(\mathbf{U}_{:})p(\mathbf{H})}{q(\mathbf{U}_{:})q (\mathbf{H})}\text{df}\mathbf{dU}_{:}\text{df}\mathbf{H}. \tag{26}\] Finally, \[\mathcal{L} =\int q(\mathbf{H})\left[\int q(\mathbf{U}_{:})\left[\mathbb{E}_{p( \mathbf{f}(\mathbf{U}_{:},\mathbf{H})}[\log p(\mathbf{y}\mid\mathbf{f},\mathbf{ H})]+\log\frac{p(\mathbf{U}_{:})}{q(\mathbf{U}_{:})}+\log\frac{p(\mathbf{H})}{q( \mathbf{H})}\right]\mathbf{d}\mathbf{U}_{:}\right]\] \[=\overbrace{\mathbb{E}_{q(\mathbf{f},\mathbf{U}_{:},\mathbf{H}) }[\log p(\mathbf{y}\mid\mathbf{f},\mathbf{H})]}^{\mathcal{F}}-\mathrm{KL}(q( \mathbf{H})\|p(\mathbf{H}))-\mathrm{KL}(q(\mathbf{U}_{:})\|p(\mathbf{U}_{:})). \tag{27}\] ### Derivation of \(\mathcal{F}\) Given the Same Input Datasets In this section, we show details for deriving \(\mathcal{F}\) using the same input datasets: \[\mathcal{F} =\mathbb{E}_{p(\mathbf{f}(\mathbf{U}_{:},\mathbf{H})q(\mathbf{U} _{:})q(\mathbf{H})}\left[\log p\left(\mathbf{y}\mid\mathbf{f},\mathbf{H} \right)\right]\] \[=\int q(\mathbf{H})\int q\left(\mathbf{U}_{:}\right)\underbrace{ \int p\left(\mathbf{f}\mid\mathbf{U}_{:},\mathbf{H}\right)\log p\left( \mathbf{y}\mid\mathbf{f},\mathbf{H}\right)\mathrm{df}}_{\mathcal{L}_{F}} \mathbf{d}\mathbf{U}_{:}\mathbf{d}\mathbf{H}\] \[=\underbrace{\int q(\mathbf{H})\mathcal{L}_{U}\mathbf{d}\mathbf{ H}}_{\mathcal{L}_{H}}. 
\tag{28}\] First, we calculate \(\mathcal{L}_{F}\): \[\mathcal{L}_{F}= \int p\left(\mathbf{f}\mid\mathbf{U}_{:},\mathbf{H}\right)\log p \left(\mathbf{y}\mid\mathbf{f},\mathbf{H}\right)\mathbf{df}\] \[= \log\mathcal{N}\left(\mathbf{y}\mid\mathbf{K}_{\mathbf{f}\mathbf{ U}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{U}_{:},\sigma^{2}\right)-\frac{1}{2 \sigma^{2}}\mathrm{Tr}\left[\mathbf{K}_{\mathbf{f}\mathbf{f}}-\mathbf{K}_{ \mathbf{f}\mathbf{U}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{K}_{ \mathbf{f}\mathbf{U}}^{\top}\right], \tag{29}\] Figure 10: (a): Summary of the procedure used to derive the kernel matrix for inducing variables, where \(\mathbf{Z}^{X}\) and \(\mathbf{Z}^{H}\) are associated with the inputs \(\mathbf{X}\) and the latent variables \(\mathbf{H}\), respectively; (b): Summary of the procedure used to derive the kernel matrix between observations and inducing variables. where \(p\left(\mathbf{f}\mid\mathbf{U}_{:},\mathbf{H}\right)=\mathcal{N}\left(\mathbf{f} \mid\mathbf{K}_{\mathbf{f}\mathbf{U}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1} \mathbf{U}_{:},\mathbf{K}_{\mathbf{f}\mathbf{f}}-\mathbf{K}_{\mathbf{f}\mathbf{ U}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{K}_{\mathbf{f}\mathbf{U}}^{ \top}\right)\) and \(\text{Tr}[\cdot]\) is a trace of a matrix. Second, we calculate \(\mathcal{L}_{U}\): \[\mathcal{L}_{U}= \int q\left(\mathbf{U}_{:}\right)\mathcal{L}_{F}\mathbf{d}\mathbf{ U}_{:}\] \[= \log\mathcal{N}\left(\mathbf{y}\mid\mathbf{K}_{\mathbf{f}\mathbf{ U}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{:},\sigma^{2}\right)-\frac{1}{2 \sigma^{2}}\text{Tr}\left[\mathbf{K}_{\mathbf{f}\mathbf{f}}-\mathbf{K}_{ \mathbf{f}\mathbf{U}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{K}_{ \mathbf{f}\mathbf{U}}^{\top}\right]\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left[\mathbf{\Sigma}^{\mathbf{U }}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{K}_{\mathbf{f}\mathbf{U}}^{ \top}\mathbf{K}_{\mathbf{f}\mathbf{U}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1} \right]. \tag{30}\] where \(q(\mathbf{U}_{:})=\mathcal{N}\left(\mathbf{U}_{:}\mid\mathbf{M}_{:},\mathbf{ \Sigma}^{\mathbf{U}_{:}}\right)\) in which \(\mathbf{U}_{:}\) and \(\mathbf{M}_{:}\) are variational parameters. 
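As a numerical illustration of Eqs. (29)–(30), the two intermediate quantities can be evaluated directly. The sketch below is our own and uses generic positive-definite placeholders for the kernel matrices; in practice, the Kronecker-structured matrices of the model would be substituted.

```python
import numpy as np

def log_gauss(y, mean, var):
    # log N(y | mean, var * I) for a vector y and a scalar variance var.
    n = y.size
    return -0.5 * (n * np.log(2.0 * np.pi * var) + np.sum((y - mean) ** 2) / var)

rng = np.random.default_rng(2)
n, m, sigma2 = 30, 8, 0.1

# Generic positive-definite placeholders for K_ff, K_fU and K_UU.
A = rng.normal(size=(n, n))
K_ff = A @ A.T + n * np.eye(n)
K_fU = rng.normal(size=(n, m))
B = rng.normal(size=(m, m))
K_UU = B @ B.T + m * np.eye(m)
K_UU_inv = np.linalg.inv(K_UU)

y = rng.normal(size=n)
M = rng.normal(size=m)                  # variational mean M_: of q(U_:)
L = np.tril(rng.normal(size=(m, m))) * 0.1
Sigma_U = L @ L.T + 0.1 * np.eye(m)     # variational covariance Sigma^{U_:} of q(U_:)

# Eq. (29): expectation over p(f | U_:, H), here evaluated at U_: = M_: for illustration.
proj = K_fU @ K_UU_inv                  # K_fU K_UU^{-1}
L_F = log_gauss(y, proj @ M, sigma2) - 0.5 / sigma2 * np.trace(K_ff - proj @ K_fU.T)

# Eq. (30): the additional average over q(U_:) only adds one more trace term.
L_U = L_F - 0.5 / sigma2 * np.trace(Sigma_U @ K_UU_inv @ K_fU.T @ K_fU @ K_UU_inv)
print(L_F, L_U)
```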
Finally, we consider \(\mathcal{L}_{H}\): \[\mathcal{L}_{H}= \int q(\mathbf{H})\mathcal{L}_{U}\mathbf{d}\mathbf{H}\] \[= \left\langle\log\mathcal{N}\left(\mathbf{y}\mid\mathbf{K}_{ \mathbf{f}\mathbf{U}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{:}, \sigma^{2}\right)\right\rangle_{q(\mathbf{H})}\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left[\left\langle\mathbf{K}_{ \mathbf{f}\mathbf{f}}\right\rangle_{q(\mathbf{H})}-\mathbf{K}_{\mathbf{U} \mathbf{U}}^{-1}\left\langle\mathbf{K}_{\mathbf{f}\mathbf{U}}^{\top}\mathbf{K }_{\mathbf{f}\mathbf{U}}\right\rangle_{q(\mathbf{H})}\right]\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left[\mathbf{\Sigma}^{\mathbf{U }}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\left\langle\mathbf{K}_{\mathbf{f} \mathbf{U}}^{\top}\mathbf{K}_{\mathbf{f}\mathbf{U}}\right\rangle_{q(\mathbf{H })}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\right]\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left[\mathbf{\Sigma}^{\mathbf{U }}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\underbrace{\left\langle\mathbf{K}_{ \mathbf{f}\mathbf{U}}^{\top}\mathbf{K}_{\mathbf{f}\mathbf{U}}\right\rangle_{q( \mathbf{H})}}_{\Phi}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\right]\] \[= \mathbf{C}+\frac{1}{\sigma^{2}}\mathbf{y}^{\top}\Psi\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{:}-\frac{1}{2\sigma^{2}}\left(\psi- \text{Tr}\left[\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\Phi\right]\right)\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left[\mathbf{\Sigma}^{\mathbf{U }}\mathbf{U}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\underbrace{\left\langle \mathbf{K}_{\mathbf{f}\mathbf{U}}^{\top}\mathbf{K}_{\mathbf{f}\mathbf{U}} \right\rangle_{q(\mathbf{H})}}_{\Phi}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\right]\] \[= \mathbf{C}+\frac{1}{\sigma^{2}}\mathbf{y}^{\top}\Psi\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{:}-\frac{1}{2\sigma^{2}}\left(\psi- \text{Tr}\left[\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\Phi\right]\right)\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left[\mathbf{K}_{\mathbf{U}\mathbf{ U}}^{-1}\Phi\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\left(\mathbf{M}_{:}\mathbf{M}_{:}^{ \top}+\mathbf{\Sigma}^{\mathbf{U}_{:}}\right)\right], \tag{31}\] where \[\Psi=\left\langle\mathbf{K}_{\mathbf{f}\mathbf{U}}^{H}\otimes\mathbf{K}_{ \mathbf{f}\mathbf{U}}^{X}\right\rangle_{q(\mathbf{H})}=\left\langle\mathbf{K}_{ \mathbf{f}\mathbf{U}}^{H}\right\rangle_{q(\mathbf{H})}\otimes\mathbf{K}_{ \mathbf{f}\mathbf{U}}^{X}=\Psi^{H}\otimes\mathbf{K}_{\mathbf{f}\mathbf{U}}^{X}, \tag{32}\] \[\psi=\text{Tr}\left\langle\mathbf{K}_{\mathbf{f}\mathbf{f}}\right\rangle_{q( \mathbf{H})}=\text{Tr}\left\langle\mathbf{K}_{\mathbf{f}\mathbf{f}}^{H} \otimes\mathbf{K}_{\mathbf{f}\mathbf{f}}^{X}\right\rangle_{q(\mathbf{H})}, \tag{33}\] \[\Phi =\left\langle\mathbf{K}_{\mathbf{f}\mathbf{U}}^{\top}\mathbf{K}_{ \mathbf{f}\mathbf{U}}\right\rangle_{q(\mathbf{H})}=\left\langle\left(\mathbf{K}_ {\mathbf{f}\mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{f}\mathbf{U}}^{X}\right)^{ \top}\left(\mathbf{K}_{\mathbf{f}\mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{f} \mathbf{U}}^{X}\right)\right\rangle_{q(\mathbf{H})}\] \[=\Phi^{H}\otimes\left(\mathbf{K}_{\mathbf{f}\mathbf{U}}^{X}\right)^{ \top}\mathbf{K}_{\mathbf{f}\mathbf{U}}^{X}. 
\tag{34}\] ### Derivation of \(\mathcal{F}\) Given Different Input Datasets In this section, we show details for deriving \(\mathcal{F}\) using different input datasets: \[\mathcal{F} =\mathbb{E}_{p\left(\mathbf{f}|\mathbf{U},\mathbf{H}\right)q\left( \mathbf{U}\right),q\left(\mathbf{H}\right)}\left[\log p\left(\mathbf{y}\mid \mathbf{f},\mathbf{H}\right)\right]\] \[=\int q\left(\mathbf{H}\right)\int q\left(\mathbf{U}_{:}\right) \underbrace{\int p\left(\mathbf{f}\mid\mathbf{U}_{:},\mathbf{H}\right)\log p \left(\mathbf{y}\mid\mathbf{f},\mathbf{H}\right)\text{df}}_{\mathcal{L}_{F}} \text{df}\] \[=\underbrace{\int q\left(\mathbf{H}\right)\mathcal{L}_{U}\text{ dH}}_{\mathcal{L}_{H}}. \tag{35}\] Now, we calculate \(\mathcal{L}_{F}\): \[\mathcal{L}_{F}= \int\prod_{d=1}^{D}p\left(\mathbf{f}_{d}\mid\mathbf{U}_{:}, \mathbf{H}\right)\log\prod_{d=1}^{D}p\left(\mathbf{y}_{d}\mid\mathbf{f}_{d}, \mathbf{H}\right)\text{df}_{d}\] \[= \sum_{d=1}^{D}\int p\left(\mathbf{f}_{d}\mid\mathbf{U}_{:}, \mathbf{H}\right)\log p\left(\mathbf{y}_{d}\mid\mathbf{f}_{d},\mathbf{H}\right) \text{df}_{d}\] \[= \sum_{d=1}^{D}\left(\log\mathcal{N}\left(\mathbf{y}_{d}\mid \mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U}\mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{U}_ {:},\sigma_{d}^{2}}\right)-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\mathbf{K}_ {\mathbf{\ell}_{d}\mathbf{\ell}_{d}}-\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U} \mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U}}^{\top} \mathbf{U}}\right]\right), \tag{36}\] where \(p\left(\mathbf{f}_{d}\mid\mathbf{U}_{:},\mathbf{H}\right)=\mathcal{N}\left( \mathbf{f}_{d}\mid\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U}\mathbf{K}_{\mathbf{UU }}^{-1}\mathbf{U}_{:},\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{\ell}_{d}}-\mathbf{ K}_{\mathbf{\ell}_{d}\mathbf{U}}\mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{K}_{\mathbf{\ell}_{d} \mathbf{U}}^{\top}\right)}\). Then, we consider the \(\mathcal{L}_{U}\): \[\mathcal{L}_{U}= \int q\left(\mathbf{U}_{:}\right)\mathcal{L}_{F}\text{d}\text{U}\] \[= \int q\left(\mathbf{U}_{:}\right)\sum_{d=1}^{D}\Bigg{(}\log \mathcal{N}\left(\mathbf{y}_{d}\mid\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U} \mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{U}_{:},\sigma_{d}^{2}}\right)\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\mathbf{K}_{\mathbf{ \ell}_{d}\mathbf{\ell}_{d}}-\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U}}\mathbf{K}_ {\mathbf{UU}}^{-1}\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U}}^{\top}\right]\Bigg{)} \text{d}\text{U}\text{:}\] \[= \sum_{d=1}^{D}\Big{(}\log\mathcal{N}\left(\mathbf{y}_{d}\mid \mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U}\mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{ M}_{:},\sigma_{d}^{2}}\right)-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\mathbf{K}_{ \mathbf{\ell}_{d}\mathbf{\ell}_{d}}-\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U} \mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U}}^{\top} \right]\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\mathbf{\Sigma}^{\mathbf{ U}_{:}}\mathbf{K}_{\mathbf{UU}}^{-1}\mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U}}^{\top} \mathbf{K}_{\mathbf{\ell}_{d}\mathbf{U}}\mathbf{K}_{\mathbf{UU}}^{-1}\right] \Big{)}, \tag{37}\] where \(q(\mathbf{U}_{\cdot})=\mathcal{N}\left(\mathbf{U}_{\cdot}\mid\mathbf{M}_{\cdot}, \mathbf{\Sigma}^{\mathbf{U}_{\cdot}}\right)\). 
Further, we obtain \(\mathcal{L}_{H}\): \[\mathcal{L}_{H}= \int q(\mathbf{H})\mathcal{L}_{U}\mathrm{d}\mathbf{H}\] \[= \sum_{d=1}^{D}\left\langle\log\mathcal{N}\left(\mathbf{y}_{d} \mid\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}\mathbf{K}_{\mathbf{U}\mathbf{ U}\mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{\cdot},\sigma_{d}^{2}\right)\right\rangle_{q( \mathbf{h}_{d})}\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\left\langle\mathbf{K}_ {\boldsymbol{\ell}_{d}\mathbf{\ell}_{d}}\right\rangle_{q(\mathbf{h}_{d})}- \left\langle\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}\mathbf{K}_{\mathbf{ U}\mathbf{U}}^{-1}\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{\top}\right\rangle_{q( \mathbf{h}_{d})}\right]\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\mathbf{\Sigma}^{ \mathbf{U}_{\cdot}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\left\langle\mathbf{K }_{\boldsymbol{\ell}_{d}\mathbf{U}}^{\top}\mathbf{K}_{\boldsymbol{\ell}_{d} \mathbf{U}}\right\rangle_{q(\mathbf{h}_{d})}\mathbf{K}_{\mathbf{U}\mathbf{U}}^ {-1}\right]\] \[= \sum_{d=1}^{D}\underbrace{-\frac{N_{d}R}{2}\log 2\pi\sigma_{d}^{2 }-\frac{1}{2\sigma_{d}^{2}}\mathbf{y}_{d}^{\top}\mathbf{y}_{d}}_{\mathbf{C}_{ d}}+\frac{1}{\sigma_{d}^{2}}\mathbf{y}_{d}^{\top}\underbrace{\left\langle \mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}\right\rangle_{q(\mathbf{h}_{d})}} _{\Psi_{d}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{\cdot}\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\mathbf{\Sigma}^{ \mathbf{U}_{\cdot}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\underbrace{\left\langle \mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{\top}\mathbf{K}_{\boldsymbol{\ell} _{d}\mathbf{U}}\right\rangle_{q(\mathbf{h}_{d})}}_{\Phi_{d}}\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{-1}\right]\] \[= \sum_{d=1}^{D}\mathbf{C}_{d}+\frac{1}{\sigma_{d}^{2}}\mathbf{y}_{ d}^{\top}\Psi_{d}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{\cdot}-\frac{1}{2 \sigma_{d}^{2}}\mathbf{M}_{\cdot}^{\top}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1} \Phi_{d}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{\cdot}-\frac{1}{2 \sigma_{d}^{2}}\psi_{d}\] \[+\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\mathbf{K}_{\mathbf{U} \mathbf{U}}^{-1}\Phi_{d}\right]-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[ \mathbf{\Sigma}^{\mathbf{U}_{\cdot}}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1} \Phi_{d}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\right]\] \[= \sum_{d=1}^{D}\mathbf{C}_{d}+\frac{1}{\sigma_{d}^{2}}\mathbf{y}_{ d}^{\top}\Psi_{d}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\mathbf{M}_{\cdot}-\frac{1}{2 \sigma_{d}^{2}}\left(\psi_{d}-\text{Tr}\left[\mathbf{K}_{\mathbf{U}\mathbf{U} }^{-1}\Phi_{d}\right]\right)\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left[\mathbf{K}_{\mathbf{U} \mathbf{U}}^{-1}\Phi_{d}\mathbf{K}_{\mathbf{U}\mathbf{U}}^{-1}\left(\mathbf{M} \mathbf{M}_{\cdot}^{\top}+\mathbf{\Sigma}^{\mathbf{U}_{\cdot}}\right)\right], \tag{38}\] where \(q(\mathbf{H})=\prod_{d=1}^{D}q(\mathbf{h}_{d})\) and \[\Psi_{d}=\left\langle\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{H}\otimes \mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{X}\right\rangle_{q(\mathbf{h}_{d} )}=\left\langle\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{H}\right\rangle_{q( \mathbf{h}_{d})}\otimes\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{X}=\Psi_{d }^{H}\otimes\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{X} \tag{39}\] \[\psi_{d}=\text{Tr}\left\langle\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}} \right\rangle_{q(\mathbf{h}_{d})}=\text{Tr}\left\langle\mathbf{K}_{\boldsymbol{ \ell}_{d}\mathbf{\ell}_{d}}^{H}\otimes\mathbf{K}_{\boldsymbol{\ell}_{d} \mathbf{\ell}_{d}}^{X}\right\rangle_{q(\mathbf{h}_{d})}, 
\tag{40}\] \[\Phi_{d}=\left\langle\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{ \top}\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}\right\rangle_{q(\mathbf{h}_{d})} =\left\langle\left(\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{H}\otimes \mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{X}\right)^{\top}\left(\mathbf{K}_{ \boldsymbol{\ell}_{d}\mathbf{U}}^{H}\otimes\mathbf{K}_{\boldsymbol{\ell}_{d} \mathbf{U}}^{X}\right)\right\rangle_{q(\mathbf{h}_{d})}\] \[=\] \[= \Phi_{d}^{H}\otimes\left(\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U} }^{X}\right)^{\top}\mathbf{K}_{\boldsymbol{\ell}_{d}\mathbf{U}}^{X}. \tag{41}\] ## Appendix B More Efficient Formulations In this subsection, we reduce the computational complexity by exploiting the Kronecker product decomposition. To fully utilise its properties, we assume there is a Kronecker product decomposition of the covariance matrix of \(q(\mathbf{U}_{\cdot})\) \(\mathbf{\Sigma}^{\mathbf{U}_{\cdot}}=\mathbf{\Sigma}^{H_{\cdot}}\otimes\mathbf{ \Sigma}^{X_{\cdot}}\) and this format can reduce variational parameters from \(M_{\mathbf{X}}^{2}M_{\mathbf{H}}^{2}\) to \(M_{\mathbf{X}}^{2}+M_{\mathbf{H}}^{2}\) in \(q(\mathbf{U}_{\cdot})\). We also reformulate \(\Phi\), \(\Psi\), \(\psi\) as \[\Phi =\left\langle\mathbf{K}_{\mathbf{U}}^{\top}\mathbf{K}_{\mathbf{U }}\right\rangle_{q(\mathbf{H})}=\Phi^{H}\otimes\Phi^{X}, \tag{42}\] \[\Phi^{H} =\left\langle\left(\mathbf{K}_{\mathbf{U}}^{H}\right)^{\top} \mathbf{K}_{\mathbf{U}}^{H}\right\rangle_{q(\mathbf{H})},\] (43) \[\Phi^{X} =\left(\mathbf{K}_{\mathbf{U}}^{X}\right)^{\top}\mathbf{K}_{ \mathbf{U}}^{X},\] (44) \[\Psi =\left\langle\mathbf{K}_{\mathbf{U}}^{H}\otimes\mathbf{K}_{ \mathbf{U}}^{X}\right\rangle_{q(\mathbf{H})}=\left\langle\mathbf{K}_{\mathbf{ U}}^{H}\right\rangle_{q(\mathbf{H})}\otimes\mathbf{K}_{\mathbf{U}}^{X}=\Psi^{H} \otimes\mathbf{K}_{\mathbf{U}}^{X},\] (45) \[\psi =\text{Tr}\left\langle\mathbf{K}_{\mathbf{H}}\right\rangle_{q( \mathbf{H})}=\text{Tr}\left\langle\mathbf{K}_{\mathbf{H}}^{H}\otimes\mathbf{ K}_{\mathbf{H}}^{X}\right\rangle_{q(\mathbf{H})}. \tag{46}\] Using the property of the Kronecker product decomposition, we obtain a new format of the lower bound (for more detail see Section B.1): \[\mathcal{F}= -\frac{NDR}{2}\log 2\pi\sigma^{2}-\frac{1}{2\sigma^{2}}\mathbf{y}^{ \top}\mathbf{y}\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left(\mathbf{M}^{\top}\left( \mathbf{K}_{\mathbf{U}}^{X}\right)^{-1}\Phi^{X}\left(\mathbf{K}_{\mathbf{U}}^ {X}\right)^{-1}\mathbf{M}\left(\mathbf{K}_{\mathbf{U}}^{H}\right)^{-1}\Phi^{H} \left(\mathbf{K}_{\mathbf{U}}^{H}\right)^{-1}\right)\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left(\left(\mathbf{K}_{\mathbf{ U}}^{H}\right)^{-1}\Phi^{H}\left(\mathbf{K}_{\mathbf{U}}^{H}\right)^{-1} \mathbf{\Sigma}^{H_{\cdot}}\right)\text{Tr}\left(\left(\mathbf{K}_{\mathbf{ U}}^{X}\right)^{-1}\Phi^{X}\left(\mathbf{K}_{\mathbf{U}}^{X}\right)^{-1} \mathbf{\Sigma}^{X_{\cdot}}\right)\] \[+\frac{1}{2\sigma^{2}}\text{Tr}\left(\left(\mathbf{K}_{\mathbf{ U}}^{H}\right)^{-1}\Phi^{H}\right)\text{Tr}\left(\left(\mathbf{K}_{\mathbf{ U}}^{X}\right)^{-1}\Phi^{X}\right). 
\tag{47}\] Similarly, the KL-divergence between \(q(\mathbf{U}_{\cdot})\) and \(p(\mathbf{U}_{\cdot})\) can also benefit from the above decomposition (see Section B.1 for more detail): \[\text{KL}\left\{q\left(\mathbf{U}_{\cdot}\right)\mid p\left( \mathbf{U}_{\cdot}\right)\right\}= \frac{1}{2}\Bigg{(}M_{\mathbf{X}}\log\frac{\left|\mathbf{K}_{ \mathbf{U}}^{H}\right|}{|\mathbf{\Sigma}^{H_{\cdot}}|}+M_{\mathbf{H}}\log \frac{\left|\mathbf{K}_{\mathbf{U}}^{X}\right|}{|\mathbf{\Sigma}^{X_{\cdot} }|}\] \[+\text{Tr}\left(\mathbf{M}^{\top}\left(\mathbf{K}_{\mathbf{U}}^{X} \right)^{-1}\mathbf{M}\left(\mathbf{K}_{\mathbf{U}}^{H}\right)^{-1}\right)\] \[+\text{Tr}\left(\left(\mathbf{K}_{\mathbf{U}}^{H}\right)^{-1} \mathbf{\Sigma}^{H_{\cdot}}\right)\text{Tr}\left(\left(\mathbf{K}_{\mathbf{U}}^ {X}\right)^{-1}\mathbf{\Sigma}^{X_{\cdot}}\right)-M_{\mathbf{H}}M_{\mathbf{X}} \Bigg{)}. \tag{48}\] The computational complexity of \(\mathcal{L}\) is led by the product \(\left(\mathbf{K}_{\mathbf{U}}^{H}\right)^{\top}\mathbf{K}_{\mathbf{U}}^{H}\) and \(\left(\mathbf{K}_{\mathbf{U}}^{X}\right)^{\top}\mathbf{K}_{\mathbf{U}}^{X}\) with a cost of \(\mathcal{O}\left(DM_{\mathbf{H}}^{2}\right)\) and \(\mathcal{O}\left(NRM_{\mathbf{X}}^{2}\right)\), respectively, which is more efficient than the original formulation. Further, we can extend the lower bound with by using mini-batches to improve its scalability. Besides, we can also reduce the computational complexity in \(\mathcal{F}\) and Kullback-Leibler divergence by taking the advantage of the Kronecker product decomposition. ### Datasets with Common Inputs In this section, given the same input datasets, we re-define \(\mathcal{F}\) and Kullback-Leibler divergence by using the Kronecker product decomposition, such that: \[\mathcal{F}= -\frac{NDR}{2}\log 2\pi\sigma^{2}-\frac{1}{2\sigma^{2}}\mathbf{y}^{ \top}\mathbf{y}\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left(\left(\mathbf{K}_{\text{UU} }^{H}\otimes\mathbf{K}_{\text{UU}}^{X}\right)^{-1}\left(\Phi^{H}\otimes\Phi^ {X}\right)\left(\mathbf{K}_{\text{UU}}^{H}\otimes\mathbf{K}_{\text{UU}}^{X} \right)^{-1}\mathbf{M}.\mathbf{M}^{\top}\right)\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left(\left(\mathbf{K}_{\text{UU} }^{H}\otimes\mathbf{K}_{\text{UU}}^{X}\right)^{-1}\left(\Phi^{H}\otimes\Phi^ {X}\right)\left(\mathbf{K}_{\text{UU}}^{H}\otimes\mathbf{K}_{\text{UU}}^{X} \right)^{-1}\left(\mathbf{\Sigma}^{H}.\otimes\mathbf{\Sigma}^{X.}\right)\right)\] \[+\frac{1}{2\sigma^{2}}\mathbf{y}^{\top}\left(\Psi^{H}\otimes \mathbf{K}_{\text{UU}}^{X}\right)\left(\mathbf{K}_{\text{UU}}^{H}\otimes \mathbf{K}_{\text{UU}}^{X}\right)^{-1}\mathbf{M}.-\frac{1}{2\sigma^{2}}\psi\] \[+\frac{1}{2\sigma^{2}}\left(\text{Tr}\left(\left(\mathbf{K}_{ \text{UU}}^{H}\otimes\mathbf{K}_{\text{UU}}^{X}\right)^{-1}\left(\Phi^{H} \otimes\Phi^{X}\right)\right)\right)\] \[= -\frac{NDR}{2}\log 2\pi\sigma^{2}-\frac{1}{2\sigma^{2}}\mathbf{y}^{ \top}\mathbf{y}\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left(\left(\left(\mathbf{K}_{ \text{UU}}^{H}\right)^{-1}\Phi^{H}\left(\mathbf{K}_{\text{UU}}^{H}\right)^{-1 }\right)\otimes\left(\left(\mathbf{K}_{\text{UU}}^{X}\right)^{-1}\Phi^{X}\left( \mathbf{K}_{\text{UU}}^{X}\right)^{-1}\right)\mathbf{M}.\mathbf{M}^{\top}_{ :}\right)\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left(\left(\left(\mathbf{K}_{ \text{UU}}^{H}\right)^{-1}\Phi^{H}\left(\mathbf{K}_{\text{UU}}^{H}\right)^{-1 }\mathbf{\Sigma}^{H}.\right)\otimes\left(\left(\mathbf{K}_{\text{UU}}^{X} \right)^{-1}\Phi^{X}\left(\mathbf{K}_{\text{UU}}^{X}\right)^{-1}\mathbf{ \Sigma}^{X.}\right)\right)\] 
\[+\frac{1}{\sigma^{2}}\mathbf{y}^{\top}\left(\Psi^{H}\left( \mathbf{K}_{\text{UU}}^{H}\right)^{-1}\otimes\mathbf{K}_{\text{UU}}^{X}\left( \mathbf{K}_{\text{UU}}^{X}\right)^{-1}\right)\mathbf{M}.-\frac{1}{2\sigma^{2}}\psi\] \[+\frac{1}{2\sigma^{2}}\left(\text{Tr}\left(\left(\mathbf{K}_{ \text{UU}}^{H}\right)^{-1}\Phi^{H}\otimes\left(\mathbf{K}_{\text{UU}}^{X} \right)^{-1}\Phi^{X}\right)\right)\] \[= -\frac{NDR}{2}\log 2\pi\sigma^{2}-\frac{1}{2\sigma^{2}}\mathbf{y}^{ \top}\mathbf{y}\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left(\mathbf{M}^{\top}\left( \mathbf{K}_{\text{UU}}^{X}\right)^{-1}\Phi^{X}\left(\mathbf{K}_{\text{UU}}^{X} \right)^{-1}\mathbf{M}\left(\mathbf{K}_{\text{UU}}^{H}\right)^{-1}\Phi^{H} \left(\mathbf{K}_{\text{UU}}^{H}\right)^{-1}\right)\] \[-\frac{1}{2\sigma^{2}}\text{Tr}\left(\left(\mathbf{K}_{\text{UU }}^{H}\right)^{-1}\Phi^{H}\left(\mathbf{K}_{\text{UU}}^{H}\right)^{-1}\mathbf{ \Sigma}^{H.}\right)\text{Tr}\left(\left(\mathbf{K}_{\text{UU}}^{X}\right)^{ -1}\Phi^{X}\left(\mathbf{K}_{\text{UU}}^{X}\right)^{-1}\mathbf{\Sigma}^{X.}\right)\] \[+\frac{1}{\sigma^{2}}\mathbf{y}^{\top}\left(\mathbf{K}_{\text{UU }}^{X}\left(\mathbf{K}_{\text{UU}}^{X}\right)^{-1}\mathbf{M}\left(\mathbf{K}_{ \text{UU}}^{H}\right)^{-1}\left(\Psi^{H}\right)^{\top}\right)_{-}\frac{1}{2 \sigma^{2}}\psi\] \[+\frac{1}{2\sigma^{2}}\text{Tr}\left(\left(\mathbf{K}_{\text{UU }}^{H}\right)^{-1}\Phi^{H}\right)\text{Tr}\left(\left(\mathbf{K}_{\text{UU}}^{X} \right)^{-1}\Phi^{X}\right). \tag{49}\] We also assume there is a Kronecker product decomposition of the covariance matrix of \(q(\mathbf{U}_{:})\), \(\mathbf{\Sigma}^{\mathbf{U}_{:}}=\mathbf{\Sigma}^{H_{:}}\otimes\mathbf{\Sigma}^{ X_{:}}\) so the KL-divergence between \(q(\mathbf{U}_{:})\) and \(p(\mathbf{U}_{:})\) can also take advantage of the decomposition: \[\text{KL}\left\{q\left(\mathbf{U}_{:}\right)\mid p\left(\mathbf{U} _{:}\right)\right\}\] \[= \frac{1}{2}\Bigg{(}\log\left|\mathbf{K}_{\mathbf{U}\mathbf{U} }^{H}\otimes\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\left(\mathbf{\Sigma}^{H_{: }}\otimes\mathbf{\Sigma}^{X_{:}}\right)^{-1}\right|\] \[+\text{Tr}\left(\left(\mathbf{K}_{\mathbf{U}\mathbf{U}}^{H} \otimes\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\left(\mathbf{M}. \mathbf{M}_{:}^{\top}+\mathbf{\Sigma}^{H_{:}}\otimes\mathbf{\Sigma}^{X_{:}}- \left(\mathbf{K}_{\mathbf{U}\mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{U} \mathbf{U}}^{X}\right)\right)\right)\Bigg{)}\] \[= \frac{1}{2}\Bigg{(}\log\left|\mathbf{K}_{\mathbf{U}\mathbf{U}}^{ H}\left(\mathbf{\Sigma}^{H_{:}}\right)^{-1}\otimes\mathbf{K}_{\mathbf{U} \mathbf{U}}^{X}\left(\mathbf{\Sigma}^{X_{:}}\right)^{-1}\right|\] \[+\text{Tr}\left(\left(\mathbf{K}_{\mathbf{U}\mathbf{U}}^{H} \otimes\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\left(\mathbf{M}. 
\mathbf{M}_{:}^{\top}+\mathbf{\Sigma}^{H_{:}}\otimes\mathbf{\Sigma}^{X_{:}} \right)\right)-M_{H}M_{X}\Bigg{)}\] \[= \frac{1}{2}\Bigg{(}M_{X}\log\left|\mathbf{K}_{\mathbf{U}\mathbf{U }}^{H}\left(\mathbf{\Sigma}^{H_{:}}\right)^{-1}\right|+M_{H}\log\left| \mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\left(\mathbf{\Sigma}^{X_{:}}\right)^{-1 }\right|\] \[+\text{Tr}\left(\mathbf{M}^{\top}\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{X}\right)^{-1}\mathbf{M}\left(\mathbf{K}_{\mathbf{U}\mathbf{U}}^ {H}\right)^{-1}\right)\] \[+\text{Tr}\left(\left(\mathbf{K}_{\mathbf{U}\mathbf{U}}^{H} \right)^{-1}\mathbf{\Sigma}^{H_{:}}\right)\text{Tr}\left(\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{X}\right)^{-1}\mathbf{\Sigma}^{X_{:}}\right)-M_{H}M_{X} \Bigg{)}\] \[= \frac{1}{2}\left(M_{X}\log\frac{\left|\mathbf{K}_{\mathbf{U} \mathbf{U}}^{H}\right|}{\left|\mathbf{\Sigma}^{H_{:}}\right|}+M_{H}\log\frac{ \left|\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right|}{\left|\mathbf{\Sigma}^{X_{: }}\right|}+\text{Tr}\left(\mathbf{M}^{\top}\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{X}\right)^{-1}\mathbf{M}\left(\mathbf{K}_{\mathbf{U}\mathbf{U}}^ {H}\right)^{-1}\right)\right.\] \[\left.+\text{Tr}\left(\left(\mathbf{K}_{\mathbf{U}\mathbf{U}}^{H} \right)^{-1}\mathbf{\Sigma}^{H_{:}}\right)\text{Tr}\left(\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{X}\right)^{-1}\mathbf{\Sigma}^{X_{:}}\right)-M_{H}M_{X }\right). \tag{50}\] ### Datasets with Different Inputs As for common input datasets, we reformulate \(\Phi_{d}\), \(\Psi_{d}\) as \[\Phi_{d}\] \[=\Phi_{d}^{H}\otimes\Phi_{d}^{X}, \tag{51}\] \[\Phi_{d}^{H}\] (52) \[\Phi_{d}^{X}\] (53) \[\Psi_{d}\] (54) \[\psi_{d} \tag{55}\] We also reduce the computational complexity by using the property of the Kronecker product decomposition (for more detail, see Section B.2): \[\mathcal{F}= \sum_{d=1}^{D}-\frac{N_{d}R}{2}\log 2\pi\sigma_{d}^{2}-\frac{1}{2 \sigma_{d}^{2}}\mathbf{y}_{d}^{\top}\mathbf{y}_{d}\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left(\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{H}\right)^{-1}\Phi_{d}^{H}\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{H}\right)^{-1}\mathbf{\Sigma}^{H_{:}}\right)\text{Tr}\left(\left( \mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\Phi_{d}^{X}\left(\mathbf{K}_ {\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\mathbf{\Sigma}^{X_{:}}\right)\] \[+\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left(\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{H}\right)^{-1}\Phi_{d}^{H}\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{H}\right)^{-1}\left(\Psi_{d}^{H}\right)^{\top}\right)_{:}-\frac{1}{2 \sigma_{d}^{2}}\psi_{d}\] \[+\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left(\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{H}\right)^{-1}\Phi_{d}^{H}\right)\text{Tr}\left(\left( \mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\Phi_{d}^{X}\right). \tag{56}\] The computational complexity of the lower bound is mainly controlled by \(\left(\mathbf{K}_{\mathbf{f}_{d}\mathbf{U}}^{X}\right)^{\top}\times\mathbf{K}_{ \mathbf{f}_{d}\mathbf{U}}^{X}\) with \(\mathcal{O}\left(N_{d}RM_{\mathbf{X}}^{\top}\right)\). We also can extend \(\mathcal{L}\) by applying the mini-bath method to improve scalability of our model. In this section, given the different input datasets, we re-define \(\mathcal{F}\) using the Kronecker product decomposition. 
\[\mathcal{F}= \sum_{d=1}^{D}-\frac{N_{d}R}{2}\log 2\pi\sigma_{d}^{2}-\frac{1}{2 \sigma_{d}^{2}}\mathbf{y}_{d}^{\top}\mathbf{y}_{d}\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left(\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^ {-1}\left(\Phi_{d}^{H}\otimes\Phi_{d}^{X}\right)\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1} \mathbf{M}\mathbf{.M}_{\cdot}^{\top}\right)\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left(\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right) ^{-1}\left(\Phi_{d}^{H}\otimes\Phi_{d}^{X}\right)\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\left( \mathbf{\Sigma}^{H_{\cdot}}\otimes\mathbf{\Sigma}^{X_{\cdot}}\right)\right)\] \[+\frac{1}{\sigma_{d}^{2}}\mathbf{y}_{d}^{\top}\left(\Psi_{d}^{H} \otimes\mathbf{K}_{\mathbf{f}_{d}\mathbf{U}}^{X}\right)\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^ {-1}\mathbf{M}_{\cdot}-\frac{1}{2\sigma_{d}^{2}}\psi_{d}\] \[+\frac{1}{2\sigma_{d}^{2}}\left(\text{Tr}\left(\left(\mathbf{K}_ {\mathbf{U}\mathbf{U}}^{H}\otimes\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right) ^{-1}\left(\Phi_{d}^{H}\otimes\Phi_{d}^{X}\right)\right)\right)\] \[= \sum_{d=1}^{D}-\frac{N_{d}R}{2}\log 2\pi\sigma_{d}^{2}-\frac{1}{2 \sigma_{d}^{2}}\mathbf{y}_{d}^{\top}\mathbf{y}_{d}\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left(\left(\left(\mathbf{K}_ {\mathbf{U}\mathbf{U}}^{H}\right)^{-1}\Phi_{d}^{H}\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{H}\right)^{-1}\right)\otimes\left(\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{X}\right)^{-1}\Phi_{d}^{X}\left(\mathbf{K}_{\mathbf{U}\mathbf{U} }^{X}\right)^{-1}\right)\mathbf{M}\mathbf{.M}_{\cdot}^{\top}\right)\] \[-\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left(\left(\left(\mathbf{K}_ {\mathbf{U}\mathbf{U}}^{H}\right)^{-1}\Phi_{d}^{H}\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{X}\right)^{-1}\mathbf{\Sigma}^{H_{\cdot}}\right)\otimes\left( \left(\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\Phi_{d}^{X}\left( \mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\mathbf{\Sigma}^{X_{\cdot}} \right)\right)\] \[+\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left(\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{H}\right)^{-1}\Phi_{d}^{H}\left(\mathbf{K}_{\mathbf{U} \mathbf{U}}^{H}\right)^{-1}\mathbf{\Sigma}^{H_{\cdot}}\right)\text{Tr}\left( \left(\mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\Phi_{d}^{X}\left( \mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\mathbf{\Sigma}^{X_{\cdot}}\right)\] \[+\frac{1}{2\sigma_{d}^{2}}\text{Tr}\left(\left(\mathbf{K}_{ \mathbf{U}\mathbf{U}}^{H}\right)^{-1}\Phi_{d}^{H}\right)\text{Tr}\left(\left( \mathbf{K}_{\mathbf{U}\mathbf{U}}^{X}\right)^{-1}\Phi_{d}^{X}\right). \tag{57}\] ## Appendix C Additional Experiments Evaluation MetricsTo measure predictive accuracy, two evaluation metrics are considered: the normalised mean square error (NMSE) that informs on the quality of the predictive mean estimation; and the negative log predictive density (NLPD) that takes both predictive mean and predictive variance into account. 
Formally, the two metrics are defined as such: \[\mathrm{NMSE}= \frac{\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-\hat{y}_{i}\right)^{2}} {\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}-\bar{y}_{test}\right)^{2}}, \tag{58}\] \[\mathrm{NLPD}= \frac{1}{2}\frac{1}{N}\sum_{i=1}^{N}\left(\left(\frac{y_{i}-\hat{y }_{i}}{\hat{\sigma}_{i}}\right)^{2}+\log\hat{\sigma}_{i}^{2}+\log 2\pi\right), \tag{59}\] where \(\hat{y}_{i}\) and \(\hat{\sigma}_{i}^{2}\) are respectively the predictive mean and variance for the \(i\)-th test point, and \(y_{i}\) is the actual test value for that instance. The average output value for test data is \(\bar{y}_{test}\). ### Simulation study: Comparing with a fixed coregionalisation matrix To showcase the ability of our kernel to leverage its latent variables, we compared our method to LMC on an experiment involving 10 outputs, with 3 replicas each observed at 10 locations, where we increased the number of training points per replica sequentially and used the remaining as testing points (see Figure 11). This experiment does provide an intuition as to why a model based on \(K_{H}\) can generalise better than a model based on a fixed coregionalisation matrix. An illustration of **HMOGP-LV** predictions using only one data point, depicted in Figure 12, highlights remarkable performances for an almost-entirly missing output. Let us mention that **HMOGP-LV** can naturally handle different input locations across outputs, which is a nice feature in many applications. ### Gene Dataset: Predicting an Entirely Missing Replica To validate the performance of **HMOGP-LV** to handle missing replicas in real-world applications, we now apply the method on the gene dataset, where we assume there is one missing replica in each output. We randomly chose a missing replica per output so that seven replicas in each output are considered as training datasets. As before, we provide in Figure 13 the visual results of **HMOGP-LV** in this experiment where one can observe that the entirely missing replicas are remarkably reconstructed with high accuracy and confidence. From Figure 14, we can see that multi-output Gaussian processes approaches (e.g. **LVMOGP** and **LMC**) also provide excellent results, comparable to Figure 11: Evolution of the prediction performance for HMOGP-LV and LMC while increasing the number of data points. Figure 12: Mean predictive curves associated with their 95% credible intervals obtained from **HMOGP-LV** for all replicas of the testing output. The unique training point is in black, and the testing points are in red. our method. In contrast, the performances of **HGPInd** appear notably poor in this context, as exhibited in Figure 15 where it presumably captured only noise. We also provided the analogous visualisation for **LMC** in Figure 16. Figure 13: Mean predictive curves associated with their 95% credible intervals for all outputs and replicas of the gene dataset. Locations of training points (in black) and testing points (in red) are specific to each output. Gene dataset with one missing replica in each output (**HMOGP-LV** performance) ### Settings for the Motion Capture Dataset Let us provide in Table 1 a summary of the modelling parameter values for all experimental settings. As for the gene dataset, we provide in Figure 17 and Figure 18, the additional visualisation all predicted curves and uncertainty for both **HGPInd** and **LMC** on the MOCAP-9 dataset. 
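For completeness, the two evaluation metrics defined in Eqs. (58)–(59) translate directly into code. The helpers below are our own and assume Gaussian predictive distributions for the NLPD.

```python
import numpy as np

def nmse(y_true, y_pred):
    # Eq. (58): mean squared error normalised by the error of predicting the test mean.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    num = np.mean((y_true - y_pred) ** 2)
    den = np.mean((y_true - y_true.mean()) ** 2)
    return num / den

def nlpd(y_true, y_pred_mean, y_pred_var):
    # Eq. (59): negative log predictive density under Gaussian predictions.
    y_true = np.asarray(y_true)
    mu, var = np.asarray(y_pred_mean), np.asarray(y_pred_var)
    return 0.5 * np.mean((y_true - mu) ** 2 / var + np.log(var) + np.log(2.0 * np.pi))

# Example: perfect mean predictions give NMSE = 0, while NLPD still penalises the variance.
y = np.array([0.1, -0.4, 1.2, 0.7])
print(nmse(y, y), nlpd(y, y, np.full_like(y, 0.05)))
```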
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Dataset & Model & \(M_{\mathbf{H}}\) & \(M_{\mathbf{X}}\) \\ \hline \multirow{4}{*}{MOCAP-8} & HMOGP-LV & 2 & \multirow{4}{*}{6} \\ \cline{2-3} \cline{5-5} & LYMOGP & & \\ \cline{2-3} \cline{5-5} & HGPInd & & \\ \cline{2-3} \cline{5-5} & LMC & & \\ \hline \multirow{4}{*}{MOCAP-9} & HMOGP-LV & 5 & \multirow{4}{*}{5} \\ \cline{2-3} \cline{5-5} & LYMOGP & & \\ \cline{2-3} \cline{5-5} & LYMOGP & & \\ \cline{2-3} \cline{5-5} & HGPInd & & \\ \cline{2-3} \cline{5-5} & LMC & & \\ \hline \end{tabular} \end{table} Table 1: Setting and parameters of different GP models in MOCAP dataset. \(M_{\mathbf{X}}\) indicates the number of inducing points in \(\mathbf{Z}^{X}\). \(M_{\mathbf{H}}\) indicates the number of inducing points in \(\mathbf{Z}^{H}\). Neither **DHGP** or **NN** make use of inducing variables. Figure 14: Gene dataset with missing one replica in each output Figure 15: Mean predictive curves associated with their 95% credible intervals for all outputs and replicas of the gene dataset. Locations of training points (in black) and testing points (in red) are specific to each output. Gene dataset with one missing replica in each output (**HGPInd** performance) Figure 16: Mean predictive curves associated with their 95% credible intervals for all outputs and replicas of the gene dataset. Locations of training points (in black) and testing points (in red) are specific to each output. Gene dataset with one missing replica in each output (**LMC** performance) Figure 17: Mean predictive curves associated with their 95% credible intervals for all outputs and replicas of the MOCAP-9 dataset. Locations of training points (in black) and testing points (in red) are specific to each output. (**HGPInd** performance) Figure 18: Mean predictive curves associated with their 95% credible intervals for all outputs and replicas of the MOCAP-9 dataset. Locations of training points (in black) and testing points (in red) are specific to each output. (**LMC** performance)
2309.03874
Box-based Refinement for Weakly Supervised and Unsupervised Localization Tasks
It has been established that training a box-based detector network can enhance the localization performance of weakly supervised and unsupervised methods. Moreover, we extend this understanding by demonstrating that these detectors can be utilized to improve the original network, paving the way for further advancements. To accomplish this, we train the detectors on top of the network output instead of the image data and apply suitable loss backpropagation. Our findings reveal a significant improvement in phrase grounding for the ``what is where by looking'' task, as well as various methods of unsupervised object discovery. Our code is available at https://github.com/eyalgomel/box-based-refinement.
Eyal Gomel, Tal Shaharabany, Lior Wolf
2023-09-07T17:36:02Z
http://arxiv.org/abs/2309.03874v1
# Box-based Refinement for Weakly Supervised ###### Abstract It has been established that training a box-based detector network can enhance the localization performance of weakly supervised and unsupervised methods. Moreover, we extend this understanding by demonstrating that these detectors can be utilized to improve the original network, paving the way for further advancements. To accomplish this, we train the detectors on top of the network output instead of the image data and apply suitable loss backpropagation. Our findings reveal a significant improvement in phrase grounding for the "what is where by looking" task, as well as various methods of unsupervised object discovery. Our code is available at [https://github.com/eyalgomel/box-based-refinement](https://github.com/eyalgomel/box-based-refinement). ## 1 Introduction In the task of unsupervised object discovery, one uses clustering methods to find a subset of the image in which the patches are highly similar, while being different from patches in other image locations. The similarity is computed using the embedding provided, e.g., by a transformer \(f\) that was trained using a self-supervised loss. The grouping in the embedding space does not guarantee that a single continuous image region will be selected, and often one region out of many is selected, based on some heuristic. It has been repeatedly shown [47, 58, 5] that by training a detection network, such as faster R-CNN[39], one can improve the object discovery metrics. This subsequent detector has two favorable properties over the primary discovery method: it is bounded to a box shape and shares knowledge across the various samples. In this work, we show that such a detector can also be used to improve the underlying self-supervised similarity. This is done by training a detector network \(h\) not on top of the image features, as was done previously, but on the output map of network \(f\). Once the detector network \(h\) is trained, we freeze it and use the same loss that was used to train the detector network to refine the underlying representation of \(f\). At this point, the detector network serves as a way to link a recovered set of detection boxes to an underlying feature map of \(f\). Without it, deriving a loss would be extremely challenging, since the process used for extracting the detection box from \(f\) is typically non-differentiable. The outcome of this process is a refined network \(f^{h}\), obtained by fine-tuning \(f\) using network \(h\). The finetuned network produces a representation that leads to a spatially coherent grouping of regions, as demonstrated in Fig. 1(a-c). A similar process is used for the phrase grounding problem. In this case, given a textual phrase, a network \(g\) is trained to mark a matching image region. Supervision is Figure 1: Examples of refining localization networks. The top row depicts an example of unsupervised object discovery. (a) the input image (b) the normalized cut eigenvector using the original DINO [9] network \(f\), as extracted with the TokenCut[58] method. (c) the same eigenvector using the refined DINO network \(f^{h}\) our method produces. The bottom row contains phrase grounding results (d) the original input corresponding to the phrase “two football teams”, (e) the localization map using the image-text network \(g\) of [42], and (f) the localization map using the refined \(g^{h}\). performed at the image level, without localization information, a process known as weakly supervised training. 
In this case, the same loss is used to train a network \(h\) on a set of extracted regions, and then to refine \(g\). Our method exhibits remarkable versatility, as demonstrated through extensive testing on multiple benchmarks, two phrase grounding tasks, and various unsupervised object discovery methods. In all cases, our method consistently achieves significant improvements across all metrics, surpassing the performance of state-of-the-art methods. The move approach introduced trains a detector on the network output rather than the image data. This strategy, distinct from previous work, allows us to refine the primary network independently and further enhance its performance. ## 2 Related work Our method is tested on two localization tasks that are not fully supervised: unsupervised object discovery (detection) and phrase grounding. Numerous studies have been introduced in the realm of unsupervised object discovery, alongside akin tasks involving detection and segmentation, using different techniques and methods to discover and localize objects in images (and videos) without requiring explicit object annotations. In particular, deep learning-based approaches have been combined with clustering-based methods [64, 49, 45, 57], generative models [56, 4, 33], and object-level grouping [46, 3]. Two of the methods we build upon in our experiments, LOST [47] and TokenCUT [58], employ clustering methods on top of the DINO network [9], while MOVE [5] uses a segmentation head on top of DINO representation. In the phrase grounding task, text phrases are associated with specific image locations [62, 26]. When relying on weakly supervised learning, the locations are not given during training, only during test time [1]. A common way to link the phrase to the image is to embed both the text and image patches in a shared embedding space [14, 41, 27]. Recent contributions employ CLIP [38] for linking text with image locations since it has powerful text and image encoders and relies on weakly supervised training [31, 42]. It can, therefore, be used both to represent the text and to obtain a training signal for the phrase grounding network. We are not aware of other work in which one network \(f\) trains another network \(h\), which in turn is used to refine the first network. There are contributions in which two networks are trained symbiotically at the same time. For example, for the task of semi-supervised semantic segmentation, two differently initialized networks were trained jointly, with each network creating pseudo-labels for the other [13]. The DINO unsupervised representation learning method [9] employs a self-distillation process in which the teacher is a combination of frozen student networks. The role of \(h\) in propagating a detection-based loss back to \(f\) is reminiscent of other cases in which a network is used for the purpose of supervising another, e.g., GANs [23]. In other cases, an auxiliary network can be trained in a supervised way to provide a differentiable approximation of an indifferentiable black box [35]. ## 3 The Phrase Grounding Method While we apply the same method for multiple applications, each application relies on a different configuration of baseline networks. Therefore, to minimize confusion, we first focus on phrase grounding. Applying our method to unsupervised object discovery is explored in Sec. 4. In phrase grounding, we refine a pre-trained localization model (\(g\)) using a detection model (\(h\)) that we add. 
\(h\) is trained based on \(g\) and then the predictions of \(h\), now serving as a teacher, are used to finetune network \(g\), which becomes the student. This cyclic process is illustrated in Fig. 2 and serves to make \(g\) more spatially coherent, see Fig. 1(d-f). The phrase grounding network \(g\) is based on an encoder-decoder architecture adapted to support text-based conditioning [42]. The input signals are (i) a text \(t\) and (ii) an RGB image \(I\in R^{3\times W\times H}\). It outputs a localization heatmap \(M\) that identifies image regions in \(I\) that correspond to the part of the scene described by \(t\). \[M=g(I,Z_{t}(t))\,, \tag{1}\] where \(M\in R^{W\times H}\) contains values between 0 and 1, and \(Z_{t}(t)\) is a text embedding of the input text \(t\), given by the text encoder of CLIP [37]. Our refinement algorithm uses \(g\) with the pre-trained weights published by [43]. Our method trains a model \(h\) to generate a set of bounding boxes \(\bar{B}\) that match the localization map \(M\). \[\bar{B}=h(M) \tag{2}\] Thus \(h\) provides a feedforward way to generate bounding boxes from \(M\). The alternative provided, for example, by [43] is a multi-step process in which \(M\) is first converted to a binary mask by zeroing out any pixel value lower than half the mask's max value [36, 17, 16]. Next, contours are extracted from the binary mask using the method of [51]. For each detected contour, a bounding box is extracted, whose score is given by taking the mean value of \(M\) for that bounding box. Finally, a non-maximal suppression is applied over the boxes with an overlap of at least 0.05 IOU, filtering out low-score boxes (0.5 of the maximal score). \(h\) replaces this process with a single feed-forward pass. However, its main goal is to provide a training signal for refining \(g\). This is done by considering the output of \(h\) as foreground masks and considering the values of \(g\)'s output inside and outside these masks. ### Training \(h\) The network \(h\) is trained to predict a fixed number \(k\) of bounding boxes \(\bar{B}\). Each box is represented as a vector \(b_{i}\in\mathbb{R}^{6}\) that contains the center coordinates of the box, its width, and its height. In addition, the network \(h\) contains a logit value, which denotes whether there is an expected object within each box. Training is performed maintaining the semi-supervised nature of the phrase grounding method. The bounding boxes used for training \(h\) are extracted using network \(g\) and the method of Suzuki et al[51], as explained above. We call the set of resulting bounding boxes \(B\). Following Carion et al. [8], we train \(h\) using a loss \(L_{h}\) that has three terms: (1) a classification loss \(L_{\text{cls}}\), (2) an \(l1\) loss \(L_{\text{box}}\), and (3) the GIoU[40] loss \(L_{\text{giou}}\). If the number of objects \(k\) returned by \(h\) is smaller than the number of target boxes \(|B|\), the \(k\) boxes with the highest confidence are used. In the opposite case, \(B\) is padded with zero-coordinate vectors with a "no object" label. For computing the loss, one assumes a one-to-one correspondence between the ground truth objects and the detected boxes. This matching is obtained by minimizing \(L_{h}\) over all possible permutations, using the Hungarian algorithm [30] for minimal cost bipartite matching. Denote as \(B^{\prime}=[b^{\prime}_{0},b^{\prime}_{1},...,b^{\prime}_{k-1}]\) the matrix that holds the set of boxes \(B\) ordered optimally. 
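To make the assignment step concrete, the following is a minimal, illustrative PyTorch sketch of the Hungarian matching and the resulting three-term loss in the spirit of [8]. The tensor names, the corner-format box convention, and the sigmoid objectness are simplifications chosen for illustration and do not correspond to the exact implementation; padding with "no object" targets is also omitted. The individual loss terms are defined formally below.

```
# Illustrative sketch of DETR-style matching and the three-term loss described above.
# Names and box conventions are assumptions for illustration, not the released code.
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment
from torchvision.ops import generalized_box_iou

def matching_loss(pred_boxes, pred_logits, target_boxes,
                  l_cls=2.0, l_box=5.0, l_giou=2.0):
    """pred_boxes: (k, 4) in (x1, y1, x2, y2); pred_logits: (k,) objectness logits;
    target_boxes: (m, 4) pseudo-boxes extracted from the localization map."""
    k, m = pred_boxes.shape[0], target_boxes.shape[0]
    prob = pred_logits.sigmoid()                                   # objectness probabilities
    # Pairwise costs between every predicted box and every target box.
    cost_l1 = torch.cdist(pred_boxes, target_boxes, p=1)           # (k, m)
    cost_giou = -generalized_box_iou(pred_boxes, target_boxes)     # (k, m)
    cost_cls = -prob.unsqueeze(1).expand(k, m)                     # favour confident predictions
    cost = l_box * cost_l1 + l_giou * cost_giou + l_cls * cost_cls
    # Optimal one-to-one assignment via the Hungarian algorithm.
    row, col = linear_sum_assignment(cost.detach().cpu().numpy())
    row, col = torch.as_tensor(row), torch.as_tensor(col)
    matched_pred, matched_tgt = pred_boxes[row], target_boxes[col]
    # Three loss terms on the matched pairs ("no object" padding omitted for brevity).
    loss_cls = -torch.log(prob[row] + 1e-8).sum() / max(m, 1)
    loss_box = F.l1_loss(matched_pred, matched_tgt, reduction="sum") / max(m, 1)
    loss_giou = (1 - torch.diag(generalized_box_iou(matched_pred, matched_tgt))).sum() / max(m, 1)
    return l_cls * loss_cls + l_box * loss_box + l_giou * loss_giou
```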
The classification loss \(L_{cls}\) is a Negative log-likelihood loss \[L_{\text{cls}}=\sum_{i=0}^{k-1}-\log\bar{p}_{i} \tag{3}\] where \(\bar{p}_{i}\) is the predicted box logit, representing the probability of the existence of an object. \(L_{box}\) is applied directly to the coordinates of the centers of the bounding boxes, their height and width: \[L_{\text{box}}=\sum_{i=0}^{k-1}\|b^{\prime}_{i}-\bar{b_{i}}\|_{1} \tag{4}\] While the loss \(L_{box}\) is affected by the size of the box, the 2nd loss, \(L_{giou}\), is a scale-invariant loss given by \[L_{\text{giou}}(B^{\prime},\bar{B})=\sum_{i=0}^{k-1}1-\left(\frac{\left|\bar{ b_{i}}\cap b^{\prime}_{i}\right|}{\left|\bar{b_{i}}\cup b^{\prime}_{i}\right|}- \frac{\left|c_{i}\setminus(\bar{b_{i}}\cup b^{\prime}_{i})\right|}{\left|C_{i }\right|}\right) \tag{5}\] where \(c_{i}\) is the smallest box containing \(b^{\prime}_{i}\) and \(\bar{b_{i}}\). All losses are normalized by the number of boxes. The final loss is a weighted sum of all three losses: \[L_{h}(B^{\prime},\bar{B})=\lambda_{1}*L_{\text{cls}}(B^{\prime}, \bar{B})+\lambda_{2}*L_{\text{box}}(B^{\prime},\bar{B})+\\ \lambda_{3}*L_{\text{giou}}(B^{\prime},\bar{B}) \tag{6}\] where \(\lambda_{1}=2,\lambda_{2}=5,\lambda_{3}=2\). These weights are similar to those used in previous work, with an extra emphasis on \(\lambda_{1}\) (using a value of 2 instead of 1), but there was no attempt to optimize them beyond inspecting a few training images. ### Refining \(g\) For finetuning \(g\), we use the multiple loss terms, including the same loss terms that are used for training \(h\), with a modification. Here, instead of just calculating the loss between two sets of boxes, we also compute the union box of ground truth boxes: \(BU=Union(B)\). With probability \(0.5\) we use \(BU\) instead of \(B\) for calculating the loss (in this case, the matching is done with a single box only) \[L_{h_{BU}}=\begin{cases}L_{h}(BU,\bar{B}),&\text{if }p\geq 0.5\\ L_{h}(B,\bar{B}),&\text{otherwise}\end{cases},p\sim\text{Uniform}[0,1] \tag{7}\] In addition to the bounding box loss, we use losses for the localization maps used by [43] to train \(g\). This prevents the fine-tuned model from following \(h\) "blindly", without considering the underlying data. The relevancy map loss, uses a CLIP-based relevancy [11] to provide rough estimation for the localization map \[L_{\text{map}}(I,H)=\|H-g^{h}(I,Z^{T})\|^{2}, \tag{8}\] Figure 2: An illustration of our method. The phrased grounding network \(f\) is given the input image \(I\) and a text phrase \(t\) and produces a heatmap \(M\). A heuristic (blue line) then produces a set of bounding boxes \(B\) from this map that are used to train a detection network \(h\), which outputs a set of boxes \(\bar{B}\). The loss that is used is applied after applying the optimal permutation. where \(H\) is the relevancy map and \(g^{h}\) is the refined network \(g\). The foreground loss \(L_{fore}(I,T)\) is given by \[L_{\text{fore}}(I,t)=-CLIP(g^{h}(I,Z^{T})\odot I,t), \tag{9}\] where \(\odot\) is the Hadamard product. The loss maximizes the similarity given by CLIP between the mask's foreground region and the input text \(t\). 
On the other hand, the background loss \(L_{back}(I,t)\) minimizes the similarity CLIP distance between the background and text \(t\) \[L_{back}(I,t)=CLIP((1-g^{h}(I,Z^{T}))\odot I,t), \tag{10}\] The overall loss is given by: \[L_{g}=L_{h_{BU}}+\lambda_{4}*L_{reg}(I,g^{h})+\lambda_{5}*L_{ \text{map}}(I,H)+\] \[\lambda_{6}*L_{\text{back}}(I,T)+\lambda_{7}*L_{\text{fore}}(I,T)\] where \(\lambda_{4}=1,\lambda_{5}=64,\lambda_{6}=2,\lambda_{7}=1\). These hyperparameters reflect the values assigned by previous work, multiplied by 4 in order to approximately balance the loss that arises from \(h\) with the other loss terms. Architecture\(h\) is a VGG16 [48], pre-trained on the ImageNet[18] dataset. In order to apply it to the single channel heatmap \(M\in R^{\times W\times H}\), this input is repeated three times across the channel dimension. The last layer of the classifier is replaced by a linear layer of dimensions \(4096\times(6k)\), \(k\) being the number of boxes predicted by \(h\). ## 4 Unsupervised object discovery For the task of unsupervised object discovery, a vision transformer \(f\) is pretrained in a self-supervised manner, using DINO [9]. It is then used to extract features \(F\) from an input image \(I\in R^{3\times W\times H}\) \[F=\bar{f}(I) \tag{11}\] where \(\bar{f}\) denotes the latent variables from the transformer \(f\). \(F\in R^{d\times N}\), where \(d\) is the features dimension and \(N\) denotes the number of patches for \(f\). For each patch \(p\), we denoted by \(f_{p}\in R^{d}\) the associated feature vector. Bounding boxes based on these features are extracted using unsupervised techniques, such as LOST [47], TokenCut [58] or MOVE [5]. **LOST** builds a patch similarities graph \(\mathcal{G}\), with a binary symmetric adjacency matrix \(A\!=\!(a_{pq})_{1\leq p,q\leq N}\in\{0,1\}^{N\times N}\) where \[a_{pq}=\left\{\begin{array}{ll}1&\text{if }f_{p}^{\top}f_{q}\geq 0,\\ 0&\text{otherwise}.\end{array}\right. \tag{12}\] An initial seed \(p*\) is selected as the patch with the smallest number of connections to other patches. \[p^{*}=\operatorname*{arg\,min}_{p\in\{1,\dots,N\}}d_{p}\ \ \ \text{where}\ \ d_{p}=\sum_{q=1}^{N}a_{pq}. \tag{13}\] This is based on the assumptions that connectivity implies belonging to the same object, since patch embeddings are similar for the same object, and that each object occupies less area than the background. Denote the list of \(a\) patches with the lowest degree \(d_{p}\) as \(\mathcal{D}_{a}\). LOST then considers the subset of \(\mathcal{D}_{a}\) that is positively correlated, in the embedding space, with \(p^{*}\) \[\mathcal{S}=\{q\in\mathcal{D}_{a}|f_{q}^{\top}f_{p^{*}}\geq 0\} \tag{14}\] This set is then expanded obtaining \[\mathcal{S}^{+}=\{q|\sum_{p\in\mathcal{S}}f_{q}^{\top}f_{p}\geq 0\} \tag{15}\] We note that in the image itself, the patches of \(\mathcal{S}^{+}\) can be part of multiple separate regions. The method selects the connected component (4-connectivity in the image space) in \(\mathcal{S}^{+}\) that contains the seed \(p^{*}\) as its single discovered object. **TokenCut[58]** employs a slightly different adjacency matrix, \(A\), which employs the cosine similarity score between pairs of feature vectors. \[Ap,q=\begin{cases}1,&\text{if }\frac{f_{p}^{\top}f_{q}}{\|f_{p}\|_{2}\|f_{q} \|_{2}}\geq\tau\\ \epsilon,&\text{else}\end{cases}, \tag{16}\] where \(\tau=0.2\) and \(\epsilon=1e-5\). The normalized cut method [44] is applied to the graph to achieve object discovery. 
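As a concrete illustration of the two graph constructions above, the following minimal numpy sketch implements the LOST seed selection and expansion (Eqs. 12-15) and the TokenCut affinity matrix (Eq. 16). The feature array is assumed to hold DINO patch embeddings; the value of the parameter `a`, the variable names, and the omission of the connected-component step are illustrative simplifications. The normalized-cut grouping applied to this affinity is described next.

```
# Illustrative numpy sketch of the LOST seed/expansion steps and the TokenCut affinity.
# `feats` (N patches x d dims) is an assumed DINO feature array; not the released code.
import numpy as np

def lost_seed_and_expansion(feats, a=100):
    sims = feats @ feats.T                       # f_p^T f_q for all patch pairs
    adj = (sims >= 0).astype(int)                # binary adjacency a_pq (Eq. 12)
    degrees = adj.sum(axis=1)
    seed = int(np.argmin(degrees))               # patch with fewest connections (Eq. 13)
    low_deg = np.argsort(degrees)[:a]            # the a lowest-degree patches (value of a illustrative)
    S = [q for q in low_deg if sims[q, seed] >= 0]        # Eq. 14
    S_plus = np.where(sims[:, S].sum(axis=1) >= 0)[0]     # Eq. 15
    return seed, S_plus                          # connected-component selection omitted

def tokencut_affinity(feats, tau=0.2, eps=1e-5):
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    cos = normed @ normed.T                      # cosine similarity between patches
    return np.where(cos >= tau, 1.0, eps)        # Eq. 16
```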
This method clusters all patches into two groups, based on the 2nd smallest eigenvector of the normalized adjacency matrix, and selects the group with the maximal absolute value in this eigenvector. The bounding box of the patches in this group is returned. **MOVE[5]**, in contradistinction to the preceding two methodologies, employs a segmentation network that is trained atop the latent transformer features denoted as \(F\). The resulting output of this network takes the form of a segmentation map denoted as \(M\in R^{W\times H}\). Subsequently, this segmentation map undergoes binarization with a threshold set at 0.5, followed by the detection of connected components [7]. The most sizable bounding box is then selected to correspond to the most extensive connected component. ### Training \(h\) and refining \(f\) The training process of detector \(h\) follows the details described in Sec. 3.1, with a few minor changes. There is a single ground-truth bounding box \(B\), extracted from an image \(I\) by model \(f\) using the unsupervised techniques described above. Using the same loss term \(L_{h}\), \(h\) is optimized to minimize \(L_{h}(B,\bar{B})\), where \(\bar{B}\) are the \(k\) predicted boxes. To maintain the unsupervised nature of the task, \(h\) is initialized with weights from the self-supervised method DINO[9], using a ResNet-50[25] backbone. In the phrase grounding case and MOVE [5], the input of \(h\) is the map \(M\), and the analogue for non-trainable unsupervised object discovery is the map \(F\) where such map \(M\) is missing. For refining the DINO-trained transformer model \(f\), we use the same loss term \(L_{h}\) as is used in phrase grounding and add loss terms to prevent it from diverging too far. While in phrase grounding we used the loss terms that were used to train the phrase grounding network, here, for run-time considerations, we explicitly keep the transformer \(f\) in the vicinity of the DINO-pretrained network. The loss term is defined as the distance between the output of \(f\) and that of the refined model \(f^{h}\) \[L_{f}(I)=\|f(I)-f^{h}(I)\|^{2}, \tag{17}\] Both methods [47, 58] are improved by training a Class Agnostic Detector (CAD) on the extracted bounding boxes. Faster R-CNN [39] is used for CAD, with the _R50-C4_ model of Detectron2 [60] based on a ResNet-50[25] backbone. This backbone is pre-trained with DINO self-supervision. Following this process, we train an identical CAD using the refined model \(f^{h}\). Note that CAD and our method are complementary. While both train with the same pseudo-labels, CAD is trained on the original image and cannot backpropagate a loss to the underlying network \(f\). ## 5 Experiments We present our results for three tasks: weakly supervised phrase grounding (WSPG), "what is were by looking" (WWbL), and unsupervised single object discovery. The first two use the same phrase grounding network \(g\), and the third one is based on one of two techniques, which both utilize the same pre-trained transformer \(f\). **Datasets** For WSPG and WWbL, the network \(g\) is trained on either MSCOCO 2014 [32] or the Visual Genome (VG) dataset [29]. Evaluation is carried out on the test splits of Flickr30k[34], ReferIt[12, 24] and VG [29]. VG contains 77,398, 5,000, and 5000 training, validation, and test images, respectively. Each image is linked to natural-language text and annotated bounding boxes. During the training of MSCOCO2014 we use the training split defined by Akbari et al. [1]. 
It consists of 82,783 training samples and 40,504 validation samples, where each sam \begin{table} \begin{tabular}{l l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{3}{c}{VG trained} & \multicolumn{3}{c}{MS-COCO trained} \\ \cline{3-8} & & VG & Flickr & ReferIt & VG & Flickr & ReferIt \\ \hline Baseline & Random & 11.15 27.24 & 24.30 & 11.15 27.24 & 24.30 \\ Baseline & Center & 20.55 47.40 & 30.30 & 20.55 47.40 & 30.30 \\ GAE [10] & CLIP & 54.72 72.47 & 56.76 & 54.72 72.47 & 56.76 \\ \hline FCVC [22] & VGG & - & - & - & 14.03 29.03 & 33.52 \\ VPLS [61] & VGG & - & - & - & 24.40 & - \\ DT [62] & Inception-2 & 19.31 42.40 & 31.97 & - & - & - \\ SSS [26] & VGG & 30.03 49.10 & 39.98 & - & - & - \\ MG [1] & BiLSTM+VGG & 50.18 57.91 & 62.76 & 46.99 53.29 & 47.89 \\ MG [1] & ELMo+VGG & 48.76 60.08 & 60.01 & 47.94 61.66 & 47.52 \\ GbS [2] & VGG & 53.40 70.48 & 59.44 & 52.00 72.60 & 56.10 \\ WWbL [43] & CLIP+VGG & 62.31 75.63 & 65.95 & 59.09 75.43 & 61.03 \\ Ours & CLIP+VGG & **63.51 78.32** & **67.33** & **60.05** & **77.19** & **63.48** \\ \hline \hline \end{tabular} \end{table} Table 1: Phrase grounding results: “pointing game” accuracy on Visual Genome (VG), Flickr30K, and ReferIt. The methods in the first three rows do not train. Figure 3: Sample phrase-grounding results. where (a) the phrase (b) the input image (c) results (black) for network \(g\)[43] compared to ground-truth box (green) (d) same for refined network \(g^{h}\). ple contains an image and five captions describing the image. ReferIt[12, 24] consists of 130k expressions referring to 99,535 objects in 20k images. For evaluation, we use the test split of Akbari et al.[1]. The dataset Flickr30k Entities [34] consists of 224K phrases that depict objects present in more than 31K images, with each image having five corresponding captions. The evaluation is carried out on a the test split of Akbari et al.[1]. For unsupervised single object discovery, the network \(g\) is trained on either MSCOCO \begin{table} \begin{tabular}{l l c c c} \hline \hline \multirow{2}{*}{Training} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{Test Bbox Accuracy} \\ \cline{3-5} & & VG & Flickr & ReferIt \\ \hline \multirow{3}{*}{MS-COCO} & MG [1] & 15.77 & 27.06 & 15.15 \\ & WWbL [43] & 27.22 & 35.75 & 30.08 \\ & Ours & **28.77(2.1)** & **47.26(5.01)** & **30.63(2.08)** \\ \hline \multirow{3}{*}{VG} & MG [1] & 14.45 & 27.78 & 18.85 \\ & WWbL [43] & 27.26 & 36.35 & 32.25 \\ \cline{1-1} & Ours & **31.02(3.25)** & **42.40(4.491)** & **35.56(3.456)** \\ \hline \hline \end{tabular} \end{table} Table 2: Phrase grounding results: bounding box accuracy on Visual Genome (VG), Flickr30K, and ReferIt. The outcomes obtained from network \(h\) are presented within brackets. Figure 4: Single object discovery results. (a) the input image, (b) the inverse degree of the LOST [47] graph obtained over \(f\) (published model); the red bounding box is directly from LOST, the white is the prediction of CAD trained on top of it (c) same with our refined model \(f^{h}\) and LOST (d) same as b, but using \(f\) together with TokenCut[58], (using the published weights; the CAD model was not released and is not shown) (e) the results of \(f^{h}\) and TokenCut. 
\begin{table} \begin{tabular}{l l c c c c c} \hline \hline \multirow{2}{*}{Training} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{Test point Accuracy} & \multicolumn{3}{c}{Test Bbox Accuracy} \\ \cline{3-6} & & VG & Flickr & ReferIt & VG & Flickr & ReferIt \\ \hline \multirow{3}{*}{MS-COCO} & MG [1] & 32.91 & 50.154 & 36.34 & 11.48 & 23.75 & 13.31 \\ & WWbL [43] & 44.20 & 61.38 & 43.77 & 17.76 & 32.44 & 21.76 \\ & Ours & **46.29** & **63.43** & **44.59** & **22.32** & **38.00** & **22.91** \\ \hline \multirow{3}{*}{VG} & MG [1] & 32.15 & 49.48 & 38.06 & 12.23 & 24.79 & 16.43 \\ & WWbL [43] & 43.91 & 58.59 & 44.89 & 17.77 & 31.46 & 18.89 \\ \cline{1-1} & Ours & **46.77** & **61.75** & **44.9** & **22.40** & **35.23** & **23.44** \\ \hline \hline \end{tabular} \end{table} Table 3: WWbL results: bounding box accuracy on Visual Genome (VG), Flickr30K, and ReferIt. 20K, PASCAL-VOC07[20] or PASCAL-VOC12[21]. MSCOCO20K has 19,817 images chosen at random from the MSCOCO 2014 dataset[32]. VOC07 and VOC12 contain 5,011 and 11,540 images respectively, with each image belonging to one of 20 categories. For evaluation, we follow common practice and evaluate the train/val datasets. This evaluation is possible since the task is fully unsupervised. **Implementation details** For phrase grounding tasks, the proposed network \(h\) backbone is VGG16 [48], pre-trained on the ImageNet[18] dataset. For the object discovery task, we use \(h\) with ResNet-50[25] backbone, pre-trained with DINO[9] self-supervision on the ImageNet[18] dataset. For both tasks, \(h\) predicts \(k=10\) bounding boxes. Refining takes place using an Adam optimizer with a batch size of 36. The learning rate of \(h\) is 1e-5, while the learning rates of \(g^{h}\) and \(f^{h}\) are 1e-7 and 5e-7, respectively. The optimizer weight decay regularization is 1e-4. For the first 3000 iterations, network \(h\) is optimized, where \(g^{h}/f^{h}\) is fixed. Then, for the rest of the training (10k iterations), \(h\) is fixed while \(g^{h}/f^{h}\) is optimized. **Metrics** Phrase grounding tasks are evaluated with respect to the accuracy of the pointing game[62], which is calculated based on the output map by finding the location of the maximum value, given a query, and checking whether this point falls within the object's region. The "BBox accuracy" metric extracts a bounding box, given an output mask, and compares it with the ground-truth annotations. A prediction is considered accurate if IOU between the boxes is larger than 0.5. To extract the bounding box from an output map \(M\), the procedure of Shaharabany et al. [43] is employed. First, \(M\) is binarized using a threshold of 0.5, then contours are extracted from \(M\) using the method of Suzuki et al. [51]. Based on the contours, a set of bounding boxes is derived by taking the smallest box containing each contour. These bounding boxes are scored by summing the values of M within the contour while ignoring boxes with low scores. Next, a non-maximal suppression process is applied and the minimal bounding box that contains the remaining bounding boxes is chosen. The WWbL task is an open-world localization task, with only an image as input (no text input). Using this image, the goal is to both localize and describe all of the elements in the scene. To solve this task, a multi-stage algorithm was introduced by Shaharabany et al. [43], starting with obtaining object proposals using selective search [52]. Next, BLIP is used to caption these regions. 
Captions that are similar to each other are removed using the Community Detection (Cd) clustering method [6]. Using the learned phrase grounding model \(g\), heatmaps are generated according to the extracted captions. Similarly to the phrase grounding task, the WWbL task is evaluated using the same two metrics: pointing game accuracy and bounding box accuracy). For each ground-truth pair of bounding box and caption, the closest caption in CLIP space is selected from the list of automatically generated captions. The associated output map of the phrase \begin{table} \begin{tabular}{l c c c} \hline \hline Ablation & VOC07 & VOC12 & MSCOCO20K \\ \hline w/o reg. & 61.72 & 64.45 & 50.13 \\ k=1 & **62.54** & 64.67 & **52.00** \\ k=5 & 62.16 & 64.45 & 51.70 \\ k=10 & 61.92 & **66.16** & 51.98 \\ k=15 & 61.44 & 64.46 & 50.60 \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation study for the object discovery task. \begin{table} \begin{tabular}{l c c} \hline \hline Model & VOC07 & VOC12 & MS-COCO \\ \hline Selective Search [52] & 18.8 & 20.9 & 16.0 \\ EdgeBoxes [65] & 31.1 & 31.6 & 28.8 \\ Kim et al. [28] & 43.9 & 46.4 & 35.1 \\ Zhang et al. [63] & 46.2 & 50.5 & 34.8 \\ DDT+ [59] & 50.2 & 53.1 & 38.2 \\ rOSD [54] & 54.5 & 55.3 & 48.5 \\ LOD [55] & 53.6 & 55.1 & 48.5 \\ DINO-seg [9] & 45.8 & 46.2 & 42.1 \\ LOST [47] & 61.9 & 64.0 & 50.7 \\ Ours using LOST & 62.0\({}_{(2.1)}\) & 66.2\({}_{(3.5)}\) & 52.0\({}_{(3.7)}\) \\ TokenCut [58] & 68.8 & 72.1 & 58.8 \\ Ours using TokenCut & 69.0\({}_{(4.6)}\) & 72.4\({}_{(5.1)}\) & 60.7\({}_{(3.5)}\) \\ MOVE [5] & 76.0 & 78.8 & 66.6 \\ Ours using MOVE & **77.5\({}_{(4.2)}\)** & **79.6\({}_{(5.4)}\)** & **67.2\({}_{(48.3)}\)** \\ \hline LOD + CAD [47] & 56.3 & 61.6 & 52.7 \\ rOSD + CAD [47] & 58.3 & 62.3 & 53.0 \\ LOST + CAD [47] & 65.7 & 70.4 & 57.5 \\ Ours using LOST + CAD & 66.1 & 71.0 & 58.7 \\ TokenCut [58] +CAD & 71.4 & 75.3 & 62.6 \\ Ours using TokenCut + CAD & 71.9 & 75.6 & 64.4 \\ MOVE [5] +CAD & 77.1 & 80.3 & 69.1 \\ Ours using MOVE [5] +CAD & **78.7** & **81.3** & **69.3** \\ \hline \hline \end{tabular} \end{table} Table 4: Object Discovery results: CorLoc score on MSCOCO20K, VOC07 and VOC12. Network \(h\) was trained using pseudo labels from either LOST [47], TokenCut [58] or MOVE [5]. +CAD indicates training a second-phase class-agnostic detector with model pseudo-boxes as labels. Network \(h\) results are enclosed in brackets. \begin{table} \begin{tabular}{l c c c c} \hline \hline Ablation & \multicolumn{2}{c}{Test point Accuracy} & \multicolumn{2}{c}{Test Bbox Accuracy} \\ \cline{2-5} & VG & Flickr & Referlt & VG & Flickr & Referlt \\ \hline w/o Box Union & 57.26 & 72.54 & 62.55 & 25.11 & 28.74 & 24.63 \\ w/o reg. & 53.49 & 68.47 & 61.92 & 26.45 & 42.79 & 29.74 \\ k=1 & 56.84 & 70.74 & 62.15 & 27.75 & 32.35 & 24.73 \\ Ours & **60.05** & **77.19** & **63.48** & **28.77** & **47.26** & **30.63** \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study for the phrase grounding task. See text for details. All models were trained on MS-COCO14[32] dataset grounding method is then compared to the ground truth bounding box using the pointing accuracy metric. In addition, bounding boxes are extracted for the output heatmaps \(M\), as described above. For single object discovery we use the Correct Localization (CorLoc) metric as used by [19, 54, 55, 53, 59, 15, 50]. A predicted bounding box is considered as correct if the IOU score between the predicted bounding box and one of the ground truth bounding boxes is above 0.5. 
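The box-level criteria above all reduce to a standard intersection-over-union computation; a minimal sketch of the IoU and the CorLoc criterion (box format and names purely illustrative) is:

```
# Illustrative sketch of IoU and the CorLoc criterion: a predicted box counts as
# correct if its IoU with any ground-truth box exceeds 0.5. Boxes are (x1, y1, x2, y2).
def iou(box_a, box_b):
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def corloc(predicted_boxes, gt_boxes_per_image, thr=0.5):
    hits = [any(iou(p, g) > thr for g in gts)
            for p, gts in zip(predicted_boxes, gt_boxes_per_image)]
    return sum(hits) / len(hits)
```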
We evaluate our model on the same datasets as [58, 47, 5]. **Results** Tab. 1 lists the results for Flickr30k, ReferIt, and VG for the weakly-supervised phrase grounding task. Evidently, our method is superior to all baselines, whether training takes place over VG or MS-COCO. In addition to the pointing game results, Tab. 2 presents bounding box accuracy for the phrase grounding task (this data is not available for most baselines). Here, too, our method outperforms the baseline methods by a wide margin. Phrase grounding samples are provided in Fig. 3, comparing the results after the refinement process (those with \(g^{h}\)) to the results of the baseline \(g\). As can be seen, our method encourages the localization maps to match the typical shape of image objects. As a result, the predicted bounding box after refining the model is often closer to the actual objects in the image. The WWbL results are listed in Tab. 3, which depicts the performance obtained by our \(g^{h}\), WWbL [43], and a baseline that employs the phrase grounding method MG [1] as part of the WWbL captioning procedure described above. Out of the three models, our refined model \(g^{h}\) achieves the best scores, for all benchmarks and both metrics. Tab. 4 summarize the results on the VOC07, VOC12, and MS-COCO20K datasets for the single object discovery task. When utilizing the MOVE [5] model, our method achieves superior performance compared to all other models across all datasets. This superiority holds true when comparing all methods without CAD and when comparing all methods with CAD. Furthermore, our method consistently outperforms other approaches when refining the DINO model f using both TokenCut [58] boxes and LOST [47] boxes on all datasets. Fig. 4 depicts typical samples of our results for the unsupervised object discovery task, when combining our method with either LOST [47] or TokenCut [58]. Evidently, our refining process improves object and background separation and produces a denser output mask, which covers the object more completely. Furthermore, the extracted bounding boxes become more accurate. **Ablation study** In order to validate the individual components of our approach, we conducted an ablation study. For the phrase grounding task, this study is reported in Tab. 5. The first ablation replaces the loss \(L_{h_{BU}}\) with the loss \(L_{h}\), i.e., no union of the detection boxes is performed. The second ablation employs only the loss of \(h\), \(L_{h_{BU}}\), and disregards the loss terms that were used to train network \(g\). The third ablation employs a single detection box (\(k=1\)) instead of the default of \(k=10\). As can be seen, these three variants reduce performance across all metrics and datasets. The exact reduction in performance varies across the datasets. To extensively explore the task of unsupervised object discovery, we conducted a comprehensive ablation study by varying multiple values of k, see Tab. 4.1. This ablation was performed using LOST, which is quicker than TokenCut and without the extra overhead of training CAD. Evidently, removing the regularization term, leaving only the loss \(L_{h}\) (there is no box union in this task, since both LOST and TokenCut return a single box) hurts performance. However, as can be expected, using \(k=1\), instead of the value of \(k=10\) that is used throughout our experiments, better fits this scenario and leads to better performance on VOC07 (and virtually the same on MSCOCO20K). 
**Training time** The time it takes to train our method on medium-sized datasets is reported in Tab. 7. For both original networks, \(f\) and \(g\), we use pretrained networks and report the published values. Training \(h,f^{h},g^{h}\) reflects our runs on GeForce RTX 2080Ti GPUs (\(f\) which is DINO, was trained on much more involved hardware, while \(g\) was trained on similar hardware). As can be seen, training \(h\) and refining \(f\) or \(g\) to obtain \(f^{h}\) or \(g^{h}\) is much quicker than the training of the \(f\) and \(g\) baselines. The difference in training time between LOST and TokenCut stems from the inference done during training, which is much quicker for LOST. ## 6 Conclusions We present a novel method, in which a primary network is used in a symbiotic manner with a detection network. The first network is used to extract a feature map and detection boxes, which are used as the input and output of the second. The second network is then used to allow the first network to be refined using the boxes extracted from its output. All training phases are performed on the same training set, within the bounds of the allowed level of supervision. Tested on a wide variety of tasks and benchmarks, the proposed method consistently improves localization accuracy. \begin{table} \begin{tabular}{l c c c} \hline \hline Network & Phrase Grounding & \multicolumn{2}{c}{Object discovery} \\ & & LOST & TokenCut \\ \hline \(f\) or \(g\) & 28 x [4] & 72.6 x [16] & 72.6 x [16] \\ \(h\) & 0.5 x [1] & 0.5 x [1] & 2.5 x [1] \\ \(f^{h}\) or \(g^{h}\) & 3.2 x [4] & 5.3 x [1] & 20.5 x [1] \\ \hline \hline \end{tabular} \end{table} Table 7: Training time (hours) for phrase grounding and unsupervised object discovery. Within brackets is the number of GPUs used during training.
2309.12979
EgoCor: an R package to facilitate the use of exponential semi-variograms for modelling the local spatial correlation structure in social epidemiology
As an alternative to using administrative areas for the evaluation of small-area health inequalities, Sauzet et al. suggested taking an ego-centred approach and modelling the spatial correlation structure of health outcomes at the individual level. Existing tools for the analysis of spatial data in R might appear too complex to non-specialists, which could limit the use of the approach. We present the R package EgoCor, which offers a user-friendly interface displaying, in one function, a range of graphics and tables of parameters to facilitate the decision about which exponential model parameters best fit either raw data or residuals. This function is based on the functions of the R package gstat. Moreover, we implemented a function providing the measure of uncertainty proposed by Dyck and Sauzet. With the R package EgoCor, the modelling of the spatial correlation structure of health outcomes, or of spatially structured predictors of health, together with a measure of uncertainty, is made available to non-specialists.
Julia Dyck, Jan-Ole Koslik, Odile Sauzet
2023-09-22T16:22:06Z
http://arxiv.org/abs/2309.12979v2
# _Journal of Statistical Software_ ###### Abstract As an alternative to using administrative areas for the evaluation of small-area health inequalities, Sauzet et al suggested to take an ego-centred approach and model the spatial correlation structure of health outcomes at individual level. Existing tools for the analysis of spatial data in R may appear too complex to non-specialists which may limit the use of the approach. We present the R package **EgoCor** which offers a user-friendly interface displaying in one function a range of graphics and tables of parameters to facilitate the decision making about which exponential parameters fit best either raw data or residuals. This function is based on the functions of the R package **gstat**. Moreover, we implemented a function providing the measure of uncertainty proposed by Dyck and Sauzet. With the R package **EgoCor** the modelling of spatial correlation structure of health outcomes with a measure of uncertainty is made available to non specialists. _Keywords_: R package, semi-variogram, exponential models, small-area health inequalities. ## 1 Background The last 20 years has seen an increase in interest in the study of associations between neighbourhood characteristics and health. Most of the quantitative studies are based on neighbourhood defined as disjoint administrative spatial units and non-measured spatial effects are estimated via multilevel models. However, there are limits to this approach, which have been discussed in the literature (Chaix, Merlo, Evans, Leal, and Havard 2009; Van Ham and Manley 2012). An alternative approach is to conceptualise neighbourhood as ego-centred such that everyone has its own neighbourhood centred on the place of residence. Such a neighbourhood must have a relevance for the health outcome of interest. Some studies mentioned the use of the spatial correlation structure of health outcomes as an alternative to estimating the correlation within administrative areas (Chaix, Merlo, Subramanian, Lynch, and Chauvin 2005). This idea has been brought further by Sauzet et al (Sauzet, Breiding, Zolitschka, Breckenkamp, and Razum 2021) by proposing to use the parameters of an exponential model for the semi-variogram of health outcomes to quantitatively assess this correlation structure. Such models provide a measure of the presence of unmeasured spatial effects on health. It has the advantage of fitting empirical data well as well as modelling the following concept: if the place of residence has an effect on health, then health outcomes of neighbours are correlated and this correlation evanesce with increasing distance between neighbours. The procedure of investigating the spatial correlation includes first estimating an empirical semi-variogram based on the data available and then fit an exponential parametric model to this semi-variogram. Here only small distances are considered to model the local correlation structure, i.e. the correlation of health outcome between immediate neighbours. The empirical semi-variogram is not unique and is based on the choice of the maximal distance between observations (all pairs of observations which are further apart than the maximal distance are not used to estimate the semi-variogram) and the number of bins (distance intervals for pairs of observations for which one point of the semi-variogram will be estimated). 
If the number of observations is limited and only a small proportion of the variance is spatially structured, then the ability of fitting an exponential model to the semi-variogram may be strongly dependent on the mentioned meta parameters. Moreover, the fit of the estimated exponential model must be evaluated visually with a comparative evaluation of a range of maximal distances and numbers of bins. There are a number of possibilities to fit semi-variogram models using R packages. But the range of modelling possibilities can be a deterrent to its wider use by health researchers with limited experience with the analysis of spatial data. Moreover, it remains presently difficult to obtain measures of uncertainty for parametric semi-variogram models and as far as we know the only method that has been implemented in current R packages is the BRISC method by Saha and Datta (2018) provided in the R package **BRISC**(Saha and Datta 2022) which is not suitable to be used with health data (Dyck and Sauzet 2023). Dyck and Sauzet (2023) have investigated how to modify an existing bootstrap approach to make it reliable in the context of health data survey (sparse local data defined as a small number of pairs at small distances, and large overall sample size) based on the work of Olea and Pardo-Iguzquiza (2011). A filtered bootstrap estimate for the standard error of parameter estimates of an exponential semi-variogram model has been proposed and evaluated. The aim of the R package **EgoCor** is to offer a user-friendly interface displaying in one function a range of graphics and tables of parameters to facilitate the decision making about which exponential semi-variogram model parameters fit the data best together with a measure of uncertainty not available until now. Our package is based on the functions of the R package **gstat**(R Core Team 2020; Pebesma and Wesseling 1998) and on the work of Dyck and Sauzet (2023). A measure of uncertainty for the model parameters can be obtained from raw data as well as from the residuals from regression models directly for adjusted analyses. ## 2 Statistical Methods ### Semi-variogram model We provide some basic concepts on semi-variogram modelling. For more background we refer to Schabenberger and Gotway (2017). In the sequel, we assume that the spatial process is isotropic and second-order stationary. Under those conditions (Schabenberger and Gotway 2017), the correlation \(C(h)\) between health outcomes \(Z(s)\) and \(Z(s^{\prime})\) (or the residuals of a regression model) at locations \(s\) and \(s^{\prime}\) depends only on the (lag) distance \(h=||s-s^{\prime}||\) between those observations. The empirical semi-variogram \(\gamma(h)\) defined for a positive lag \(h\) as \[\gamma(h)=\frac{1}{2}Var[Z(s)-Z(s+h)]\] is estimated from the data and is provided by the function variogram from the R package gstat(Pebesma and Wesseling 1998). The covariance function \(C(h)\) under some regularity conditions is given by \[C(h)=c_{0}+\sigma_{0}^{2}-\gamma(h)\] where \(\sigma_{0}^{2}+c_{0}=Var[Z(s)]\) is the variance of the health outcome and the nugget effect \(c_{0}\) is the value of the semi-variogram when the distance between two observations tend to 0. The parameter \(\sigma_{0}^{2}\) is called the partial sill. 
The Matheron's estimator provides an unbiased estimator for the empirical semi-variogram at distance h between two observations (Matheron 1962) \[\hat{\gamma}(h)=\frac{1}{2|N(h)|}\sum_{(s,s^{\prime})\in N(h))}\{Z(s)-Z(s^{ \prime})\}\] where \(N(h)\) is the set of all observations lagging at distance \(h\). To the empirical semi-variogram we fit an exponential model \[\hat{\gamma}_{exp}(h)=\begin{cases}\hat{c}_{0}+\hat{\sigma}_{0}^{2}\Big{(}1- \exp\big{(}-\frac{h}{\hat{\phi}}\big{)}\Big{)}&\text{ for }\ h>0,\\ 0&\text{ for }\ h=0.\end{cases}\] The practical range (distance above which the correlation between observations is less than 5% of the total variance) for this model is given by \[H=\hat{\phi}\log\left(\frac{\hat{\sigma}_{0}^{2}}{0.05(\hat{c}_{0}+\hat{ \sigma}_{0}^{2})}\right).\] The relative structured variability (RSV) calculated as partial sill divided by total variance is a measure of the degree of spatial structure: \[RSV=\frac{{\hat{\sigma_{0}}}^{2}}{\hat{c_{0}}+{\hat{\sigma_{0}}}^{2}}.\] The relative bias between the estimated variance according to the model and the sample variance of the health outcome is obtained as \[RB=\frac{\hat{c_{0}}+\hat{\sigma_{0}}^{2}}{\hat{Var}[\hat{Z}(s)]}.\] ### Filtered bootstrap standard error The algorithm used to obtain standard errors for the parameters of the exponential semi-variogram model is based on the generalized bootstrap method explained in Olea and Pardo-Iguzquiza (2011) and Pardo-Iguzquiza and Olea (2012) and was adapted in Dyck and Sauzet (2023) to the case of the characteristics of population data: large sample sizes over all, low number of pairs at small distances. The estimation of the exponential model parameter uncertainties of the fitted semi-variogram model is obtained by weighted least squares. A filter was set up within the bootstrapping process to remove all bootstrap parameter estimates for which the estimation algorithm did not converge (due to the small number of pairs at small distances). We briefly recall the steps of the filtered bootstrap algorithm: 1. **Exponential semi-variogram model:** An empirical semi-variogram is fitted to the original spatial dataset to which an exponential model is fitted. This provides the parameter estimate \(\hat{\theta}\) for the true parameter vector \(\theta\) with \((\theta_{1},\theta_{2},\theta_{3})=(c_{0},\sigma_{0}^{2},\phi)\). 2. **Normal score transformation:** The data vector \(\mathbf{z}=(z_{1},\dots,z_{N})^{t}\) is mapped into a Gaussian space by the empirical normal score transformation function \(\varphi\). Consequently, \(\mathbf{y}=\varphi(\mathbf{z})\) is a realization vector of a standard normal random variable (Deutsch and Journel 1998). 3. **Exponential semi-variogram model for transformed data:** An empirical semi-variogram and exponential semi-variogram model \(\tilde{\gamma}_{\mathrm{exp}}\) are fitted to the transformed data \(\mathbf{y}\) combined with the raw data's geo-coding providing the parameter estimate \(\tilde{\theta}\). 4. **Covariance estimation:** Making use of the exponential semi-variogram model characterized by \(\tilde{\theta}\), the covariance between two data points \(z_{i}\) and \(z_{j}\) is calculated based on the Euclidean distance \(d_{ij}\) between these two points: \[c_{ij}=\tilde{c_{0}}+\tilde{\sigma_{0}^{2}}-\tilde{\gamma}_{\mathrm{exp}}(d_ {ij}).\] The covariance matrix with entries \(c_{ij}\) is denoted as \(\mathbf{C}\). 5. 
**Decorrelation of the data:** The decomposition of the covariance matrix \(\mathbf{C}\) into the product of a lower and triangular matrix \(\mathbf{L}\) and its transpose is obtained by the Cholesky decomposition algorithm as \(\mathbf{C}=\mathbf{L}\mathbf{L}^{t}\). This decomposition is used to remove the correlation structure within the sample \(\mathbf{y}\). The resulting vector \(\mathbf{x}=\mathbf{L}^{-1}\mathbf{y}\) contains independent and identically distributed, hence uncorrelated values (Solow 1985). 6. **Classical bootstrap:** Sampling with replacement from \(\mathbf{x}\) leads to a bootstrap sample \(\mathbf{x}^{*}\) of the same size as the original spatial dataset. 6. **Recorrelation:** The resample \(\mathbf{x}^{*}\) reinherits the correlation structure by applying the inverse operation of step 4, that is \(\mathbf{y}^{*}=\mathbf{L}\mathbf{x}^{*}\). 7. **Normal score back transformation:** The back transformation of \(\mathbf{y}^{*}\) to the attribute space through the inverse normal score function is done by \(\mathbf{z}^{*}=\varphi^{-1}(\mathbf{y}^{*})\). 8. **Analysis of the bootstrap sample:** An exponential semi-variogram model is estimated based on \(\mathbf{z}^{*}\) combined with the original coordinates providing an estimate \(\theta^{*}\). 9. **Filtering:** A check-filter based test is applied to the bootstrap estimate \(\theta^{*}\) to indicate whether the exponential semi-variogram fitting algorithm did converge. If \(c_{0}^{*}+\sigma_{0}^{2*}>\widehat{\tau Var(\mathbf{z})}\), i.e. if the variance indicated by the model exceeds the estimated sample variance times the threshold factor \(\tau\), the bootstrap estimate is discarded. Otherwise it is saved. Within the **EgoCor** package the threshold is set to \(\tau=3\) by default as in a simulation study it was found to provide the best results with respect to the standard error estimates (Dyck and Sauzet 2023). 10. **Repetition:** The steps 5 to 9 are repeated until a set of \(B\) bootstrap estimates \(\left\{\theta_{b}^{*}\right\}_{b=1,...,B}\) has aggregated. 11. **Parameter standard error estimation:** Based on the set of repeatedly estimated parameters \(\left\{\theta_{b}^{*}\right\}_{b=1,...,B}\) the parameters' standard error estimates are obtained as \[\widehat{se(\theta_{j})}=sd(\theta_{j}^{*})=\sqrt{\frac{1}{B-1}\sum_{b=1}^{B }\left\{\theta_{bj}^{*}-\overline{\theta}_{j}^{*}\right\}^{2}},\] for \(j=1,...,3\) referring to the three parameters \(c_{0},\ \sigma_{0}^{2}\) and \(\phi\). ## 3 The EgoCor package In this section we describe functions provided by the package. We then apply those functions to the simulated dataset birth. ### Dataset The simulated dataset birth is provided with the package **EgoCor**. The dataset is based on the spatial distribution of real birthweight data (Spallek, Grosser, Holler-Holtrichter, Doyle, Breckenkamp, and Razum 2017). It contains eight variables for 903 births: * x: x-coordinate in meters for a fictive Cartesian coordinate system, * y: y-coordinate in meters, * birthweight: birthweight in grams, * primiparous: first pregnancy (1) or subsequent pregnancy (0), * datediff: number of days to due date, * bmi: BMI of the mother at first medical appointment, * weight: weight of the mother at first medical appointment, * inc: income quintile. 
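Before turning to the package functions, the following Python sketch illustrates, on data of exactly this form (x-coordinate, y-coordinate, outcome), the computation that underlies the semi-variogram fitting of Section 2.1: Matheron's estimator on distance bins up to a maximal distance, followed by a least-squares fit of the exponential model. EgoCor itself delegates this to the R package gstat and uses weighted least squares; the (unweighted) fit, the function name, and the default values below are purely illustrative.

```
# Conceptual Python sketch of the semi-variogram estimation and exponential fit.
# EgoCor relies on gstat in R; this code is only meant to illustrate the computation.
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist

def exp_model(h, c0, s0, phi):
    # exponential semi-variogram model: gamma(h) = c0 + s0 * (1 - exp(-h / phi))
    return c0 + s0 * (1.0 - np.exp(-h / phi))

def fit_exponential_variogram(x, y, z, max_dist=800.0, nbins=13):
    d = pdist(np.column_stack([x, y]))                          # pairwise Euclidean distances
    g = 0.5 * pdist(z.reshape(-1, 1), metric="sqeuclidean")     # 0.5 * (z_i - z_j)^2 per pair
    keep = d <= max_dist
    d, g = d[keep], g[keep]
    edges = np.linspace(0.0, max_dist, nbins + 1)
    idx = np.digitize(d, edges[1:-1])
    lags, gamma_hat = [], []
    for b in range(nbins):
        mask = idx == b
        if mask.any():                                          # skip empty bins
            lags.append(d[mask].mean())
            gamma_hat.append(g[mask].mean())                    # Matheron's estimator per bin
    lags, gamma_hat = np.asarray(lags), np.asarray(gamma_hat)
    p0 = [gamma_hat[0], max(z.var() - gamma_hat[0], 1e-6), max_dist / 3.0]
    (c0, s0, phi), _ = curve_fit(exp_model, lags, gamma_hat, p0=p0, maxfev=10000)
    prac_range = phi * np.log(s0 / (0.05 * (c0 + s0)))          # practical range
    rsv = s0 / (c0 + s0)                                        # relative structured variability
    return c0, s0, phi, prac_range, rsv
```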
### Functions We use the birth dataset to illustrate the following functions: * coords.plot(): for graphical description of locations, * distance.info(): for descriptive information about distances between observations, * vario.reg.prep(): to model the spatial correlation structure of residuals of a regression model, * vario.mod(): to fit exponential models to semi-variograms with graphical presentation, * par.uncertainty(): to obtain bootstrap standard errors for the parameters of the exponential semi-variogram model. ### Data exploration The data format required by the **EgoCor** functions is either a data frame or a matrix. The first three columns of the data frame or matrix should be ordered the following way: 1st column: x-coordinate in meters for a Cartesian coordinate system; 2nd column: y-coordinate in meters for a Cartesian coordinate system; 3rd column: outcome of interest. Other columns will be ignored. A message appears following the output of the function vario.mod() recalling the required order for the variables. The function coords.plot() provides a simple visualization of the locations on a two dimensional map and indicates whether the outcome is observed (by a black circle) or missing (by a red x) at a specific location. The purpose of this function is to look at the spatial distribution of observations and if there might be a spatial pattern in the distribution of missing values in the outcome of interest or in covariates. Figure 1 displays the location of the observations. For the outcome birthweight there are no missing values. To illustrate the display of missing values we first created a new matrix with the coordinates and the variable inc and then inserted 30 random missing values. In Figure 2 the missing values are marked with red crosses. As expected no spatial pattern is visible. Further information about the distribution of pairwise Euclidean distances is provided by the function distance.info(). It calculates * the distance matrix containing all pairwise Euclidean distances, * the set of all pairwise Euclidean distances where duplicate values due to symmetry are deleted. Moreover, distance.info() displays the following descriptive statistics: * a histogram of the Euclidean distances, * minimum, 1st quartile, median, mean, 3rd quartile and maximum of the Euclidean distances. The output for the birth data is as follows and illustrated with the histogram shown in Figure 3: From all the 815 409 pairwise distances, 30 570 are of less than 2 000 meters and will be used for modelling of the local spatial correlation structure. ### Semi-variogram model fitting The function vario.mod() enables the simultaneous output of multiple exponential semi-variogram models fitted for a range of maximal distances and bin numbers. Thereby, the focus lies on the ability of the function to provide multiple estimation results depending on various specifications for the meta parameters max.dist and nbins. It is advised to try out different values for both parameters and choose the model with the best fit. Commonly, the fit is evaluated by visual checks. An additional check can be performed by comparing the sample variance with the estimated variance according to the semi-variogram model \(\hat{\sigma^{2}}=\hat{c_{0}}+\hat{\sigma_{0}^{2}}\)(Sauzet et al., 2021). The chosen maximal distance value specifies the subset of data pairs that are actually used for the semi-variogram estimation. 
Only data pairs with an Euclidean distance \(\leq\) max.dist Figure 1: Coordinates plot for outcome birthweight are taken into account. For a first exploration, it might be useful to try a range of maximal distances to locate where the range might be situated by ``` vario.mod(birth,max.dist=c(2000,1500,1000,500),nbins=13,pdf=T,pdf.directory=getwd(),pdf.name="Birthweight") ``` The above code will save a PDF file showing all fitted semi-variograms and will produce the **shiny**(Chang, Cheng, Allaire, Sievert, Schloerke, Xie, Allen, McPherson, Dipert, and Borges 2022) output shown in Figure 4. Each row of the printed output table contains the estimated parameters of the exponential semi-variogram model with one of the stated maximal distances. More precisely, the table columns contain: * index: model number, * max.dist: maximal distance used in the estimation of the empirical variogram, * nbins: number of bins specified for the empirical variogram estimation, * nbins.used: number of bins used for the empirical semi-variogram estimation (can differ from nbins in case of colocated data points), Figure 2: Coordinates plot for outcome inc with 30 random missing values * nugget: the estimated nugget effect \(\hat{c_{0}}\), * partial.sill: the estimated partial sill \(\hat{\sigma}_{0}^{2}\), * shape: the estimated shape parameter \(\hat{\phi}\), * prac.range: the practical range of the exponential model, * RSV: the relative structured variability, * rel.bias: the relative bias between the sum of the estimated partill sill and nugget and the sample variance (which theoretically are the same). The maximal distance of 1000 meters seems to provide the best fit among the tried and we can now refine the analysis by considering smaller maximal distances ``` vario.mod(birth,max.dist=c(1000,800,600),nbins=13) ``` leading to the output presented in Figure 5. Because a maximal distance of 800 meters provides the best fit for the exponential model with a low relative bias and a good visual fit, we investigate further the role of the number of bins for this maximal distance. Figure 3: Histogram of pairwise distances for the birth data The nbins parameter specifies the number of lags of the empirical semi-variogram to be estimated. On the one hand, a high number of lags might lead to small within-lag-sample-size and thus to an unstable estimate. On the other hand, a too small number of lags might lead to a model, that does not detect a spatial correlation structure at all. To decide on one or multiple values for nbins, taking a look at the histogram plot of the pairwise distances (see Figure 3) obtained by distance.info() may help. Trying out multiple nbins specifications by \[\texttt{vario.mod(birth, max.dist = 800, nbins = c(11, 12, 13))}\] we obtain the output presented in Figure 7. All models provide similar results but model 3 with max.dist = 800 and nbins = 13 gives a slightly better fit and could be selected as final model with respect to the health outcome birthweight. ### Modelling the spatial correlation structure of residuals Figure 4: Shiny ouput from vario.mod(birth, max.dist = c(2000,1500,1000,500), nbins=13) Instead of modelling the correlation structure of a health outcome, the vario.mod() function can be used to model the spatial correlation structure of residuals from a (hierarchical) linear regression. To do so, the studentized residuals from a (hierarchical) linear regression model are extracted via the vario.reg.prep() function. 
We want to investigate if adjusting for some predictors of birthweight might explain some or all of the observed spatial correlation structure. In the first step, we fit the following regression model and investigate the output: res <- lm(birthweight - datediff + primiparous + bmi, data = birth) summary(res) ## Call: ## lm(formula = birthweight - datediff + primiparous + bmi, data = birth) ## Residuals: ## Min 1Q Median 3Q Max ## -1109.92 -274.10 -14.14 260.87 1373.98 Figure 5: Output from vario.mod(birth, max.dist = c(1000, 800, 600), nbins=13) # Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 3402.687 59.886 56.819 < 2e-16 *** ## datediff -24.217 1.444 -16.773 < 2e-16 *** # primiparous -108.669 28.424 -3.823 0.000141 *** # bmi 6.551 2.506 2.614 0.009092 ** # --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1'' 1 # Residual standard error: 415.9 on 899 degrees of freedom ## Multiple R-squared: 0.247,Adjusted R-squared: 0.2445 ## F-statistic: 98.32 on 3 and 899 DF, p-value: < 2.2e-16 All predictors are significant. Using the vario.reg.prep() function we now assign the studentized residuals of the regression model to the spatial coordinates and investigate which maximal distance provides the best exponential semi-variogram model fit for the studentized residuals. Figure 6: Output from vario.mod(birth, max.dist = 800, nbins = c(11,12,13)) We start at similar distances to the ones chosen for the raw data obtaining the output shown in Figure 7. Then we opt out of the **shiny** display option to save the output of vario.mod() in models for future use: v.prep <- vario.reg.prep(res, data = birth) models <- vario.mod(v.prep, max.dist = c(1000,800,600), nbins = 13, shinyresults = FALSE) The results point towards a reduced spatial correlation structure with a well fitting maximal distance reduced to only 600 meters and much less regularity in the empirical semi-variogram (see Figure 7). Analoguely to the unadjusted case (see section 3.4), we try out multiple nbins values given the maximal distance set to 600 meters: vario.mod(v.prep, max.dist = 600, nbins = c(11, 12, 13)) Based on the resulting graphics and table (see Figure 8) we come to the conclusion that the Figure 7: Output from vario.mod(v.prep, max.dist = c(1000,800,600), nbins = 13) models with max.dist = 600 and nbins = 12 (see Figure 8) or max.dist = 600 and nbins = 13 (see Figure 7) provide visually very similar and the best fits. ### Filtered bootstrap standard errors The function par.uncertainty() provides filtered bootstrap standard errors for all three exponential model parameters. Standard errors are important to conduct proper inference (Bard 1974). Moreover, they can be helpful to get an impression about the reliability of the estimated model and provide an objective tool to compare two or more models which seem to provide an equally good fit when evaluated visually as demonstrated in the last section 3.5. Because the execution of the filtered bootstrap algorithm can take some time (depending on the sample size and number of bootstrap repetitions), the par.uncertainty() function is not automatically called within vario.mod() so that the bootstrap is not executed in all models estimated by vario.mod(). This is left to the choice of the user by selecting the model number in the option mod.nr and thereby saving execution time. 
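To make the procedure of Section 2.2 more tangible, the following conceptual Python sketch outlines the core loop behind par.uncertainty(). It reuses the illustrative fit_exponential_variogram() helper sketched earlier, uses a simplified empirical normal-score transformation, and omits several implementation details (weighted fitting, handling of ties and of non-converging resamples); the actual algorithm is implemented in R and differs in detail, so the code below should be read as a schematic rather than a faithful re-implementation.

```
# Conceptual sketch of the filtered bootstrap standard errors of Section 2.2.
# Relies on the illustrative fit_exponential_variogram() helper shown earlier.
import numpy as np
from scipy.stats import norm, rankdata
from scipy.spatial.distance import pdist, squareform

def filtered_bootstrap_se(x, y, z, B=1000, tau=3.0, max_dist=600.0, nbins=12):
    n = len(z)
    # Step 2: empirical normal-score transform of the outcome.
    y_gauss = norm.ppf(rankdata(z) / (n + 1))
    # Step 3: exponential model for the transformed data.
    c0, s0, phi, *_ = fit_exponential_variogram(x, y, y_gauss, max_dist, nbins)
    # Step 4: covariance matrix implied by the fitted model.
    d = squareform(pdist(np.column_stack([x, y])))
    cov = c0 + s0 - (c0 + s0 * (1 - np.exp(-d / phi)))
    np.fill_diagonal(cov, c0 + s0)                        # diagonal: total variance (gamma(0) = 0)
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(n))        # Step 5: decorrelation
    x_dec = np.linalg.solve(L, y_gauss)
    sorted_z = np.sort(z)
    estimates = []
    while len(estimates) < B:
        x_star = np.random.choice(x_dec, size=n, replace=True)   # Step 6: resample
        y_star = L @ x_star                                       # Step 6': recorrelate
        # Step 7: back-transform via the empirical quantiles of the original data.
        z_star = np.quantile(sorted_z, norm.cdf(y_star))
        try:
            est = fit_exponential_variogram(x, y, z_star, max_dist, nbins)[:3]  # Step 8
        except RuntimeError:
            continue                                              # non-converging resample, redraw
        if est[0] + est[1] <= tau * z.var():                      # Step 9: filter on model variance
            estimates.append(est)                                 # Step 10: repeat until B estimates
    return np.std(np.array(estimates), axis=0, ddof=1)            # Step 11: bootstrap standard errors
```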
Due to the bootstrap component within the standard error calculation, repeatedly estimating Figure 8: Output from vario.mod(v.prep, max.dist = 600, nbins = c(11, 12, 13)) the standard errors for one fixed semi-variogram model will slightly vary. If the filtered bootstrap results are wished to be reproducible, a seed has to be set prior to the application of the par.uncertainty() command. We save the vario.mod.output object containing the two best semi-variogram models of the residuals according to visual inspection: models <- vario.mod(v.prep, max.dist = 600, nbins = c(12, 13)) Based on that, we can estimate the parameter standard errors for model 1 (max.dist = 600, nbins = 12): unc1 <- par.uncertainty(models, mod.nr = 1, threshold.factor = 3) unc1$unc.table ## Estimate Std. Error ## nugget effect 0.5575581 0.3177576 ## partial sill 0.4275874 0.5015041 ## shape 42.6818620 166.6720236 and model 2 (max.dist = 600, nbins = 13): unc1 <- par.uncertainty(models, mod.nr = 2, threshold.factor = 3) unc2$unc.table ## Estimate Std. Error ## nugget effect 0.5996697 0.3172093 ## partial sill 0.3825593 0.5561611 ## shape 34.5024985 203.0448548 According to the above R output the standard error estimates of model 1 appear to be slightly smaller than the standard error results of model 2 confirming our impression that the models are very similar. The concrete standard error values allow us to make an objective comparison and encourage us to select model 1 as final model. ## 4 Conclusion The R package **EgoCor** proposes a range of functions to explore which empirical semi-variogram meta parameters (maximal distance and number of bins) provide the best fitting exponential semi-variogram model. Practitioners can easily use the package to apply the concept of correlation neighbourhood of Sauzet et al. (2021) which is an ego-centred approach to assess the existence of unmeasured neighbourhood effects. Moreover, the package provides an implementation of the filtered bootstrap method for the estimation of standard errors of Dyck and Sauzet (2023) based on the work of Olea and Pardo-Iguzquiza (2011); Pardo-Iguzquiza and Olea (2012) which has been adapted to the characteristics of survey data: large sample sizes and low number of observations at close distances. ## Computational details The results in this paper were obtained using R 4.2.1 with the **EgoCor** 1.1.0 package. R itself and all packages used are available from the Comprehensive R Archive Network (CRAN) at [https://CRAN.R-project.org/](https://CRAN.R-project.org/).
2310.20670
Interactions Between Different Birds of Prey as a Random Point Process
The two-dimensional Coulomb gas is a one-parameter family of random point processes, depending on the inverse temperature $\beta$. Based on previous work, it is proposed as a simple statistical measure to quantify the intra- and interspecies repulsion among three different highly territorial birds of prey. Using data from the area of the Teutoburger Wald over 20 years, we fit the nearest and next-to-nearest neighbour spacing distributions between the respective nests of Goshawk, Eagle Owl and the previously examined Common Buzzard to $\beta$ of the Coulomb gas. Within each species, the repulsion measured in this way deviates significantly from the Poisson process of independent points in the plane. In contrast, the repulsion amongst each of two species is found to be considerably lower and closer to Poisson. Methodologically we investigate the influence of the terrain, of a shorter interaction range given by the two-dimensional Yukawa interaction, and the statistical independence of the time moving average we use for the yearly ensembles of occupied nests. We also check that an artificial random displacement of the original nest positions of the order of the mean level spacing quickly destroys the repulsion measured by $\beta> 0$. A simple, approximate analytical expression for the nearest neighbour spacing distribution derived from non-Hermitian random matrix theory proves to be very useful.
Gernot Akemann, Nayden Chakarov, Oliver Krüger, Adam Mielke, Meinolf Ottensmann, Patricia Päßler
2023-10-31T17:32:08Z
http://arxiv.org/abs/2310.20670v2
# Interactions between different birds of prey ###### Abstract. The two-dimensional Coulomb gas is a one-parameter family of random point processes, depending on the inverse temperature \(\beta\). It is proposed as a simple statistical measure to quantify the intra- and interspecies repulsion among three different highly territorial birds of prey, by comparing to the spacing between their nests in the plane. Using data from the area of the Teutoburger Wald over 20 years, we fit the nearest and next-to-nearest neighbour spacing distributions between the respective nests of Goshawk, Eagle Owl and the previously examined Common Buzzard to \(\beta\) of the Coulomb gas. Within each species, the repulsion measured in this way deviates significantly from the Poisson process of independent points in the plane. In contrast, the repulsion amongst each of two species is found to be considerably lower and closer to Poisson. Methodologically we investigate the influence of the terrain, of a shorter interaction range given by the two-dimensional Yukawa interaction, and the statistical independence of the time moving average we use for the yearly ensembles of occupied nests. A simple, approximate analytical expression for the nearest neighbour spacing distribution derived from non-Hermitian random matrix theory proves to be very useful, being valid for the two-dimensional Coulomb gas close to Poisson at \(\beta=0\). ## 1. Introduction The study of the statistics of random point processes in one (1D) and two dimensions (2D) is a very active area of research, with many applications in physics and other sciences. Examples in 2D, on which we will focus here, include quantum optics [1], quantum chaos [2, 3, 4], condensed matter physics [5], statistics [6] and ecology [7, 8, 9]. The goal of such an analysis is to clarify, first, whether the points coming from experimental data or numerical simulations are independent or not, and if not to quantify their correlations. The former case is called Poisson point process, where the points are distributed independently on a line in 1D, or in the plane in 2D. Closed form expressions exist for arbitrary dimension \(D\), cf. [4, Appendix A]. Many point processes found in nature show correlations, and in particular repulsion between points in a characteristic, universal way, such that simple models from statistical mechanics apply. Before describing the point processes we use in more detail, let us explain what kind of understanding could be expected from such an approach to biological systems, which typically have a high degree of complexity. A central question in biology is to understand the distribution of animals or plants in space and time. Furthermore, one would like to disentangle the effect of direct interaction within one species, between different species, and the effect of the environment. Such an understanding is of great value for conservation etc. Competition is one of the most ubiquitous features of ecology [10]. Animals compete for limited resources with individuals of their own (intraspecific competition) or another species (interspecific competition), see [11]. One of the easiest and hence most often used estimates of competition is the nearest neighbour distance [7]. It has been shown to be a useful measure, with the underlying assumption that a too near neighbour of the same or another species ultimately leads to decreased reproduction or survival [12, 13]. 
Two examples where 2D point processes have been applied in biology are the spacial distribution of trees, see [8] for a review, and the distribution of nests of birds of prey in space and time in our previous work [9]. Using data from [14, 15, 13] on the annual locations of occupied nests of the Common Buzzard in an area of 300 km\({}^{2}\) in the Teutoburger Wald, Germany, cf. Section 2.1, we have been able to draw the following conclusions. The distribution of their nests differs significantly from 2D Poisson statistics. We have been able to quantify the repulsion between the locations of nests of these highly aggressive birds of prey, by fitting the spacing distribution between nearest (NN) and next-to-nearest neighbours (NNN) in radial distance to those of a static 2D Coulomb gas of point charges in a confining (Gaussian) potential. Such a point process is also called Gibbs ensemble, cf. [6]. The single fit parameter used is the coupling strength \(\beta\), representing the inverse temperature in the 2D Coulomb gas. Details about this approach will be given in Section 2.2. Moreover, with this simple parametrisation we have been able to identify a change in time of the interaction strength measured by \(\beta\), over the observed period of 20 years, as a function of the increasing population density. Let us emphasise that the fitted value of \(\beta\) does not have a direct biological meaning, but merely serves to quantify the repulsion, in particular the deviation from 2D Poisson statistics at \(\beta=0\). A prime example from physics for such a transition from Poisson to correlated random variables is the Bohigas-Giannoni-Schmit or quantum chaos conjecture [18, 19, 20]. It relates to a further example for point processes that is frequently used in physics [22], the eigenvalues of random matrices [23]. For self-adjoint matrices these are real (1D), whereas non-Hermitian matrices have complex eigenvalues in 2D. In both cases these eigenvalues are strongly correlated random variables, originating from independently distributed matrix elements in the classical Gaussian ensembles. The quantum chaos conjecture states that eigenvalue statistics of the Hamiltonian of generic integrable quantum systems will follow Poisson statistics, according to Berry-Tabor [21], while that of fully chaotic Hamiltonians obeys random matrix statistics (distinguished by their global symmetry under time reversal). Here, the NN spacing distribution between eigenvalues has played an important role to quantify the path from integrability to chaos in 1D, and ample numerical and analytical evidence has been collected [22, 24]. This conjecture has been extended to 2D for dissipative quantum systems [25], and evidence has been presented, e.g. from the spectrum of the corresponding Liouville operator [2, 3, 4]. The repulsion of random matrix eigenvalues is very well understood. It follows the logarithmic 2D Coulomb interaction, both for real eigenvalues in 1D and complex ones in 2D, with only particular values for \(\beta=1,2,4\) occurring in 1D, and \(\beta=2\) in 2D 1. The reason is that random matrix eigenvalues represent determinantal or Pfaffian point processes and are thus integrable, in the sense that all eigenvalue correlation functions are explicitly known [23, 27]. Their universality in the limit of a large number of points \(N\) (matrix dimension) has been proved in 1D, see [28] and references therein, and in 2D, cf. [29] and [3] for the NN spacing distribution. 
This means that they do not depend on the choice of a Gaussian distribution of matrix elements. The quantum chaos conjecture has raised the obvious question of how to interpolate between the Poisson point process, that can be viewed as a special case of diagonal random matrices at \(\beta=0\), and the specific \(\beta\)-values occurring for random matrices. In 1D, heuristic, approximate descriptions exist, e.g. the Brody distribution, see [22] for other proposals. In 2D, it has been proposed [3] to directly use the 2D Coulomb gas at intermediate values of \(\beta\), and to determine the NN and NNN spacing distributions numerically, that can then be compared with data. These findings are reviewed in Section 2.2, in particular a corresponding approximate surmise for the 2D NN spacing distribution valid for small values of \(\beta\)[26], cf. Appendix B. These NN and NNN spacings were then used for fits as a function of \(\beta\), for the spacing of eigenvalues of the Liouville operator of dissipative, boundary driven systems in 2D [3], and in the annual spacing distributions of nests of the Common Buzzard [9]. Here, every year represents a realisation of the ensemble, and we have taken averages over all years, or windows of time moving averages over several consecutive years, in order to be sensitive to the time dependence. The goal of this article is to go beyond the analysis of a single species [9]. For the three species of competing birds of prey living in the observed area, the Common Buzzard, Goshawk and Eagle Owl, annual data for the locations of their nests over the same area and period of time have been collected [15, 13, 16, 17]. Our goal is to try to quantify the interaction between each two species, and to compare it to the interaction found within each species individually. Therefore, in a first step we have repeated the analysis from [9] for Goshawks and Eagle Owls individually, see Section 3.1. This includes the dependence on the respective change in population, ranging between 1-30 pairs. Because the spacing distribution for Eagle Owls differs considerably, whether they nest in the Teutoburger Wald (F) or in the adjacent plains (P), we have further split them into 2 groups. Because of comparably low statistics we have been unable to address the time dependence for the interaction between different species so far, to be discussed in Section 3.2. A further purpose of the present work is to analyse methodological issues in Section 4, that came up in discussions when presenting the results of [9]. Because all 3 species discussed only nest in forested patches, a first question addressed in Section 4.1 is whether the repulsion we observe within a species via a fitted \(\beta\neq 0\) is due to the (fractal) area of the forest, or represents a true repulsion. We have generated a Poisson point process on the given forest area in the monitored region, and made a fit to a real \(D>0\), for a \(D\)-dimensional Poisson NN spacing distribution. As we will see, this makes the repulsion between nests even more pronounced, and is clearly not an effect of the local area. In [9] the NN and NNN spacings distribution of nests were found to be described by different (typically decreasing) values of \(\beta\), clearly indicating a shorter interaction range than 2D Coulomb. Therefore we have investigated numerically the spacing distribution of the Yukawa interaction in 2D in Section 4.2. It depends on 2 parameters, with a shorter interaction than logarithmic at larger distance. 
It reduces to 2D Coulomb in a limiting case and at small distance. Furthermore, the question of "independence" of nests occupied in consecutive years is analysed in Section 4.3. Here, we quantify the reuse of nests within one or all species. A large abundance of old nests exists in the area, and the precise locations of nests in the monitoring process allow us to quantify the respective percentage of reuse. For a comparison to the NN or NNN spacing distribution from the 2D Coulomb gas, the mean density of the data has to be normalised to unity, a procedure called unfolding. For 1D this is unique and easier than in 2D, cf. [22] vs. [30, 3]. As an alternative quantity that does not require unfolding, spacing ratios have been introduced in 1D [31] and 2D [4]. However, they are only known analytically for Poisson and for random matrix statistics, the complex Ginibre ensembles in 2D [4, 32]. In contrast to 1D [33], where the spacing ratio was derived as a function of \(\beta\), the interpolation in 2D is not so clear, as we will see from our data analysis presented in Appendix A. All our findings, open questions and possible future directions are discussed in Section 5.

## 2. Description of Setup: Data and Point process

### Biological description and data collection - birds of prey in the Teutoburger Wald

In this subsection we describe the collection of data we use and the geographical setup. The locations of occupied nests of three kinds of birds were collected in late winter and early spring during the years 2000-2020 in an area in and around the Teutoburger Wald close to Bielefeld. The three species are the Common Buzzard (Buteo buteo L.), Goshawk (Accipiter gentilis) and Eagle Owl (Bubo bubo). The area of 300 km\({}^{2}\) (8\({}^{\circ}\)25'E and 52\({}^{\circ}\)6'N) is located in Eastern Westphalia. It consists of two 125 km\({}^{2}\) grid squares and 50 km\({}^{2}\) edge areas, see Fig. 1 for illustration. The main habitat of these birds of prey is the Teutoburger Wald and a cultivated landscape to the north and south of it. This is a low mountain region of height up to 315 m above sea level, and we will treat the area as approximately flat, that is two-dimensional. All three species nest in forest patches. Their size varies from rows of trees to large patches of more than 10 km\({}^{2}\). In total approximately 17% of the study site is forested. The area has been intensively monitored for birds of prey, and the resulting spatial data have been published and used before extensively [14, 15, 13]. The forest patches were visited in March and April each year to look for incubating birds, and if a nest was occupied in both months, the pair was classified as breeding. The locations of these nests were marked in large-scale maps or using GPS devices to monitor the spatial distribution of occupied nests.

Figure 1. **Left:** Distribution of all observed occupied bird nests in the year 2020: Eagle Owls (large pink points), Goshawks (yellow points) and Common Buzzard (dark blue points). There exist occupied nests outside the area with marked points, which have not been monitored. The green shaded area marks the approximate extent of the Teutoburger Wald based on the roads on either side, cf. right plot. Eagle Owl nests within this green area are distinguished by a black ring around the large pink points. **Right:** A classification of the observed area in three types of terrain: City (black), forest (green) and cultivated land in the plains (white). The lack of population of birds within the smaller cities of Halle or Werther is well visible on the left plot. The forest found in the North of the Teutoburger Wald in that area consists of many small patches of irregular shape.
In Fig. 1 left, a snapshot for the year 2020 is shown, with occupied nests of the three species marked with coloured points, see caption. The classification of the monitored area into forest, city or plains is shown in Fig. 1 right. The example year 2020 was chosen as it has among the highest populations for all three kinds of birds. Already by eye several features emerge. Common Buzzards are much more abundant than Goshawks and Eagle Owls. Whereas Common Buzzards and Goshawks spread relatively evenly over the monitored area, the density of Eagle Owls inside the Teutoburger Wald is much higher than outside. Because of this difference in the distribution of Eagle Owl nests, we decided to split the group of Eagle Owls in two: those nesting in the forest area (F) in green in the left plot (nests marked by a black ring around the pink dot), and those in the plains (P) (nests marked by a pink dot only). Notice that the plains were populated by Eagle Owls only after 2010, see Fig. 5 bottom right for their yearly populations. The marking of the forest area in green in Fig. 1 left is of course approximate. In particular, one might argue whether the two outliers of Eagle Owls South of the Teutoburger Wald should be counted as (F) or (P), in view of the two large patches of forest in Fig. 1 right. Although living in large forest patches, they are separated from the main forest by cities and are thus counted as (P) here. Such a choice does not change the quantitative picture, with the low statistics for Eagle Owls being difficult anyhow. A second feature is visible by eye: There are holes in the population densities around cities (marked in black in Fig. 1 right). Although effects of the edge of a density are known in point processes, see the discussion at the end of Subsection 2.2 below, we will disregard them here and treat all points as bulk points. The third feature that is immediate from Fig. 1 left is that the mean spacing between nests within one species is very different for Goshawks, Common Buzzards, Eagle Owls (F) and Eagle Owls (P). It is approximately given by the inverse density (for a constant density). While this mean in meters may encode important biological information about the range of interaction within one (or different) species, we are interested in comparing with (universal) features from spatial statistics in simple 2D point processes. Such a comparison can only be made quantitative if the mean density is normalised, and the fluctuations around this mean, as given for instance by the local spacing distribution, are measured and compared with the predictions from (equally normalised) point processes. The procedure of normalising the mean density is called unfolding and is very well studied in applications of random matrix theory, see [22]. For data in one spatial dimension it is unique and mostly straightforward, see [22, Sect. 3.2.1]. In two spatial dimensions it is more delicate, see [30, 3] for different procedures. Here, we will apply the method proposed in [3]. Taking for example the Common Buzzards only, each blue point in Fig. 1 left is replaced by a Gaussian distribution of a certain width, the sum of which defines the approximate mean density \(\rho_{\text{ave}}\) of points in 2D; a minimal sketch of this unfolding step is given below.
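The following R sketch illustrates this unfolding step; it is not the code used for the original analysis, and the bandwidth of the Gaussians as well as the final normalisation of the mean spacing to one are illustrative choices.

# Sketch of the unfolding: a Gaussian of width h is placed on every nest to estimate
# the mean density, and each raw NN distance is rescaled by the square root of the
# local density.
rho_ave <- function(x0, y0, x, y, h) {
  sum(exp(-((x0 - x)^2 + (y0 - y)^2) / (2 * h^2))) / (2 * pi * h^2)
}
unfold_nn <- function(x, y, h) {
  d <- as.matrix(dist(cbind(x, y)))
  diag(d) <- Inf
  s_raw <- apply(d, 1, min)                                          # raw NN distances
  rho <- sapply(seq_along(x), function(i) rho_ave(x[i], y[i], x, y, h))
  s <- s_raw * sqrt(rho)                                             # local rescaling
  s / mean(s)                                                        # mean spacing set to one
}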
The measured spacing around point \(z=(x_{0},y_{0})\) is then normalised by rescaling with \(\sqrt{\rho_{\text{ave}}(x_{0},y_{0})}\), to achieve a spacing with respect to a mean density of unity. This process is done for each year for each of the 4 sets of birds, recalling that we have split the Eagle Owls into 2 sets, yielding an ensemble for each set. This can then be compared to random point processes to quantify the degree of repulsion, to be described in the next subsection.

### Random point processes - from Poisson to Coulomb gas in 2D

In this subsection we introduce the random point processes and their corresponding nearest neighbour (NN) and next-to-nearest neighbour (NNN) spacing distribution that we will use for a comparison to data for the distribution of occupied nests of birds of prey. We begin with the Poisson (Poi) random point process in two dimensions, cf. Section 4.1 for general dimension \(D\). It consists of \(N\) independent, uncorrelated points in the plane. The following normalised spacing distributions in radial distance between NN and NNN are known in the limit of a large number of points \(N\): \[p_{\rm Poi}^{\rm(NN)}(s) = \frac{\pi}{2}s\ {\rm e}^{-\pi s^{2}/4}\quad\sim s\, \tag{2.1}\] \[p_{\rm Poi}^{\rm(NNN)}(s) = \frac{\pi^{2}}{8}s^{3}\ {\rm e}^{-\pi s^{2}/4}\,\sim s^{3}\, \tag{2.2}\] see e.g. [24, 4] for details of the derivation. The NN distribution has its first moment normalised to unity to set the scale (for the NNN spacing the first moment follows from this scale and is given by \(3/2\)). These spacing distributions can be obtained by distributing \(N\) uncorrelated points on a disc of radius \(R\). In the limit \(N\to\infty\), after rescaling the mean spacing between points as \(\bar{s}=R\sqrt{\pi/(4N)}\), eqs. (2.1) and (2.2) are obtained in units of \(\bar{s}\). Despite resulting from uncorrelated points, the 2D area measure in polar coordinates, \(dxdy=sds\,d\theta\) for \(z=x+iy=se^{i\theta}\), leads to a linear (cubic) repulsion in the NN (NNN) spacing distribution. We will not consider higher-order spacing distributions in the following. As we will see below, the situation of a uniform density on a disc can also be obtained for a 2D Coulomb gas. Let us emphasise that the above spacing distributions quantify local correlations amongst points at distance \(\sim 1/\sqrt{N}\), compared to global distances of the order of unity on the (unit) disc. Let us move to the 2D static Coulomb gas (Cou). For a finite number \(N\) of (charged) points it is given by the equilibrium distribution at inverse temperature \(\beta=(k_{B}T)^{-1}\), subject to the logarithmic long-range interaction and a confining potential \(V(z)\). The latter is chosen to be Gaussian here for simplicity, \(V(z)=|z|^{2}\), \[{\mathcal{P}}_{{\rm Cou},\beta}(z_{1},\ldots,z_{N}) = \frac{1}{{\mathcal{Z}}_{N,\beta}}\exp\left[\beta\sum_{j,k=1;j<k}^{N}\log|z_{k}-z_{j}|-\sum_{j=1}^{N}|z_{j}|^{2}\right], \tag{2.3}\] where \({\mathcal{Z}}_{N,\beta}\) is the normalising partition function. The \(N\) points are represented by complex coordinates \(z_{j}\in{\mathbb{C}}\), \(j=1,\ldots,N\), with the usual identification \({\mathbb{C}}\sim{\mathbb{R}}^{2}\). Compared to standard conventions, where \(\beta\) multiplies the entire Hamiltonian including the confining potential, we have absorbed \(\beta\) in front of the potential by rescaling the coordinates \(\beta V(z)\to V(z)\).
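As a quick illustration of the Poisson law (2.1), which is the \(\beta\to 0\) reference point of the Coulomb gas, the following minimal R sketch (not part of the original analysis; edge effects are ignored and the sample size is arbitrary) samples independent points on a disc and compares the rescaled NN spacings with the limiting density:

# check of (2.1): independent points on the unit disc, NN spacings rescaled to unit mean
set.seed(1)
N <- 2000
phi <- runif(N, 0, 2 * pi)
r <- sqrt(runif(N))                                  # uniform on the unit disc
x <- r * cos(phi); y <- r * sin(phi)
d <- as.matrix(dist(cbind(x, y))); diag(d) <- Inf
s <- apply(d, 1, min)
s <- s / mean(s)                                     # first moment normalised to one
hist(s, breaks = 40, freq = FALSE)
curve(pi / 2 * x * exp(-pi * x^2 / 4), add = TRUE)   # limiting NN density (2.1)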
This rescaling is made in order to allow us to take the limit \(\beta\to 0\) to reach Poisson statistics, while maintaining a confining potential. Furthermore, the charge is set to unity (or absorbed into \(\beta\)). Such static Coulomb gases have been much studied in the recent mathematical literature, and we refer to [34] for a recent review. After a further rescaling of the potential \(V(z)\to NV(z)\), in the large-\(N\) limit the mean density \(\rho(z)\) of points condenses on the so-called droplet \(S\). It is given by the Laplacian of the potential, \(\rho(z)=\frac{2}{\beta}\partial_{z}\partial_{\bar{z}}V(z)\), Frostman's equilibrium measure. For general rotationally invariant potentials \(V=V(|z|)\) the support \(S\) is given by a disc of fixed radius, which can be rescaled to the unit disc. For the local correlations among points at distance of order \(1/\sqrt{N}\) very little is known for fixed \(\beta>0\), apart from Poisson statistics at \(\beta=0\) (that extends to \(\beta\sim 1/N\), cf. [35]) and the integrable case \(\beta=2\), when the point process (2.3) becomes determinantal, cf. [27]. Inspired by the Wigner surmise for the 1D Dyson gas, and its generalisation to general \(\beta\) based on a \(2\times 2\) \(\beta\)-ensemble [36], a surmise (sur) for the NN spacing distribution was derived from complex normal \(2\times 2\) random matrices in [26]. Footnote 2: Notice a typo in the normalisation constant in [26]: \(\alpha^{\beta}\) there should be replaced by \(\alpha^{1+\beta/2}\) as here. \[p^{(\text{NN})}_{\text{sur},\beta}(s)=\frac{2\alpha^{1+\beta/2}}{\Gamma[1+\beta/2]}\,s^{1+\beta}\exp[-\alpha s^{2}]\sim s^{1+\beta}, \tag{2.4}\] where \(\alpha=\Gamma[(3+\beta)/2]^{2}/\Gamma[1+\beta/2]^{2}\). Its behaviour at small values \(s\to 0\) is as expected heuristically from (2.3), with one power from the radial measure and a power \(\beta\) from the Vandermonde determinant. Unfortunately, it is well known [25] from the integrable case at \(\beta=2\) in 2D, cf. (2.9) below, that here \(N=2\) is not a good approximation to the large-\(N\) limit. However, in the limit \(\beta\to 0\) eq. (2.4) exactly reproduces the NN spacing of the Poisson distribution (2.1). This characteristic is opposite to the Wigner surmise in 1D, which becomes more accurate for increasing values of \(\beta\), rather than for \(\beta\to 0\). In order to improve the approximation of the surmise (2.4) in 2D to larger values of \(\beta\) (and with exact results for \(N>2\) being unavailable), an effective \(\beta_{\text{eff}}\) was introduced in [26], by fitting a third-order polynomial in \(\beta\) to the numerically determined spacing of the 2D Coulomb gas in the range of \(\beta\in[0,3]\): \[\beta_{\text{eff}}(\beta)=2.108\beta-0.190\beta^{2}+0.030\beta^{3}. \tag{2.5}\] Consequently in \(p^{(\text{NN})}_{\text{sur},\beta_{\text{eff}}}(s)\) the normalisation constant has to change too, \(\alpha(\beta)\to\alpha(\beta_{\text{eff}})=\alpha_{\text{eff}}\), to ensure a normalised spacing and first moment equal to unity. This leads to a reasonable approximation up to \(\beta\approx 0.5\), with a standard deviation of up to \(\sigma=2.6\cdot 10^{-2}\). See Appendix B for plots in this range, and Fig. 2 right for a comparison at \(\beta=2\), where the surmise clearly fails. In [26] a more detailed comparison is presented, including higher \(\beta\)-values and their Kolmogorov-Smirnov distances.
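For illustration, the surmise (2.4) with the effective \(\beta\) of (2.5) is straightforward to evaluate; the following minimal R sketch (not part of the original analysis) can be used, e.g., to reproduce curves as in Fig. 2 left:

# surmised NN spacing density (2.4) with the effective beta of (2.5);
# alpha is fixed by requiring the first moment to equal one
beta_eff <- function(beta) 2.108 * beta - 0.190 * beta^2 + 0.030 * beta^3
p_sur <- function(s, beta) {
  be <- beta_eff(beta)
  a <- (gamma((3 + be) / 2) / gamma(1 + be / 2))^2
  2 * a^(1 + be / 2) / gamma(1 + be / 2) * s^(1 + be) * exp(-a * s^2)
}
curve(p_sur(x, beta = 0.5), from = 0, to = 3)        # example at beta = 0.5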
Obviously, the fit using an effective \(\beta_{\text{eff}}\) spoils the heuristically expected proportionality at very small argument \(\sim s^{1+\beta_{\text{eff}}}\). However, the limit \(\beta\to 0\) is still reproduced exactly. Being based on \(2\times 2\) matrices, there is no approximate prediction for the NNN spacing possible.

Figure 2. **Left**: The approximate, surmised NN spacing distribution \(p^{(\text{NN})}_{\text{sur},\beta_{\text{eff}}}(s)\) from (2.4) and (2.5), varying from \(\beta=0\) (red) to \(\beta=1\) (violet) in steps of \(0.1\). The maximum increases and moves from left to right. **Right**: At \(\beta=2\) the surmise (full line) is no longer close to the exact answer from the Ginibre ensemble (2.9), truncated at \(N=20\) with rescaled, normalised first moment (dashed line).

In our comparison to data in the next section we will use the cumulative distribution function (CDF) of (2.4), in order to avoid any dependence on the choice of histograms. It is given by \[E_{\mathrm{sur},\beta_{\mathrm{eff}}}^{(\mathrm{NN})}(s) = 1-\frac{\Gamma\left(1+\frac{\beta_{\mathrm{eff}}}{2},\alpha_{\mathrm{eff}}s^{2}\right)}{\Gamma\left(1+\frac{\beta_{\mathrm{eff}}}{2}\right)}, \tag{2.6}\] where \(\Gamma(n+1,x)=\int_{x}^{\infty}t^{n}e^{-t}dt\) is the upper incomplete Gamma function. For completeness we give the analytical result for the NN [25] and NNN [38] spacing distribution for \(\beta=2\), which follows from the Ginibre ensemble [37]. Here, the 2D Coulomb gas (2.3) has a representation in terms of complex eigenvalues of complex non-Hermitian random matrices with Gaussian distribution. In this case, the logarithmic interaction term can be written in terms of the modulus square of the Vandermonde determinant, \[\Delta_{N}(z_{1},\ldots,z_{N})=\det[z_{i}^{j-1}]_{i,j=1}^{N}=\prod_{k>l}^{N}(z_{k}-z_{l})\, \tag{2.7}\] of the \(N\) eigenvalues \(z_{j}\). Hence the point process is determinantal, and all complex eigenvalue correlation functions are explicitly known at finite \(N\). The spacing distributions can be derived from the limiting CDF or gap probability \(E_{\mathrm{Gin}}(s)\), to find an eigenvalue at the origin and the closest non-zero complex eigenvalue at radial distance \(s\), \[E_{\mathrm{Gin}}^{(\mathrm{NN})}(s)=\prod_{j=1}^{\infty}\frac{\Gamma(1+j,s^{2})}{j!}. \tag{2.8}\] For finite \(N\) the product extends only to \(N-1\). The limiting spacing distributions at infinite matrix dimension follow from differentiation, compare [25] for NN and [38] for NNN: \[p_{\mathrm{Gin}}^{(\mathrm{NN})}(s) = E_{\mathrm{Gin}}^{(\mathrm{NN})}(s)\sum_{j=1}^{\infty}\frac{2s^{2j+1}\mathrm{e}^{-s^{2}}}{\Gamma(1+j,s^{2})}\sim s^{3}, \tag{2.9}\] \[p_{\mathrm{Gin}}^{(\mathrm{NNN})}(s) = E_{\mathrm{Gin}}^{(\mathrm{NN})}(s)\sum_{j,k=1;k\neq j}^{\infty}\frac{\gamma(1+j,s^{2})}{\Gamma(1+j,s^{2})}\frac{2s^{2k+1}\mathrm{e}^{-s^{2}}}{\Gamma(1+k,s^{2})}\sim s^{5}, \tag{2.10}\] where \(\gamma(1+k,s^{2})=\int_{0}^{s^{2}}t^{k}\mathrm{e}^{-t}\mathrm{d}t\) is the lower incomplete Gamma function, and we give again the behaviour at \(s\to 0\). The products converge very rapidly and both spacing distributions are normalised to unity. For simplicity, we have given the expressions above where the first moment \(\bar{s}_{1}\) of the NN spacing is not yet normalised to unity. It can only be determined numerically, for a product truncated at a sufficiently high value of, say, \(N\approx 20\), or larger.
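For illustration, the truncated product and the first moment can be evaluated with a few lines of R (not part of the original analysis; the truncation order and the integration grid are arbitrary choices):

# truncated gap probability (2.8) and NN density (2.9), plus the first moment
p_gin_nn <- function(s, jmax = 50) {
  sapply(s, function(si) {
    j <- 1:jmax
    logQ <- pgamma(si^2, 1 + j, lower.tail = FALSE, log.p = TRUE)  # log[Gamma(1+j, s^2)/j!]
    E <- exp(sum(logQ))                                            # gap probability (2.8)
    E * sum(exp(log(2) + (2 * j + 1) * log(si) - si^2 - lgamma(1 + j) - logQ))
  })
}
s_grid <- seq(1e-4, 4, length.out = 2000)
ds <- s_grid[2] - s_grid[1]
s1_bar <- sum(s_grid * p_gin_nn(s_grid)) * ds                      # numerical first moment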
The NN spacing with first moment equal to unity is then obtained by rescaling \(\hat{p}_{\mathrm{Gin}}^{(\mathrm{NN})}(y)=\bar{s}_{1}p_{\mathrm{Gin}}^{(\mathrm{NN})}(\bar{s}_{1}y)\), and the correspondingly rescaled NNN spacing reads \(\hat{p}_{\mathrm{Gin}}^{(\mathrm{NNN})}(y)=\bar{s}_{1}p_{\mathrm{Gin}}^{(\mathrm{NNN})}(\bar{s}_{1}y)\). The spacing distributions (2.9) and (2.10) hold throughout the bulk of the spectrum inside the supporting unit disc and are known to be highly universal. That is, they hold for a much larger class of confining potentials \(V(|z|)\) than Gaussian, in general random normal matrix ensembles [29]. In the Ginibre ensemble at \(\beta=2\), the complex eigenvalue correlations at the edge of the droplet have also been investigated. They are also universal and agree with the correlations at an inner edge, that is a droplet with a hole, which can be studied in the induced Ginibre ensemble [39]. In our data there is no outer edge in Fig. 1; it is simply given by the area of observation, and there are birds nesting outside the area containing dots. However, we do observe inner edges, as the cities of Bielefeld, Halle or Werther show up as holes in Fig. 1. We have not been able to address such edge correlations, mainly due to lack of statistics, but also because it is not so clear where to draw the boundaries of the inner edges. In our analysis we have thus treated all points as bulk points. In the next section we will use both the approximate NN spacing \(p_{\mathrm{sur},\beta_{\mathrm{eff}}}^{(\mathrm{NN})}(s)\) from the surmise [26] and the numerically determined NN and NNN spacing from [3] as functions of \(\beta\), in order to quantify the repulsion within one species, and between different species.

## 3. Interaction of species

### Interaction within one species: Common Buzzard, Goshawk and Eagle Owl

Let us describe the procedure initiated in [9] for Common Buzzards, to quantify the supposedly repulsive interaction between these birds of prey, by fitting the NN and NNN spacing distributions between occupied nests. Each year is observed for each species and treated as an ensemble. In principle, for each year a value of \(\beta\) can be determined for each species, as it is done in Fig. 3 left column for the NN spacing distribution in the year 2020 as an example. As can be seen, the quality of the fit decreases rapidly as the amount of available data decreases, in particular for the Eagle Owls. We show both the fit using the Coulomb gas (full line), with the spacings determined numerically in [3] in discrete steps of size \(0.1\) for \(\beta\), and the explicit formula (2.4) for the surmise (dashed line), with \(\beta\) varying continuously. The fits are done using the CDF, compare (2.6), to avoid any dependence on the choice of binning. To guide the eye we nevertheless show the spacing distribution compared to histograms of the data. The fitted \(\beta\)-values vary over a wide range. For example, in the year 2020 in Fig. 3 left column, we find the strongest repulsion among Goshawks (\(\beta=3\)), followed by the Common Buzzard (\(\beta=1.1\)) down to Eagle Owls (F) with \(\beta=0.7\). Apart from the latter plot with very low statistics, the surmise and Coulomb gas fit agree well. Notice that the continuous, respectively discrete, fit in \(\beta\) adds to the discrepancy between the two, see Appendix B for a comparison at equal values. We find that all fitted values are far from 2D Poisson statistics at \(\beta=0\).
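The binning-free CDF fit just described could be set up as in the following minimal R sketch (not the original code), written here for the surmise CDF (2.6) and assuming the unfolded spacings of one ensemble are stored in a vector s; the numerically tabulated Coulomb-gas curves of [3] would be compared in the same way:

# Kolmogorov-Smirnov-type distance between the empirical CDF of the spacings s and
# the surmise CDF (2.6), minimised over beta
E_sur <- function(s, beta) {
  be <- 2.108 * beta - 0.190 * beta^2 + 0.030 * beta^3
  a <- (gamma((3 + be) / 2) / gamma(1 + be / 2))^2
  pgamma(a * s^2, 1 + be / 2)     # equals 1 - Gamma(1 + be/2, a*s^2)/Gamma(1 + be/2)
}
ks_dist <- function(beta, s) max(abs(ecdf(s)(sort(s)) - E_sur(sort(s), beta)))
beta_hat <- optimize(ks_dist, interval = c(0, 3), s = s)$minimum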
Let us emphasise that the fitted parameter \(\beta\) does not have a biological meaning, but that it is the relative strength that allows us to compare between different species of birds, or their interaction in the next Subsection 3.2. The population of pairs of Common Buzzards and Goshawks per year is shown as red crosses in Fig. 4 bottom left, respectively right, and for Eagle Owls in Fig. 5 bottom in the forest (F, left) and plains (P, right). Between Common Buzzards and Goshawks there is a factor of 10 in abundance, and another factor of 2 between Goshawks and Eagle Owls, with the latter being completely absent in the plains up to the year 2010. To remedy the resulting low statistics, we have introduced a time moving average, averaging over 10 consecutive ensembles (years) for all three species. The correspondingly averaged population is shown as a black line in the population plots in Figs. 4 and 5, respectively. For comparison, because of the better statistics for the Common Buzzards, in [9] we chose a time moving average of 5 years, which gave a better resolution of the time dependence of the fitted \(\beta\) as a measure of repulsion. In [9] we also presented the fits for all individual years (compare Fig. 4 left in [9]). This result is very noisy and makes it difficult to see an overall trend in time, which is why we do not present such plots here. The fitted \(\beta\)-value for such time averaged NN spacing distributions for each species is shown in Fig. 3 middle column, choosing the average over the period 2011-2020 as an example, which includes the single year 2020 from the left column. It is striking to see that in all three cases the ensemble average leads to a much lower \(\beta\)-value. This is especially true for the Goshawks, which now show a repulsion comparable to the Common Buzzards. There is a clear trend that the fitted \(\beta\) value goes down for the Goshawks over the years, see Fig. 4. Notice that their population is peaked in 2011 and 2012. For the Eagle Owls (F) we obtain a fitted value \(\beta=0.1\), being very close to 2D Poisson in this window of time average. Also here the averaged fitted \(\beta\) goes down over time, see Fig. 5 top left. Finally, if we abandon any time resolution and make an ensemble average over all available years, we obtain closer values for \(\beta\) for all three species, as shown in Fig. 3 right column. The strongest repulsion is again amongst Goshawks (\(\beta=0.8\)), followed by Common Buzzards (\(\beta=0.6\)), and almost equally among Eagle Owls (F) with \(\beta=0.5\). The particularly high \(\beta\) for intraspecific repulsion in Goshawk makes a lot of ecological sense, as the Goshawk is one of the most territorial species in Europe [40]. The Goshawk preys on larger prey and needs exclusive access to hunting areas [41]. Therefore, intruding individuals are always met with high aggression and are regularly killed by the territory owners [42, 40, 43]. All values are significantly above 2D Poisson statistics (\(\beta=0\)). Because of the strong growth in population for Common Buzzards and Eagle Owls (F), see Figs. 4 and 5 respectively, the later years are weighted much higher here in the average over all years, because they contribute more spacings per year.

Figure 3. Comparison of the fitted \(\beta\)-values from the NN spacing distribution from 2D Coulomb in steps of 0.1 (full line), respectively from the surmise (2.4) and (2.5) with continuous \(\beta\) (dashed line), for Common Buzzards (top row), Goshawks (middle row), and Eagle Owls (F, bottom row). We present examples for fits for a single year 2011 (left column), an average over 10 consecutive years from 2011-2020 (middle column), and over all years (right column). Shown are histograms and spacing distributions, whereas the fits were obtained from the cumulative distributions.
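The time moving average used here simply pools the unfolded spacings of 10 consecutive years before fitting; a minimal R sketch (with the assumed named list spacings_by_year holding one vector of unfolded spacings per year) could read:

# 10-year moving window of pooled spacings, labelled by the middle year (e.g. 2004.5)
years <- 2000:2020
window <- 10
pooled <- lapply(seq_len(length(years) - window + 1), function(i) {
  unlist(spacings_by_year[as.character(years[i:(i + window - 1)])])
})
names(pooled) <- years[seq_along(pooled)] + (window - 1) / 2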
In the following we will discuss our findings for the \(\beta\)-values fitted to the time moving average. Two general observations can be made. First, the values obtained using the surmise agree quite well with the NN spacings from the 2D Coulomb gas, apart from the years with low statistics for the Eagle Owls (F). In particular, they follow the same trend as seen from the NN Coulomb gas fits and thus can be used as a very simple approximate, but analytical tool.

Figure 4. **Top row:** Fit of \(\beta\) for the NN spacing for Common Buzzards (left) and Goshawks (right) from the Coulomb gas (blue crosses) and surmise (circles), as well as for the NNN spacing distribution (red crosses). We use a time moving average of 10 years, where the middle year is indicated on the x-axis (e.g. 2004+1/2 for the period 2000-2009). **Bottom row:** Population of birds per year (red crosses) for Common Buzzards (left) and Goshawks (right), over the entire period of time 2000-2020. The resulting average populations using a time moving average of 10 years are shown in comparison (black line), where again the middle year is chosen as a label, as in the top plots. For a better comparison these points are connected by the line of the moving average.

Second, the \(\beta\)-values obtained from NN and NNN fits do not agree in general, with the NNN value sometimes lying systematically below (Common Buzzards, Eagle Owls) or above (Goshawks) the NN value. This discrepancy may be used to compare the observed interaction range to that of the Coulomb interaction. The same observation was already made in [9] and motivated us to introduce and study an interaction with shorter range, the 2D Yukawa interaction, see Section 4.2. It turned out, however, that also with such a two-parameter family (with \(\beta\) and \(\gamma\)) the phenomenon of obtaining two different values for \(\beta\) from NN and NNN remains, see the discussion there. Therefore, we kept the simpler 2D Coulomb gas model as a measure for repulsion. In Figs. 4 and 5 we do not give error bars for the fitted values of \(\beta\). In our previous paper [9] we made an attempt to estimate the error by fitting \(\beta\) to approximately the same number of points from a Coulomb gas simulation, that is \(N=200\) for the Common Buzzards. This estimate (which is not related to the data of occupied nests) seemed to overestimate the fluctuations, e.g. when comparing to a linear fit for the trend in \(\beta\). Because in the ensembles of Goshawks and Eagle Owls the number of nests per year is smaller by a factor of 10, we refrain from giving such rough estimates for the errors. For the Common Buzzards the following picture emerges from Fig. 4 left, which was already found in [9]. The NN values vary in range between \(\beta=0.3\) and 0.8 over the observed period. Although they still fluctuate considerably for this time moving average, an increasing trend can be identified. In contrast, the NNN values remain close to \(\beta=0\) for half of the observed period, and then jump approximately to the NN values from the mid-year 2010 onwards.
In comparison, the population growth is approximately linear from around 100 pairs on average to above 200 (10 year average given by the black curve in Fig. 4 bottom left). Apparently below a certain population threshold the interaction range is rather limited to NN, and then increases in range to be Coulomb-like up to NNN. Notice that, because of unfolding the data, the mean spacing is the same in all years. Thus the trivial effect of having a smaller spacing for a larger density is removed here. For the Goshawks we observe in Fig. 4 right that there is a clear trend for the fitted \(\beta\)-value to go down from approximately 1.2 to 0.4 for NN and from 1.6 to 0.5 for NNN. In contrast to the Common Buzzards, the NNN values are systematically above the NN values, apart from the last middle year. This means that the interaction range is longer than 2D Coulomb, and the repulsion decreases over time. The population development is also different from the Common Buzzards when comparing the bottom rows in Fig. 4. Notice the difference in abundance of a factor of 5-10 between the two species. Although the Goshawk population fluctuates between 12 and 29 pairs, the average only slightly goes up from 17.5 to 22.5, stabilises and then slightly goes down again. Thus a different factor than the Goshawk population seems to be at work here. Whether it lies in the interaction with the two other species will be answered (negatively) in the next Subsection 3.2. Finally let us discuss the findings for the Eagle Owls. Because of the lowest statistics, see Fig. 5, about a factor of 2 in abundance below the population of Goshawks, this is very difficult, especially for the much smaller group nesting in the plains (P). For those nesting in the forest (F), the \(\beta\) value from NN clearly decreases from large values above 1.5 down to 0.2-0.1, which is almost comparable with 2D Poisson at \(\beta=0\). In contrast, the NNN values remain consistent with \(\beta=0\) over the entire period. A comparison with the population development from 5 pairs on average to 12 shows an approximately linear increase on average, see Fig. 5 bottom left. The effect is thus opposite to the Common Buzzards: the Eagle Owls (F) and Goshawks show a decreasing repulsion, with short-range interaction, despite an increase or stabilisation in population, respectively. It should be noted that for very few spacings of the order of unity, the distribution is close to a delta peak, thus leading to very high values for the fitted \(\beta\). For a better resolution the first NN value at \(\beta=3\) is suppressed in Fig. 5 top left. The time dependence of the strength of the repulsion also reflects the species' biology as well as population trends in the study area. In Common Buzzard, we see an increase of \(\beta\) over time, most likely due to the increase in the population density until carrying capacity might have been approached in the last five years. The strength of intraspecific competition is expected to increase with increasing population density. With regard to the Goshawk, a decreasing \(\beta\) over time is mirrored by a decrease in population density over the last five years, most likely due to displacement by Eagle Owls [13]. Why \(\beta\) shows a decrease for Eagle Owls, the population of which has increased very rapidly over the last 20 years, is difficult to explain. It could be that the population is still not approaching carrying capacity and hence progressively smaller repulsions measured through NN spacings are observed.
One remark regarding 2D Poisson is in order here. In Subsection 4.1 we ask the question whether the scattered patches of forest (see Fig. 1), where all birds prefer to nest, introduce an effectively lower dimension than 2. The effective reduction we find there, by generating a Poisson point process on the forest patches only, is from \(D=2\) to approximately \(D=1.66\), see Subsection 4.1 for more details. We could therefore conclude that, even when finding \(\beta\approx 0\) for the fit, this may not yet indicate a complete absence of repulsion.

Figure 5. The same plots as in Fig. 4 for Eagle Owls (F) left column, and for the Eagle Owls (P) right column. **Top row:** corresponding \(\beta\)-fits for time moving averages (F) left, and a fit over all years (P) right. Values above \(\beta=2\) are not shown. **Bottom row:** corresponding populations of Eagle Owls. Notice the lack of nests in the plains (right) until 2010.

### Interaction among two species

In this subsection we quantify the repulsion between all combinations of two different species of birds, while keeping the two groups of Eagle Owls (F) and (P) separate, see Figure 1 left. Following the Coulomb gas analogy, it would perhaps be natural to associate a different charge to each species, according to their observed (average) repulsion strength. However, such a multi-component Coulomb gas would depend on the ratio of the different charges, which may change over the years. If one charge is very abundant, we may consider the others to be screened, but this is also not always the case. The multi-parameter fit is therefore not easy to do, and we stick to the simple one-parameter fits for each pair of species instead, see Figure 6. We thus fit one \(\beta\)-value to the spacings between species \(A\) and \(B\). In order to avoid an overcounting of spacings, we always go through the nests of the less abundant of the two species and find, for each, its NN among the nests of the more abundant species in a given year, defining an ensemble. Because in all years the number of pairs of Common Buzzards is larger than the number of pairs of Goshawks, which in turn is larger than the number of pairs of Eagle Owls ((F) or (P)), the ordering is clear. For example, for the Common Buzzard-Goshawk interaction we go through all Goshawk nests in a given year and find for each its NN Common Buzzard nest. Our statistics are thus limited by the population of Goshawks or Eagle Owls, respectively. For that reason, we choose to make an average over all ensembles of 21 years in this section. Furthermore, we restrict ourselves to NN spacings only. The unfolding of the data has been made using all occupied nests from all three species per year, to get an approximate mean global density. This certainly represents a simplification, but does not introduce any extra bias.

Figure 6. 2D Coulomb gas and surmise fits for the \(\beta\)-values of the NN spacing distributions (full respectively dashed line) between all pairs of species for all years, where for Eagle Owls we distinguish (F) and (P) in the top left row and bottom left row, respectively. When the best fit to the 2D Coulomb gas gives \(\beta=0\) (bottom middle and right), we fit instead to the Poisson distribution with \(D<2\) from Eq. (4.1) (dashed line), with the resulting value for \(D\) given in the inset.

The following picture emerges from Fig. 6. All fitted values are closer to \(\beta=0\), with the largest repulsion observed between Common Buzzards and Eagle Owl (F) at \(\beta=0.15\).
Thus the 2D Coulomb gas and surmise values are very close. The NN spacing between Goshawks and Eagle Owls (P) is not well fitted by Poisson at \(D\leq 2\), nor by a 2D Coulomb gas at \(\beta>0\). Because we average over all years, we have to compare with the values in Fig. 3 right column, for the repulsion within each species for all years: \(\beta=0.6\) for Common Buzzards, \(\beta=0.8\) for Goshawks and \(\beta=0.5\) for Eagle Owls (F). Clearly, the repulsion measured in \(\beta\) is much weaker between different species than within one species. The strongest interspecies repulsion measured in this way is between Common Buzzards and Eagle Owls (F) with \(\beta=0.15\), followed by that between Goshawks and Eagle Owls (F), and Common Buzzards and Eagle Owls (P) at \(\beta=0.1\) each. Do these low values of \(\beta\) close to zero indicate that there is almost no repulsion between different species, as in a random Poisson point process in 2D? As already mentioned in the previous subsection, we investigate in Subsection 4.1 whether the random point process on the forest patches reduces the dimension, finding \(D=1.66\) as the effective dimension. This seems to indicate that even low \(\beta\)-values close to zero represent a certain repulsion. For the interaction between Common Buzzards and Goshawks we (perhaps coincidentally) find that the best fit is obtained by the Poisson distribution (4.1) close to this effective dimension at \(D=1.64\), and in that sense there is no repulsion between the two according to this measure either. The plot with the lowest statistics, between Goshawks and Eagle Owls (P), could be interpreted in the same way, aside from the low quality of the fit. The finding of a small repulsion between the three species is in line with empirical findings that intraspecific competition is commonly stronger than interspecific competition in avian predators [11, 44]. The effect of Eagle Owls is rather different and has been shown to be clearly negative for both Common Buzzard and Goshawk [15, 13]. These interactions are, however, very dynamic at a very small scale [15, 13], and hence the simple models employed here are perhaps at their limits with regard to detecting these interactions.

## 4. Methodology - Variations of the random point process

### Poisson point process in varying dimension \(D\)

In this section we investigate whether the 2D Poisson point process used so far indeed describes the situation of nests placed as independent random variables in the plane. It has been observed that all three species of birds of prey invariably breed in forest patches. A look at Fig. 1 right thus poses the question of whether the forest represents a lower dimensional, fractal domain in the full two-dimensional area. Notice that we completely ignore the elevation of the terrain, by treating it as two-dimensional. It varies from about 70 m to 300 m in height above sea level, compared to the dimension of roughly \(12\times 25\) km extent of the monitored area. We will try to answer this question by generating a Poisson point process solely on the forested area, determining its NN spacing distribution numerically, and comparing it to the analytic result for the NN spacing distribution of a Poisson point process for general dimension \(D\), by fitting \(D\) as a free parameter. This distribution is well known for integer dimension \(D\), see e.g.
[4], and we simply analytically continue to real \(D>0\) here: \[p_{\rm Pois,D}^{(\rm NN)}(s)=D\left(\frac{\Gamma(1/D)}{D}\right)^{D}s^{D-1}\exp\left[-\left(\frac{\Gamma(1/D)}{D}\right)^{D}s^{D}\right]\sim s^{D-1}. \tag{4.1}\] It is normalised with first moment equal to unity. The repulsion \(\sim s^{D-1}\) originates entirely from the (fractional) area measure. The Poisson point process on the forest is generated as follows. A certain number \(N\) of points is distributed independently in the monitored area. If they fall onto a green forest patch in Fig. 1 right, they are accepted, else they are rejected. Note that we accept all areas with trees, i.e., not just the main forest depicted as a green band in Figure 1. In the limit \(N\gg 1\) with a fixed area, we would recover a collection of two-dimensional patches, provided the number of points per patch is large enough. In order to capture roughly the same length scale as the distances between occupied nests, we stop after having \(N=200\) accepted points, which is about the number of Common Buzzard pairs in the entire area. The fit is performed as a Kolmogorov-Smirnov fit to the cumulative distribution of eq. (4.1), see Figure 7 for the result. We find that the Poisson point process generated as described indeed leads to a slight reduction of dimension from \(D=2\) to an effective dimension of \(D=1.66\). Consequently, we may conclude that a fit of data to a Coulomb gas with \(\beta=0\), or equivalently the Poisson point process with \(D=2\), still reflects a small repulsion, compared to the process on the forested area. In turn, this makes the repulsion found for small \(\beta\) more pronounced. In Subsection 3.2, where the repulsion between different species is quantified by fitting to the NN spacing of the 2D Coulomb gas with \(\beta\geq 0\), we encounter the situation that the fit to \(\beta=0\) is apparently still not satisfactory, because the maximum of the NN data is further to the left. In such cases we rather fit the dimension \(D\) of the Poisson NN distribution (4.1), which has a maximum further to the left, see Fig. 6 and the discussion there.

Figure 7. **Left**: NN spacing distribution of the Poisson point process (4.1) for various dimensions from \(D=2\) (red) down to \(D=1\) (green), in steps of \(0.1\). The maximum is moving from right to left when going from \(D=2\) to \(D=1\). **Right**: Fit of the dimension \(D\) in (4.1) (solid curve) to the random point process generated on the forested area as described in the main text (histograms). From the fit we obtain an effective dimension of \(D=1.66\). For comparison, the NN spacing distribution in \(D=2\) is also shown (dashed curve).

### Varying correlation length: 2D Yukawa interaction

In the previous Section 3 we have seen that fitting \(\beta\) independently to the NN and NNN spacing distribution may lead to different values. This indicates that the repulsion between the nests cannot always be described by a 2D Coulomb interaction at a single inverse temperature \(\beta\). For instance, when finding \(\beta_{\rm NN}>\beta_{\rm NNN}\) we expect an interaction weaker than Coulomb, or of shorter range. Likewise, for \(\beta_{\rm NN}<\beta_{\rm NNN}\) we expect an interaction stronger than Coulomb, or of longer range on that scale. This motivates us to study a Coulomb-like interaction in this subsection, where the interaction range can be varied: the Yukawa potential \(V_{\rm Yukawa}\) in \(D=2\) dimensions.
We note in passing that the Yukawa interaction arises in particle physics from the scattering of _distinguishable_ Fermions (points) in the non-relativistic limit, see [47] for a standard work on quantum field theory, which we follow here for the derivation. The Yukawa potential can be defined in \(D\) (integer) dimensions by taking the inverse Fourier transformation of the propagator with mass \(m\) and coupling constant \(g\): Footnote 3: In most textbooks only \(D=3\) is considered, including [47]. \[V_{\rm Yukawa}^{(D)}(\vec{x})=\int\frac{d^{D}q}{(2\pi)^{D}}\frac{-g^{2}}{|\vec{q}|^{2}+m^{2}}e^{i\vec{q}\cdot\vec{x}},\quad\vec{x}\in\mathbb{R}^{D}. \tag{4.2}\] The integral can be performed using polar coordinates, and in \(D=2\) we obtain \[V_{\rm Yukawa}^{(2)}(\vec{x}) = -\frac{g^{2}}{4\pi^{2}}\int_{0}^{\infty}dq\,q\int_{0}^{2\pi}d\theta\frac{e^{iqr\cos\theta}}{q^{2}+m^{2}} \tag{4.3}\] \[= -\frac{g^{2}}{2\pi}\int_{0}^{\infty}dq\,q\frac{J_{0}\left(qr\right)}{q^{2}+m^{2}}\] \[= -\frac{g^{2}}{2\pi}K_{0}\left(mr\right).\] It only depends on the radial distance \(r=|\vec{x}|\). Here, we used the standard integral representation of the Bessel function of the first kind \(J_{0}(y)\), and \(K_{0}(y)\) denotes the modified Bessel function of the second kind. The last equality follows from [48, 6.532]. It has a logarithmic singularity at the origin, \[K_{0}\left(mr\right)\sim-\log(mr/2)-C,\quad\mbox{for }mr\to 0, \tag{4.4}\] where \(C\approx 0.577\) is the Euler-Mascheroni constant, and vanishes exponentially for large distance as \[K_{0}\left(mr\right)\sim\sqrt{\frac{\pi}{2mr}}\ e^{-mr},\quad\mbox{for }mr\to\infty. \tag{4.5}\] If we define a length scale by \(\gamma=1/m\), we obtain a one-parameter deformation of the 2D Coulomb interaction. Namely, for fixed distance \(r\) and large scale \(\gamma\gg 1\) (\(m\ll 1\)) we are back to the logarithmic repulsion (shifted by a constant). At fixed \(\gamma\), however, the interaction range of the 2D Yukawa potential is much shorter and decays exponentially, while still being logarithmic at short distances. In order to match with the point process (2.3) of the 2D Coulomb gas at a given inverse temperature \(\beta\), we use the following shifted version of (4.3): \[V_{\rm Yukawa}^{(2)}(r) = -\beta\left(K_{0}\left(r\gamma^{-1}\right)-\log(2\gamma)+C\right)\, \tag{4.6}\] identifying \(\frac{g^{2}}{2\pi}=\beta\). A comparison between the two potentials at fixed \(\beta=1\) is shown in Figure 8 left. The NN (NNN) spacing distributions shown in Fig. 8 middle (right) have to be determined numerically again, as it was done for the 2D Coulomb interaction in [9]. That is, we use a Metropolis-Hastings algorithm [50, 49] with (4.6) as the potential. The points are initialised independently, and then perturbed iteratively. Each perturbation is always accepted if it leads to lower energy, and accepted with probability \(e^{-(V_{\rm after}-V_{\rm before})}\) if it leads to higher energy; otherwise it is rejected. (Note that by including \(\beta\) in Equation (4.6), we have made it dimensionless.) After a number of iterations, the points are considered to be a sample of the potential. On average, each point is perturbed 100 times. This is considered enough, as increasing the number of iterations does not change the NN spacing distribution. Unfolding is necessary when computing the NN spacing distribution of the Yukawa potential for \(\gamma<\infty\), as the global spectrum is non-uniform.
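A minimal R sketch of such a Metropolis sampler is given below; it is not the original code. The pair energy is taken as \(\beta K_{0}(r/\gamma)\) together with the Gaussian trap of (2.3), since constant shifts of the pair potential, as in the shifted form (4.6), cancel in the acceptance ratio; the number of points, the step size and the parameter values are purely illustrative, and the full energy is recomputed at every step (unoptimised).

# Metropolis sketch for the 2D gas with pair energy beta*K0(r/gam) and Gaussian trap
energy <- function(z, beta, gam) {
  r <- as.numeric(dist(cbind(Re(z), Im(z))))
  beta * sum(besselK(r / gam, nu = 0)) + sum(abs(z)^2)
}
N <- 50; beta <- 1; gam <- 1
z <- complex(real = rnorm(N), imaginary = rnorm(N))          # independent initialisation
E <- energy(z, beta, gam)
for (it in 1:(100 * N)) {                                    # ~100 perturbations per point
  i <- sample(N, 1)
  z_new <- z
  z_new[i] <- z[i] + complex(real = rnorm(1, sd = 0.3), imaginary = rnorm(1, sd = 0.3))
  E_new <- energy(z_new, beta, gam)
  if (runif(1) < exp(E - E_new)) { z <- z_new; E <- E_new }   # accept/reject step
}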
While the potential \(V_{\text{Yukawa}}^{(2)}(r)\) differs considerably for the parameter values chosen, the spacing distribution converges rather rapidly to that of the Coulomb potential at \(\gamma\to\infty\), in particular for NN. The reason is that it probes local correlations among the points (which are closer for NN than for NNN), which seem to be rather robust under deformations of the potential. As an aside, such a universality has been proven rigorously in 1D for all fixed values of \(\beta\) for deformed potentials that behave only locally as a logarithm [51]. The fitted value of the scale \(\gamma\) could in principle be translated into an actual correlation length, but the effect of lowering the parameter \(\gamma\) is qualitatively similar to lowering \(\beta\), and only becomes noticeable at relatively small values of \(\gamma\) for the NN. Additionally, in order to compare our data with the spacing distribution we first need to unfold. This makes a translation back into a real correlation distance that may depend on the terrain (local density) rather cumbersome. The two-parameter fit with the Yukawa potential does not reconcile the discrepancy between different \(\beta\)-values obtained for the NN and NNN distributions, and we therefore did not pursue the approach further. In other words, adding a length scale does not allow us to fit both NN and NNN with the same parameter values. Recall that the actual value of \(\beta\) does not have a biological meaning. It rather serves as a relative measure in comparing different species, and the sole purpose of adding a length scale was to compensate for differences in the fitted \(\beta\) for NN and NNN distributions. Fitting both NN and NNN at the same time would not improve this, as we would merely observe an average of the two distributions. Figure 8. **Left:** Comparison of the 2D Coulomb potential (\(\gamma=\infty\), dashed) and the 2D Yukawa potential (4.6) varying from \(\gamma=0.01\) (bottom red full line) to \(\gamma=10\) (top blue full line), see also inset in the right plot for the colour coding. **Middle and right:** numerically determined NN and NNN spacing distributions, respectively, for the same values of \(\gamma\). The maximum increases with \(\gamma\) and moves from left to right towards the rightmost curve, given by the 2D Coulomb potential. Because of the relatively small amount of available data, especially for the Eagle Owls, it is probably not correct to weight the contributions from NN and NNN equally in determining \(\beta\) and \(\gamma\). However, any other choice of weighting them easily becomes arbitrary. ### Independence of points: Reuse of nests One of the key assumptions in deriving the Poisson NN and NNN spacing distributions in (2.1) and (2.2) is the independence of points in this process. In this subsection we investigate whether this assumption is justified in the absence of correlation (repulsion), which we found in some data sets through their agreement with Poisson. The reason for this question is that some of the nests are reused, not only within one species but also between different species, and we will present data about this fact below. Such an analysis is possible because all old and new nests were marked precisely with GPS coordinates, up to an error of about 10 m. The amount of reusage of occupied nests from one year to the next year varies between 30% (Common Buzzards), 45% (Goshawks) and 65% (Eagle Owl) within one species, on average. The reuse of any nest from any previous year is about 10-20% higher. 
The frequent re-use of nest sites over years and even decades has been documented previously for the study area and is typical of avian predators [44]. Habitat heterogeneity in many dimensions is commonly observed in these species [40], and individual preferences for certain habitat features are also common [44], leading to frequent re-use of nests. In other species, the same nest has been in use for centuries [45] and even millennia [46]. What is the consequence for our data analysis? In Section 3 we treated every set of occupied nests per year as an independent ensemble, in order to compare it to a statistical mechanics model, which represents an equilibrium configuration. The above data show that this is a simplification that is only partly justified. The fact that a substantial fraction of nests is reused, both for the repulsion within one species and among two different species, means that not all NN and NNN positions are new. In order to improve our statistics we have then made an ensemble average over 10 consecutive years (in [9] over 5 years). Figure 9. Percentage of nests that have been reused per year by each species. The black crosses indicate the fraction of nests that were reused from the previous year by the same species, i.e. Common Buzzards reusing a Common Buzzard nest in the left plot. The black full line indicates the mean over the entire period of observation. The red circles indicate the fraction of nests that one species reused from all nests occupied by any of the 3 species in any previous year, i.e. a Common Buzzard reusing a nest from a Common Buzzard, Goshawk or Eagle Owl from any previous year in the left plot. The red dashed line again indicates the mean value. Notice that both percentages may sometimes coincide. We have not analysed whether after two or more years the positions become more independent or randomised, but it seems reasonable to assume so. Our situation could be compared to ensemble generation in statistical mechanics. If a new configuration is generated from an old one, e.g. with the Metropolis-Hastings algorithm used to generate the 2D Coulomb and 2D Yukawa ensembles in this paper, one has to monitor the autocorrelation time before a new configuration can be accepted as independent. In the generation of points, we could simply let enough iterations go by, but for the observed data we cannot wait one or more years, until the next set of occupied nests is more independent, without losing much information (and statistics). This is because the repulsion within one or more species can change over time, as we have observed. There are ecological reasons why certain locations of nests are more popular than others, which is why they get reused. In addition, it takes time and effort to build a new nest. Certain nest sites are associated with higher reproductive success, independent of the individuals using these nests [44]. The proximity of high quality hunting habitat, less exposure to predators and parasites, and less disturbance from humans are just some of the key factors [44, 15]. The fact that there is a large number of used nests in the observed area seems to guarantee, at least to some extent, that a kind of "random sampling" can take place between consecutive years. There is an estimated total of 500-600 reusable nests in the area for birds of prey. As a final remark, there seems to be no correlation between the population growth and the percentage of reusage of nests, which is approximately constant in Fig. 9. As we have seen in Fig. 
4 for the Common Buzzard, their population has grown from 50 pairs to over 250, and from a single pair to over 20 for all Eagle Owls in total, see Fig. 5. The Goshawk population has also grown but more slowly, and then declined again. Together with the high number of empty nests available each year this could be argued to be in favour of a certain independence of the locations of nests each year, despite the high fraction of reusage. ## 5. Discussion and Open Questions This study has shown that a simple, one-parameter model from statistical mechanics, without any biological background, has nevertheless explained important features in this three-species guild of avian predators surprisingly well. We have shown that the intraspecific repulsion measured through \(\beta\) clearly deviates from Poisson statistics. The relative ranking in repulsion strength amongst the three species, averaged over all years, has been found compatible with experience from ecology. The correlation between \(\beta\) and the time dependence of the population has been found plausible for at least two of the three species analysed. For the Common Buzzard the steep rise in population from about 50 to 250 pairs has been paralleled by a clear rise in \(\beta\), in particular in the NNN spacing, showing also an increased interaction range. For the Goshawk the population has been more stable around 20 pairs, with a decline after a peak around 10 years ago. Both NN and NNN show a clear decrease in the fitted \(\beta\). In contrast, for the Eagle Owls, with the population increasing from a single pair to over 15, we have seen a steep decrease in \(\beta\) for NN and no repulsion between NNN. This relation is not clear to us and may be due to the limitation of the model, or an artefact due to very low statistics. The change in \(\beta\) over time might hence be a fruitful further topic for investigation. At the same time, the results also clearly indicate some limitations when it comes to interspecific interactions. We have found the repulsion between Eagle Owls in the forest on the one hand, and the Common Buzzard respectively Goshawk on the other hand, to be significantly smaller than the respective intraspecific repulsion, which is plausible. Both interspecies repulsions have values of \(\beta=0.15-0.1\) above Poisson at \(\beta=0\). Taken together with the effect of the terrain, effectively lowering the dimension of the Poisson distribution, this makes the repulsion found more pronounced and significant. For the Eagle Owls in the plains the same order of repulsion was only observed with the Common Buzzard, and no repulsion was seen for the Goshawk, neither among the latter two. In view of the continuing growth of population in Common Buzzards and Eagle Owls, notably populating the plains for the latter, and the population of Goshawks being under pressure, this seems to be a limitation of the model (and present statistics). From the point of view of the statistical mechanics model, we have shown that a simple formula based on a surmise from random matrix theory allows us to quantify and discuss the intra- and inter-species NN repulsion as well, instead of using the numerically generated spacing distributions from a full 2D Coulomb gas, which also include NNN. Methodologically, we have quantified the effect of the terrain. In particular, it is not responsible for the repulsion observed. We have also discussed the 2D Yukawa interaction and spacing ratios as alternative models. 
The extensive reusage of nests, especially among Eagle Owls, has put the assumption of independence of ensembles as realised by consecutive years under scrutiny. However, at least from a random matrix perspective we know that there, spectral statistics shows a certain robustness or universality under such perturbations. We feel that this example of model transfer between disciplines has been an insightful exercise for both disciplines involved. It remains to be seen whether interspecific interactions might be better captured with another decade of data with increased statistics, and with all three species at their ecological carrying capacity. _Acknowledgements:_ We acknowledge funding by the German Research Foundation (DFG) through grant CRC 1283/2 2021 "Taming uncertainty and profiting from randomness and low regularity in analysis, stochastics and their applications" (GA and PP), and CRC/Transregio 212 "A Novel Synthesis of Individualisation across Behaviour, Ecology and Evolution: Niche Choice, Niche Conformance, Niche Construction (NC)3" (OK). AM would like to thank Kurt Mielke for improving the speed of the Metropolis-Hastings code and providing a server for it, which made both the many parameter combinations of the Yukawa potential and larger \(N\) feasible. We thank Michael Baake for useful discussions and comments. ## Appendix A Alternative to Unfolding: Complex Spacing Ratios In this appendix we briefly discuss complex spacing ratios as introduced in [4] and compare a subset of our data to several moments of these. However, the outcome is not conclusive, as we will see. Complex spacing ratios have become a popular tool in analysing two-dimensional data sets, compared to NN or NNN spacing distributions in radial direction. The advantage compared to the latter is that, first, no unfolding is necessary, which is often more difficult in two compared to one dimension. Second, it also gives both angular and radial information about the interaction between the points. It is defined as follows. Suppose we have an eigenvalue at \(z_{k}\), with NN at \(z_{k}^{\text{NN}}\), and NNN at \(z_{k}^{\text{NNN}}\), in radial distance. The complex spacing ratio is then defined as (A.1) \[u_{k}=\frac{z_{k}^{\text{NN}}-z_{k}}{z_{k}^{\text{NNN}}-z_{k}}\.\] The goal is then to determine the probability distribution \(\rho(u=re^{i\theta})=\rho(r,\theta)\) of this spacing ratio, in the limit of a large number of particles \(N\to\infty\) in the bulk of the spectrum, for a given point process. For the Poisson process, it is known to be flat on the unity disc [4], (A.2) \[\rho_{\text{Poi}}(u)=\frac{1}{\pi}\Theta(1-|u|)\.\] For the complex Ginibre ensemble explicit \((N-1)\)-fold integral representation are known [4], as well as approximate expressions that converge very rapidly [32]. There are several disadvantages in our situation, however. First, the amount of data we have available is rather small - even if very large for biological standards. This makes even a qualitative comparison to 2D plots very hard, if not impossible. The difficulty to make 2D-fits was anticipated in [4], and thus alternatively integrals over the angle or the radius where proposed, (A.3) \[\rho(r)=\int_{0}^{2\pi}d\theta r\rho(r,\theta)\,\quad\rho(\theta)=\int_{0}^{1} drr\rho(r,\theta)\,\] and then to consider moments thereof. Here, we assume that the limiting support of complex eigenvalues is normalised to the unit disc, as in Section 2.2 for the general 2D Coulomb gas. 
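As an illustration of definition (A.1) and of the marginal moments used below, the following is a minimal sketch in Python (assuming NumPy and SciPy) that computes the complex spacing ratios of a planar point set and their radial and angular moments; applied to independent uniform points it should approach the Poisson values quoted in (A.5), up to boundary effects and finite statistics.

```python
import numpy as np
from scipy.spatial import cKDTree

def complex_spacing_ratios(points):
    # Complex spacing ratios u_k = (z_k^NN - z_k) / (z_k^NNN - z_k), eq. (A.1),
    # with NN and NNN taken in radial (Euclidean) distance.
    z = points[:, 0] + 1j * points[:, 1]
    _, idx = cKDTree(points).query(points, k=3)  # self, NN, NNN
    return (z[idx[:, 1]] - z) / (z[idx[:, 2]] - z)

def ratio_moments(u):
    # Radial and angular moments of u = r e^{i theta}, as in (A.5) and the table.
    r, theta = np.abs(u), np.angle(u)
    return {"<r>": r.mean(),
            "<r^2>": (r ** 2).mean(),
            "<cos(theta)>": np.cos(theta).mean(),
            "<cos^2(theta)>": (np.cos(theta) ** 2).mean(),
            "<r cos(theta)>": (r * np.cos(theta)).mean()}

# Independent uniform points should approach the Poisson values of (A.5):
# <r> = 2/3, <r^2> = 1/2, <cos> = 0, <cos^2> = 1/2, <r cos> = 0.
pts = np.random.default_rng(1).uniform(size=(20000, 2))
print(ratio_moments(complex_spacing_ratios(pts)))
```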
Analytic and approximate formulas have been worked out both for the Poisson case, and the complex Ginibre ensemble, which we reproduce here for completeness. Inserting the expression (A.2) for the Poisson ensemble into the definition (A.3), we obtain [4] (A.4) \[\rho_{\mathrm{Poi}}(r)=2r\,\quad\rho_{\mathrm{Poi}}(\theta)=1/(2\pi)\.\] This leads to the following expression for the first moments in the Poisson case: (A.5) \[\begin{split}\langle r\rangle_{\mathrm{Poi}}=&\int _{0}^{1}dr\,r\rho_{\mathrm{Poi}}(r)\ =\ \frac{2}{3}\,\\ \langle r^{2}\rangle_{\mathrm{Poi}}=&\int_{0}^{1} dr\,r^{2}\rho_{\mathrm{Poi}}(r)\ =\ \frac{1}{2}\,\\ \langle\cos(\theta)\rangle_{\mathrm{Poi}}=&\int_{0} ^{2\pi}d\theta\cos(\theta)\rho_{\mathrm{Poi}}(\theta)\ =\ 0\,\\ \langle\cos^{2}(\theta)\rangle_{\mathrm{Poi}}=& \int_{0}^{2\pi}d\theta\cos(\theta)^{2}\rho_{\mathrm{Poi}}(\theta)\ =\ \frac{1}{2}\,\\ \langle r\cos(\theta)\rangle_{\mathrm{Poi}}=& \int_{0}^{2\pi}d\theta\int_{0}^{1}drr\,r\cos(\theta)\rho_{\mathrm{Poi}}(r, \theta)\ =\ 0\.\end{split}\] Each moment is normalised. The angular and radial integrals decouple in all cases. The moments for complex Ginibre are found in [32] and reproduced here in Table A for completeness. For the 2D Coulomb gas with intermediate values \(0<\beta<2\), to our knowledge no such relations have been worked out, in particular if the corresponding values are in between the Poisson and Ginibre \begin{table} \begin{tabular}{c|c|c} limiting moment & Ginibre & Poisson \\ \hline \(\langle r\rangle\) & 0.739 & 2/3 \\ \(\langle r^{2}\rangle\) & 0.581 & 1/2 \\ \(\langle\cos(\theta)\rangle\) & -0.247 & 0 \\ \(\langle\cos(\theta)^{2}\rangle\) & 0.450 & 1/2 \\ \(\langle r\cos(\theta)\rangle\) & -0.189 & 0 \\ \end{tabular} \end{table} Table 1. Numerically determined moments in the complex Ginibre ensemble from [32], rounded to 3 digits (middle column), and the corresponding values for Poisson from (A.5) [4]. values. Furthermore, the effect of the interaction range not being of Coulomb type on the moments is not clear. These are crucial for our quantitative comparison. In Figure 10 we compare the moments of the complex spacing rations from Poisson and complex Ginibre ensemble to those of the spacings among nests of Common Buzzards (left) and Goshawks (right), for which we have the most data. As in Section 3 we use a time moving average. In the comparison Fig. 10 for the Common Buzzards (left) the top row with the radial moments shows a clear tendency over time, where the value for the moments increases from above Poisson towards Ginibre. This is consistent with the findings from Section 3.1 for the fitted \(\beta\)-values for NN and NNN spacing distribution for Common Buzzards, see Figure 4 left, where an increase was seen for NN (NNN) from about 0.4 to 0.7 (0.0 to 0.8). In contrast, the angular moments in the bottom row are less conclusive. While the left and right plots are close to Poisson and show a decrease towards the negative value for Ginibre, the middle plot is scattered somewhat between Poisson and Ginibre and rather decreasing towards Poisson. For the Goshawks the picture is even less clear from the complex spacing ratios. While the radial moments in the top row show values above Ginibre (which is above Poisson), with a slight tendency going down towards the Ginibre value, the bottom plots with angular moment (left and right plot) are closer to Poisson, with no clear trend. 
The bottom middle plot is quite scattered, starting in between Poisson and Ginibre and then moving clearly below the Ginibre value. In comparison, in Section 3.1 for the fitted \(\beta\)-values for NN and NNN spacing distribution for Goshawks, see Figure 4 right, a clear descent of \(\beta\)-values from about 1.5 down to 0.5 for both NN and NNN was detectable. In the comparison of \(\beta\)-values within one species and amongst different species, it was important to have even approximate values for \(\beta\), which quantify the repulsion. With the predictions from moments of complex spacing ratios such a quantitative comparison does not seem to be easily possible, at least not with the predictions we have at hand. Second, we cannot exclude an influence of the spherical geometry on the predicted complex spacing ratios, a situation we do not have for our observed data. This seems to be indicated by the unclear trend from the angular moments. The spacing distributions we use in the main text are local quantities which a priori seem to be less sensitive to the geometry. Figure 10. The moments from complex spacing ratios in (A.5) for Poisson and from Table A for complex Ginibre [32] are compared to the corresponding moments for Common Buzzards (left) and Goshawks (right). For a better comparison we have put the moments where the Ginibre (Poisson) value is higher in the top (bottom) row. ## Appendix B Comparison surmise and Coulomb gas NN spacing for small \(\beta\) In this appendix we illustrate the deviation between the numerically determined NN spacing distribution from [3] and the surmise [26] given in (2.4) together with (2.5), for small values of \(\beta\). The deviation quantified in terms of the standard deviation and the Kolmogorov-Smirnov distance can be found in [26, Table 1.]. The fits in \(\beta\) from the 2D Coulomb gas to the data presented in Section 3 are done in steps of 0.1 in \(\beta\), except for an intermediate step at \(\beta=0.15\) for small \(\beta\), see Fig. 11 top middle. In contrast, the fits to the surmise in Section 3 are made with continuous values for \(\beta\), as becomes apparent from the values given in the respective insets. This leads to an additional deviation between the two, which is why we display them at equal values for comparison here. Figure 11. Comparison of the surmise (2.4) with (2.5) (dashed red line), and the numerically determined NN spacing distribution from the 2D Coulomb gas [3] (full blue line).
2308.15481
Online Job Failure Prediction in an HPC System
Modern High Performance Computing (HPC) systems are complex machines, with major impacts on economy and society. Along with their computational capability, their energy consumption is also steadily rising, representing a critical issue given the ongoing environmental and energy crisis. Therefore, developing strategies to optimize HPC system management is of paramount importance, both to guarantee top-tier performance and to improve energy efficiency. One strategy is to act at the workload level and highlight the jobs that are most likely to fail, prior to their execution on the system. Jobs failing during their execution unnecessarily occupy resources which could delay other jobs, adversely affecting the system performance and energy consumption. In this paper, we study job failure prediction at submit-time using classical machine learning algorithms. Our novelty lies in (i) the combination of these algorithms with Natural Language Processing (NLP) tools to represent jobs and (ii) the design of the approach to work in an online fashion in a real system. The study is based on a dataset extracted from a production machine hosted at the HPC centre CINECA in Italy. Experimental results show that our approach is promising.
Francesco Antici, Andrea Borghesi, Zeynep Kiziltan
2023-06-30T07:40:59Z
http://arxiv.org/abs/2308.15481v1
# Online Job Failure Prediction in an HPC System ###### Abstract Modern High Performance Computing (HPC) systems are complex machines, with major impacts on economy and society. Along with their computational capability, their energy consumption is also steadily rising, representing a critical issue given the ongoing environmental and energy crisis. Therefore, developing strategies to optimize HPC system management is of paramount importance, both to guarantee top-tier performance and to improve energy efficiency. One strategy is to act at the workload level and highlight the jobs that are most likely to fail, prior to their execution on the system. Jobs failing during their execution unnecessarily occupy resources which could delay other jobs, adversely affecting the system performance and energy consumption. In this paper, we study job failure prediction at submit-time using classical machine learning algorithms. Our novelty lies in (i) the combination of these algorithms with Natural Language Processing (NLP) tools to represent jobs and (ii) the design of the approach to work in an online fashion in a real system. The study is based on a dataset extracted from a production machine hosted at the HPC centre CINECA in Italy. Experimental results show that our approach is promising. ## 1 Introduction High Performance Computing (HPC) is a term used in Computer Science to represent the practice of aggregating computing power to solve complex problems. HPC machines are organized in clusters and they consist of several computing units (nodes) networked together to work in parallel and boost processing speed. Nodes are connected through a low-latency internal network bus, which routes traffic to mimic the behaviour of a single computer. The last decades have witnessed a massive increase in the number of components and accelerators and, consequently, in the computational power consumption of HPC centers. This trend has been fuelled by the development of computation-hungry techniques; indeed, HPC systems play a fundamental role in the field of data science, and are widely used for computationally intensive tasks in various fields, such as quantum mechanics, weather forecasting and climate research. The latest HPC systems have reached exascale performance, namely \(10^{18}\) operations per second, and in the future more systems are expected to have similar characteristics [3]. Machines of such scale must comply with certain standards of performance and energy efficiency, hence it is fundamental to develop strategies to optimize their workload management. One strategy is to highlight the jobs that are most likely to fail, prior to their execution on the system. Jobs failing during their execution unnecessarily occupy resources which could delay other jobs, adversely affecting the system performance and energy consumption. We distinguish between failures due to external factors, such as problems with the computing nodes, networking issues, workload manager downtime (_exogenous_ failures)[12], and those due to internal reasons, such as wrongly configured submission scripts and software bugs (_endogenous_ failures)[6]. We here focus on the latter category. Forecasting failures due to internal factors a priori would allow the adoption of ad-hoc workload management strategies. In this paper, we present a Machine Learning (ML) based classification approach to predict endogenous job failures. 
Our approach is applicable to data that can be collected from a production machine and leverages only the information available at job submission time (hence does not require any instrumentation of the users' code nor any change to standard workload submission workflow). This information might have different formats, and text is among them. To extract more meaningful job information from such textual data, we employ Natural Language Processing (NLP) tools and improve the classification performance of the ML models. To the best of our knowledge, this is the first work that exploits an NLP method to represent jobs during classification. Contrary to the majority of the past studies which work on random splits of historical data, the proposed methodology can be deployed in an _online_ context where jobs are continuously submitted by users to a real production system. We demonstrate the validity of our approach on a dataset collected from a production machine Marconi100 hosted at the HPC centre CINECA1 in Italy. Footnote 1: [https://www.hpc.cineca.it/hardware/marconi100](https://www.hpc.cineca.it/hardware/marconi100). ## 2 Related Work In this paper, we restrict the related work to the study of failures in large-scale systems at job/application level. In [7], the authors analysed workload traces in a grid, showing the correlations between failure characteristics and performance metrics. Works like [4, 8] tackled application failure prediction in cloud computing by using recurrent neural networks on resource usage data and performance logs, extracted from Google cluster workload traces. Also in [14] the authors relied on the resource usage data of a job to predict its failure, but in the scope of an HPC center. These approaches do not take into account the human factors (error in the code, the submission, etc.), which are responsible for many job failures [10]. Therefore, the trend is shifting towards the use of data collected from a workload manager to predict failure using job features, as done in [10, 9, 1]. In [1], the authors use a decision tree algorithm to predict job failure on two HPC workloads. In [9], they survey several ML techniques to perform the same task on a Google cluster workload trace and other two HPC workloads. A similar approach is reported in [10] on another workload; in addition, they use NLP techniques to assign similar names to similar jobs executed by the same user. All this past work, which are most related to ours, evaluate their approach on random splits of data, which is not realistic because testing could be done on data which is chronologically placed in between the training data traces. Our work differs in two ways: (i) we propose to use NLP techniques to represent jobs for classification via all the job information available at job submission time, (ii) our approach can be deployed in a more realistic _online_ context and is thus evaluated on a streaming data, by continuously retraining the classification model on recent (past) data, and testing it on (future) data which has not been seen. ## 3 Background In this section, we first present our workload dataset and then the ML models we employ for job failure prediction. ### M100 Dataset The data used in this study is extracted from the M100 workload [2] which is the result of more than two years of monitoring on Marconi100, an HPC system hosted at CINECA2 in Italy. Marconi100 is a tier-0 supercomputer deployed in production since May 2020 and, at the time of writing, is ranked 24\({}^{th}\) in the top500 list3. 
The cluster is composed of 980 computing nodes, each equipped with two 16-cores IBM POWER9 AC922 processors at 3.1 GHz, four NVIDIA Volta V100 GPUs, and 256 GB RAM. The resources are accessed through eight login nodes, and all the components are connected by a Mellanox Infiniband EDR DragonFly+ 100 Gb/s network infrastructure. Resources are allocated through job submission to Slurm, the workload manager installed in the system. Footnote 2: [https://www.cineca.it](https://www.cineca.it) Footnote 3: [https://www.top500.org](https://www.top500.org) Footnote 4: [https://gitlab.com/ecs-lab/exadata/-/blob/main/documentation/plugins/job_table.md](https://gitlab.com/ecs-lab/exadata/-/blob/main/documentation/plugins/job_table.md) M100 contains data ranging from the computing nodes' internal information such as core load, temperature, power consumption, to the system-wide information, including the liquid cooling infrastructure, the air-conditioning system, the power supply units, workload manager statistics, and job-related information. For the purposes of our work, we focus on the data which describes the jobs present in the workload by features related to their submit-time, run-time and end-time. The first category contains the information available when a job is submitted, such as submission time, requested resources, user information and system state. The second category comprises the information about the job launch, such as waiting time, execution start time, and the actually allocated resources. At job termination, the end-time features are collected, e.g., ending time, duration and outcome of the execution. The full list of job features is available at the dataset repository.4 One feature related to the execution outcome is the job Exit State (ES) label, which is assigned to each job by Slurm as an interpretation of the job's Exit Code (EC). This code is formed by a pair of numbers; we consider only the first one, which refers to a system response that reports success, failure, or the reason of an unexpected result from job launch. An EC value of 0 means successful completion, while any EC \(\neq 0\) represents an error encountered during execution. Table 1 describes the ES labels assigned to the jobs in our dataset, along with their distribution. As seen in the table, the dataset is highly unbalanced. This is not surprising, because in a real production machine the failures should be minimized to guarantee correct functioning of the system. Nevertheless, the percentage of the jobs not successfully completed is more than 20% (more than 1 out of 6 millions of jobs), representing an important threat to the system performance. ### Classification and NLP Models We approach the prediction task as a binary classification problem. We exploit supervised and unsupervised techniques for classification, as well as a pre-trained state-of-the-art NLP model to represent jobs during classification. As for supervised algorithms, we consider the widely adopted Decision Tree, Random Forest and Logistic Regression. Decision Tree (DT) is a non-parametric method used for classification and regression, which predicts the value of a target variable by learning simple decision rules inferred from the data features. Random Forest (RF) is an ensemble method based on creating a diverse set of DT classifiers by introducing randomness in each DT construction. The prediction of the ensemble is given as the averaged prediction of the individual classifiers. 
Individual DTs typically exhibit high variance and tend to overfit. The aim of the ensemble method is to remove the error by taking an average of those predictions. Logistic Regression (LR) instead maps the probability of a label given the features of the data. It is usually faster than the other techniques and because of that is one of the most popular classification algorithms. As an unsupervised algorithm, we employ \(k\)-Nearest Neighbors (KNN) which is a type of instance-based learning that does not construct a general internal model, but rather project data points into a \(N-\)dimensional feature space and then consider their distances. Classification is computed from a simple majority \begin{table} \begin{tabular}{|l|l|l|} \hline **Name** & **Description** & **\%** \\ \hline Completed & Job completed execution without errors & 79\% \\ Failed & Job terminated for an unknown reason & 10\% \\ Cancelled & Job did not start execution due to an error in submission & 8\% \\ Timeout & Job terminated due to reaching the time limit & 2\% \\ Out of memory & Job terminated due to more memory access than allocated & 0.6\% \\ Preempted & A higher-priority job delayed the job execution & 0.1\% \\ Node fail & Job terminated due to a failure in an allocated node & 0.01\% \\ \hline \end{tabular} \end{table} Table 1: Job ES labels and their distribution in the M100 dataset. vote of the \(k-\)nearest neighbors of each data point, where \(k\) is a hyperparameter. The \(k-\)nearest neighbors are computed based on a distance metric, which could be for instance Cosine Distance (CD) and Minkowski Distance of order \(p\) (\(\text{MWD}_{p}\)). Given two vectors \(X\) and \(Y\), representing the data points in the feature space, the distances are calculated as \(CD(X,Y)=1-\cos\theta=1-\frac{X\cdot Y}{\|X\|\|Y\|}\) and \(MWD_{p}(X,Y)=\sqrt[p]{\sum_{i=1}^{n}|X_{i}-Y_{i}|^{p}}\) where \(\theta\) is the cosine of the angle between the vectors and \(\text{MWD}_{p}\) is a generalization of the Euclidean distance. Sentence-BERT (SBERT) [11] is a modification of the pre-trained BERT [5] language model. BERT is a well-known family of models based on the transformer architecture [13], used to give a numeric representation of words (or subwords) that takes into account the context in which these words are used. While BERT works well with classification tasks, it does not work equally well with regression tasks, such as sentence similarity. SBERT produces representations of sentences, not individual words, that are particularly apt for regression tasks. The representation of a string of text produced by SBERT is a fixed-size 384-dimensional floating-point array. ## 4 Methodology In this section, we describe our methodology to job failure prediction. The workflow can be divided into two phases: (i) data preparation and (ii) job failure prediction. ### Data preparation To train and test our classifiers, we consider a part of the dataset5 and use only the data collected between May 2020 and October 2020. The reason is that this is the only period where the dataset contains information on the requested resources and the job EC, which we need for our prediction task. We collect the job data in a data frame and then prepare it for model training and inference. Footnote 5: [https://doi.org/10.5281/zenodo.7588815](https://doi.org/10.5281/zenodo.7588815) Feature selectionIn order to describe the characteristics of a job in a classification task, we need to associate it with certain features. 
We focus only on job submit-time features, as we want to compute a prediction before job allocation. The features available in the dataset are listed in Table 2 along with their description. Jobs submitted by the same user and close in time tend to be similar because in a production HPC, users often submit jobs in batches referring to similar experiments and jobs in the same batch tend to have similar names and command. Thus, we believe that all these features are useful for our purposes. We note that user name and similar private data are omitted in the public dataset. However, CINECA granted us access under a non-disclosure agreement. _Job exit state labels_ For the training data, we need to assign a label to each job, indicating whether it has failed or not. In Section 3.1, we presented the job ES labels as they are present in the dataset, which are assigned by Slurm based on job EC. According to the Slurm official documentation, the labels assigned by the scheduler may not be coherent with the actual EC, due to lack of proper synchronization between the signal emitted by the job exit and the data collected in the database. We therefore inspect the data and identify any possible discrepancy, e.g., a job with an ES label _completed_ and an EC \(\neq 0\). Our analysis reveals that more than 70K jobs labelled differently than _completed_ have an EC value of 0. This is confirmed by the difference between the percentage of the completed jobs (83%) and the jobs having an EC of 0 (89%). As a consequence, we discard the original labels and create new labels based on the job EC. Despite the discrepancy between the original ES labels and EC, the highly unbalanced nature of the entire dataset (see Sec. 3.1) is observed also in the subset data we use in this study. In particular, while the percentage of jobs with EC \(=1\) is 9%, the percentage with EC \(>1\) is 2%. We therefore group all types of failures under the same category; discriminating among different fail modes is outside the scope of this work. Moreover, we are interested in failure caused by the workload itself, so we remove from the dataset all the jobs originally labelled as _cancelled_ (failure due to user) and _node fail_ (failure due to hardware). Eventually, we re-label the remaining data according to the following policy: for every job, we assign an ES label of _completed_ if its EC is 0, _failed_ otherwise. The final dataset after the relabelling is composed of 924,252 (89%) completed and 113,027 (11%) failed jobs. The distribution of the labels, throughout the months, is reported in Figure 1. We can observe that imbalance between the two classes of jobs appears in all the months, while the ratio between them changes considerably, showing that the workload is highly variable across time. 
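To make the relabelling policy concrete, the following is a minimal sketch of the filtering and relabelling step, assuming the job table has been loaded into a pandas DataFrame; the column names and the exit-code string format used here are illustrative placeholders and may differ from the actual field names in the M100 job table.

```python
import pandas as pd

def relabel_jobs(df: pd.DataFrame) -> pd.DataFrame:
    """Filter and relabel jobs following the policy described above."""
    # Drop failures not caused by the workload itself (user- or hardware-induced).
    df = df[~df["exit_state"].isin(["CANCELLED", "NODE_FAIL"])].copy()
    # Keep only the first number of the exit-code pair, e.g. "1:0" -> 1 (assumed format).
    ec = df["exit_code"].astype(str).str.split(":").str[0].astype(int)
    # Label: 0 = completed (EC == 0), 1 = failed (EC != 0).
    df["label"] = (ec != 0).astype(int)
    return df
```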
\begin{table} \begin{tabular}{|l|l|l|} \hline **Name** & **Description** & **Type** \\ \hline Name & Job name assigned by the user & String \\ Command & Command executed to submit the job & String \\ Account & Account to be charged for job execution & String \\ User id & ID of the user submitting the job & Integer \\ Dependency & Jobs to wait for completion before execution & String \\ Group id & Group of the user submitting the job & Integer \\ Requested nodes & Specific nodes requested & List[String] \\ Num tasks per socket & Number of tasks to invoke on each socket & Integer \\ Partition & Name of the assigned partition & String \\ Time limit & Maximum allowed run time in minutes or infinite & Integer \\ Qos & requested quality of service & String \\ Num cpu & Number of requested CPUs & Integer \\ Num nodes & Number of requested nodes & Integer \\ Num gpus & Number of requested GPUs & Integer \\ Submit time & Time of job submission & Timestamp \\ \hline \end{tabular} \end{table} Table 2: Job features description. ### Job failure prediction Feature encodingIn order to compute a prediction for a job, we need to represent it suitably to feed into the classification models presented in Section 3.2. We achieve that by relying on job feature values, and we propose two different ways to encode them. In the first (INT), we assign an integer to the values which are not numerical, i.e. _name, command, account, dependency, requested nodes, partition, qos, submit time_, while setting all the missing values in the other fields (_num tasks per socket, time limit_) to a default value of 0. In the second encoding (SB), we first concatenate all the feature values into a comma divided string, e.g. _job1, run_job1.sh, [1, 10], 2020-10-01 15:30:00, account_1, partition_1, 0, normal, 4, 100, 2, etc._ Then we encode the string with SBERT, obtaining a 384-dimensional floating-point array. We believe that with SBERT we can extract more fine-grained insights about job features expressed in natural language (e.g. _name, command, account_). This is because SBERT is designed to result in similar encodings with sequences with semantically similar contents. As we discussed in Section 4.1, jobs with similar names and command could belong to the same submission batch running similar operations. Therefore, features like _submit_time, name, account, command_ could reveal important patterns on the nature of the job and its workload. This is hard to recognize with the INT encoding, since similar natural language values will be mapped to different integer values, while they would have similar representation in SB, due to semantic similarity. Classifier training and testingIn our prediction task, it would not be realistic to do inference on a job by learning from the data of the future jobs submitted at a later time. We thus create the training and test sets by considering the timeline of the job data, keeping in the training set the data that comes before in chronological order the data of the test set. We identify two settings in which a classifier can be trained and tested on a dataset. The first is the _offline_ setting, where we consider the job data as a Figure 1: Job ES label distribution throughout the months in the final dataset. whole, train the model once on one portion of it, and test it using the data of the other portion in chronological order. 
To do this, we sort the jobs based on their submission time, split them into two, use the first split preceding in time as the training set, and the other as the test set. The second setting, which we refer to as _online_, is more suitable to our context. We treat the job data as live and streaming in time, retrain the model periodically on a fixed size of recent data, and test it on future data that comes later (but near) in time. As we discussed in Section 4.1, the workload of an HPC system can be very similar in a short period, while may vary in the long term. As our experimental results confirm, a model trained once on data which slowly gets further in time to the test data could classify poorly compared to a model which is retrained continuously on data closer in time to the test data. In the _online_ setting, we use the time information provided by the _submit_time, start_time and end_time_ features in order to simulate job submission and execution on a machine, and add the _day_ feature as the submission date by extracting it from _submit_time_. We consider as the first training set all the jobs that were submitted in the first \(\alpha\) days and not finished after the date of the first test set. Starting from the submission time of the first job not present in the first training set, we divide the data in batches in chronological order, where each batch contains the jobs submitted in the next \(\omega\) days. We then iterate over each batch, considering it as a new test set. At every iteration, the training set is updated with the data of the last \(\alpha\) days and the supervised models are retrained. With the unsupervised models, no actual re-training takes place, however the training set is extended for each new job in the test set with the jobs that finished before the submission time of the new job (with negligible overhead). ## 5 Experimental Study In this section, we report our experimental study and discuss our results. Experimental settingAll the experiments are conducted on a node of a small cluster equipped with two Marvell TX2 CPUs with 32 cores and 256 GB of RAM. No accelerator, such as GPU, is used in the experiments. The classification algorithms are implemented with _scikit-learn_ Python library. The sequence encoder model is provided by the _sentence transformers_ library6, while the weights for SBERT are pulled from huggingface.7 We use the pre-trained model _all-MiniLM-L6-v28_, since it is the best trade-off between prediction performance and speed [11]. All the models are instantiated with the default setting provided by the library. Footnote 6: [https://www.sbert.net](https://www.sbert.net) Footnote 7: [https://huggingface.co](https://huggingface.co) Footnote 8: [https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) We set the hyperparameters as follows after an initial empirical evaluation. We use MWD of order \(p=2\) and set \(k=5\) in the KNN algorithm. As discussed in Section 4, the testing period strictly follows the training period. For the offline setting, we take the first 70% of the data as the training set and the remaining 30% as the test set. For the online, we fix the training interval \(\alpha\) to 30 days, based on the trade-off between prediction performance and training/inference time. The time-span of data in each test set is \(\omega=1\) day. 
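As an illustration of how these pieces fit together, the following is a minimal sketch of the SB encoding and of the day-by-day online evaluation with the best-performing configuration (KNN with \(k=5\) and Minkowski distance of order 2). The column names, datetime handling and exact windowing are simplifications of the procedure described in Section 4.2 (in particular, the unsupervised models extend their training set per job rather than being refit per day), so this should be read as a sketch under those assumptions rather than as the actual implementation.

```python
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import KNeighborsClassifier

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Submit-time columns to concatenate; names are illustrative placeholders.
FEATURES = ["name", "command", "account", "user_id", "partition", "qos",
            "time_limit", "num_cpu", "num_nodes", "num_gpus", "submit_time"]

def sb_encode(df: pd.DataFrame) -> np.ndarray:
    # SB encoding: one comma-separated string per job, embedded with SBERT
    # into a 384-dimensional vector.
    rows = df[FEATURES].astype(str).agg(", ".join, axis=1).tolist()
    return encoder.encode(rows)

def online_evaluation(df: pd.DataFrame, alpha_days: int = 30):
    # Day-by-day evaluation: train on the last `alpha_days` days, test on the
    # next day (omega = 1).  `submit_time` is assumed to be a datetime column.
    df = df.sort_values("submit_time")
    day = df["submit_time"].dt.floor("D")
    results = []
    for d in day.unique()[alpha_days:]:
        d = pd.Timestamp(d)
        train = df[(day >= d - pd.Timedelta(days=alpha_days)) & (day < d)]
        test = df[day == d]
        if test.empty or train["label"].nunique() < 2:
            continue
        clf = KNeighborsClassifier(n_neighbors=5, metric="minkowski", p=2)
        clf.fit(sb_encode(train), train["label"])
        results.append((d, clf.predict(sb_encode(test)), test["label"].to_numpy()))
    return results
```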
The implementation is available in a GitHub repository.9 Footnote 9: [https://github.com/francescoantici/job-failure-predictor/](https://github.com/francescoantici/job-failure-predictor/) The results are reported in Tables 3 and 4, where we distinguish between the job feature encodings (INT and SB), the supervised algorithms (DT, LR, RF), and the distance metrics of the KNN algorithm (CD and MWD). Each classification algorithm is evaluated using the two feature encodings and are compared with two simple baselines, namely majority and random. Both baselines ignore the input feature values. The majority returns the most frequent label observed in the training data, while the random generates predictions uniformly from the list of unique labels, so each class has equal probability. The results reported in Table 4 are averaged over 5 months between June 2020 and October 2020. ResultsWe evaluate our models with metrics typically used for classification tasks, namely f1, precision and recall. Table 3 reports the results of the offline setting. The model that gives the best results overall is INT+RF. It achieves a f1 score of 71% and is very good at classifying the completed jobs, as the f1 score computed over such jobs is 98%. The prediction of the failures is somewhat harder, with a f1 score of 43%. Overall, we observe that the supervised techniques perform better, but all the models struggle with the classification of the failed jobs, as most of them (with the exception of INT+DT) have lower recall than the random baseline in the failed class. Conversely, the classification of completed jobs is much easier, with \begin{table} \begin{tabular}{|l|l|l|l||l|l|l|l|l|l|} \hline Model & T F1\({}_{m}\) & T Prec\({}_{m}\) & T Rec\({}_{m}\) & C F1 & C Prec & C Rec & F F1 & F Prec & F Rec \\ \hline Supervised & & & & & & & & & \\ INT+DT & 0.30 & 0.50 & 0.48 & 0.55 & 0.96 & 0.38 & 0.06 & 0.03 & **0.57** \\ INT+LR & 0.54 & 0.62 & 0.53 & **0.98** & 0.97 & 0.99 & 0.10 & 0.26 & 0.06 \\ INT+RF & **0.71** & **0.72** & **0.69** & **0.98** & **0.98** & 0.98 & **0.43** & **0.47** & 0.39 \\ SB+DT & 0.38 & 0.50 & 0.50 & 0.70 & 0.97 & 0.55 & 0.06 & 0.03 & 0.45 \\ SB+LR & 0.66 & 0.70 & 0.63 & **0.98** & **0.98** & 0.99 & 0.34 & 0.43 & 0.28 \\ SB+RF & 0.55 & 0.54 & 0.61 & 0.95 & 0.97 & 0.92 & 0.16 & 0.11 & 0.30 \\ \hline Unsupervised & & & & & & & & & \\ INT+CD & 0.52 & 0.52 & 0.58 & 0.92 & 0.97 & 0.87 & 0.11 & 0.07 & 0.28 \\ INT+MWD & 0.39 & 0.50 & 0.50 & 0.72 & 0.97 & 0.58 & 0.06 & 0.03 & 0.42 \\ SB+CD & 0.42 & 0.50 & 0.52 & 0.76 & 0.97 & 0.63 & 0.07 & 0.04 & 0.42 \\ SB+MWD & 0.42 & 0.50 & 0.52 & 0.76 & 0.97 & 0.63 & 0.07 & 0.04 & 0.42 \\ Majority & 0.49 & 0.50 & 0.48 & **0.98** & 0.97 & **1.00** & 0.00 & 0.00 & 0.00 \\ Random & 0.36 & 0.50 & 0.50 & 0.66 & 0.97 & 0.50 & 0.06 & 0.03 & 0.49 \\ \hline \end{tabular} \end{table} Table 3: Results in the offline setting, for both classes (T), completed class (C) and failed class (F) using precision (Prec), f1 and recall (Rec). In (T), we consider the macro averaged metrics (F1\({}_{m}\), Prec\({}_{m}\), Rec\({}_{m}\)). The model name is composed of the feature encoding and the classification algorithm/distance metric. Best results are highlighted in bold. the precision being \(\geq 96\%\); this is probably due to the imbalance in the dataset (completed jobs are more abundant). This is compounded with the proportion between the completed and failed jobs varying significantly across different periods, as shown in Figure 1. 
Thus, with the offline setting, the model has a high risk of overfitting on the completed job examples (being more numerous) and of spectacularly underperforming when tested on jobs that fail. This behaviour can be mitigated by retraining the models to adapt them to the workload and the class distribution shift over time. Indeed, Table 4 shows the results of the online setting, with notable improvements in the classification of the failed jobs. The SB encoding coupled with the clustering classifier using the Minkowski distance (SB+MWD) yields the best results overall, suggesting that properly extracting meaningful job information from textual data is beneficial. In terms of the f1 score, SB+MWD reaches 70%, outperforming all the supervised models, which arrive to a maximum of 64% with SB+RF and INT+RF. The classification of the completed jobs is good for all the models and their f1 scores are always above the 80%; the clustering methods have the highest precision (87%), while SB+RF has better recall (91% with respect to 83%). There is some minor drop in performance in the completed class compared to the offline setting (less overfitting), but the results are still solid. In the failed class, the clustering methods (SB+CD, SB+MWD) obtain a f1 score of 54% outperforming all the supervised algorithms. We observe a significant improvement with respect to the offline setting. Indeed, the best f1 score obtained over failed jobs in the offline setting (INT+RF) is increased by 20% by the best model in the online setting (SB+MWD and SB+CD); clearly, retraining the models helps to classify job failures. As can be observed in both tables, the use of the SB encoding has a marginal impact with the supervised models, while the training time increases significantly in the online context (e.g., the training time of INT+RF is 25 seconds, \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|l|l||l|} \hline Model & T F1\({}_{m}\) & T Prec\({}_{m}\) & T Rec\({}_{m}\) & C F1 & C Prec & C Rec & F F1 & F Prec & F Rec & Time \\ \hline Supervised & & & & & & & & & & \\ INT+DT & 0.60 & 0.64 & 0.63 & 0.80 & 0.84 & 0.79 & 0.41 & 0.44 & 0.46 & 1.27 + 0.005 \\ INT+LR & 0.46 & 0.53 & 0.51 & 0.85 & 0.79 & 0.95 & 0.06 & 0.26 & 0.06 & 78 + 0.3 \\ INT+RF & 0.64 & 0.69 & 0.64 & 0.84 & 0.84 & 0.87 & 0.43 & 0.54 & 0.41 & 25 + 0.12 \\ SB+DT & 0.61 & 0.62 & 0.63 & 0.80 & 0.84 & 0.78 & 0.41 & 0.39 & 0.47 & 455 + 0.09 \\ SB+LR & 0.60 & 0.66 & 0.60 & 0.85 & 0.82 & 0.89 & 0.34 & 0.50 & 0.30 & 84 + 0.4 \\ SB+RF & 0.64 & 0.70 & 0.63 & 0.86 & 0.83 & 0.91 & 0.41 & **0.57** & 0.35 & 922 + 0.4 \\ \hline Unsupervised & & & & & & & & & & \\ INT+CD & 0.68 & 0.69 & 0.69 & 0.84 & 0.86 & 0.82 & 0.52 & 0.52 & 0.56 & **N.A. + 0.3** \\ INT+MWD & 0.68 & 0.69 & 0.69 & 0.84 & **0.87** & 0.83 & 0.52 & 0.51 & 0.55 & **N.A. + 0.3** \\ SB+CD & 0.69 & 0.70 & 0.71 & 0.84 & **0.87** & 0.83 & **0.54** & 0.54 & **0.59** & N.A. + 0.7 \\ SB+MWD & **0.70** & **0.71** & **0.71** & 0.85 & **0.87** & 0.83 & **0.54** & 0.54 & **0.59** & N.A. + 0.7 \\ \hline Majority & 0.44 & 0.40 & 0.50 & **0.87** & 0.79 & **1.00** & 0.00 & 0.00 & 0.00 & N.A. \\ Random & 0.44 & 0.50 & 0.50 & 0.61 & 0.79 & 0.5 & 0.28 & 0.21 & 0.5 & N.A. \\ \hline \end{tabular} \end{table} Table 4: Results in the online setting, presented similarly to Table 3. The time (in sec) is the avg. training time per day and the avg. inference time per job (including the SB encoding time where applicable – “N.A.” indicates the cases where SB is not applicable). while SB+RF requires 922 seconds). 
The increase in training time is not surprising, as the extraction of the text features through NLP involves the use of a computationally hungry deep neural network. We note, however, that the inference time remains very small, and this is the operation that needs to be performed in real time without affecting the machine's normal workload (the retraining can be scheduled in less busy periods). On the other hand, in the case of the unsupervised models, SB improves the performance by 1-2% in almost every metric while no training time is incurred and the inference time always remains under a second. As we discussed in Section 4, with these models retraining is simply extending the training set (with negligible overhead) and classifying a new job requires a simple inference step (i.e., the new job is compared with those in the training set, projected in the feature space). ## 6 Conclusions and Future Work We presented an ML-based classification approach to predict endogenous job failures in HPC systems, using only the information available at job submission time. The methodology can be deployed in an _online_ context where jobs are continuously submitted by users to a real production system. We thoroughly validated our approach with a two-fold battery of tests using supervised and unsupervised learning algorithms. In the first, we considered an _offline_ setting and split the job data in time-consecutive sets for training and testing. We showed that in this setting the models poorly classify the failed jobs - which is what we are more interested in - while they are quite accurate in predicting job completion. We then deployed our approach _online_, where we treated the job data as live and streaming in time, retrained the model periodically on recent (past) data, and tested it on (future) data that comes later (but near) in time. We observed an improvement in prediction accuracy with this setting, especially in predicting the job failures. We also showed that an unsupervised technique like KNN is more suitable in the online setting, and that the use of an NLP-based encoding to represent job features improves the classification accuracy. Our contribution can be seamlessly integrated into the existing operational data analytics frameworks deployed in modern systems. The marginal overhead increase is not a concern, as adopting hardware accelerators (GPUs, TPUs, etc.) or deploying the models to scalable architectures will make the inference time almost negligible. In future work, we want to study continuous learning techniques and investigate different retraining strategies. We also plan to take into account the uncertainty of the ML models and investigate policies to handle jobs with high failure risk (in accordance with the Service-Level-Agreements (SLAs) between the HPC provider and the users). For instance, the workload deemed to be at high risk of failure can be postponed, and the user can be asked to revise the job submission. As another example, the high-risk workload (according to failure classification) can be directly discarded if the confidence of the classifier surpasses a threshold defined by the SLA. The user can then be encouraged to resubmit, which can be treated with higher priority so as not to incur additional delays.
2307.16725
Beyond-adiabatic Quantum Admittance of a Semiconductor Quantum Dot at High Frequencies: Rethinking Reflectometry as Polaron Dynamics
Semiconductor quantum dots operated dynamically are the basis of many quantum technologies such as quantum sensors and computers. Hence, modelling their electrical properties at microwave frequencies becomes essential to simulate their performance in larger electronic circuits. Here, we develop a self-consistent quantum master equation formalism to obtain the admittance of a quantum dot tunnel-coupled to a charge reservoir under the effect of a coherent photon bath. We find a general expression for the admittance that captures the well-known semiclassical (thermal) limit, along with the transition to lifetime and power broadening regimes due to the increased coupling to the reservoir and amplitude of the photonic drive, respectively. Furthermore, we describe two new photon-mediated regimes: Floquet broadening, determined by the dressing of the QD states, and broadening determined by photon loss in the system. Our results provide a method to simulate the high-frequency behaviour of QDs in a wide range of limits, describe past experiments, and propose novel explorations of QD-photon interactions.
L. Peri, G. A. Oakes, L. Cochrane, C. J. B. Ford, M. F. Gonzalez-Zalba
2023-07-31T14:46:43Z
http://arxiv.org/abs/2307.16725v5
Beyond-adiabatic Quantum Admittance of a Semiconductor Quantum Dot at High Frequencies: Rethinking Reflectometry as Polaron Dynamics ###### Abstract Semiconductor quantum dots operated dynamically are the basis of many quantum technologies such as quantum sensors and computers. Hence, modelling their electrical properties at microwave frequencies becomes essential to simulate their performance in larger electronic circuits. Here, we develop a self-consistent quantum master equation formalism to obtain the admittance of a quantum dot tunnel-coupled to a charge reservoir under the effect of a coherent photon bath. We find a general expression for the admittance that captures the well-known semiclassical (thermal) limit, along with the transition to lifetime and power broadening regimes due to the increased coupling to the reservoir and amplitude of the photonic drive, respectively. Furthermore, we describe two new photon-mediated regimes Floquet broadening, determined by the dressing of the QD states, and broadening determined by photon loss in the system. Our results provide a method to simulate the high-frequency behaviour of QDs in a wide range of limits, describe past experiments, and propose novel explorations of QD-photon interactions. ## I Introduction Semiconductor quantum dots (QDs) are a promising platform for developing solid-state quantum technologies. In the area of quantum computing [1], six-qubit quantum processors [2] and the fabrication of two-dimensional arrays of 4\(\times\)4 quantum dots have been shown [3]. In conjunction, demonstrations of qubit control at the threshold for fault-tolerant computing [4; 5; 6], advanced manufacturing [7; 8], and integration with classical semiconductor electronics [9; 10] indicate that QDs are a compelling platform for quantum computation. More recently, new electronic applications of QD devices are beginning to emerge, particularly when manipulated at high frequencies. Their non-linear admittance, consisting of a combination of circuit equivalents such as the Sisyphus resistance [11] and quantum and tunnelling capacitances [12], can be utilised for quantum sensing -- in the form of fast electrometers [13; 14; 15] and local thermometers [16; 17] -- and for electronic signal conversion -- in the form of frequency multipliers [18] and quantum amplifiers [19]. As these technologies become established, developing accurate electrical models of semiconductor QDs at high frequencies becomes essential to simulate their performance as stand-alone elements in hybrid quantum-classical circuits or circuit quantum-electrodynamics architectures [20; 21; 22; 23]. Semiclassical models accurately predict the behaviour in the limit of slow and weak driving along with weak coupling to the environment [24; 25]. Furthermore, quantum models have been developed that further describe the admittance in the limit of strong coupling to the electron bath [26; 27]. However, a consolidated model that accurately describes the interaction of a QD with a charge reservoir and the quantum nature of the photonic bath across multiple regimes is missing. Here, we present a Markovian Master-Equation (ME) formalism for the charge dynamics in a driven QD tunnel-coupled to a charge reservoir (CR), hereafter, a semiconductor-based Single Electron Box (SEB) (see Fig 1a) to obtain a general form of the effective quantum admittance of the SEB. 
The entire system can be described from a quantum mechanical point of view as three interacting subsystems: a QD with a single discrete electrochemical level, a CR in thermal equilibrium at temperature \(T\), and a Photon Bath (PhB) representing microwave radiation. The QD can exchange charged particles with the CR, and both the photon number and frequency can be controlled by external means. We describe the role of temperature and charge tunnel rate to the reservoir with a smooth transition from thermal to lifetime broadening. Furthermore, we capture the impact of increasing the driving amplitude, resulting in power broadening of the charge transition. Our description of the coupled electron-photon states in the QD as a polaron will shed new and physically insightful light on this process. Moreover, we present a novel kind of broadening, Floquet Broadening, which is dictated by the photonic part of the polaron. The Heisenberg uncertainty principle predicts this effect, but its intrinsically quantum nature makes it inaccessible to a simple extension of semiclassical adiabatic theories. Finally, we describe the effects due to non-idealities in the photon bath and how this affects the equivalent admittance of the SEB. This result is the direct analogue of known phenomena in superconductor transmon qubits [28; 29; 30], and it shows how the formalism developed here lays the foundations to bridge the gap between equivalent circuit simulation and circuit quantum electrodynamics with mixed bosonic and fermionic states. In particular, we identify five quantities that mostly dictate the SEB dynamics, which give rise to the same number of possible regimes. Because of the nature of the SEB admittance at the fundamental frequency, which takes the form of a single peak, we distinguish these regimes by the limiting factor that causes a broadening of the lineshape. (1) Thermal Broadening (TB) occurs when the peak is limited by the charge temperature of the CR. (2) Lifetime Broadening (LB) is where the broadening is due to the finite lifetime of the charge in the QD. (3) Floquet Broadening (FB), a high-frequency broadening caused by the discrete nature of the photon energy. (4) Power Broadening (PB) is a large-signal effect where the peak is dictated by a large amplitude of the drive. (5) Photon Loss Broadening (PLB), where the large-signal response is modified by a finite rate of photon loss in the photonic bath. ## II Statement of key results In this section, we introduce the necessary concepts and notation and summarize the key finding of this work while providing the required context and presenting physical implications. Section III describes the SEB and its time evolution by framing the charge dynamics quantum mechanically and proves that it only depends on the QD-CR tunnel rates. Section IV outlines the tunnel rates semiclassically and in a fully quantum setting. In the process, we will develop a formalism for a self-consistent quantum Master Equation of the SEB, which is general to other quantum systems embedded in classical circuits, for which it is interesting to consider effective circuit analogues. These results are used in Section V to derive the equivalent SEB admittance, which is discussed from an electrical and physical point of view. While the formalism here developed is capable of capturing strong QD-PhB coupling effects, the discussion is carried forward only in the weak coupling regime. The strong QD-CR coupling case is however fully discussed. 
Finally, Section VI presents the equivalent SEB admittance and its reflectometry signals, exploring the roles of the various physical and dynamic properties of the system. The discussion identifies several regimes of SEB operation, depending on the parameter dominating the charge dynamics, casting new insight into previous well-known results and identifying new effects when the semiclassical approximations are no longer valid. While Sections III and IV give precise mathematical insight into the results of this work, especially highlighting the breakdown of the semiclassical approach, an effort is made from Section V onwards in restating key concepts to understand the effective SEB admittance. Therefore, the reader not directly interested in the derivation of the self-consistent quantum Master Equation are welcome to skip to Section V, where the point of view of equivalent circuit simulation is adopted. This reader, in particular, may be interested in Section V.2, which, in light of the rigorous quantum calculation, offers a more direct physical interpretation to guide the intuition towards the results in Section VI and the various regimes presented therein. In Section VI we focus on measurable quantities of experimental relevance, predicting novel regimes and proposing experiments within reach of the current state of the art. ### Gate Current and Equivalent Quantum Admittance A SEB consists of a single QD connected via a tunnel barrier to a CR (at temperature \(T\)) and whose electrochemical potential can be capacitively changed via a Gate, see Fig. 1a. We consider the CR to be connected to ground. In this work, we focus on the case in which a sinusoidally varying gate potential drives the SEB, \[V_{g}(t)=V_{0}+\delta V_{g}\cos\omega t, \tag{1}\] and calculate the resulting Gate current \(I_{g}(t)\) due to CR-QD charge tunnelling events to understand the functional relationship between \(V_{g}\) and \(I_{g}\) introduced by the SEB. More specifically, the induced gate charge due to a charge tunnelling event reads \[Q_{g}(t)=\alpha eP(t), \tag{2}\] where \(P(t)\) is the time-dependent probability of occupation and \(\alpha\) is the lever arm that couples the Gate to the QD. For legibility, in this Section, we will not distinguish between driving and collection lever arms (see Appendix C). The Gate current can be expressed as \[I_{g}(t)=\alpha e\frac{d}{dt}P(t), \tag{3}\] which is the quantity we shall derive in this work. Previous works have focused on the small-signal regime where the SEB can be replaced by an equivalent admittance linking \(I_{g}\) and \(V_{g}\)[12; 24; 25; 31]. However, such a picture fails outside that limit due to the inherent nonlinearities of the quantum dynamics in the SEB. In this work, we will prove that the concept of an equivalent admittance can be extended to arbitrarily sized signals, as the Gate current can be expressed as a Fourier series containing higher-order harmonics as \[I_{g}(t)=\sum_{N}I_{N}(t)=\frac{\delta V_{g}}{2}\sum_{N}Y_{N}e^{iN\omega t}+c.c. \tag{4}\] where the equivalent quantum admittance at the \(N\)-th harmonic is \[Y_{N}=\frac{\int_{0}^{\frac{2\pi}{\omega}}e^{iN\omega t}I_{g}(t)dt}{\int_{0}^ {\frac{2\pi}{\omega}}e^{i\omega t}V_{g}(t)dt}. \tag{5}\] Equation 5 becomes of particular physical interest when the SEB is coupled with an electrical resonator, for example, in a reflectometry setup, [12; 14; 18], allowing the exploration of the different Fourier terms. An equivalent circuit is best thought of as in Fig. 
1c, with the SEB appearing as a parallel combination of a nonlinear resistor and capacitor to the _input_ of the setup, while its effect on the _output_ is best modelled as a parallel of Voltage-Controlled Current Sources (VCCS) associated to the aforementioned Fourier terms, whose output current is then filtered by the resonator (Appendix C). From a quantum perspective, the role of the resonator can be thought of as an electrodynamic cavity coupled to the SEB, whose role is simultaneously to amplify the resonant signals and select the desired harmonic. In Appendix C, we show how, if the \(N\)-th harmonic is the only one in the bandwidth of the resonator, one can approximate, in the usual regime where \(|Y_{Res}|\gg|Y_{N}|\), \[V_{out}^{N}=\frac{Y_{N}}{Y_{Res}}V_{in} \tag{6}\] where \(V_{out}^{N}(t)=\Re\left[V_{out}e^{iN\omega t}\right]\) and \(V_{in}(t)=\Re\left[V_{in}e^{i\omega t}\right]\) and \(Y_{Res}\) is the resonator admittance. Therefore, one only needs to determine the equivalent admittance of the system at the relevant harmonic to determine its reflection and transmission coefficients through a cavity. We emphasise that this result and the methodology developed are _general_ for any QD system. Finally, we shall notice from Eq.(3) that \(Y_{N}\) only depends on the probability of the QD being occupied. In Section III.2 we show how, the time evolution of the SEB can be generally expressed in a Master Equation of the form \[\frac{d}{dt}P=-\Gamma P+\Gamma_{-}(t), \tag{7}\] where \(\Gamma\) is the total QD-CR tunnel rate (independent of time because of conservation of charge) and \(\Gamma_{-}(t)\) is the tunnel rate _out_ of the QD. Thus, the admittance solely depends on the quantity \(\Gamma_{-}(t)\) (which, as well as \(\Gamma\), has units Hz throughout this work). In this work, we shall present a self-consistent fully quantum way of deriving such a rate, which when compared with the semiclassical result used thus far in the literature, will shed new light on the high-frequency behaviour of the SEB. In particular, in Section V we derive a general expression for \(Y_{N}\), and in Section VI we identify different regimes differentiated by the physical process dominating the charge dynamics. While we present a complete and thorough discussion of the various regimes, for the benefit of the reader Tables 2 and 3 present a short summary of the result derived in this work. A short glossary of the relevant symbols can be found in Tab. 1. In particular, we emphasise the response of the SEB at its fundamental frequency \(N=1\) (Tab. 2), as this is the result of most experimental interest. ### Reflectometry as Polaron Dynamics While the main result of this work is the derivation in closed-form of the effective SEB admittance \(Y_{N}\), a prerequisite was to derive a self-consistent fully quantum formalism for an open quantum system subject to sinusoidal driving. The results and mathematical toolbox assembled in this process lay the foundation to go beyond semiclassical master equations in the description of equivalent electronic circuits. 
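To make the measurement chain of Eqs.(2)-(6) concrete, the sketch below (Python, with purely illustrative parameter values and a thermal Fermi-Dirac form standing in for \(\Gamma_{-}(t)\), whose general expressions are derived in Sec. IV; it is not the authors' code) integrates the Master Equation Eq.(7) to the periodic steady state, builds the gate current of Eq.(3), extracts the harmonic admittances of Eq.(5), and converts \(Y_{1}\) into a reflectometry output via Eq.(6) for an assumed resonator admittance.

```python
# Illustrative sketch only (not the authors' code); all values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

kB, e = 1.380649e-23, 1.602176634e-19
alpha, Gamma, T = 0.5, 2e9, 0.1            # lever arm, tunnel rate (Hz), CR temperature (K)
omega = 2*np.pi*500e6                       # drive angular frequency (rad/s)
dVg = 10e-6                                 # gate drive amplitude (V), small signal
eps0, deps = 0.0, alpha*e*dVg               # static detuning and detuning swing (J)

def gamma_minus(t):
    # Stand-in for the rate out of the QD: thermal (Fermi-Dirac) form; the general
    # semiclassical and self-consistent rates are derived in Sec. IV.
    eps = eps0 + deps*np.cos(omega*t)
    return Gamma/(np.exp(eps/(kB*T)) + 1.0)

# Integrate Eq.(7) over many periods so that P(t) reaches its periodic steady state
Td = 2*np.pi/omega
sol = solve_ivp(lambda t, P: -Gamma*P + gamma_minus(t), (0.0, 50*Td), [0.5],
                max_step=Td/200, dense_output=True)

t = np.linspace(49*Td, 50*Td, 2001)         # one steady-state period
P = sol.sol(t)[0]
Ig = alpha*e*np.gradient(P, t)              # gate current, Eq.(3)

def Y_N(N):
    # Eq.(5): ratio of Fourier integrals of I_g(t) and V_g(t) over one period
    num = trapezoid(np.exp(1j*N*omega*t)*Ig, t)
    den = trapezoid(np.exp(1j*omega*t)*dVg*np.cos(omega*t), t)
    return num/den

Y1, Y_res = Y_N(1), 20e-3                   # Y_res: hypothetical resonator admittance (S)
print(f"|Y_1| = {abs(Y1):.3e} S, V_out/V_in = {abs(Y1/Y_res):.3e}")  # Eq.(6), |Y_res| >> |Y_1|
```

The same extraction applies unchanged once the general rates of Sec. IV are substituted for the stand-in above.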
In particular, describing a driven QD as a polaron sheds new light on the concept of Gate Current and the process of reflectometry, also serving as a cautionary tale when semiclassical methods venture beyond the limits imposed by the Heisenberg Uncertainty Principle.

\begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{2}{c}{Glossary of Symbols} \\ \hline \(\varepsilon_{0}\) & Average QD-CR electrochemical potential detuning \\ \(\delta\varepsilon\) & Amplitude of QD-CR detuning variation \\ \(\omega\) & Frequency of the PhB \\ \(\Gamma\) & QD-CR tunnel rate \\ \(T\) & CR charge temperature \\ \(\kappa\) & PhB photon loss rate \\ \(g\) & QD-PhB coherent coupling \\ \(\gamma=\frac{\kappa\omega^{2}}{(2g/\hbar)^{2}+\kappa^{2}}\) & Photon Loss Broadening Rate \\ \(\alpha\) & Gate lever arm \\ \(J_{n}\) & \(n\)-th Bessel Function of the first kind \\ \(\psi_{0}\) & Digamma function \\ \(\psi_{1}\) & Trigamma function \\ \hline \hline \end{tabular} \end{table} Table 1: Glossary of the symbols present in the effective SEB admittance (Tabs. 2 and 3) and their physical meaning.

Figure 1: Schematics of the driven single electron box as its low-frequency circuit equivalent (a) and energy level diagram (b). The AC voltage source is seen as a Photonic Bath (PhB), and ground is seen as a Charge Reservoir (CR), connected to the QD via a tunnel barrier. Panel (c) shows an effective high-frequency model of the system, thought of as a parallel combination of the variable tunnelling capacitance and Sisyphus resistance as seen from the input, and as a parallel combination of Voltage-Controlled Current Sources at all the harmonics of the drive as seen from the output. In the schematic, a resonant circuit is added as a spectroscopic tool to investigate specific harmonics.

\begin{table} \begin{tabular}{||c|c||} \hline **Regimes** & **Fundamental Admittance** \\ \hline Thermal Broadening (TB): \(k_{B}T\gg h\Gamma,\hbar\omega,\delta\varepsilon\) & Eq. (89) (Sec. VI.1) \\ \hline Lifetime Broadening (LB): \(h\Gamma\gtrsim k_{B}T\gg\hbar\omega,\delta\varepsilon\) & Eqs. (87), (92) (Sec. VI.2) \\ \hline Floquet Broadening (FB): \(\hbar\omega\gtrsim k_{B}T,h\Gamma\gg\delta\varepsilon\) & Eqs. (87), (96) (Sec. VI.3) \\ \hline Power Broadening (PB): \(\delta\varepsilon\gg k_{B}T,h\Gamma,\hbar\omega\) & Sec. VI.4 \\ \hline Photon Loss Broadening (PLB): \(\delta\varepsilon\gtrsim\hbar\omega\sqrt{\Gamma/\gamma}\), \(\kappa\ll\Gamma+\gamma(\delta\varepsilon/\hbar\omega)^{2}\) & Sec. VI.5 \\ \hline \end{tabular} \end{table} Table 2: Quantum Admittance at the fundamental of the driving frequency \(Y_{1}\) of the SEB in the various regimes presented in this work: TB (Sec. VI.1), LB (Sec. VI.2), FB (Sec. VI.3), PB (Sec. VI.4), and PLB (Sec. VI.5). A glossary of the symbols and their physical interpretation can be found in Tab. 1. \(\Re/\Im\) indicate the real and imaginary parts of the complex argument.

\begin{table} \begin{tabular}{||c|c||} \hline **Regimes** & **\(N\)-th Harmonic Admittance** \\ \hline Thermal Broadening (TB): \(k_{B}T\gg h\Gamma,\hbar\omega,\delta\varepsilon\) & Eq. (91) (Sec. VI.1) \\ \hline Lifetime Broadening (LB): \(h\Gamma\gtrsim k_{B}T\gg\hbar\omega,\delta\varepsilon\) & Sec. VI.2 \\ \hline Floquet Broadening (FB): \(\hbar\omega\gtrsim k_{B}T,h\Gamma\gg\delta\varepsilon\) & Sec. VI.3 \\ \hline Power Broadening (PB): \(\delta\varepsilon\gg k_{B}T,h\Gamma,\hbar\omega\) & Sec. VI.4 \\ \hline Photon Loss Broadening (PLB): \(\delta\varepsilon\gtrsim\hbar\omega\sqrt{\Gamma/\gamma}\), \(\kappa\ll\Gamma+\gamma(\delta\varepsilon/\hbar\omega)^{2}\) & Sec. VI.5 \\ \hline \end{tabular} \end{table} Table 3: Quantum Admittance at the \(N\)-th harmonic of the driving frequency \(Y_{N}\) of the SEB in the various regimes presented in this work: TB (Sec. VI.1), LB (Sec. VI.2), FB (Sec. VI.3), PB (Sec. VI.4), and PLB (Sec. VI.5). A glossary of the symbols and their physical interpretation can be found in Tab. 1. \(\Re/\Im\) indicate the real and imaginary parts of the complex argument.

The quantum description of the SEB begins by considering the semiclassical drive as the manifestation of (weak) coupling with a (coherent) Photonic Bath (PhB), which is held by a source in a coherent state. We describe the SEB in a second-quantization formalism, as outlined in Section III.1, by representing the dynamics via the annihilation operators of the QD, CR, and PhB and their respective adjoints. Semiclassically, we would group the systems in Fig. 1a _horizontally_, i.e. consider first the QD as coupled with a CR, which preserves the conservation of charge during the time evolution. However, this leads our intuition to think of the SEB as a _completely incoherent_ system, where the effect of the PhB is merely to _drive_ stochastic tunnelling events in and out of the QD. Formally, as we show in Section IV, this relies on the Instantaneous Eigenvalues Approximation (IEA) [32], which consists of a secular approximation of the quantum dynamics, where the driven QD acts as a single level whose energy periodically oscillates in time. Such an adiabatic treatment of the problem forgoes the quantum nature of the dynamics. In particular, while the final ME will be Markovian and thus time-local, the expression for the tunnel rates necessarily contains a time convolution [33], preventing a simple extension of the adiabatic result to the time-dependent case [34]. We shall see, however, how the Lindblad formalism drastically simplifies when we group Fig. 1a _vertically_ and consider a mixed electron-photon state (i.e., a polaron) whose fermionic part stochastically interacts with the CR. Most interestingly, in Section IV.2, we show how the CR-polaron interaction reads \[H_{CR-Pol}\propto c^{\dagger}Dd+h.c., \tag{8}\] where \(c\) and \(d\) are the annihilation operators of the CR and QD, respectively. The operator \[D=\exp\left(-\frac{g}{\hbar\omega}\left(a^{\dagger}-a\right)\right) \tag{9}\] is known as a displacement operator of the PhB, where \(a\) is the annihilation operator of the PhB, \(\hbar\omega\) is the photon energy, and \(g\) is the coherent QD-PhB coupling. For the benefit of the reader, we recall that \[D\propto\left(\sum_{k}\frac{1}{k!}\left(-\frac{g}{\hbar\omega}a^{\dagger}\right)^{k}\right)\left(\sum_{k}\frac{1}{k!}\left(\frac{g}{\hbar\omega}a\right)^{k}\right).
\tag{10}\] Fundamentally, this shows that in a driven QD, a tunnelling event can only occur through the creation or annihilation of a polaron. This does not alter the total number of photons but only changes their state. Eq.(8) gives a direct physical interpretation of the concept of Gate current. In the Lindblad model, we assume the bath to remain unperturbed by the state of the QD. The Gate current is a manifestation of those _extra_ photons that change state as a back-action of charge tunnelling event, and which manifest semiclassically as an alternating current (AC). This formalism allows us to go beyond the limits of the semiclassical approach. In particular, we can describe the effects of a high-frequency drive or a short lifetime of the charge in the QD, both cases in which the semiclassical picture breaks the time-energy Uncertainty Principle. By describing reflectometry as polaron dynamics, we shall see that the Heisenberg principle is recovered in both limits, which will be reflected in the reflectometry lineshapes at the fundamental \(Y_{1}\). Notably, we stress that this _must_ be the case. Formally, we can think of the limit of a zero temperature CR as a _spectroscopic_ probe of the effective density of states of the SEB. In this case, \(Y_{1}\) takes the exact form of the _spectrum_ of the system [35], and thus, in a rigorous quantum description, the Heisenberg Uncertainty Principle must be satisfied. More physically, we note that \(Y_{1}\) directly derives from the Fourier transform of a physical observable, i.e. the charge in the QD. Considering that the time-energy Uncertainty Principle derives from the relation between real and reciprocal space, we expect it to be conserved. ## III Quantum dynamics of the single electron box In this section, we shall define the SEB in terms of its separate subsystem: a QD, a CR, and a PhB. We will also show that any ME in Lindblad form can be written without loss of generality in terms of a single Ordinary Differential Equation for the probability of occupation of the QD as a function of time. ### System Hamiltonian Regarding its separate subsystems, the complete SEB Hamiltonian reads \[H=H_{QD}+H_{CR}+H_{PhB}+H_{DR}+H_{DP}, \tag{11}\] where \(H_{QD}\), \(H_{CR}\), and \(H_{PhB}\) represent the unperturbed Hamiltonians of the QD, CR and PhB, respectively. While \(H_{DR}\) and \(H_{DP}\) describe the QD-CR and QD-PhB coupling, respectively. In particular, we take the zero of energy at the Fermi level of the CR and consider Hamiltonians of the form \[H_{QD}=\varepsilon_{0}\ d^{\dagger}d \tag{12}\] \[H_{R}=\sum_{\epsilon}\ \epsilon\ c^{\dagger}_{\epsilon}c_{\epsilon}\] (13) \[H_{PhB}=\hbar\omega\ a^{\dagger}a, \tag{14}\] with \(d\), \(c\), and \(a\) representing the destruction operators of the QD, CR, and PhB, respectively. The parameter \(\varepsilon_{0}\) describes the static energy detuning of the single QD level (Fig. 1a), and \(\epsilon\) is the energy of the (potentially discrete) CR levels. We assume the microwave radiation to be monochromatic at the single frequency \(\omega\). For ease of future notation, we can define \[H_{0}=H_{QD}+H_{R}+H_{PhB} \tag{15}\] as the Hamiltonian describing the free evolution of the (uncoupled) quantum systems. 
Those are then coupled by interaction Hamiltonians, which in the second quantization formalism are [36] \[H_{DR} =\sum_{\epsilon}\ V_{\epsilon}c_{\epsilon}d^{\dagger}+V_{ \epsilon}^{*}c_{\epsilon}^{\dagger}d \tag{16}\] \[H_{DP} =g(a+a^{\dagger})d^{\dagger}d, \tag{17}\] where we have considered the experimentally-relevant case of longitudinal QD-PhB coupling. The parameter \(g\) is the coherent QD-PhB coupling, and \(V_{\epsilon}\) is the CR-QD coupling at energy \(\epsilon\). ### A General Master Equation Equation (11) describes the complete quantum dynamics of the QD-CR-PhB system. In this work, we are interested in the QD charge dynamics. Therefore, in the subsequent sections, we shall employ the Lindblad formalism to write a ME that only describes the QD degree of freedom by appropriately tracing over the CR and PhB. The final Lindblad Master Equation (LME) will take the standard form \[\hbar\frac{d}{dt}\rho(t)=-i[H_{0}(t),\rho(t)]+\sum_{i=+,-}h\Gamma_{i}(t){\cal D }[L_{i}], \tag{18}\] where \[{\cal D}(L_{i})[\rho(t)]=L_{i}^{\dagger}\rho(t)L_{i}-\frac{1}{2}\left\{L_{i}^ {\dagger}L_{i},\rho(t)\right\} \tag{19}\] are the dissipation superoperators that account for the non-unitary dynamics caused by the coupling to the CR. The latter is described by the _jump_ operators \[L_{+} =d^{\dagger} \tag{20}\] \[L_{-} =d\] linked to the corresponding tunnel rates \(\Gamma_{\pm}(t)\), which we shall derive in the subsequent sections. We can simplify the problem further by formulating it as a two-level system, where the QD is either occupied (\(|o\rangle\)) or empty (\(|e\rangle\)). A closer look at the jump operators in Eq.(20) leads to identifying \(\Gamma_{\pm}(t)\) as the probability per unit time of an electron entering (exiting) the QD from the CR. Therefore, a simple argument based on the Fermi Golden Rule leads to the rate of jumping in (out) of the QD only depending on the number of occupied (empty) states in the CR. Consequently, \[\Gamma_{+}(t)+\Gamma_{-}(t)=\Gamma, \tag{21}\] with \(\Gamma\) being a constant representing the total charge tunnel rate [37; 35]. Moreover, the previous interpretation of the QD as a _two_-level system immediately leads us to write the density matrix as \[\rho=\begin{pmatrix}P&C\\ C^{*}&1-P\end{pmatrix}, \tag{22}\] where \(P(t)\) is the probability of occupation of the QD and \(C=\text{tr}\{|e\rangle\langle o|\rho\}\) quantifies the degree of coherent superposition between the two states. It is now fruitful to examine the problem in the Fock-Liouville space by considering the flattened density matrix [33] \[\rho\rightarrow\vec{\rho}_{FL}=\begin{pmatrix}P,&C,&C^{*},&1-P\end{pmatrix}^ {T}. \tag{23}\] This picture allows us to explicitly write the LME in Eq.(18) in its matrix form, which, after some algebra [38] and making use of Eq.(21), reads \[\frac{d}{dt}\begin{pmatrix}P\\ C\\ C^{*}\\ 1-P\end{pmatrix}=\begin{pmatrix}-\Gamma_{+}(t)&0&0&\Gamma_{-}(t)\\ 0&-\Gamma/2-i\varepsilon(t)/\hbar&0&0\\ 0&0&-\Gamma/2+i\varepsilon(t)/\hbar&0\\ \Gamma_{+}(t)&0&0&-\Gamma_{-}(t)\end{pmatrix}\begin{pmatrix}P\\ C\\ C^{*}\\ 1-P\end{pmatrix}. \tag{24}\] After matrix multiplication, we arrive at an expression for the coherence \[\frac{d}{dt}C=\left(-\frac{\Gamma}{2}-i\frac{\varepsilon(t)}{\hbar}\right)C. \tag{25}\] Intuitively, Eq.(25) shows how the CR exponentially destroys any possible coherence between the full and empty QD. The system becomes a _statistical_ mixture of the two kets because of the _stochastic_ tunnelling events to and from the CR. 
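A minimal numerical sketch of the Fock-Liouville evolution of Eq.(24) (Python, with arbitrary illustrative rates; not the authors' code) makes these two observations explicit: the coherence \(C\) decays at the rate \(\Gamma/2\) as in Eq.(25), while the trace is preserved and the population dynamics reduce to the scalar equation for \(P(t)\) discussed next.

```python
# Sketch with illustrative (hypothetical) parameters; hbar = 1 and arbitrary units.
import numpy as np
from scipy.integrate import solve_ivp

Gamma, omega = 1.0, 0.3
eps0, deps, kT = 0.0, 0.5, 1.0

def rates(t):
    # Placeholder Gamma_-(t); by Eq.(21), Gamma_+(t) = Gamma - Gamma_-(t)
    eps = eps0 + deps*np.cos(omega*t)
    gm = Gamma/(np.exp(eps/kT) + 1.0)
    return Gamma - gm, gm

def liouvillian(t):
    # Matrix generator of Eq.(24) acting on the flattened density matrix (P, C, C*, 1-P)
    gp, gm = rates(t)
    eps = eps0 + deps*np.cos(omega*t)
    return np.array([[-gp, 0, 0, gm],
                     [0, -Gamma/2 - 1j*eps, 0, 0],
                     [0, 0, -Gamma/2 + 1j*eps, 0],
                     [gp, 0, 0, -gm]], dtype=complex)

rho0 = np.array([0.3, 0.2 + 0.1j, 0.2 - 0.1j, 0.7], dtype=complex)
sol = solve_ivp(lambda t, r: liouvillian(t) @ r, (0.0, 40.0), rho0, max_step=0.05)

P = sol.y[0].real
print("final |C| =", abs(sol.y[1][-1]))                    # coherence destroyed, Eq.(25)
print("trace preserved:", np.allclose(P + sol.y[3].real, 1.0))
```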
Finally, we arrive at the key result that, generally, the charge dynamics are given by the statistical properties of \(P(t)\), which are uniquely determined by the functional form of \(\Gamma_{-}(t)\), \[\frac{d}{dt}P+\Gamma P=\Gamma_{-}(t) \tag{26}\] In the following sections, we shall derive the expressions for \(\Gamma_{-}(t)\) in different limits and levels of complexity: the semiclassical limit (Section IV.1) and with a fully quantum self-consistent approach (Section IV.2). These sections give detailed mathematical insight into the results of this work, especially highlighting the breakdown of the semiclassical approximation and the importance of quantum theory in describing the SEB. Readers not directly interested in the derivation of the tunnel rates and the underlying Lindblad formalism are welcome to proceed to Section V, where the key results are restated, with additional physical insight regarding their precise functional form. ## IV Derivation of the tunnel rates In the previous section, we showed how the tunnelling rate \(\Gamma_{-}(t)\) fully defines the charge dynamics. In this section, we derive such tunnel rates in the semiclassical and self-consistent quantum formalism. In particular, we highlight the nontrivial approximations made in the semiclassical model and stress the comparison with the fully quantum formalism. ### Semiclassical Lindblad Master Equation A simple method to describe the longitudinal QD-PhB coupling is to write \[H_{0}(t)=\left\langle H_{QD}+H_{DP}\right\rangle_{PhB}=\left(\varepsilon_{0}+ \delta\varepsilon\cos\omega t\right)d^{\dagger}d, \tag{27}\] where \(\left\langle\cdot\right\rangle_{PhB}\) represents the partial trace over the PhB, and we have used the fact that \[\left\langle g\left(a^{\dagger}+a\right)\right\rangle_{PhB}=\delta\varepsilon \cos\omega t. \tag{28}\] We will derive this more thoroughly in the subsequent section. It stems from the semiclassical insight that \(\left(a^{\dagger}+a\right)\) is proportional to the electric field operator, which we expect to be monochromatic. We can extend this parallel more deeply by observing from Eq. (27) that \(\delta\varepsilon\) represents the amplitude of the energy-detuning oscillation. Therefore, we can rederive the conventional result by identifying \[\delta\varepsilon=\alpha e\delta V_{g}, \tag{29}\] where \(\delta V_{g}\) is the peak-to-peak voltage amplitude applied to the QD gate and \(\alpha\) the correspondent gate lever arm (Fig. 1a). Therefore, the longitudinal coupling is the quantum equivalent of a sinusoidally-oscillating detuning \[\varepsilon(t)=\varepsilon_{0}+\delta\varepsilon\cos\omega t. \tag{30}\] We shall note that Eq. (27) is formally obtained by considering the PhB at first order in the Lindblad Theory and describing the system as an interacting QD-CR pair following the time-dependent Hamiltonian \[H_{SC}=H_{0}(t)+H_{R}+H_{DR}. \tag{31}\] This is the semiclassical description of an SEB considered thus far in the literature [24; 25]. In this section, however, we shall briefly retrace the steps to derive a LME from Eq.(31) to highlight the differences between the semiclassical result and the new self-consistent ME derived in section IV.2. 
By defining the Liouville superoperators as \[\mathcal{L}(t)=\left[H_{0}(t)+H_{R}(t),\cdot\right] \tag{32}\] \[\mathcal{L}^{\prime}(t)=\left[H_{DR}(t),\cdot\right], \tag{33}\] we can write the semiclassical LME in the known form [36; 39] \[\begin{split}\hbar\frac{d}{dt}&\rho=-i\left[H_{0}( t),\rho(t)\right]-\\ &-\int_{0}^{+\infty}d\tau\langle\mathcal{L}^{\prime}(t)\mathcal{G}( t,\tau)\mathcal{L}^{\prime}(\tau)\mathcal{G}^{\dagger}(t,\tau)\rangle_{R}\rho(t), \end{split} \tag{34}\] where \[\mathcal{G}(t-\tau)=\mathcal{T}e^{-i\int_{t}^{\tau}\mathcal{L}(t^{\prime})dt^{ \prime}} \tag{35}\] is the free propagator of the unperturbed dynamics, and \(\langle\cdot\rangle_{R}\) represents the partial trace over the CR. The notation \(\mathcal{T}\) represents the time-ordered integral. We shall note that in Eq.(34), we have already assumed Markovian dynamics when extending the upper bound of the integral to \(\tau\rightarrow+\infty\). As usual in the Lindblad formalism, we can compute the partial trace in Eq.(34) via the Born approximation by requiring that the dynamics in the reservoir be faster than the CR-QD interaction timescales, and thus the CR can be considered constantly in thermal equilibrium. Therefore, \[\left\langle c_{\epsilon}\right\rangle_{R}=\left\langle c_{\epsilon }^{\dagger}\right\rangle_{R}=0 \tag{36}\] \[\left\langle c_{\epsilon}^{\dagger}c_{\epsilon}\right\rangle_{R}= f(\epsilon)\] (37) \[\left\langle c_{\epsilon}c_{\epsilon}^{\dagger}\right\rangle_{R}= f(-\epsilon)=1-f(\epsilon), \tag{38}\] where \[f(\epsilon)=\frac{1}{e^{\epsilon/k_{B}T}+1} \tag{39}\] is the Fermi-Dirac distribution. After making what is commonly referred to as the Rotating Wave Approximation (RWA) [24; 25; 36; 40; 41; 42; 43; 44] or the Instantaneous Eigenvalues Approximation (IEA) [45; 32], Eq.(34) becomes equivalent to Eq.(18), with \[\Gamma_{\pm}(t)=\Re\left[\sum_{\epsilon}|V_{\epsilon}|^{2}f(\pm\epsilon)\int_ {0}^{\infty}d\tau e^{-i(\epsilon+\epsilon(t))\tau}\right]. \tag{40}\] We can now take the continuous limit of the CR (\(\sum_{\epsilon}\rightarrow\int\mathcal{D}(\epsilon)d\epsilon\), with \(\mathcal{D}(\epsilon)\) the density of states in the CR) and take the wide-band limit of both \(\mathcal{D}(\epsilon)\) and \(V_{\epsilon}\) being weakly dependent on \(\epsilon\). In this limit, Eq.(40) becomes \[\Gamma_{\pm}(t)=\Gamma f(\mp\varepsilon(t)), \tag{41}\] which is a well-known result [12; 14; 18; 19; 41]. Notably, this is effectively equivalent to a stationary phase approximation of Eq.(40), and would be exact if \(\varepsilon\) does not depend on time. In the driven case, the IEA implicitly assumes an _adiabatic_ (secular) approximation of the propagator, and thus the evolution of the quantum phase, with \(\omega\) much slower than any other timescale (i.e. \(\Gamma\)) [46; 47; 32; 43]. This will become apparent in the further discussion of the effective admittance. Not only it immediately proves the ansatz in Eq.(21), but it also defines \(\Gamma=|V|^{2}\mathcal{D}\)[42; 36]. It ought to be apparent now how we should expect Eq.(21) from the fermionic character of the CR and thus the identity \(\{c,c^{\dagger}\}=0\). We shall stress how semiclassically, consistently with the name IEA, the ME only considers the reservoir states at the instantaneous energy of the QD, thus treating the level as a Dirac Delta in energy space and neglecting any broadening that may occur because of the finite coupling to the CR. Moreover, it highlights the _secular_ nature of the approximation. 
Therefore, it fails to consider that the dynamics generated by a periodically-driven Hamiltonian are far richer than a simple oscillation of the QD energy. Therefore, we expect it to break down if the driving frequency is comparable with the other energy and time scales of the dynamics. These concerns will be addressed by the self-consistent approach discussed in the next section. In the following, we shall refer to Eq.(26) with the rates defined in Eq.(41) as the Semiclassical Master Equation (SME) [48]. ### Self-Consistent Single Electron Box Master Equation In this section, we derive a fully quantum ME, now treating the PhB quantum mechanically. To do so, we begin by re-defining the Liouville superoperators as \[\mathcal{L}(t)=[H_{QD}+H_{R}+H_{PhB},\cdot] \tag{42}\] \[\mathcal{L}^{\prime}(t)=[H_{DR}+H_{DP},\cdot]\,. \tag{43}\] Therefore, the LME is now written in a similar form to Eq.(34), as \[\begin{split}\hbar\frac{d}{dt}\rho&=-i\left[H_{QD} (t),\rho(t)\right]-\\ &-\int_{0}^{+\infty}d\tau\langle\mathcal{L}^{\prime}(t)\mathcal{ G}(t,\tau)\mathcal{L}^{\prime}(\tau)\mathcal{G}^{\dagger}(t,\tau)\rangle\rho(t), \end{split} \tag{44}\] where the partial trace must now be taken over both the CR and the PhB. #### iv.2.1 Polaron Transformation Before tackling Eq.(44), we shall consider electron-photon interaction in the Lang-Firsov formalism and perform a canonical transformation into the polaron frame of reference[49; 50; 51]. This is achieved by considering the operator \[S=-\frac{g}{\hbar\omega}\left(a^{\dagger}-a\right)d^{\dagger}d \tag{45}\] and studying the polaron-transformed Hamiltonian \[\tilde{H}=e^{-S}He^{S}. \tag{46}\] This is equivalent to applying the displacement operator \[D=\exp\left(-\frac{g}{\hbar\omega}\left(a^{\dagger}-a\right)\right), \tag{47}\] which selectively depends on the state of the QD. In Appendix E, we show how, in the polaron frame \[\begin{split}& e^{-S}\left(H_{PhB}+H_{QD}+H_{DP}\right)e^{S}= \\ &\left(\varepsilon_{0}+\frac{g^{2}}{\hbar\omega}\right)d^{ \dagger}d+\hbar\omega a^{\dagger}a,\end{split} \tag{48}\] where we can see the longitudinal coupling merely becomes a Lamb Shift of the QD detuning, taking the form \[\tilde{H}_{QD}=\left(\varepsilon_{0}+\frac{g^{2}}{\hbar\omega}\right)d^{ \dagger}d=\tilde{\varepsilon}_{0}d^{\dagger}d. \tag{49}\] Moreover, in Appendix E, by employing the Baker-Campbell-Hausdroff (BCH) theorem, we find that [42; 36] \[e^{-S}\left(H_{DR}\right)e^{S}=\sum_{\epsilon}\;V_{\epsilon}c_{\epsilon}D^{ \dagger}d^{\dagger}+V_{\epsilon}^{*}c_{\epsilon}^{\dagger}Dd. \tag{50}\] #### iv.2.2 Self-Consistent Propagator In Section IV.1, we saw how the SME does not consider level broadening. The reason for this is the combination of the IEA with Eq.(34) using the _free_ propagator \(\mathcal{G}(t)\) in the Born approximation. We can remedy this by taking into account the self-consistent Born approximation, where we replace \(\mathcal{G}(t)\) with a _self-consistent_ propagator [52] \[\mathcal{U}(t,\tau)=\mathcal{T}e^{-i\int_{t}^{\tau}(\mathcal{L}+\mathcal{L}^{ \prime})(t^{\prime})dt^{\prime}}. \tag{51}\] We now consider the operators in the Heisenberg Picture [53; 54] \[\mathcal{U}(0,t)[d]=\left\langle e^{-i\tilde{H}t}de^{-i\tilde{H}t}\right\rangle, \tag{52}\] with \(\tilde{H}\) representing the _full_ polaron-transformed Hamiltonian and the partial trace taken over the CR and PhB _after_ the time evolution. 
Using the BCH theorem, one can prove that this is equivalent to requiring [36; 42] \[\begin{cases}&\hbar\frac{d}{dt}d(t)=i\tilde{\varepsilon}_{0}d(t)-i\sum_{\epsilon} V_{\epsilon}D^{\dagger}(t)c_{\epsilon}(t)\\ &\hbar\frac{d}{dt}c_{\epsilon}(t)=iec_{\epsilon}(t)-iV_{\epsilon}^{*}D(t)d(t) \end{cases} \tag{53}\] and that \(D(t)\) evolves normally, as the phonon operators only appear in the free Hamiltonian \(H_{PhB}\). Interestingly, the time evolution of the ladder operators of the QD and CR are now coupled, reflecting how, for finite \(V_{\epsilon}\), the state in the QD is metastable. Taking the Laplace transform of Eq.(53) in the wide-band limit of the CR yields the intuitive result [36; 42] \[\mathcal{U}(0,t)[d]=de^{-i\tilde{\varepsilon}_{0}t}e^{-\Gamma t} \tag{54}\] \[\mathcal{U}(0,t)[d^{\dagger}]=d^{\dagger}e^{i\tilde{\varepsilon} _{0}t}e^{-\Gamma t}. \tag{55}\] #### iii.2.3 Self-Consistent Tunnel Rates Using the result in Eq.(55), we can write the LME in the polaron frame as \[\begin{split}\hbar\frac{d}{dt}\tilde{\rho}=-i\left[\tilde{H}_{S},\tilde{\rho}(t)\right]-\\ -\int_{0}^{+\infty}d\tau\left\langle\tilde{\mathcal{L}}^{\prime} (t)\mathcal{U}(t,\tau)\tilde{\mathcal{L}}^{\prime}(\tau)\mathcal{U}^{\dagger} (t,\tau)\right\rangle\tilde{\rho}(t),\end{split} \tag{56}\] which we can similarly recast in the form of Eq.(18) with the equivalent of Eq.(40) reading \[\Gamma_{-}(t) =\frac{\Gamma}{\pi}\Re\left[\int d\epsilon f(\epsilon)\mathcal{ K}(\epsilon,t)\right] \tag{57}\] \[\Gamma_{+}(t) =\frac{\Gamma}{\pi}\Re\left[\int d\epsilon(1-f(\epsilon))\mathcal{ K}^{*}(\epsilon,t)\right], \tag{58}\] where we have defined \[\mathcal{K}(\epsilon,t)=\int_{0}^{\infty}d\tau e^{-i(\epsilon-\varepsilon_{0}) \tau}e^{-\Gamma\tau}\left\langle D(t)D^{\dagger}(\tau-t)\right\rangle_{PhB}. \tag{59}\] To compute the partial trace over the PhB, we notice that the semiclassical result in Eq.(28) can be obtained simply assuming that the radiation is in a coherent state \(|\delta\varepsilon/2g\rangle\). This view, although simplistic, can be justified by considering the typical phase noise characteristic of rf sources, together with the fact that, as made evident in, Eq.(59), in the self-consistent picture, phase-coherence is only required on timescales of the order \(1/\Gamma\)1. We take the occasion also to stress that the Markovian assumption becomes natural in the self-consistent Born approximation, thanks to the self-consistent propagator decaying exponentially on the same timescales. Footnote 1: Typical figures for phase noise for a 1GHz Local Oscillator are of -100 dBc at 100 Hz offset and -40dBc at 1Hz offset, while experimentally \(\Gamma\) varies between 10 MHz and 100 GHz. Thus, the excitation can be considered monochromatic for experimental purposes. In this case, we can use the composition properties of displacement operators to write \[\begin{split}&\left\langle D(t)D^{\dagger}(\tau-t)\right\rangle_{PhB }=\exp\left(-\left(\frac{g}{\hbar\omega}\right)^{2}(1-e^{-i\omega\tau}) \right)\cdot\\ &\cdot\exp\left(-i\left(\frac{\delta\varepsilon}{\hbar\omega} \right)\left(\sin\omega(\tau-t)+\sin\omega t\right)\right),\end{split} \tag{60}\] where we have separated the contribution of photon absorption and stimulated emission, containing both rotating and counter-rotating terms, and spontaneous emission, containing only counter-rotating terms independent of the microwave amplitude. We shall note that the spontaneous emission tends to 1 in the regime \(g\ll\hbar\omega\). 
This is the case of weak QD-PhB coupling, which is the case of all the results we shall consider in this work. Therefore, for the sake of clarity and ease of notation, we shall disregard this term from now on. We can compute the integral in Eq.(59) by making use of the Jacobi-Anger Identity (JAI) \[\exp\left(-ix\sin\omega\tau\right)=\sum_{m=-\infty}^{+\infty}J_{m}\left(x \right)e^{-im\omega\tau}, \tag{61}\] where \(J_{m}\left(x\right)\) is the Bessel function of the first kind. This solution allows us to write, neglecting spontaneous emission, \[\begin{split}\mathcal{K}(\epsilon,t)=e^{i\left(\frac{\delta \varepsilon}{\hbar\omega}\right)\sin\omega t}&\sum_{m=-\infty}^ {+\infty}J_{m}\left(\frac{\delta\varepsilon}{\hbar\omega}\right)e^{-im\omega t }\cdot\\ &\cdot\int_{0}^{+\infty}d\tau e^{-i(\epsilon-\varepsilon_{0}-m \omega)\tau}e^{-\Gamma\tau}.\end{split} \tag{62}\] Notably, modulo the self-consistent term \(e^{-\Gamma\tau}\), this expression resembles previous results obtained in the non-adiabatic quantum ME framework or Floquet-Lindblad formalisms [32; 34; 43; 47; 55; 56]. However, our treatment not only allows us to include the effect of the finite QD lifetime in the dynamics but immediately gives us a physical interpretation of the coefficients, as determined from the photonic part of the polaron, rather than having to derive them separately from a Floquet decomposition [34; 47; 55; 57]. Using the Fubini-Tonelli theorem, we can write \[\Gamma_{\pm}(t)=\Gamma\Re\left[e^{i\left(\frac{\delta\varepsilon}{\hbar \omega}\right)\sin\omega t}\sum_{m=-\infty}^{+\infty}J_{m}\left(\frac{\delta \varepsilon}{\hbar\omega}\right)e^{-im\omega t}\mathcal{F}_{m}^{\pm}(\varepsilon _{0})\right], \tag{63}\] where we have defined \[\mathcal{F}_{m}^{\pm}(\varepsilon_{0})=\frac{\Gamma}{\pi}\int_{-\infty}^{\infty }\quad\frac{f(\mp\epsilon)}{\Gamma^{2}+((\epsilon-\varepsilon_{0})/\hbar-m \omega)^{2}}\quad d\epsilon \tag{64}\] being the convolution of the Fermi-Dirac distribution and the effective Lorentzian density of states of the QD caused by the coupling with the CR. Here, we have disregarded the well-known diverging Lamb shift arising from taking the trace of the (unbound) CR number operator [42; 58]. Notably, in Eq.(63) appear the semiclassical phase of the driven level \(\exp\left(-i\int_{0}^{t}dt^{\prime}\varepsilon(t^{\prime})/\hbar\right)\), as well as an expansion on the Floquet modes of the system (see Section. V.2), recalling the functional form expected from Floquet-Lindblad theory [55; 34]. Computationally, it is easier to explicitly write the tunnelling rate in terms of its Fourier components. Therefore, after another application of the JAI, Eq.(63) reads \[\begin{split}\Gamma_{-}(t)=\Gamma\sum_{m=-\infty}^{+\infty}\sum_ {n=-\infty}^{+\infty}& J_{n}\left(\frac{\delta\varepsilon}{ \hbar\omega}\right)J_{m}\left(\frac{\delta\varepsilon}{\hbar\omega}\right) \cdot\\ &\cdot\Re\left[e^{-i(m-n)\omega t}\mathcal{F}_{m}^{-}(\varepsilon_ {0})\right]\end{split} \tag{65}\] Within Appendix D, we showed how this (lifetime-) broadened Fermi-Dirac can be rewritten analytically as \[\mathcal{F}_{m}^{\pm}(\varepsilon_{0})=\frac{1}{2}-\frac{1}{\pi}\Im\left[\psi _{0}\left(\frac{1}{2}+i\frac{\Theta_{m}^{\pm}}{2\pi}\right)\right], \tag{66}\] where \(\psi_{0}\) is Euler's digamma function and \[\Theta_{m}^{\pm}=\pm\frac{\varepsilon_{0}+m\hbar\omega}{k_{B}T}-i\frac{h\Gamma }{k_{B}T}. 
\tag{67}\] The numerator in Eq.(67) (as we shall see in Section V.2) can be interpreted as the (complex) quasi-energies of the metastable Floquet Modes of the damped-driven SEB[40]. We can now use the property that, thanks to the anticommutation relations of the fermionic CR operators, \[\mathcal{F}_{m}^{-}(\varepsilon_{0})=1-\mathcal{F}_{m}^{+}(\varepsilon_{0}) \tag{68}\] to show that the property in Eq.(21) still holds and, thus, we re-obtain a Self-Consistent Quantum Master Equation (SCQME) of the form in Eq.(26). ## V Charge dynamics and effective admittance So far, with Eq.(26), we have introduced the ME formalism that can calculate the time-dependent QD occupation probability in different limits by appropriately selecting the relevant tunnel rate \(\Gamma_{-}(t)\). In this section, we shall explain how to use this to derive an effective SEB admittance. ### Semiclassical and Quantum Admittance Firstly, we notice Eq.(26) has the steady-state solution \[P(t)=e^{-\Gamma t}\int_{-\infty}^{t}e^{\Gamma\xi}\Gamma_{-}(\xi)\ d\xi. \tag{69}\] For a periodic function \(\Gamma_{-}(t)\) with frequency omega, this can be rewritten as [18] \[P(t)=\frac{\omega}{2\pi}\sum_{N}\frac{1}{\Gamma+iN\omega}\int_{0}^{\frac{2\pi}{ \omega}}e^{iN\omega t}\Gamma_{-}(t)\ \ dt+c.c. \tag{70}\] Recalling the definition of Gate current and quantum admittance in Eqs. (3) and (5), we can therefore write \[\begin{split} Y_{N}&=i\frac{(e\alpha)^{2}}{\delta \varepsilon}\frac{N\omega}{\Gamma+iN\omega}\int_{0}^{\frac{2\pi}{\omega}}e^{iN \omega t}\Gamma_{-}(t)\ \ dt=\\ &=\frac{\mathcal{C}_{N}}{\delta\varepsilon}\int_{0}^{\frac{2\pi}{ \omega}}e^{iN\omega t}\frac{\Gamma_{-}(t)}{\Gamma}\ \ dt,\end{split} \tag{71}\] where, for conciseness, we have defined the coefficient \[\mathcal{C}_{N}=i(e\alpha)^{2}\frac{N\omega\Gamma}{\Gamma+iN\omega}, \tag{72}\] which accounts for the semiclassical high-pass filtering effect of the capacitive coupling of the collection gate. Semiclassically, we have shown in Section IV.1 that the tunnelling rate reads \[\Gamma_{-}(t)=\Gamma f(\varepsilon_{0}+\delta\varepsilon\cos\omega t), \tag{73}\] where \(f(\varepsilon)\) is the Fermi-Dirac distribution. This result allows us to directly compute the impedance from the Semiclassical Master Equation (SME) as \[Y_{N}^{SME}=2\frac{\mathcal{C}_{N}}{\delta\varepsilon}\int_{0}^{\frac{2\pi}{ \omega}}e^{iN\omega t}f(\varepsilon_{0}+\delta\varepsilon\cos\omega t)\ \ dt. \tag{74}\] To compute the admittance for the Self-Consistent Quantum Master Equation (SCQME), instead, we can use the rate from Eq.(65), which reads \[\begin{split}\Gamma_{-}(t)=\Gamma\sum_{m=-\infty}^{+\infty}\sum_ {n=-\infty}^{+\infty}& J_{n}\left(\frac{\delta\varepsilon}{ \hbar\omega}\right)J_{m}\left(\frac{\delta\varepsilon}{\hbar\omega}\right) \cdot\\ &\cdot\Re\left[e^{-i(m-n)\omega t}\mathcal{F}_{m}^{-}(\varepsilon_ {0})\right],\end{split} \tag{75}\] where in Section IV.2, we have defined \(\mathcal{F}_{m}^{-}\) as the self-consistent distribution of the SEB is the convolution between the Fermi-Dirac distribution and a Lorentzian representing the effective density of states of the metastable QD level, and \(J_{n}\) are Bessel functions of the first kind. We remind the reader that the only approximation in Eq.(75) is to be in the weak QD-PhB coupling regime. 
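The expressions above translate directly into a numerical recipe. The sketch below (Python; illustrative dimensionless parameters, not the authors' code) evaluates the broadened distribution \(\mathcal{F}_{m}^{-}\) by direct numerical convolution, as in Eq.(64), rather than through the closed digamma form of Eq.(66), builds \(\Gamma_{-}(t)/\Gamma\) from the Bessel-weighted double sum of Eq.(75), and extracts the Fourier component entering Eq.(71) for both the SCQME and the SME rates; prefactors such as \(\mathcal{C}_{N}\) are left out, so only lineshapes are compared.

```python
# Sketch only; energies are in units of k_B T and "w" denotes the lifetime width of the
# Lorentzian density of states (hbar*Gamma in those units). All values are illustrative.
import numpy as np
from scipy.special import jv
from scipy.integrate import trapezoid

w, homega = 0.2, 0.5          # lifetime width and photon energy, both over k_B T
eps0, deps = 0.3, 0.4         # static detuning and drive amplitude, both over k_B T
M = 30                        # Floquet/Bessel cutoff

fermi = lambda x: 1.0/(np.exp(x) + 1.0)

def F_minus(m):
    # Eq.(64): Fermi-Dirac convolved with the Lorentzian of the metastable QD level
    eps = np.linspace(-40.0, 40.0, 8001)
    lor = (w/np.pi)/(w**2 + (eps - eps0 - m*homega)**2)
    return trapezoid(fermi(eps)*lor, eps)

ms = np.arange(-M, M + 1)
J = jv(ms, deps/homega)
F = np.array([F_minus(m) for m in ms])

def rate_scqme(wt):
    # Eq.(75): Gamma_-(t)/Gamma as a double Bessel sum, evaluated at the phase wt = omega*t
    phases = np.exp(-1j*np.subtract.outer(ms, ms)*wt)      # e^{-i (m - n) omega t}
    return np.real(np.einsum('n,m,mn->', J, J*F, phases))

def rate_sme(wt):
    return fermi(eps0 + deps*np.cos(wt))                    # Eq.(73), same normalisation

def fourier_component(rate, N, npts=2001):
    # Fourier integral of Eq.(71), taken over the dimensionless phase (one drive period)
    wt = np.linspace(0.0, 2*np.pi, npts)
    vals = np.array([rate(x) for x in wt])
    return trapezoid(np.exp(1j*N*wt)*vals, wt)/deps

print("SCQME, N=1:", fourier_component(rate_scqme, 1))
print("SME,   N=1:", fourier_component(rate_sme, 1))
```

Lowering both `w` and `homega` towards zero makes the two outputs coincide, consistent with the thermal limit discussed in Sec. VI.1.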
We can obtain the effective admittance in Eq.(71) by selecting the correct Fourier component from Eq.(75), yielding \[\begin{split} Y_{N}^{SCQME}&=\frac{\mathcal{C}_{N}}{ \delta\varepsilon}\sum_{m=-\infty}^{+\infty}J_{m}\left(\frac{\delta\varepsilon} {\hbar\omega}\right)\cdot\\ &\cdot\left(J_{m+N}\left(\frac{\delta\varepsilon}{\hbar\omega} \right)+J_{m-N}\left(\frac{\delta\varepsilon}{\hbar\omega}\right)\right) \mathcal{F}_{m}^{-}(\varepsilon_{0}).\end{split} \tag{76}\] It is worth noting that in the case \(N=1\), we can use the well-known identity \[J_{m+1}(x)+J_{m-1}(x)=2\frac{m}{x}J_{m}(x) \tag{77}\] to write for the fundamental \[Y_{1}^{SCQME}=2\frac{\hbar\omega}{\delta\varepsilon^{2}}\mathcal{ C}_{1}\delta\varepsilon\sum_{m=-\infty}^{+\infty}mJ_{m}^{2}\left(\frac{\delta \varepsilon}{\hbar\omega}\right)\mathcal{F}_{m}^{-}(\varepsilon_{0})=\] \[=2\frac{\hbar\omega}{\delta\varepsilon}\mathcal{C}_{1}\sum_{m=1} ^{+\infty}mJ_{m}^{2}\left(\frac{\delta\varepsilon}{\hbar\omega}\right)\left( \mathcal{F}_{m}^{-}(\varepsilon_{0})-\mathcal{F}_{-m}^{-}(\varepsilon_{0}) \right), \tag{78}\] where we have used the fact that \(J_{-m}(x)=(-1)^{m}J_{m}(x)\). Equations (76) and (78) represent the major result of this work, as they are the most general expression of the SEB admittance (assuming weak photon coupling). However, because of their mathematical complexity, Section VI presents them in different regimes, where approximations can be made to simplify the expressions and thus highlight the effect of each parameter. ### A Floquet Interpretation Before discussing the impact of the SCQME on the reflectometry lineshapes, we believe valuable to the reader to physically interpret Eq.(76) in terms of Floquet Theory. The first step consists of the solution of a sinusoidally-driven single-level system whose Hamiltonian reads \[H=\varepsilon_{0}+\delta\varepsilon\cos\omega t. \tag{79}\] This simple system is of interest due to an explicit solution for the Fourier decomposition of the single level, known as the Tien-Gordon model, which reads[59; 60] \[\ket{\phi}(t)=e^{i\varepsilon_{0}t/\hbar}\sum_{m=-\infty}^{\infty}J_{m}\left( \frac{\delta\varepsilon}{\hbar\omega}\right)e^{im\omega t}\ket{\phi_{m}}, \tag{80}\] where the Fourier components \(\ket{\phi_{m}}\) traditionally take the name of Floquet Modes. This result can be used to highlight key concepts of the dynamics of a driven quantum system. We can consider adding an integer offset \(n\) to the index \(m\), which would not change Eq.(80) apart from a redefinition of dummy indexes and the transformation \(\varepsilon_{0}\rightarrow\varepsilon_{0}+n\hbar\omega\). Therefore, the energy of a periodically-driven quantum system is defined up to an integer multiple of the photon energy. For this reason, they are traditionally named _quasi_-energies. However, this simple picture invites us to think of a single driven level as a _ladder_ of energetically equally-spaced Floquet Modes, whose phase in time oscillates at the harmonics of the fundamental driving tone. This is sometimes referred to as the Fourier or Sambe decomposition of the state [61]. 
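A short sketch of the Floquet-mode weights entering Eq.(80) (Python; illustrative drive strengths) shows both their normalization, anticipated in Eq.(82) below, and how the ratio \(\delta\varepsilon/\hbar\omega\) sets the highest dressed state carrying appreciable weight.

```python
# Illustrative sketch of the Tien-Gordon weights J_m^2(deps/homega) of Eq.(80).
import numpy as np
from scipy.special import jv

ms = np.arange(-30, 31)
for ratio in (0.1, 1.0, 3.0):                      # delta_eps / (hbar*omega)
    weights = jv(ms, ratio)**2
    print(f"deps/homega = {ratio:3.1f}: sum = {weights.sum():.6f}, "
          f"highest mode above 1% weight: m = {ms[weights > 0.01].max()}")
```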
This simple model, nonetheless, predicts an _effective_ density of states of the driven QD [59] \[\mathcal{D}_{QD}=\sum_{m=-\infty}^{+\infty}J_{m}^{2}\left(\frac{\delta \varepsilon}{\hbar\omega}\right)\frac{\Gamma/\pi}{\Gamma^{2}+(\epsilon- \varepsilon_{0}-m\omega)^{2}}, \tag{81}\] which, despite the mathematical similarity, does _not_ lead to the result in the SCQME for the simple reason that, without the CR, the occupation of the QD trivially reads \[P(t)=\sum_{m}J_{m}^{2}\left(\frac{\delta\varepsilon}{\hbar\omega}\right)=1, \tag{82}\] i.e. the electron is stuck in the QD and therefore, no current can be generated. From Eq.(67), however, we can immediately observe the similarity with the concept of quasi-energies, with the only key difference that CR introduces an imaginary part. In the semiclassical derivation, we have seen how the Fermi-Dirac distribution is evaluated at the instantaneous energy of the QD \(\varepsilon(t)\). In the SCQME, we can consider Eq.(66) as its natural extension to a quantum formalism, which is evaluated at the (complex) energies \[\mathcal{E}_{m}=\varepsilon_{0}+m\hbar\omega-ih\Gamma. \tag{83}\] This is an obvious consequence of the CR, which makes the state in the QD metastable, and the eigenstates of the self-consistent time propagator in Eq.(55) are exponentially decaying. From classical mechanics, moreover, we know that, at the steady state, an exponential decay can be modelled as a phase delay compared to driving the system. This phase lag can be derived directly from the ME in Eq.(26), but it is also easily obtained by noticing that, for both the SME and SCQME, \[\arg Y_{N}=\arctan\frac{\Gamma}{N\omega}=\frac{\pi}{2}-\phi_{N} \tag{84}\] is notably independent of \(\varepsilon_{0}\). This last remark is particularly interesting because, _electrically_, we can separate real and imaginary parts of the SEB admittance as \[Y=G_{S}+i\omega C_{T}, \tag{85}\] where \(C_{T}\) takes the name of the _tunnelling_ capacitance while \(G_{S}=1/R_{S}\) is the Sisyphus conductance [12; 24]. Equation (84), thus, indicates that the distinction between the _resistive_ and _reactive_ response of the SEB is dictated uniquely by its dynamical properties (quasi-energies), and the detuning only modulates the _magnitude_ of the signal. In Appendix A, we discuss the significance of this on the energy balance of the system. Because of the non-unitary interaction with the CR, it is formally impossible to write the SEB evolution as a _single_ ket. Section III.2 proves that the dynamics discussed thus far are the only solution to the SEB SCQME that satisfies the requirement of a density matrix, and therefore, is the steady state, no matter the initial electronic state [62]. Nonetheless, we believe it valuable to the reader to stress that the metastable counterpart of the modes in Eq.(80) are the building blocks of the SEB evolution. The interpretation of the dynamics in terms of damped-driven Floquet Modes with equally-spaced quasi-energies will lead to a novel interpretation of Power Broadening as an _interference_ phenomenon and will serve as an intuitive explanation of a novel effect introduced in this work: Floquet Broadening. ## VI Reflectometry lineshapes in different regimes From Eq.(76), it ought to be clear that four energy scales, namely \(k_{B}T\), \(\delta\varepsilon\), \(\hbar\omega\), and \(h\Gamma\), determine the SEB dynamics. 
In this section, therefore, we shall discuss the interplay between them and the different behaviour of the SEB when one dominates compared to the others. Finally, we will introduce the effect of photon loss \(\kappa\) in the PhB and determine its repercussions on the effective admittance. Considering the discussion of Eq.(84), in this section we shall only discuss the magnitude of the reflectometry signal \(|Y|\). When possible, general properties of \(Y_{N}\) are discussed, with a particular focus given to the fundamental frequency, \(Y_{1}\), due to its experimental significance. In particular, for the small-signal regime, it is possible to expand the Bessel functions as \[J_{m}(x)\xrightarrow[x\to 0]{}\frac{x^{m}}{2^{m}m!}. \tag{86}\] Therefore, when \(\delta\varepsilon\ll\hbar\omega\) we can neglect all \(N>1\) terms. Physically, this is equivalent to the intuitive fact of considering only the first dressed state in the Floquet ladder. In this case, it is trivial to show that one always obtains \[Y_{1}=\frac{(\alpha e)^{2}}{2k_{B}T}\frac{\Gamma\omega}{\omega-i\Gamma}\mathcal{F}^{\prime}(\varepsilon_{0}), \tag{87}\] with \(\mathcal{F}^{\prime}\) a new function defined such that \(\mathcal{F}^{\prime}(\varepsilon_{0})\in[0,1]\) and, more notably, \[\mathcal{F}^{\prime}(\varepsilon_{0}=0)\xrightarrow[\substack{\hbar\omega/k_{B}T\to 0\\ h\Gamma/k_{B}T\to 0}]{}1. \tag{88}\] Therefore, the different regimes differ only in the shape of \(\mathcal{F}^{\prime}(\varepsilon_{0})\). Interestingly, for a given temperature, the maximum achievable signal is \(\frac{(\alpha e)^{2}}{2k_{B}T}\), which we will see is the semiclassical prediction [12; 18]. Perhaps unsurprisingly, we will also prove that, in all regimes, the maximum signal for the fundamental is always obtained at zero detuning.

### Thermal Broadening

The most straightforward regime is when \(k_{B}T\gg\delta\varepsilon,\hbar\omega,h\Gamma\). In this case, the thermal smearing of the Fermi-Dirac distribution in the CR is the dominant effect. Therefore, all the assumptions of the SME are valid, and the SEB behaves fully semiclassically. In particular, if \(k_{B}T\gg\delta\varepsilon\), it is trivial to obtain, through Taylor expansion for small \(\delta\varepsilon\) of the SME in Eq.(74), the well-known result \[\begin{split} Y_{1}&=2\mathcal{C}_{1}\frac{\partial}{\partial\varepsilon_{0}}f(\varepsilon_{0})=\\ &=\frac{(\alpha e)^{2}}{2k_{B}T}\frac{\Gamma\omega}{\omega-i\Gamma}\cosh^{-2}\left(\frac{\varepsilon_{0}}{2k_{B}T}\right)\end{split} \tag{89}\] as expected from Eq.(87). From Eq.(76), we can obtain the same result via Eq.(86). Therefore, in the limit \(k_{B}T\gg h\Gamma\), \(\mathcal{F}^{-}_{-m}(\varepsilon_{0})\approx f(\varepsilon_{0}-\hbar m\omega)\), and \[\begin{split} Y_{1}=\frac{\mathcal{C}_{1}}{\delta\varepsilon}\frac{\delta\varepsilon}{\hbar\omega}\left(f(\varepsilon_{0}+\hbar\omega)-f(\varepsilon_{0}-\hbar\omega)\right)\approx\\ \approx 2\mathcal{C}_{1}\frac{\partial}{\partial\varepsilon_{0}}f(\varepsilon_{0}),\end{split} \tag{90}\] where the latter approximate equality derives from the fact that, if \(k_{B}T\gg\hbar\omega\), the difference approaches the definition of a derivative. In fact, in the small-signal regime, we can show by Taylor expansion of Eq.(74) around \(\delta\varepsilon=0\) that [18] \[Y_{N}\propto\frac{\partial^{N}}{\partial\varepsilon_{0}^{N}}f(\varepsilon_{0}). \tag{91}\] We obtain a similar result for the SCQME, whereby recursively using Eq.(77), \(Y_{N}^{SCQME}\) follows the finite-difference stencil coefficients for the \(N\)-th derivative with second-order accuracy. Thus, in the limit of \(k_{B}T\gg\hbar\omega,\delta\varepsilon\), the SCQME result approaches Eq.(91), as expected.

### Lifetime Broadening

By remaining in the small-signal regime, it is interesting to compare the two models when \(h\Gamma\gtrsim k_{B}T\gg\delta\varepsilon,\hbar\omega\). In this case, the QD is strongly coupled to the CR, and the reservoir-induced broadening of the (metastable) discrete level is comparable with the thermal energy, known as Lifetime Broadening (LB). This effect, however, is not considered in the SME. This should be apparent as \(\Gamma\) and \(\omega\) enter in Eq.(74) only in the prefactor \(\mathcal{C}_{N}\). Therefore, in the semiclassical picture, \(\Gamma\) only _rescales_ the signal because of the high-pass filtering of the gate current from the series quantum capacitance, but does not influence its _lineshape_.

Figure 2: Pictorial representation of the dressed states in a driven QD. The single state in the QD, whose energy is driven sinusoidally in time, can be equivalently thought of as a _ladder_ of levels equally spaced by one photon energy, whose occupation is dictated by the strength of the drive.

Using the mathematical result in Appendix D, we can write the small-signal SEB admittance in the LB regime as Eq.(87) with \[\mathcal{F}^{\prime}_{LB}(\varepsilon_{0})=\frac{2}{\pi^{2}}\Re\left[\psi_{1}\left(\frac{1}{2}+i\frac{\varepsilon_{0}}{2\pi k_{B}T}+\frac{h\Gamma}{2\pi k_{B}T}\right)\right], \tag{92}\] where \(\psi_{1}(z)\) is the trigamma function. Notably, \[\mathcal{F}^{\prime}_{LB}(\varepsilon_{0})=\frac{\partial}{\partial\varepsilon_{0}}\mathcal{F}^{-}_{0}(\varepsilon_{0})=\frac{1}{2k_{B}T}\left(\cosh^{-2}\left(\frac{\epsilon}{2k_{B}T}\right)*\frac{\Gamma/\pi}{\Gamma^{2}+(\epsilon-\varepsilon_{0})^{2}}\right), \tag{93}\] which is the convolution of the \(\cosh^{-2}\) lineshape of the TB regime with a Lorentzian peak in the case \(h\Gamma\gg k_{B}T\), interpolating smoothly between the two. The effect of an increasing tunnel rate on the admittance lineshape is shown in Fig. 3a. Interestingly, it is a known experimental fact that different transitions may show a different Full-Width-Half-Maximum (FWHM) at the same electron temperature. It has been shown that the small-signal quantum capacitance of a DTR in the LB regime is equal to [63; 17] \[C_{Q}=\frac{(e\alpha)^{2}}{2k_{B}T}\frac{1}{1+\omega^{2}/\Gamma^{2}}\cdot\left(\cosh^{-2}\left(\frac{\epsilon}{2k_{B}T}\right)*\frac{\Gamma/\pi}{\Gamma^{2}+(\epsilon-\varepsilon_{0})^{2}}\right). \tag{94}\] Therefore, considering the result in Eq.(93), the self-consistent treatment in the SCQME naturally extends Eq.(94) to both the Sisyphus resistance and the higher harmonics. Fig. 3b shows the effect of LB on the FWHM of the peak, where we see that, for \(h\Gamma\gtrsim k_{B}T\), the FWHM starts increasing from the expected TB value of \(3.53~{}k_{B}T\), obtained from the inverse hyperbolic cosine, and becomes linear in the tunnelling rate. In particular, \[\text{FWHM}(|Y_{1}|)\xrightarrow[h\Gamma\gg k_{B}T,\hbar\omega,\delta\varepsilon]{}2h\Gamma. \tag{95}\] For clarity, in all Figures, the parameters which are not varied are indicated with a subscript \(0\).
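The crossover described above can be reproduced directly from Eq.(93). The following sketch (Python, illustrative values; the convolution is evaluated numerically instead of through the trigamma form of Eq.(92)) computes \(\mathcal{F}^{\prime}_{LB}\) and its FWHM, recovering the \(3.53\,k_{B}T\) thermal width for small \(h\Gamma\) and the \(2h\Gamma\) scaling of Eq.(95) for large \(h\Gamma\), together with the accompanying reduction of the peak height.

```python
# Sketch only; detuning and broadening are measured in units of k_B T.
import numpy as np

eps = np.linspace(-40.0, 40.0, 8001)          # detuning grid
deps_grid = eps[1] - eps[0]

def Fprime_LB(hGamma):
    # Eq.(93): thermal cosh^-2 kernel convolved with the lifetime Lorentzian
    thermal = np.cosh(eps/2.0)**-2 / 2.0
    lor = (hGamma/np.pi)/(hGamma**2 + eps**2)
    return np.convolve(thermal, lor, mode='same')*deps_grid

def fwhm(y):
    above = eps[y >= y.max()/2.0]
    return above.max() - above.min()

for hGamma in (0.01, 0.5, 2.0, 10.0):
    y = Fprime_LB(hGamma)
    print(f"hGamma/kBT = {hGamma:5.2f}: FWHM = {fwhm(y):6.2f} k_B T "
          f"(2*hGamma = {2*hGamma:5.2f}), peak rel. to TB = {2.0*y.max():.2f}")
```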
Finally, we can consider the effect of \(\Gamma\) on the amplitude of SEB admittance at the fundamental frequency, which is shown in Fig. 3c. As discussed above, \(\Gamma\) enters in \(Y_{1}\) both in the prefactor and because of lifetime broadening. The former is a purely classical effect, and it is due to the charge requiring a finite time to tunnel out of the QD. In contrast, the other is purely quantum and arises from the effective density of states of the electron in the SEB. In Fig. 3a, we show the effect purely of the latter for increasing \(\Gamma/\omega\), which highlights another crucial effect of LB. As the coupling to the CR increases, the discrete energy level in the QD becomes increasingly broader, and thus the peak _decreases_. Therefore, any spectroscopic _broadening_ of the lineshape inevitably translates into _lowering_ the maximum signal, as it must be the case because of the properties of the convolution. This effect, however, is complemented by the semiclassical contribution in \(\mathcal{C}_{1}\). One could posit that larger coupling to the CR ought to translate in a higher probability of tunnelling events per rf cycle. This effect is shown in Fig. 3c, where the inclusion of the prefactor \(\mathcal{C}_{1}\) leads _monotonically increasing_ signal for increasing \(\Gamma\). However, a Self-Consistent quantum picture clearly shows that this is not the case. In the limit of \(h\Gamma\gg k_{B}T\), the discrete level is so broadly distributed in energy that _no_ tunnelling events can be observed. Consequently, the maximum reflectometry signal begins to drop when \(h\Gamma\) starts to become comparable with \(k_{B}T\) and completely vanishes for \(h\Gamma\gg\hbar\omega,k_{B}T\). Consistently with Fig. 3a, this is accompanied by an increase of the FWHM of the peak, as evidenced by Fig. 3b. More specifically, while there is no simple analytical expression for the behaviour in Fig. 3c, we can notice that \(\mathcal{F}^{\prime}_{LB}\propto\Gamma^{-1}\) for large \(\Gamma\). Therefore, we similarly expect \(\max\left(|Y_{1}|\right)\propto\Gamma^{-1}\) for \(h\Gamma\gg\hbar\omega\). ### Floquet Broadening The final small-signal regime presented in this work is a novel kind of broadening which arises from the rf photon energy being comparable with thermal and lifetime broadening. Similarly to lifetime broadening, this effect is Figure 3: Effect of Lifetime Broadening (LB) on \(Y_{1}\). (a) Shape of the function \(\mathcal{F}^{\prime}_{LB}(\varepsilon_{0})\) (Eq.(93)) for varying \(h\Gamma/k_{B}T_{0}\), showing how the broadening of the peak is accompanied by a lowering of the maximum. (b) FWHM of \(|Y_{1}|\) showing the linear increase when \(h\Gamma\gtrsim k_{B}T\). The graph is normalized to \(3.53~{}k_{B}T\), the FWHM of \(|Y_{1}|\) in the TB regime (SME). (c) Maximum of \(|Y_{1}|\) normalized to the TB regime (SME). This panel shows the competition between the increase in signal because of \(C_{1}\) and the drop because of the broadening of \(\mathcal{F}^{\prime}_{LB}\) for \(h\Gamma\gg k_{B}T\). purely quantum and cannot be captured semiclassically, as \(\omega\) only appears in the already-discussed prefactor \(C_{N}\). Interestingly, however, neither \(\delta\varepsilon\) nor \(\omega\) appears separately in the SCQME, but only in the ratio \(\frac{\delta\varepsilon}{\hbar\omega}\). We can gain further insight by reconsidering the effective density of states in Eq.(81), where the Floquet modes are equally spaced by the photon energy \(\hbar\omega\). 
Therefore, the ratio \(\frac{\delta\varepsilon}{\hbar\omega}\) determines the _highest_ dressed state that will be reached by the semiclassical voltage swing. Moreover, from the discussion in Section V.2, the reflectometry lineshape is determined by the interference between the different dressed states. Therefore, if \(\hbar\omega\gtrsim h\Gamma\), the separation between the Floquet modes is comparable with the broadening of the levels and, thus, the discrete nature of the dressed states may emerge. In this work, we present a novel effect, which we name Floquet Broadening (FB); it becomes apparent in the regime where \(\hbar\omega\gtrsim k_{B}T,h\Gamma\), i.e. when the discrete nature of the photon energy becomes the primary source of _broadening_ in the SEB admittance. Here, we take the occasion to stress that FB is an _inherently_ quantum phenomenon, arising from the discrete spacing of the Floquet modes, and thus cannot be captured by any semiclassical treatment. From a mathematical standpoint, FB stems from the fact that, for \(\hbar\omega\sim k_{B}T\), Eq.(90) becomes an increasingly worse approximation of the derivative. Therefore, we can use the small-signal expansion in Eq.(86) to write the admittance in the form of Eq.(87) with \[\begin{split}\mathcal{F}^{\prime}_{FB}=\frac{2k_{B}T}{\pi\hbar\omega}\Im\bigg{[}\psi_{0}\left(\frac{1}{2}+i\frac{\varepsilon_{0}+\hbar\omega}{2\pi k_{B}T}+\frac{h\Gamma}{2\pi k_{B}T}\right)\\ -\psi_{0}\left(\frac{1}{2}+i\frac{\varepsilon_{0}-\hbar\omega}{2\pi k_{B}T}+\frac{h\Gamma}{2\pi k_{B}T}\right)\bigg{]},\end{split} \tag{96}\] where the prefactor is fixed by requiring that the \(\hbar\omega\to 0\) limit reproduces Eq.(92) and the normalization of Eq.(88). Recalling that \(\psi_{0}\) is the digamma function and that \(d\psi_{0}(z)/dz=\psi_{1}(z)\), it is clear that for slow frequencies \(\mathcal{F}^{\prime}_{FB}\approx\mathcal{F}^{\prime}_{LB}\), while for larger photon energies it is possible to resolve the two separate terms in Eq.(96), which correspond to the first rotating and counter-rotating dressed states. We show the results in Fig. 4, where we present the effect for the cases \(k_{B}T\gg h\Gamma\) (Fig. 4a-c) and \(h\Gamma\gg k_{B}T\) (Fig. 4d-f). In particular, it is interesting to consider the effect of increasing \(\omega\) on the FWHM and maximum admittance in Fig. 4b,e and Fig. 4c,f, respectively. Considering Eq.(96) and ignoring LB for simplicity, because of the factor \(k_{B}T/\hbar\omega\) in Eq.(96) we expect \[|Y_{1}^{SCQME}|\propto|\mathcal{C}_{1}|\,\omega^{-1}\propto\frac{\Gamma}{\sqrt{\Gamma^{2}+\omega^{2}}}. \tag{97}\] In contrast, in the semiclassical case \[|Y_{1}^{SME}|\propto|\mathcal{C}_{1}|\propto\frac{\Gamma\omega}{\sqrt{\Gamma^{2}+\omega^{2}}}. \tag{98}\] Therefore, contrary to the SME, the SCQME predicts a reduction in the SEB signal with increasing \(\omega\) (Fig. 4c,f). Similarly to LB, this drop in signal is accompanied by an increase in the FWHM (Fig. 4b,e). In particular, \[\text{FWHM}(|Y_{1}|)\xrightarrow[\hbar\omega\gg k_{B}T,h\Gamma,\delta\varepsilon]{}2\hbar\omega. \tag{99}\] There is some interesting physical insight into the process of rf reflectometry to be gained from this regime. Semiclassically, one can picture increasing the measurement frequency in the small-signal regime as increasing the _chances_ per unit time of electron tunnelling. Therefore, this shall increase the maximum signal until \(\omega\) becomes of the order of the tunnelling rate \(\Gamma\). 
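The Floquet-broadened lineshape of Eq.(96) can be explored with the same tools as above. The sketch below (again ours, with illustrative parameters and \(k_{B}T=1\)) verifies that Eq.(96) reduces to the lifetime-broadened form of Eq.(92) for \(\hbar\omega\ll k_{B}T\) and that the FWHM tends to \(2\hbar\omega\) as in Eq.(99).

```python
import numpy as np
from mpmath import psi            # psi(0, z): digamma, psi(1, z): trigamma; complex arguments supported

kBT = 1.0                         # energies in units of k_B*T (illustrative normalization)

def F_FB(eps0, hbar_omega, hGamma):
    """Floquet-broadened lineshape, Eq.(96)."""
    zp = 0.5 + (hGamma + 1j * (eps0 + hbar_omega)) / (2 * np.pi * kBT)
    zm = 0.5 + (hGamma + 1j * (eps0 - hbar_omega)) / (2 * np.pi * kBT)
    return (2 * kBT / (np.pi * hbar_omega)) * float((psi(0, zp) - psi(0, zm)).imag)

def F_LB(eps0, hGamma):
    """Lifetime-broadened lineshape, Eq.(92), for comparison."""
    return (2 / np.pi**2) * float(psi(1, 0.5 + (hGamma + 1j * eps0) / (2 * np.pi * kBT)).real)

eps = np.linspace(-30.0, 30.0, 1201)

# slow drive: Eq.(96) collapses onto Eq.(92)
dev = max(abs(F_FB(e, 1e-3, 0.5) - F_LB(e, 0.5)) for e in eps)
print("max |F'_FB - F'_LB| at hbar*omega = 1e-3 k_BT:", dev)

def fwhm(x, y):
    sel = x[y >= y.max() / 2]
    return sel.max() - sel.min()

for hw in [0.5, 2.0, 5.0, 10.0]:
    y = np.array([F_FB(e, hw, 0.01) for e in eps])
    print(f"hbar*omega = {hw:4.1f} k_BT : FWHM = {fwhm(eps, y):5.2f} k_BT  (-> 2*hbar*omega = {2*hw:.1f})")
```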
At that point, the probability of a tunnel event per unit cycle reads \[P_{cycle}=1-e^{-\Gamma\tau_{cycle}}\approx\Gamma\tau_{cycle},\] where \(\tau_{cycle}=2\pi/\omega\). Therefore, the average gate current reads \[\langle I_{g}\rangle\propto eP_{cycle}/\tau_{cycle}\] which, as shown in Eq.(98) and Fig. 4f, saturates and becomes independent of frequency (dashed line). From a quantum perspective, the tunnelling of the electron in (out) the CR is more akin to a _measurement_ of the state of the QD. Increasing \(\omega\) means increasing the measurement rate, potentially much faster than the timescales of the (non-unitary) quantum dynamics. Therefore, the electron becomes _trapped_ inside (outside) the QD, a process resembling the Quantum Zeno effect [28]. Another interpretation may come from the Heisenberg Uncertainty Principle, as increasing \(\omega\) would attempt to define the energy of a level at time scales faster than its lifetime. Therefore, we expect a _Heisenberg_ energy broadening of the order \(2\pi/\tau_{cycle}=\omega\), as shown in Fig. 4. A more detailed discussion of the optimal parameters for the SEB used as a sensor, taking into account LB and FB, is outlined in Appendix B. ### Power Broadening As previously discussed, it is experimentally interesting to compare the SME and the SCQME in the large-signal regime, where \(\delta\varepsilon\gtrsim k_{B}T,h\Gamma,\hbar\omega\). For reasons that shall become apparent, this is commonly referred to as the power-broadened (PB) regime, which we will begin to discuss in the case where \(k_{B}T,h\Gamma\gg\hbar\omega\). In Fig. 5a, we show how the SCQME can reproduce the familiar power-broadening fan predicted by the SME, where the admittance linewidth increases linearly with the amplitude of the detuning oscillations [14; 18; 64; 65; 66; 67; 12; 14]. From Eq.(78), however, we can see how the SCQME does so by considering an _infinte summation_ over dressed states rather than an integral over a smoothly-varying voltage. We can gain more insight into the physical reasons behind this via the use of Floquet Theory presented in Section V.2, where we showed how a single level oscillating with a large amplitude can be modelled as a ladder of equally-spaced levels. However, we have already pointed out how a simple application of Eq.(81), obtained without the CR, does _not_ lead to the SCQME result in Eq.(76) (or Eq.(78)). The reasoning why is that the Floquet Modes must combine in such a way to generate a _real_ occupation probability. At any frequency different to \(\omega\), two different Floquet modes will accumulate a different phase each rf cycle. When averaged out over multiple cycles, any interference becomes negligible and, therefore, one can separately describe the single dressed states, as in Eq.(81). If the excitation frequency is the same as the measurement one (or its integer fraction), the different dressed states have a _defined_ phase relationship between each other, and will be able to interfere to generate the various harmonic of the driving. The signal at the \(N\)-th harmonic is obtained by combining _rotating_ waves at \(N\omega\) and _counter-rotating_ waves at \(-N\omega\). Or, to phrase it differently, reflectometry techniques are sensitive to changes in AC currents (i.e. emitted and absorbed photons) and, thus, to _transitions_ between the ladder of dressed states. 
Therefore, this result can be considered an AC extension to the Tien-Gordon model [59], which is only concerned with direct (DC) conductance and, thus, time-averaged properties of the system [60]. All this is particularly apparent for the fundamental in Eq.(78), where the lineshape is constructed from the _interference_ of the \(m\)-th and \(-m\)-th Floquet Modes. This equation also highlights the well-known result in Floquet theory that _all_ the modes enter the definition of the system's response at every harmonic[61, 40]. From this discussion, it ought to be clear how PB is not merely due to incoherence with the CR but rather an _interference_ phenomenon between the Floquet Modes of the damped-driven quantum system. The stochastic nature of the QD-CR interaction can thus be reinterpreted as a scattering process between the dressed states or as a finite lifetime of the polaron state. Another interesting property of PB is how it is affected by LB (Fig. 5b-d). In the small-signal regime, LB arises as an additional broadening in addition to the thermal smearing of the reservoir. If \(\delta\varepsilon\gg k_{B}T\gg h\Gamma\), however, we can imagine that the voltage swing will dominate, and the QD will be emptied (filled) with probability 1 every cycle [18]. In the semiclassical picture, this will lead to a saturation of the Gate current. This trend is clearly shown in Fig. 5c. Notably, the admittance, which is defined as the Gate current over the excitation voltage, drops as \(\delta\varepsilon^{-1}\). Therefore, for visual clarity, in Fig.5 we plot the product \(\delta\varepsilon|Y_{1}|\). Lifetime Broadening, on the other hand, is an _intrinsic_ Figure 4: Effect of FB in for varying \(\hbar\omega/k_{B}T_{0}\) (a-c) and \(\omega/\Gamma_{0}\) (d-f). Panels (a,d) show how for increasing \(\omega\), \(\mathcal{F}^{\prime}_{FB}\) approximates increasingly less a derivative and the consequent lowering of the peak accompanying the broadening. Panels (b,e) show the increase of FWHM of \(|Y_{1}|\), eventually reaching \(2\hbar\omega\), while panels (c,f) show the dependence of the maximum admittance with drive frequency. Figure 5: Effect of increasing the amplitude of the driving in the large-signal regime. (a) Map of the normalized Gate current as a function of detuning offset and driving amplitude, showing the well-known PB fan. (b) Line cut of (a) (red dashes) showing the combined effect of PB and LB. (c) Saturation of the Gate current in the limit of very large drive (\(\delta\varepsilon\gg k_{B}T,h\Gamma,\hbar\omega\)), and its retardation because of LB. (d) FWHM of \(|Y_{1}|\), showing the convergence to \(\sqrt{3}\delta\varepsilon\) in the limit of very large drive. property of the QD and dictates the lifetime of _all_ dressed states. Therefore we would expect it to play a role even in the regime where \(\delta\varepsilon\gg h\Gamma\gtrsim k_{B}T\). Indeed, in Fig. 5b, we see that increasing \(\Gamma\) leads to a broadening of the lineshape and the consequent lowering of the peak at constant power. However, by increasing the rf power, we can still reach saturation (Fig. 5c). In the semiclassical picture, we can interpret this as a broadening of the DOS of the QD. Therefore, similarly to the previous argument regarding temperature, one can imagine the voltage swing to be large enough to ignore the intrinsic broadening of the level. If this is sufficient to fully empty or fill the QD every excitation cycle, we expect saturation to be reached. 
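A rough numerical illustration of this semiclassical saturation (ours; the overall scale set by \(\mathcal{C}_{1}\) and the lever arms is omitted, so only trends are meaningful) is to compute the fundamental Fourier component of the occupancy \(f(\varepsilon_{0}+\delta\varepsilon\sin\omega t)\) at zero detuning offset: it grows linearly for small drive and saturates at \(2/\pi\) once \(\delta\varepsilon\gg k_{B}T\), so the admittance itself falls off as \(\delta\varepsilon^{-1}\).

```python
import numpy as np

kBT = 1.0
theta = np.linspace(0.0, 2.0 * np.pi, 8192, endpoint=False)
dtheta = theta[1] - theta[0]

def I1(delta_eps, eps0=0.0):
    """Fundamental Fourier component of the occupancy f(eps(t)) over one rf cycle (prefactor C_1 omitted)."""
    arg = np.clip((eps0 + delta_eps * np.sin(theta)) / kBT, -700.0, 700.0)
    f = 1.0 / (1.0 + np.exp(arg))
    return np.abs(np.sum(f * np.exp(-1j * theta)) * dtheta) / np.pi

for de in [0.5, 2.0, 5.0, 10.0, 30.0, 100.0, 300.0]:
    print(f"delta_eps = {de:6.1f} k_BT : I1 = {I1(de):.4f}   (square-wave limit 2/pi = {2/np.pi:.4f})")
# small drive: I1 grows ~ delta_eps/(4 k_BT); large drive: I1 saturates, so |Y1| ~ I1/delta_eps drops as 1/delta_eps
```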
From a quantum perspective, we notice in Eq.(62) that the evolution of the polaron phase is proportional to \(\delta\varepsilon\), while it decays with \(\Gamma\). Recalling the discussion in Section VI.2 (see also Appendix A), LB decreases the signal because the polaron dynamics is _cut short_. Increasing the number of photons can accelerate the dynamics fast enough to overcome the finite lifetime. However, it comes at the expense of a broader peak because of the Heisenberg uncertainty principle (Fig. 5d). While a simple analytical solution is not available for the SME in Eq.(74), we can notice that in the regime \(\delta\varepsilon\gg k_{B}T\gg h\Gamma\) the integrand of the Fourier transform \(f(\varepsilon(t))\) tends to a square wave. Thus, we can immediately write in the very large signal regime [18] \[\left\{\begin{array}{lcl}Y_{N}^{SME}=\frac{2}{\pi}\frac{\mathcal{C}_{N}}{N\delta\varepsilon}\sin\left[N\;\arccos\left(\frac{\varepsilon_{0}}{\delta\varepsilon}\right)\right]&\text{if}&|\varepsilon_{0}|<\delta\varepsilon\\ Y_{N}^{SME}=0&\text{if}&|\varepsilon_{0}|\geq\delta\varepsilon\end{array}\right. \tag{100}\] which, for \(N=1\), reduces to \[Y_{1}^{SME}\bigg{|}_{\delta\varepsilon\rightarrow\infty}=\frac{2(\alpha e)^{2}}{\pi\delta\varepsilon}\frac{\Gamma\omega}{\omega-i\Gamma}\sqrt{1-\left(\frac{\varepsilon_{0}}{\delta\varepsilon}\right)^{2}}. \tag{101}\] For ease of notation, we can therefore define \[\mathcal{M}_{\mathcal{I}}=\max\left(|I_{1}|\right)\bigg{|}_{\delta\varepsilon\rightarrow\infty}=\frac{2(\alpha e)^{2}}{\pi}\left|\frac{\Gamma\omega}{\omega-i\Gamma}\right|, \tag{102}\] which is the maximum Gate current semiclassically achievable at the fundamental. Notably, this saturates at very large power, and thus the admittance, which is defined as the Gate current over the excitation voltage, drops as \(\delta\varepsilon^{-1}\). Interestingly, we shall note that Eq.(101) (and Eq.(100)) have a discontinuous derivative at \(\varepsilon_{0}=\pm\delta\varepsilon\). This can be understood by considering that, when thermal broadening is negligible, the Fermi-Dirac function tends to be a step-function with a discontinuous derivative. The smearing of the peak caused by LB, however, will smoothen these corners and keep every derivative continuous, as depicted in Fig. 5b. Finally, from Fig. 5d, we notice an interesting trend. Because of the properties of Eq.(78), the LB of the peak happens in such a way as to make the FWHM only dependent on \(\delta\varepsilon\). Therefore, we can use Eq.(101) to derive that \[\text{FWHM}(|Y_{1}|)\xrightarrow[\delta\varepsilon\gg k_{B}T,h\Gamma,\hbar\omega]{}\sqrt{3}\delta\varepsilon\approx 1.73\delta\varepsilon \tag{103}\] independently from \(k_{B}T\) or \(h\Gamma\). Lastly, it is interesting to consider the effect of PB in conjunction with FB, in the case where \(\delta\varepsilon\gg\hbar\omega\gtrsim k_{B}T,h\Gamma\). Similarly to the small-signal regime, when the photon energy becomes significant with respect to the other broadenings, we expect to resolve the discrete nature of the summation over the dressed states in Eq.(78). Figure 6: Evolution of the PB fan for increasing \(\omega\). As we enter the FB regime, the effect of the _discrete_ summation in \(Y_{1}\) becomes more clear, revealing the underlying ladder of dressed states. As this occurs, the Gate current is permitted to go above the semiclassical limit, reaching \(\delta\varepsilon|Y_{1}|\approx 1.02\mathcal{M}_{\mathcal{I}}\) for \(\delta\varepsilon/\hbar\omega\approx 2.40\). 
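Referring back to Eqs.(100)-(103), the same Fourier construction also reproduces the large-drive lineshape and its width. The short check below (ours, with an assumed \(\delta\varepsilon=50\,k_{B}T\)) recovers the \(\sqrt{1-(\varepsilon_{0}/\delta\varepsilon)^{2}}\) profile of Eq.(101) and the \(\sqrt{3}\,\delta\varepsilon\) FWHM of Eq.(103).

```python
import numpy as np

kBT, delta_eps = 1.0, 50.0                  # deep power-broadened regime; energies in units of k_BT (assumed)
theta = np.linspace(0.0, 2.0 * np.pi, 8192, endpoint=False)
dtheta = theta[1] - theta[0]

def fundamental(eps0):
    """|first Fourier harmonic| of the dot occupancy over one rf cycle (overall prefactor omitted)."""
    arg = np.clip((eps0 + delta_eps * np.sin(theta)) / kBT, -700.0, 700.0)
    f = 1.0 / (1.0 + np.exp(arg))
    return np.abs(np.sum(f * np.exp(-1j * theta)) * dtheta) / np.pi

eps0 = np.linspace(-1.5 * delta_eps, 1.5 * delta_eps, 1501)
y = np.array([fundamental(e) for e in eps0])

print("peak value:", round(y.max(), 4), "  (square-wave limit 2/pi =", round(2 / np.pi, 4), ")")
sel = eps0[y >= y.max() / 2]
print("FWHM / delta_eps =", round((sel.max() - sel.min()) / delta_eps, 3),
      "  (Eq.(103): sqrt(3) =", round(np.sqrt(3), 3), ")")
```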
We show the effect of increasing excitation frequency in Fig. 6, where it is clear how the PB fan becomes a more faithful depiction of the _ladder_ of dressed states. _En passant_, it is interesting to note how, in the case of \(\hbar\omega>k_{B}T,\hbar\Gamma\) it is possible to achieve _more_ Gate current that the semiclassical limit. It is possible to show, in fact, that for \(\delta\varepsilon/\hbar\omega\approx 2.40\), \(\delta\varepsilon|Y_{1}|\approx 1.02\mathcal{M}_{\mathbb{Z}}\). Although not a significant enhancement, it stands as testament of the coherent interference at the root of the process generating the reflectometry signals in this regime and points towards new experiments in the field of coherent hybrid quantum-classical circuits. ### Photon Loss Broadening Up until this point, we have considered the PhB as a perfect cavity held in a coherent state. However, since the SEB dynamics are purely dictated by the interaction between the ladder of dressed states, we must also consider photon loss, since this will cause the various dressed states to dephase and thus may strongly affect the signal. We can account for this effect by assuming the loss rate \(\kappa\) to be small compared to the rf frequency \(\omega\). Therefore, redefining the PhB Hamiltonian as \[H_{PhB}=\hbar\omega a^{\dagger}a+\hbar\kappa\mathcal{D}[a], \tag{104}\] which, in conjunction with a classical drive supplied by the rf source, will cause the photon number to fluctuate in time as \[n(t)=\bar{n}+\delta n(t). \tag{105}\] For a damped-driven system, the average number of photons reads [29] \[\bar{n}=\frac{\delta\varepsilon^{2}}{(2g)^{2}+\hbar^{2}\kappa^{2}}, \tag{106}\] while the time correlation of the fluctuations obeys [67] \[\langle\delta n(t)\delta n(t^{\prime})\rangle=\bar{n}e^{-\frac{\pi}{2}|t-t^{ \prime}|}. \tag{107}\] With this additional term, we can write the Displacement Operator as [68; 69; 70] \[D_{\alpha}(t)=e^{\frac{|\alpha|^{2}}{2}(1-e^{-\frac{\pi}{2}t})}e^{-\frac{1}{2 }\pi a^{\dagger}at}D_{e^{i\omega t}\alpha}. \tag{108}\] In the Gaussian approximation for the phase, we can modify the kernel in Eq.(62) as [28; 69; 29] \[\mathcal{K}(\epsilon,t)=e^{i\left(\frac{\delta\varepsilon}{\hbar \omega}\right)\sin\omega t}\sum_{m=-\infty}^{+\infty}J_{m}\left(\frac{\delta \varepsilon}{\hbar\omega}\right)e^{-im\omega t}. \tag{109}\] \[\cdot\int_{0}^{+\infty}d\tau e^{-i(\epsilon-\epsilon_{0}-m\omega )\tau}e^{-\Gamma\tau}e^{\bar{n}\left(1-\frac{\kappa}{2}\tau-e^{-\frac{\pi}{2 }}\right)}\] If we assume that \(\kappa\gg\Gamma\), the lifetime of an electron in the QD is much longer than the coherence of the radiation. In this case, we can neglect the term \(e^{-\frac{\pi}{2}\tau}\), and it ought to be apparent how the _effective_ lifetime of the ladder of dressed states reads \[\tilde{\Gamma}=\Gamma+\kappa\frac{\delta\varepsilon^{2}}{(2g)^{2}+\hbar^{2} \kappa^{2}}=\Gamma+\gamma\left(\frac{\delta\varepsilon}{\hbar\omega}\right)^{ 2}, \tag{110}\] which, for large input powers, is dominated by photon dynamics rather than electronic ones. As seen in the previous sections, it is helpful to describe the dynamics in terms of the ratio \(\delta\varepsilon/\hbar\omega\). Thus, in Eq.(110), we have defined the rate \[\gamma=\frac{\kappa\omega^{2}}{(2g/\hbar)^{2}+\kappa^{2}} \tag{111}\] to quantify the effect of photon loss to the CR in the SEB lifetime. 
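To put numbers on these definitions, a minimal sketch (ours; \(\Gamma\), \(\omega\), \(g\) and \(\kappa\) below are assumed values chosen only to satisfy \(\hbar\kappa\ll g\ll\hbar\omega\)) evaluates \(\bar{n}\) from Eq.(106), \(\gamma\) from Eq.(111) and the power-dependent lifetime \(\tilde{\Gamma}\) of Eq.(110), together with the drive amplitude at which the photon-loss term overtakes the bare tunnel broadening, \(\gamma(\delta\varepsilon/\hbar\omega)^{2}\sim\Gamma\).

```python
import numpy as np

Gamma, omega = 1.0, 1.0          # tunnel rate and drive frequency (assumed equal; arbitrary units, hbar = 1)
g, kappa = 0.05, 1e-3            # dot-photon coupling and photon loss rate (assumed, kappa << g << omega)

gamma = kappa * omega**2 / ((2 * g) ** 2 + kappa**2)                # Eq.(111), with hbar = 1
print(f"gamma = {gamma:.3g} Gamma")

for d_eps in [0.5, 1.0, 2.0, 5.0, 10.0]:                            # drive amplitude in units of hbar*omega
    n_bar = (d_eps * omega) ** 2 / ((2 * g) ** 2 + kappa**2)        # Eq.(106)
    Gamma_eff = Gamma + gamma * d_eps**2                            # Eq.(110)
    print(f"delta_eps = {d_eps:4.1f} hbar*omega : n_bar = {n_bar:9.1f}, Gamma_eff = {Gamma_eff:5.2f} Gamma")

# the photon-loss contribution overtakes the bare Gamma once gamma*(delta_eps/hbar*omega)^2 ~ Gamma:
print("crossover at delta_eps ~", np.sqrt(Gamma / gamma), "hbar*omega")
```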
If, on the other hand, \(\kappa\ll\Gamma\), photon loss becomes a small perturbation, and \[e^{\bar{n}\left(1-\frac{\kappa}{2}\tau-e^{-\frac{\kappa}{2}\tau}\right)}\approx e^{-\frac{\bar{n}\kappa^{2}}{8}\tau^{2}}, \tag{112}\] which means that the _spectrum_ of the SEB becomes the convolution of a Lorentzian and a Gaussian (i.e. a Voigt profile), the inhomogeneous part directly arising from the Poisson statistics of photon counts. Thus, this phenomenon has also been described as photon shot noise in the context of superconducting flux qubits [71; 72; 73]. We will refer to this novel effect as Photon Loss Broadening (PLB), and it is the SEB equivalent of the well-known phenomenon in circuit QED of measurement-induced dephasing [71; 74; 28; 29; 30]. Once again, this result stresses how PB in a SEB is a _coherent_ process between dressed states in the QD. We can analytically evaluate the integral in Eq.(109) and expand the resulting spectrum in an infinite series of Lorentzians [29]. Therefore, we can use the result in Appendix D to introduce the effect of PLB simply by redefining \[\mathcal{F}_{m}^{\pm}(\varepsilon_{0})=\frac{1}{2}+\frac{e^{\bar{n}}}{2\pi}\sum_{l}\frac{(-\bar{n})^{l}}{l!}\Im\left[\psi_{0}\left(\frac{1}{2}+\tilde{\Theta}_{l,m}^{\pm}\right)\right], \tag{113}\] where \[\tilde{\Theta}_{l,m}^{\pm}=h\frac{\tilde{\Gamma}+l\kappa}{2\pi k_{B}T}\pm i\frac{\varepsilon_{0}+m\hbar\omega}{2\pi k_{B}T}. \tag{114}\] In usual experimental settings, we have that \(\hbar\kappa\ll g\ll\hbar\omega\sim\hbar\Gamma\). Therefore, we could approximate \[\tilde{\Theta}_{l,m}^{\pm}\approx\tilde{\Theta}_{m}^{\pm}=\frac{h\tilde{\Gamma}}{2\pi k_{B}T}\pm i\frac{\varepsilon_{0}+m\hbar\omega}{2\pi k_{B}T}, \tag{115}\] which allows us to sum the infinite series as \[\mathcal{F}_{m}^{\pm}(\varepsilon_{0})\approx\frac{1}{2}+\frac{1}{2\pi}\Im\left[\psi_{0}\left(\frac{1}{2}+\tilde{\Theta}_{m}^{\pm}\right)\right], \tag{116}\] which shows how the main effect of PLB is a superlinear LB and the consequent reduction in the SEB signal. We capture this effect in Fig. 7, which shows a comparison of the PB fan with \(\gamma=0\) (Fig. 7a) and with \(\gamma=0.2\cdot 10^{-2}\Gamma\) (Fig. 7b). Perhaps counterintuitively, in the case of a lossy cavity (resonator), increasing the input power does not always lead to an increase in the reflectometry signal, as observed experimentally [14, 18, 75]. However, we shall note that in an experimental setting, the degradation of the signal because of PLB may be complemented by heating of the sample and/or the resonator, which may also lower the total signal. We highlight the effect of PLB in Figs. 7c-d, where the maximum signal and its FWHM are shown as a function of detuning amplitude for varying \(\gamma\). For small \(\gamma\), the effect of PLB only manifests at very large detuning amplitudes, where we see a superlinear broadening for increasing \(\delta\varepsilon\) and the consequent reduction in the peak height. Roughly, we can estimate the threshold for this change in regime as \[\delta\varepsilon\gtrsim\hbar\omega\sqrt{\frac{\Gamma}{\gamma}}, \tag{117}\] where PLB becomes comparable with the usual LB (which is only caused by the CR). As shown in Section VI.4, the onset of PB is for \[\delta\varepsilon\gtrsim k_{B}T,h\Gamma. \tag{118}\] Therefore, depending on the value of \(\kappa\), Eq.(117) may be reached before Eq.(118). Observing Fig. 
7c, in this case, the maximum theoretical signal is never reached. Finally, we see that the FWHM of the peak increases as \(\delta\varepsilon^{2}\) (Fig. 7d). ## VII Conclusions and outlook We have presented a new theory to go beyond the semiclassical models for the development of equivalent circuits of quantum systems and applied it to a simple yet technologically useful QD system: the SEB. Our results reproduce the known behaviour of the SEB at low frequency and high temperature [19, 41], and present an extension beyond the adiabatic regime. The self-consistent nature of our approach also allows us to include the effect of the finite lifetime of an electron in a QD when coupled to a reservoir, including the description of its resistive component, which was missing from the literature [17, 63, 76]. The mathematical formalism employed in this work sheds new light on the concept of Gate current and the process of rf reflectometry, which is interpreted as the dynamics of polaron states in the QD. This allows us to see the well-known phenomenon of Power Broadening from a new angle, as an interference effect between dressed states in the QD. Our theory extends the adiabatic semiclassical SEB description into a novel regime, which we call Floquet Broadening, where the main source of energy uncertainty comes from the discrete nature of the photonic part of the polaron. In doing so, our theory recovers the Heisenberg Uncertainty Principle, which is not respected by the semiclassical theory. In its discussion, we propose measurements which are technologically within reach for the experimental validation of our theoretical results,including a modification of the admittance lineshape and a drop of its maximum value at high drive frequencies. The self-consistent quantum formalism here developed is fully general and applicable to quantum systems such as multi quantum dots or multi-level systems such as spin qubits. Therefore, it lays the foundations for developing fully quantum circuit equivalent models for quantum systems. The possibility of properly accounting for relaxation and dephasing phenomena beyond the adiabatic limit comes with the enticing perspective of extending such models to systems conceived for quantum computation, allowing for a more informed engineering of their design and operation. Moreover, our quantum description of the photon source Figure 7: Effect of PLB. (a,b) Gate current maps showing the alteration of the PB fans in the presence of photon loss. Both maps are normalized to the maximum semiclassical Gate current \(\mathcal{M}_{\mathcal{I}}\), showing how in the case of PLB this is never reached, and the signal starts decreasing for very large \(\delta\varepsilon\). (c,d) Maximum normalized Gate current and FWHM of the peak for increasing \(\gamma\). The effect of PLB causes panel (c) to not be monotonically increasing, causing a reduction of signal for increasing power. This is accompanied by a _superlinear_ increase in FWHM, which eventually become proportional to \(\gamma\delta\varepsilon^{2}\) (panel (d)). allows for the theoretical treatment of its nonidealities. This possibility has led us to introduce a new large-signal phenomenon, photon loss broadening, which can contribute to the experientially observed drop in SEB signal at very large powers. 
This is the first description in a QD system and terms of an equivalent circuit model of a well-known phenomenon observed in superconducting devices, kown as photon shot noise or measurement induced dephasing [71, 72, 73, 29, 30, 74]. Also in this case we present a clear experimental path for the measurement of this effect. Finally, we have shown how, by exploiting quantum coherence in the system, the semiclassical limits of the maximum achievable AC currents can be overcome. While our example only presents a minor improvement (2%), leveraging quantum phenomena to augment well-known devices may put within technological reach semiclassically unachievable goals. ## VIII Acknowledgements This research was supported by European Union's Horizon 2020 research and innovation programme under grant agreement no. 951852 (QLSI), and by the UK's Engineering and Physical Sciences Research Council (EPSRC) via the Cambridge NanoDTC (EP/L015978/1), QUES2T (EP/N015118/1), the Hub in Quantum Computing and Simulation (EP/T001062/1) and Innovate UK [10000965]. L.P. acknowledges the Winton Programme for the Physics of Sustainability. M.F.G.Z. acknowledges a UKRI Future Leaders Fellowship [MR/V023284/1]. The authors acknowledge F. Martins from Hitachi Cambridge Laboratory for useful discussions. ## Appendix A Real and Virtual Photon Exchange In the main text, we have mostly discussed the absolute value of the SEB admittance, especially the maximum value and FWHM, for they are easily experimentally-accessible properties. We believe of value, however, to explore more in depth the real and imaginary part of \(Y\), introduced in Section V.2, for they have very different _electrical_ (and physical) effects [12]. For simplicity, in this section we shall uniquely discuss the fundamental \(Y_{1}\), dropping the suffix from now on, but this discussion is trivially extendable to the other harmonics [18]. Moreover, being in the small-signal regime, the effects of PLB are not considered in this section and the PhB is to be considered lossless. To begin with, the presence of only first time derivatives in the LME in Eq.(34) means that we can, without loss of generality, write \[Y=G_{S}+i\omega C_{T}, \tag{12}\] where \(C_{T}\) represents the _tunnelling_ capacitance while \(G_{S}=1/R_{S}\) is the Sisyphus conductance [12, 24]. We shall stress the notable absence of what is commonly referred to as _quantum_ capacitance [25], for that corresponds to an adiabatic redistribution of charges along a unitary dynamics because of an avoided crossing [12, 24]. The level crossing in a SEB at \(\varepsilon=0\), however, is between two states describing the full or empty QD. Therefore, they formally belong to _different_ canonical single-body Hilbert spaces, and thus the crossing _cannot_ be avoided. Redistribution of charge, however, can still happen through _incoherent_ (stochastic) processes via the CR through the jump operators in the LME and may occur either elastically or inelastically. In a cyclostationary process, this results in a gate current either in-phase or out-of-phase with the driving voltage. In the small-signal regime, this simply reads [25, 24, 18] \[I_{g}=YV_{in}. \tag{13}\] This semiclassical picture is of particular interest because it clearly shows how only the former, described by \(G_{S}\), can lead to energy dissipation, while the signal arising from \(C_{T}\) conserves energy over an rf cycle. 
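As a minimal usage example of this decomposition (ours; the admittance value, drive amplitude and frequency below are placeholders), one can split any computed \(Y_{1}\) into its Sisyphus and tunnelling-capacitance parts and obtain the cycle-averaged dissipated power, which is carried entirely by \(G_{S}\).

```python
import numpy as np

# Illustrative small-signal admittance at the fundamental (placeholder numbers)
omega = 2 * np.pi * 500e6            # resonator / drive angular frequency [rad/s]
Y1 = (3 + 4j) * 1e-9                 # complex admittance [S], placeholder value

G_S = Y1.real                        # Sisyphus conductance   [S]
C_T = Y1.imag / omega                # tunnelling capacitance [F]
print(f"G_S = {G_S:.2e} S,  C_T = {C_T:.2e} F")

# only G_S dissipates: cycle-averaged power for a drive of amplitude V_in
V_in = 1e-6                          # 1 uV drive amplitude (assumed)
P_avg = 0.5 * G_S * V_in**2
print(f"average dissipated power = {P_avg:.2e} W; the capacitive part averages to zero over a cycle")
```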
We can picture these two types of response from a QED perspective as the QD mediating an interaction between the PhB and the CR. The capacitive response, thus, would correspond to the QD and the PhB exchanging _virtual_ photons, whose energy is not accounted for, in the form of creating a polaron and populating the Floquet ladder, while the resistive components will describe _real_ photons, which transfer energy to the PhB from the CR and vice versa. Consisting of fully elastic jumps, the capacitive component is time-reversible. Thus, the \(m\)-th and \(-m\)-th Floquet Modes must share the same occupation (i.e. Eq.(81)) and a precise phase relation. Stocastically, however, an _extra_ real photon can be emitted (absorbed) resulting in scattering between the Floquet Modes. This unbalance in the \(m\)-th and \(-m\)-th Modes manifests mathematically as a phase delay in the averaged steady-state response, and thus a resistive behaviour. In light of this interpretation, it is interesting to consider the behaviour in Fig. 8, which shows the (maximum of) tunnelling capacitance and Sisyphus conductance for increasing \(\Gamma\) and \(\omega\). Semiclassically, increasing \(\Gamma\) always leads to an increased signal (dashed lines in Figs. 8a-c), as it increases the probability of tunnelling events per unit time. Moreover, for very large \(\Gamma\) the signal is mostly capacitive, as, in the Instantaneous Eigenvalues Approximation (IEA), \(G_{S}\) derives from electrons that _failed to tunnel_ elastically at the CR Fermi energy, but still have nonzero tunnelling probability because of thermal smearing. The last point, as shown in Fig. 8c, remains valid in the Floquet picture, as from the Floquet-Fermi golden rule we expect the CR to efficiently generate transitions between modes when the energy scale of its interaction (\(h\Gamma\)) is comparable with the photon energy [61]. For \(\Gamma\) largely detuned from \(\omega\) we expect the Floquet dynamics to be mostly unperturbed, and thus the admittance be mostly reactive. This has a simple interpretation in light of the time-energy Heisen berg uncertainty principle, recalling that tunnelling in and out of the QD requires the system to create and destroy photons. When the lifetime of the polaron becomes shorter than the photon energy, said photons can avoid being _accounted for_, and thus this process becomes dissipationless. Because of LB, however, the Floquet dynamics consists of _decaying_ states, as obvious by considering the eigenstates of the self-consistent propagator in Eq.(55). When \(\Gamma\gg\omega\), essentially _all_ dressed states will decay within an rf cycle. Therefore, we expect also the capacitive component to decay exponentially with the Floquet dynamics. A similar point can be made about FB when we increase \(\omega\) for fixed \(\Gamma\) (Figs. 8b,d). Semiclassically, we expect most of the signal to be resistive for large \(\omega\), for the QD cannot keep up with the rf and thus the tunnelling phase will be largely randomized. However, we still expect the Sisyphus conductance to monotonically increase, as already argued in Section VIC. This ought to be clear by the fact that a damped system which is driven too fast classically _lags_ 90\({}^{\circ}\) out of phase with the driving force. This, however, is not allowed by FB, as shown in Fig. 8d. The quantum Zeno argument made in Section VIC finds its natural counterpart in the description Floquet modes. 
If the coherent dynamics, in fact, becomes faster than the interaction with the CR, we expect the Floquet evolution to be mostly unperturbed. Floquet modes are, however, orthogonal over an rf cycle. Therefore, no transitions between dressed states becomes allowed, and the resistive signal drops. Perhaps more simply, this has to occur as for \(\omega\gg\Gamma\) the evolution of the SEB becomes almost unitary, and thus it _cannot_ dissipate any energy. We can make sense of this also from a polaron perspective, as increasing \(\omega\) also increases the energy of the photons that are necessary to create the polaron. If \(\Gamma<\omega\), moreover, those photons live longer than the Uncertainty Principle allows for virtual photons. Therefore, there must be energy transferred to and from the CR. If we are in the FB regime, however, this energy is larger than the average thermal energy \(k_{B}T\) of the CR. This, therefore, will be far less likely to be able to provide _real_ photons to cause transitions between the modes. Thus the resistive signal will decrease. Finally, we shall point out how, perhaps unsurprisingly, the SCQME deviates from the SME when \(k_{B}T\approx\hbar\omega,\hbar\Gamma\), while the semiclassical result is retained in the limit of large temperature. ## Appendix B Optimal Small-Signal Parameters In this section, we will briefly discuss the combined effect of LB and FB on SEB readout. From Section VI.3, in fact, one could assume that increasing \(\omega\) would be detrimental past a certain point because of FB, and similarly for \(\Gamma\) because of LB. While this is certainly true, however, this assumes that _all_ other parameters are fixed. If both \(\Gamma\) and \(\omega\) become free variables which may occur in systems with tunable couplings and resonators [77]. In Figure 9 we show the impact of the small-signal admittance \(|Y_{1}|\) at constant \(k_{B}T\) as a function of \(\Gamma\) and \(\omega\). As it ought to be clear, the signal keeps increasing even if \(k_{B}T\ll h\Gamma,\hbar\omega\), well beyond the semiclassical regime. Despite the dynamics now being fully dominated by quantum degrees of freedom of the QD dynamics, the semiclassical picture is still insightful. Increasing the frequency, in fact, means that we can get more tunnelling events per unit time. This occurs, however, only if the electrons can keep up with the fast oscillations. Therefore, that the signal is maximized when \(\omega\sim\Gamma\), as transpires from Fig. 9. This fact can also be explained in the quantum picture by the uncertainty principle. LB, in fact, arises from the finite lifetime of electrons in the QD. In practice, however, this is not _seen_ by the system if the QD is, on average, filled and emptied by the PhB _faster_ than the intrinsic lifetime of the level. A similar argument can be made for FB, which can essentially be seen as a quantum Zeno phenomenon as the electron gets trapped in the QD. A short enough lifetime, however, ensures that tunnelling can still occur. Therefore, these two effects complement each other and the signal keeps increasing at the onset of LB and FB, just more slowly than it would have done in the semiclassical model. ## Appendix C From Admittance to Reflectometry Lineshape Having obtained the effective quantum impedance of the SEB, we need to compute the system's transmission coefficient \(\mathcal{T}(Y_{N})\) as a function of \(Y_{N}\) when the system is coupled via the Gate to the resonator, whose centre frequency is \(\omega_{res}=N\omega\). 
It is easier to consider the case of the setup in Fig. 10a, with the SEB in parallel to the resonator. Figure 8: Small-signal Tunnelling capacitance (\(C_{T}\)) and Sisyphus conductance (\(G_{S}\)) for varying \(\Gamma\) and \(\omega\). Panels (a,c) show how the effect of LB is mostly on the reactive component, while panels (b,d) shows how the main effect of FB is to lower the Sisyphus conductance. This is to be expected when considering energy dissipation in the CR. We can now define \(\mathcal{T}(Y_{N})\) by considering the phasor relation \[V_{out}^{N}=\mathcal{T}(Y_{N})V_{in}. \tag{106}\] where \(V_{out}^{N}(t)=\Re\left[V_{out}e^{i\int\omega t}\right]\) and \(V_{in}(t)=\Re\left[V_{in}e^{i\omega t}\right]\). We point out that, for \(N=1\), this is nothing but the standard definition of the transmission coefficient. To build an equivalent circuit model, we can now consider that the impedance seen by \(V_{in}\) will be much larger than that of the line \(Z_{0}=50\Omega\), of the order of the quantum of resistance \(e^{2}/h\). Therefore, most of the signal will be reflected. For simplicity, we will assume that the drive is large enough to be able to neglect the self-loading of the collection gate [41]. In this case, the dynamics of the SEB as seen by the output can be replaced by a Voltage-Controlled Current Source (VCCS), whose output will be \(I_{N}=\Re\left[V_{in}Y_{N}e^{iN\omega t}\right]\). Considering this equivalent circuit, we easily obtain \[V_{out}^{N}=-\frac{Y_{N}}{Y_{Res}}\ V_{in}. \tag{107}\] We note that the impedance of the resonator, in parallel to the VCCS, will be significant only at the resonant frequency, while it shall act as a short to ground at all other frequencies. Hence justifying the formalism of equivalent admittance, since all the terms not oscillating at the resonant frequency (here taken \(\omega_{Res}=N\omega\)) are short to ground. For the resonant term, however, we see how the low admittance of the resonator amplifies the resonant signal. Moreover, we can see how the effective admittance of the SEB, or rather, its _transconductance_, is directly proportional to the transmitted signal. This last observation allows us to directly relate Eqs.(74) and (76) to experimental reflectometry data. The high admittance of the off-resonance resonator forces us to use 2 separate gates for the driving. For simplicity, we imagine them to have negligible crosstalk and the same lever arm. If this is not the case, one can simply account for this by modifying \(\alpha^{2}\rightarrow\alpha_{d}\alpha_{c}\) in \(\mathcal{C}_{N}\) (Eq.(72)), where \(\alpha_{d}\) and \(\alpha_{c}\) are the lever arms of the driving and collection gates respectively [18]. Lastly, we point out that, if the measurement is made with a _single_ Gate in reflection rather than transmission, by a similar argument one can immediately write \[V_{out}^{\mathcal{R}}=\left(1-\frac{Y_{1}}{Y_{Res}}\right)\ V_{in} \tag{108}\] where we highlight that only the fundamental can be observed through the resonator in this case. ## Appendix D The broadened Fermi-Dirac In this section, we will briefly discuss the mathematical steps that lead to the digamma function in Eq.(66). While this result allows for some physical insights (e.g. the discussion of Eq.(67)), a closed-form equation for \(\mathcal{F}_{m}\) allows for dramatically lower computation times. In particular, the expansion of the digamma (and trigamma) functions as infinite series or continued fractions allows for zero-cost inclusion of LB in QD simulations. 
To begin with, we notice the obvious fact that \[\mathcal{F}_{m}^{\pm}(\varepsilon_{0})=\frac{\Gamma}{\pi}\int_{-\infty}^{ \infty}\quad\frac{f(\pm\epsilon)}{\Gamma^{2}+((\epsilon-\varepsilon_{0})/\hbar -m\omega)^{2}}\quad d\epsilon \tag{109}\] is a convolution between the Fermi-Dirac and a Lorentzian, and thus can be computed as a product in reciprocal space. In the following, we shall for simplicity only consider the case of \(\mathcal{F}_{0}^{-}(\varepsilon_{0})=\mathcal{F}(\varepsilon_{0})\), and all other cases trivially follow. Before tackling the problem, we can simplify the calculation by noticing that \[f(\epsilon)=\frac{1}{2}-\frac{1}{2}\tanh\left(\frac{\epsilon}{2k_{B}T}\right) \tag{110}\] and thus \[\mathcal{F}(\varepsilon_{0})=\frac{1}{2}-\frac{\Gamma}{2\pi}\int_{-\infty}^{ \infty}\quad\frac{\tanh\left(\frac{\epsilon}{2k_{B}T}\right)}{\Gamma^{2}+( \epsilon-\varepsilon_{0})^{2}/\hbar^{2}}\quad d\epsilon. \tag{111}\] We can now make use of the fact that the Fourier transform of a Lorentzian is a decaying exponential, while, \[\mathfrak{F}\left[\tanh\left(\frac{\epsilon}{2k_{B}T}\right)\right](\xi)= \frac{i\pi k_{B}T}{\sinh\pi k_{B}T\xi} \tag{112}\] Figure 10: (a) Transmission experiment setup with an SEB connected in parallel with a resonator. (b) Equivalent circuit schematic in which the SEB is seen by the resonator as a set of VCCSs in parallel. Figure 9: Maximum of the small-signal admittance \(|Y_{1}|\) when varying \(\Gamma\) and \(\omega\). The figure shows how the maximum signal is always achieved when \(\Gamma\approx\omega\), and both are as large as possible. where \(\mathfrak{F}\) indicates the Fourier transform in a distribution sense, to write \[\mathcal{F}(\varepsilon_{0})=i\;\mathfrak{F}^{-1}\left[\frac{\pi k_{B}Te^{-| \Gamma\xi|}}{\sinh\left(\pi k_{B}T\xi\right)}\right]. \tag{101}\] We can now use the fact that \(\mathcal{F}(\varepsilon_{0})-\frac{1}{2}\) is antisymmetric to write the integrals as \[\mathcal{F}(\varepsilon_{0})-\frac{1}{2}=i\pi k_{B}T\left(\int_{0}^{\infty} \frac{e^{i\varepsilon_{0}\xi-\Gamma\xi}e^{\pi k_{B}T\xi}}{1-e^{-2\pi k_{B}T \xi}}+c.c.\right). \tag{102}\] To get the desired result we must now perform the substitution \(t=2\pi k_{B}T\xi\) and notice that, for \(\Re[z]>0\), we can write the digamma function as[78] \[\psi_{0}(z)=\int_{0}^{+\infty}\left(\frac{e^{-t}}{t}-\frac{e^{-zt}}{1-e^{-t}} \right)dt. \tag{103}\] Adding and subtracting \(\frac{e^{-t}}{t}\) to Eq.(102) therefore, we get \[\begin{split}\mathcal{F}(\varepsilon_{0})-\frac{1}{2}=\frac{1}{ 2\pi}\bigg{(}\psi_{0}\left(\frac{1}{2}+\frac{h\Gamma+i\varepsilon_{0}}{2\pi k _{B}T}\right)-\\ -\psi_{0}\left(\frac{1}{2}+\frac{h\Gamma-i\varepsilon_{0}}{2\pi k _{B}T}\right)\bigg{)}.\end{split} \tag{104}\] Finally, we can use the identity \[\psi_{0}(z^{*})=\psi_{0}(z)^{*} \tag{105}\] to write \[\mathcal{F}(\varepsilon_{0})=\frac{1}{2}-\frac{1}{\pi}\Im\left[\psi_{0}\left( \frac{1}{2}+\frac{h\Gamma+i\varepsilon_{0}}{2\pi k_{B}T}\right)\right] \tag{106}\] of which Eq.(66) is an immediate generalization. As a final remark, an interesting sanity check is that it is a well-known fact from taking the logarithmic derivative of the Gamma function that, for \(x\in\mathbb{R}\), \[\Im\left[\psi_{0}\left(\frac{1}{2}+x\right)\right]=\frac{\pi}{2}\tanh\left( \pi x\right) \tag{107}\] which immediately shows how Eq.(106) retrieves Eq.(107) in the limit of \(\Gamma=0\). 
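A quick numerical cross-check of this closed form (ours; \(h\Gamma=0.5\,k_{B}T\) is an arbitrary choice) compares the digamma expression with a direct convolution of the Fermi-Dirac function with a Lorentzian of half-width \(h\Gamma\), and verifies the \(\Gamma=0\) identity quoted above.

```python
import numpy as np
from mpmath import psi               # psi(0, z): digamma function, complex arguments supported

kBT, hGamma = 1.0, 0.5               # energies in units of k_B*T (illustrative choice)

def F_closed(eps0):
    """Closed form derived in this appendix: Lorentzian-broadened Fermi-Dirac function."""
    z = 0.5 + (hGamma + 1j * eps0) / (2 * np.pi * kBT)
    return 0.5 - float(psi(0, z).imag) / np.pi

# direct convolution of f(E) with a Lorentzian of half-width hGamma on a wide grid
E = np.linspace(-5000.0, 5000.0, 500001)
dE = E[1] - E[0]
fermi = 1.0 / (1.0 + np.exp(np.clip(E / kBT, -700, 700)))

def F_conv(eps0):
    lorentz = (hGamma / np.pi) / (hGamma**2 + (E - eps0) ** 2)
    return np.sum(fermi * lorentz) * dE

for eps0 in [-5.0, -1.0, 0.0, 1.0, 5.0]:
    print(f"eps0 = {eps0:+4.1f} k_BT : closed form {F_closed(eps0):.4f} | convolution {F_conv(eps0):.4f}")

# Gamma -> 0 identity quoted at the end of the appendix: Im psi_0(1/2 + i x) = (pi/2) tanh(pi x)
x = 0.37
print(float(psi(0, 0.5 + 1j * x).imag), np.pi / 2 * np.tanh(np.pi * x))
```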
## Appendix E Polaron-Transformed Hamiltonian To discuss the mathematical manipulations presented in Section IV.2.1, we can begin by noticing that \([d,a]=[c,a]=0\), and similarly for any of the respective adjoints. Thus, we can simply picture the polaron transformation as a displacement of the bath, which happens to be dependent on the QD occupation. In the main text, we made reference to the BCH theorem, which states for two matrices \(A\) and \(B\) that \[e^{A}Ba^{-A}=B+[B,A] \tag{108}\] if \([B,A]\) is a \(c\)-number (i.e. \([B,A]\) commutes with both \(A\) and \(B\)). Considering this property, for any displacement operator \(D_{\alpha}=\exp(\alpha a^{\dagger}-a^{*}a)\), we have \[D_{\alpha}aD\alpha^{\dagger}=a+\alpha, \tag{109}\] from which it follows immediately that \[e^{-S}\left(H_{PhB}+H_{DP}\right)e^{S}=\frac{g^{2}}{\hbar\omega}d^{\dagger}d+ \hbar\omega a^{\dagger}a. \tag{110}\] This is particularly remarkable because, referencing Section IV.1, the term \(a+a^{\dagger}\), which gives the semiclassically-oscillating QD energy has now disappeared. Moreover, that term obviously contains rotating and counter-rotating waves. Thus, especially recalling the discussion in Section VI.4, it ought to be clear how a Rotating Wave Approximation in the usual Floquet-Rabi sense is generally not possible if we want to describe effective admittances. Equation (50) could be derived directly from Eq.(108) using fermionic commutation relations. However, a simple trick is to note that \(a-a^{\dagger}\) is a Grassmann number. Therefore, \(e^{S}\) is _also_ a fermionic displacement operator, for which Eq.(109) is also valid[79]. Therefore, we have shown that the polaron transformation is _simultaneously_ displacing both the PhB depending on the state of the QD _and_ the QD because of the PhB electric field. This characteristic of double displacement allows us to completely remove the semiclassically oscillating field but also invites us to think of the problem no more as electrons interacting with photons but as a _combined_ quasiparticle (hence the name polaron). This picture is retrieved if we consider the QD-CR interaction in the polaron frame (Eq.(50)). An interaction of the form \[c_{\epsilon}D^{\dagger}d^{\dagger} \tag{111}\] destroys a fermion in the CR and simultaneously creates an electron in the QD _and_ displaces the PhB to create a polaron. Finally, we shall note that, technically, the rates in Eq.(65) are obtained in a different frame than the semiclassical description. However, from Eq.(24), we see how, at the steady state, \(\rho\) is purely diagonal. Therefore, the density operator reads \[\rho_{SS}(t)=\frac{1}{2}\mathbb{I}+\left(P(t)-\frac{1}{2}\right)\sigma_{z} \tag{112}\] where \(\mathbb{I}\) is the identity and, obviously, \(\sigma_{z}=d^{\dagger}d\) in the two-level picture. Thus, \([\rho_{SS}(t),S]=0\), and the occupation probability in the polaron frame is the same as in the lab frame. This is an obvious consequence of \(|e\rangle\) and \(|o\rangle\) belonging to _different_ canonical Hilbert spaces, and thus they cannot be mixed by the canonical Lang-Firsov transformation.
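The displacement-operator property invoked above can be checked numerically on a truncated Fock space. The sketch below (ours; the truncation size and \(\alpha\) are arbitrary) uses the convention \(D_{\alpha}=\exp(\alpha a^{\dagger}-\alpha^{*}a)\), in which the displacement identity reads \(D_{\alpha}^{\dagger}aD_{\alpha}=a+\alpha\); the small residual on the low-lying levels comes from the finite truncation.

```python
import numpy as np
from scipy.linalg import expm

N = 40                                           # Fock-space truncation (assumed large enough)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # annihilation operator: <n|a|n+1> = sqrt(n+1)
adag = a.conj().T

alpha = 0.7 - 0.3j
D = expm(alpha * adag - np.conjugate(alpha) * a) # displacement operator on the truncated space

lhs = D.conj().T @ a @ D
rhs = a + alpha * np.eye(N)
# compare on the low-lying states, away from the truncation edge
print("max |D^dag a D - (a + alpha)| on the lowest 20 levels:",
      np.abs((lhs - rhs)[:20, :20]).max())
```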
2306.17712
3D Boson representation of affine Yangian of ${\mathfrak{gl}}(1)$ and 3D cut-and-join operators
We have constructed 3D Bosons. In this paper, we show the 3D Bosonic Fock space, which is isomorphic to the vector space of 3D Young diagrams as graded vector spaces. We use 3D Bosons to represent the generators of the affine Yangian of ${\mathfrak{gl}}(1)$ and define the 3D cut-and-join operators. Then we discuss the 3D Boson representation of $W$-operators in matrix models.
Na Wang, Can Zhang, Ke Wu
2023-06-30T14:49:18Z
http://arxiv.org/abs/2306.17712v1
# 3D Boson representation of affine Yangian of \(\mathfrak{gl}(1)\) and 3D cut-and-join operators ###### Abstract We have constructed 3D Bosons. In this paper, we show the 3D Bosonic Fock space, which is isomorphic to the vector space of 3D Young diagrams as graded vector spaces. We use 3D Bosons to represent the generators of the affine Yangian of \(\mathfrak{gl}(1)\) and define the 3D cut-and-join operators. Then we discuss the 3D Boson representation of \(W\)-operators in matrix models. **Keywords:** 3D Young diagrams, \(W_{1+\infty}\) algebra, Affine Yangian, Schur functions, Jack polynomials. ## 1 Introduction Schur functions and Jack polynomials defined on 2D Young diagrams are attractive research objects. Schur functions were used to determine irreducible characters of highest weight representations of the classical groups[1, 2, 3]. Recently they appear in mathematical physics, especially in integrable models. In [4], the group in the Kyoto school uses Schur functions in a remarkable way to understand the KP and KdV hierarchies. In [5, 6], Tsilevich and Sulkowski, respectively, give the realization of the phase model in the algebra of Schur functions and build the relations between the \(q\)-boson model and Hall-Littlewood functions. In [7], we build the relations between the statistical models, such as phase model, and KP hierarchy by using 2D Young diagrams and Schur functions. In [8], the authors show that the states in the \(\beta\)-deformed Hurwitz-Kontsevich matrix model can be represented as the Jack polynomials. 3-Jack polynomials defined on 3D Young diagrams (plane partitions) are a generalization of Schur functions and Jack polynomials. We have constructed 3-Jack polynomials in [9]. 3D Young diagrams arose naturally in crystal melting model[10, 11]. 3D Young diagrams also have many applications in many fields of mathematics and physics, such as statistical models, number theory, representations of some algebras (Ding-Iohara-Miki algebras, affine Yangian, etc). 3-Jack polynomials behave the same with 3D Young diagrams when we consider the MacMahon representation of the affine Yangian of \(\mathfrak{gl}(1)\). We have constructed the 3D Bosons in [12]. In this paper, we firstly construct the 3D Bosonic Fock space. As the 2D Bosonic Fock space is isomorphic to the space of 2D Young diagrams or the space of Schur functions defined on 2D Young diagrams, the 3D Bosonic Fock space is isomorphic to the space of 3D Young diagrams or the space of 3-Jack polynomials defined on 3D Young diagrams. The MacMahon representation space of affine Yangian of \(\mathfrak{gl}(1)\) is also the space of 3D Young diagrams[13]. The main result of this paper is that we use 3D Bosons to realize the generators of the affine Yangian of \(\mathfrak{gl}(1)\). In [12], we have obtained the representation of \(W_{1+\infty}\) algebra, which have actions on 3D Young diagrams, by 3D Bosons, then we can see the relations between the operators in affine Yangian of \(\mathfrak{gl}(1)\) and \(W_{1+\infty}\) algebra since they both can be represented by 3D Bosons. Use the operators \(\psi_{j}\) in affine Yangian of \(\mathfrak{gl}(1)\), we define 3D cut-and-join operators. 
2D cut-and-join operators are operators commutative with each other, which have prominent applications in matrix models[8], the simplest one is \[\frac{1}{2}\sum_{n,m=1}^{\infty}\left((n+m)p_{n}p_{m}\frac{\partial}{\partial p _{n+m}}+nmp_{n+m}\frac{\partial^{2}}{\partial p_{n}\partial p_{m}}\right).\] Schur functions are the common eigenstates of 2D cut-and-join operators. In this paper, we use 3D Bosons to represent 3D cut-and-join operators. The eigenstates are 3-Jack polynomials, we show that by some examples. The \(W\)-operators of the 3D generalizations of some matrix models can also be represented by 3D Bosons. The paper is organized as follows. In section 2, we recall the \(W_{1+\infty}\) algebra and the affine Yangian of \(\mathfrak{gl}(1)\). In section 3, we recall 3D Bosons and give the 3D Bosonic Fock space. In section 4, we use 3D Bosons to represent the generators \(\psi_{3},\ e_{0},\ f_{0}\) of affine Yangian of \(\mathfrak{gl}(1)\). In section 5, we construct 3D cut-and-join operators and use 3D Bosons to represent the operators in some matrix models. ## 2 \(W_{1+\infty}\) algebra and affine Yangian of \(\mathfrak{gl}(1)\) In this section, we recall the \(W_{1+\infty}\) algebra and the affine Yangian of \(\mathfrak{gl}(1)\), which all have the representations on 3D Young diagrams. ### Affine Yangian of \(\mathfrak{gl}(1)\) Let \(h_{1},h_{2}\) and \(h_{3}\) be three complex numbers satisfying \(h_{1}+h_{2}+h_{3}=0\). Define \[\sigma_{2} = h_{1}h_{2}+h_{1}h_{3}+h_{2}h_{3},\] \[\sigma_{3} = h_{1}h_{2}h_{3}.\] We associate \(h_{1},\ h_{2},\ h_{3}\) to \(y,\ x,\ z\)-axis respectively. The affine Yangian \(\mathcal{Y}\) of \(\mathfrak{gl}(1)\) is an associative algebra with generators \(e_{j},f_{j}\) and \(\psi_{j}\), \(j=0,1,\ldots\) and the following relations[13, 14] \[[\psi_{j},\psi_{k}]=0, \tag{1}\] \[[e_{j+3},e_{k}]-3\left[e_{j+2},e_{k+1}\right]+3\left[e_{j+1},e_{ k+2}\right]-[e_{j},e_{k+3}]\] \[\quad+\sigma_{2}\left[e_{j+1},e_{k}\right]-\sigma_{2}\left[e_{j}, e_{k+1}\right]-\sigma_{3}\left\{e_{j},e_{k}\right\}=0,\] (2) \[[f_{j+3},f_{k}]-3\left[f_{j+2},f_{k+1}\right]+3\left[f_{j+1},f_{k+ 2}\right]-[f_{j},f_{k+3}]\] \[\quad+\sigma_{2}\left[f_{j+1},f_{k}\right]-\sigma_{2}\left[f_{j}, f_{k+1}\right]+\sigma_{3}\left\{f_{j},f_{k}\right\}=0,\] (3) \[[e_{j},f_{k}]=\psi_{j+k},\] (4) \[[\psi_{j+3},e_{k}]-3\left[\psi_{j+2},e_{k+1}\right]+3\left[\psi_ {j+1},e_{k+2}\right]-[\psi_{j},e_{k+3}]\] \[\quad+\sigma_{2}\left[\psi_{j+1},e_{k}\right]-\sigma_{2}\left[\psi _{j},e_{k+1}\right]-\sigma_{3}\left\{\psi_{j},e_{k}\right\}=0,\] (5) \[[\psi_{j+3},f_{k}]-3\left[\psi_{j+2},f_{k+1}\right]+3\left[\psi_ {j+1},f_{k+2}\right]-[\psi_{j},f_{k+3}]\] \[\quad+\sigma_{2}\left[\psi_{j+1},f_{k}\right]-\sigma_{2}\left[ \psi_{j},f_{k+1}\right]+\sigma_{3}\left\{\psi_{j},f_{k}\right\}=0, \tag{6}\] together with boundary conditions \[[\psi_{0},e_{j}]=0,[\psi_{1},e_{j}]=0,[\psi_{2},e_{j}]=2e_{j}, \tag{7}\] \[[\psi_{0},f_{j}]=0,[\psi_{1},f_{j}]=0,[\psi_{2},f_{j}]=-2f_{j}, \tag{8}\] and a generalization of Serre relations \[{\rm Sym}_{(j_{1},j_{2},j_{3})}\left[e_{j_{1}},[e_{j_{2}},e_{j_{3}+1} ]\right]=0, \tag{9}\] \[{\rm Sym}_{(j_{1},j_{2},j_{3})}\left[f_{j_{1}},[f_{j_{2}},f_{j_{3}+ 1}]\right]=0, \tag{10}\] where \({\rm Sym}\) is the complete symmetrization over all indicated indices which include 6 terms. The affine Yangian \({\cal Y}\) has a representation on 3D Young diagrams. As in our paper [15], we use the following notations. 
For a 3D Young diagram \(\pi\), the notation \(\Box\in\pi^{+}\) means that this box is not in \(\pi\) and can be added to \(\pi\). Here "can be added" means that when this box is added, it is still a 3D Young diagram. The notation \(\Box\in\pi^{-}\) means that this box is in \(\pi\) and can be removed from \(\pi\). Here "can be removed" means that when this box is removed, it is still a 3D Young diagram. For a box \(\Box\), we let \[h_{\Box}=h_{1}y_{\Box}+h_{2}x_{\Box}+h_{3}z_{\Box}, \tag{11}\] where \((x_{\Box},y_{\Box},z_{\Box})\) is the coordinate of box \(\Box\) in coordinate system \(O-xyz\). Here we use the order \(y_{\Box},x_{\Box},z_{\Box}\) to match that in paper [13]. Following [13, 14], we introduce the generating functions: \[e(u) = \sum_{j=0}^{\infty}\frac{e_{j}}{u^{j+1}},\] \[f(u) = \sum_{j=0}^{\infty}\frac{f_{j}}{u^{j+1}}, \tag{12}\] \[\psi(u) = 1+\sigma_{3}\sum_{j=0}^{\infty}\frac{\psi_{j}}{u^{j+1}},\] where \(u\) is a parameter. Introduce \[\psi_{0}(u)=\frac{u+\sigma_{3}\psi_{0}}{u} \tag{13}\] and \[\varphi(u)=\frac{(u+h_{1})(u+h_{2})(u+h_{3})}{(u-h_{1})(u-h_{2})(u-h_{3})}. \tag{14}\] For a 3D Young diagram \(\pi\), define \(\psi_{\pi}(u)\) by \[\psi_{\pi}(u)=\psi_{0}(u)\prod_{\Box\in\pi}\varphi(u-h_{\Box}). \tag{15}\] In the following, we recall the representation of the affine Yangian on 3D Young diagrams as in paper [13] by making a slight change. The representation of affine Yangian on 3D Young diagrams is given by \[\psi(u)|\pi) = \psi_{\pi}(u)|\pi), \tag{16}\] \[e(u)|\pi) = \sum_{\Box\in\pi^{+}}\frac{E(\pi\rightarrow\pi+\Box)}{u-h_{\Box} }|\pi+\Box),\] (17) \[f(u)|\pi) = \sum_{\Box\in\pi^{-}}\frac{F(\pi\rightarrow\pi-\Box)}{u-h_{\Box} }|\pi-\Box) \tag{18}\] where \(|\pi\rangle\) means the state characterized by the 3D Young diagram \(\pi\) and the coefficients \[E(\pi\to\pi+\Box)=-F(\pi+\Box\to\pi)=\sqrt{\frac{1}{\sigma_{3}}\,{\rm res}_{u\to h _{\Box}}\,\psi_{\pi}(u)}. \tag{19}\] Specially, \[\psi_{1}|\pi\rangle=0,\ \psi_{2}|\pi\rangle=2|\pi||\pi\rangle,\ \psi_{3}|\pi \rangle=\sum_{\Box\in\pi}(6h_{\Box}+2\psi_{0}\sigma_{3})|\pi\rangle,\] \[\psi_{4}|\pi\rangle=\sum_{\Box\in\pi}(12h_{\Box}^{2}-2\sigma_{2}+ 6h_{\Box}\psi_{0}\sigma_{3})|\pi\rangle, \tag{20}\] where \(|\pi|\) is the box number of \(\pi\). In the following of this paper, we treat \(E(\pi\to\pi+\Box)|\pi+\Box\rangle\) as one element and still denote it by \(|\pi+\Box\rangle\), then 3D Young diagrams depend on the box growth process. 3-Jack polynomials \(J_{\pi}\) behave the same as \(\pi\) exactly. ### \(W_{1+\infty}\) algebra For any 3D Young diagram, we cut it into slices by the plane \(z=j\), every slice is a 2D Young diagram. Then 3D Young diagrams can be treated as a series of 2D Young diagrams. When we consider the 3D Young diagrams which have at most \(N\) layers in \(z\)-axis direction, we suppose \(\psi_{0}=-\frac{N}{h_{1}h_{2}}\). Let \(a_{j,n}\) be the 2D Bosons associated to the 2D Young diagrams on the slice \(z=j\) of 3D Young diagrams with the relation \[[a_{j,n},a_{k,m}]=-\frac{1}{h_{1}h_{2}}\delta_{j,k}n\delta_{n+m,0}. \tag{21}\] The operators \(V_{j,n}\) of the \(W_{1+\infty}\) algebra can be represented by \(a_{j,n},\ j=1,2,\cdots,N\), which corresponds to that a 3D Young diagram can be represented by a series of 2D Young diagrams. This means that the operators \(V_{j,n}\) have the representation on the space of 3D Young diagrams. The fields \[J_{j}(z)=\sum_{n\in\mathbb{Z}}a_{j,n}z^{-n-1}\ \ {\rm and}\ \ V_{j}(z)=\sum_{n\in \mathbb{Z}}V_{j,n}z^{-j-n}. 
\tag{22}\] Let[12] \[V_{1}(z) = J_{1}(z)+J_{2}(z)+\cdots+J_{N}(z), \tag{23}\] \[V_{2}(z) = -\frac{h_{1}h_{2}}{2}\sum_{j=1}^{N}:J_{j}(z)J_{j}(z):-\frac{h_{1} h_{2}\alpha_{0}}{2}\sum_{j=1}^{N}(N+1-2j)J_{j}^{\prime}(z),\] (24) \[V_{3}(z) = \frac{1}{3}h_{1}^{2}h_{2}^{2}\sum_{j=1}^{N}:J_{1}(z)^{3}:-\frac{1 }{2}\alpha_{0}h_{1}^{2}h_{2}^{2}\sum_{j<k}J_{j}J_{k}^{\prime}(z)+\frac{1}{2} \alpha_{0}h_{1}^{2}h_{2}^{2}\sum_{j<k}J_{j}^{\prime}J_{k}(z)\] (25) \[+\frac{1}{2}\alpha_{0}h_{1}^{2}h_{2}^{2}\sum_{j=1}^{N}(N+1-2j)J_{ j}^{\prime}J_{j}(z)\] \[-\alpha_{0}^{2}h_{1}^{2}h_{2}^{2}\sum_{j=1}^{N}\left(\frac{(j-1)( N-j)}{2}-\frac{(N-1)(N-j)}{12}\right)J_{j}^{\prime\prime}(z),\] and[16] \[V_{4}(z) = -h_{1}^{3}h_{2}^{3}\left(\frac{1}{4}\sum_{j=1}^{N}J_{j}J_{j}J_{j}J_{j }(z)-\frac{\alpha_{0}}{2}\sum_{j<k}J_{j}J_{j}J_{k}^{\prime}(z)-\alpha_{0}\sum_{j <k<l}(l-3)J_{j}J_{k}^{\prime}J_{l}(z)\right.\] \[\left.+\frac{\alpha_{0}}{2}\sum_{j<k}J_{j}J_{k}^{\prime}J_{k}(z)+ \frac{\alpha_{0}}{2}\sum_{j<k}J_{j}^{\prime}J_{j}J_{k}(z)+\frac{\alpha_{0}}{2} \sum_{j<k}J_{j}^{\prime}J_{k}J_{k}(z)\right.\] \[\left.-\alpha_{0}\sum_{j<k<l}(l-3)J_{j}^{\prime}J_{k}J_{l}(z)+ \frac{\alpha_{0}}{2}\sum_{j=1}^{N}(N+1-2j)J_{j}^{\prime}J_{j}J_{j}(z)\right.\] \[\left.-\frac{\alpha_{0}^{2}}{4}\sum_{j<k}(N+1-2k)J_{j}J_{k}^{ \prime\prime}(z)-\frac{\alpha_{0}^{2}}{4}\sum_{j<k}(N+1-2j)J_{j}^{\prime\prime }J_{k}(z)\right.\] \[\left.-\alpha_{0}^{2}\sum_{j<k}(k-j)J_{j}^{\prime}J_{k}^{\prime}( z)+\frac{3}{20h_{1}h_{2}}\sum_{j=1}^{N}J_{j}^{\prime}J_{j}^{\prime}(z)-\frac{1}{10h_ {1}h_{2}}\sum_{j=1}^{N}J_{j}^{\prime\prime}J_{j}(z)+\cdots\right).\] Specially, when \(N=1,h_{1}=1,h_{2}=-1\), 3D Young diagrams become 2D Young diagrams and 3-Jack polynomials become Schur functions. We denote \(V_{j}(z)\) in this special case by \(V_{j}^{2DS}(z)\). Then \[V_{1}^{2DS}(z) = J_{1}(z), \tag{26}\] \[V_{2}^{2DS}(z) = \frac{1}{2}:J_{1}(z)^{2}:,\] (27) \[V_{3}^{2DS}(z) = \frac{1}{3}:J_{1}(z)^{3}:,\] (28) \[V_{4}^{2DS}(z) = \frac{1}{4}:J_{1}(z)^{4}:-\frac{3}{20}:J_{1}^{\prime}(z)^{2}:+ \frac{1}{10}:J_{1}^{\prime\prime}(z)J_{1}(z): \tag{29}\] with the OPE \[J_{1}(z)J_{(}w)\sim\frac{1}{(z-w)^{2}}.\] This special case \(V_{j}^{2DS}(z)\) of the \(W_{1+\infty}\) algebra have representation on the space of 2D Young diagrams or the Schur functions defined on 2D Young diagrams. The general case \(V_{j}(z)\) of the \(W_{1+\infty}\) algebra have representation on the space of 3D Young diagrams or the 3-Jack polynomials defined on 3D Young diagrams. The OPEs of \(V_{j}(z)V_{k}(w)\) can be found in [12, 16]. From the OPEs, the relations \([V_{j,m},V_{k,n}]\) can be obtained. We list the first few of them: \[[V_{1,m},V_{1,n}]= -\frac{N}{h_{1}h_{2}}m\delta_{m+n,0}=\psi_{0}m\delta_{m+n,0},\] \[[V_{1,m},V_{2,n}]= mV_{1,m+n},\] \[[V_{2,m},V_{2,n}]= (m-n)V_{2,m+n}-(\psi_{0}\sigma_{2}+\psi_{0}^{3}\sigma_{3}^{2}) \frac{m^{3}-m}{12}\delta_{m+n,0},\] \[[V_{1,m},V_{3,n}]= 2mV_{2,m+n},\] \[[V_{2,m},V_{3,n}]= -\frac{1}{6}(\sigma_{2}+\sigma_{3}^{2}\psi_{0}^{2})(m^{3}-m)V_{1,m +n}+(2m-n)V_{3,m+n}.\] ## 3 3D Bosons and 3D Bosonic Fock space We have shown the method to construct the 3D Bosons in [12]. In this section, we recall 3D Bosons and give the 3D Bosonic Fock space. We denote 3D Bosons by \(b_{n,j}\). When 3D Young diagrams have at most \(N\) layers in the \(z\)-axis direction, \(j\) in \(b_{n,j}\) equals \(1,2,\cdots,N\), which means that \(b_{n,j}=0\) when \(j>N\). Specially, when \(N=1\), \(b_{n,j}\) becomes \(b_{n,1}\), which is the normal 2D Bosons. 
This special case corresponds to that 2D Young diagrams can be treated as the 3D Young diagrams which have one layer in \(z\)-axis direction. 3D Bosons \(b_{n,j}\) can be represented by 2D Bosons \(a_{j,n}\): \[b_{n,1} = \sum_{j=1}^{N}a_{j,n},\] \[b_{n,2} = -h_{1}h_{2}(1-\frac{1}{N})\sum_{j=1}^{N}\sum_{k+l=n}:a_{j,k}a_{j,l }:+\frac{2h_{1}h_{2}}{N}\sum_{j<k}\sum_{m+l=n}:a_{j,m}a_{k,l}:\] \[+h_{3}\sum_{j=1}^{N}(N+1-2j)(-n-1)a_{j,n},\] \[b_{n,3} = 6h_{1}^{2}h_{2}^{2}\left(-\sum_{j<k<l}\sum_{m+p+q=n}:a_{j,m}a_{k,p}a_{l,q}:-\alpha_{0}\sum_{j<k}\sum_{m+q=n}(j-1)(-m-1):a_{j,m}a_{k,q}:\right.\] \[-\alpha_{0}\sum_{j<k}\sum_{m+q=n}(k-2)(-q-1):a_{j,m}a_{k,q}:- \frac{\alpha_{0}^{2}}{2}\sum_{j=1}^{N}(j-1)(j-2)(-n-1)(-n-2)a_{j,n}\] \[+\frac{N-2}{N}\sum_{j=1}^{N}\sum_{k<l}\sum_{m+p+q=n}:a_{j,m}a_{k,p}a_{l,q}:+\frac{(N-2)\alpha_{0}}{N}\sum_{j,k=1}^{N}\sum_{m+q=n}(k-1)(-q-1):a _{j,m}a_{k,q}:\] \[-\frac{(N-1)(N-2)}{3N^{2}}\sum_{j,k,l=1}^{N}\sum_{m+p+q=n}:a_{j,m} a_{k,p}a_{l,q}:+\frac{(N-2)\alpha_{0}^{2}}{2}\sum_{j=1}^{N}(j-1)(-n-1)(-n-2)a_{j,n}\] \[+\frac{(N-2)\alpha_{0}}{2}\sum_{j<k}\sum_{m+q=n}(-m-q-2):a_{j,m} a_{k,q}:-\frac{(N-1)(N-2)\alpha_{0}^{2}}{12}\sum_{j=1}^{N}(-n-1)(-n-2)a_{j,n}\] \[\left.-\frac{(N-1)(N-2)\alpha_{0}}{2N}\sum_{j,k=1}^{N}\sum_{m+q=n }(-m-1):a_{j,m}a_{k,q}:\right).\] The relations between 3D Bosons can be obtained from the OPEs \(B_{j}(z)B_{k}(w)\)[12], we list some of them: \[[b_{m,1},b_{n,1}] = \psi_{0}m\delta_{m+n,0}, \tag{30}\] \[[b_{m,1},b_{n,j\geq 2}] = 0,\] (31) \[[b_{m,2},b_{n,2}] = 2(m-n)b_{m+n,2}-2(1+\sigma_{2}\psi_{0}+\psi_{0}^{3}\sigma_{3}^{2} )\frac{m^{3}-1}{6}\delta_{m+n,0},\] (32) \[[b_{m,2},b_{n,3}] = 2(2m-n)b_{m+n,3}. \tag{33}\] Denote the vacuum state associated to 2D Bosons \(a_{j,n}\) by \(|0\rangle_{j}\), and Define \[|0\rangle=\sum_{j=1}^{N}|0\rangle_{j}.\] We know that \(b_{n,j}|0\rangle=0\) for any \(n>0\), and \(b_{n,j}|0\rangle=0\) for any \(n\leq 0,j>-n\). Denote the algebra generated by 3D Bosons \(b_{n,j}\) by \(\mathfrak{B}\). Define the 3D Bosonic Fock space by \[\mathfrak{B}\cdot|0\rangle:=\{a|0\rangle\ |\ a\in\mathfrak{B}\}. \tag{34}\] Then 3D Bosonic Fock space has a basis \[\{b_{-n_{1},j_{1}}b_{-n_{2},j_{2}}\cdots b_{-n_{r},j_{r}}|0\rangle\ |\ 0<n_{1} \leq n_{2}\leq\cdots\leq n_{r},j_{i}\leq n_{i}\}. \tag{35}\] Note that \[b_{-n,1}b_{-m,j\geq 2}|0\rangle=b_{-m,j\geq 2}b_{-n,1}|0\rangle,\] which is the same with that for 2D, but \(b_{-n,i\geq 2}b_{-m,j\geq 2}|0\rangle\) and \(b_{-m,j\geq 2}b_{-n,i\geq 2}|0\rangle\) may do not equal to each other. For example, \[b_{-2,2}b_{-3,2}|0\rangle=b_{-3,2}b_{-2,2}|0\rangle+2b_{-5,2}|0\rangle.\] Define the degree of \(b_{-n,j}\) by \(n\), and the degree of \(|0\rangle\) by zero. Then 3D Bosonic Fock space is a graded space. The degree zero part is \(\mathbb{C}|0\rangle\). The degree one part is \(\mathbb{C}b_{-1,1}|0\rangle\). A basis of degree 2 part is \(\{b_{-1,1}^{2}|0\rangle,b_{-2,1}|0\rangle,b_{-2,2}|0\rangle\}\). The 3D Bosonic Fock space is isomorphic to the space of 3D Young diagrams, it is also isomorphic to the space of 3-Jack polynomials. Let \[P_{n,k}=b_{-n,k}|0\rangle \tag{36}\] as in [12], and we denote \[b_{-n_{1},j_{1}}b_{-n_{2},j_{2}}\cdots b_{-n_{r},j_{r}}|0\rangle\] by \(P_{n_{1},j_{1}}P_{n_{2},j_{2}}\cdots P_{n_{r},j_{r}}\), which explains the variables in 3-Jack polynomials. Note that for any 3-Jack polynomial \(J_{\pi}\), only \(P_{n,1}J_{\pi}\) equals the normal multiplication of \(P_{n,1}\) and \(J_{\pi}\), \(P_{n,j\geq 2}J_{\pi}\) equals the action \(b_{-n,j\geq 2}\cdot J_{\pi}\). 
We give an example. Let \[a_{j,-n}|0\rangle_{j}=p_{j,n}|0\rangle_{j},\ \ a_{j,n}|0\rangle_{j}=-\frac{1}{h _{1}h_{2}}\frac{\partial}{\partial p_{j,n}}|0\rangle_{j},\ \ n>0,\] where \(p_{j,n}\) is the normal power sum on the slice \(z=j\) of 3D Young diagrams. We know that[17] \[P_{2,1} = \sum_{j=1}^{N}p_{j,2},\] \[P_{2,2} = -h_{1}h_{2}\sum_{j=1}^{N}p_{j,1}^{2}+\frac{h_{1}h_{2}}{N}(\sum_{ j=1}^{N}p_{j,1})^{2}-\sum_{j=1}^{N}(N-2j+1)h_{3}p_{j,2}.\] It can be calculated that \(P_{2,1}P_{2,2}\) equals the normal multiplication \[\sum_{j=1}^{N}p_{j,2}\left(-h_{1}h_{2}\sum_{j=1}^{N}p_{j,1}^{2}+\frac{h_{1}h_{ 2}}{N}(\sum_{j=1}^{N}p_{j,1})^{2}-\sum_{j=1}^{N}(N-2j+1)h_{3}p_{j,2}\right),\] but \(P_{2,2}P_{2,2}\) does not equal \[\left(-h_{1}h_{2}\sum_{j=1}^{N}p_{j,1}^{2}+\frac{h_{1}h_{2}}{N}(\sum_{j=1}^{N }p_{j,1})^{2}-\sum_{j=1}^{N}(N-2j+1)h_{3}p_{j,2}\right)^{2}.\] ## 4 3D Boson representation of affine Yangian of \(\mathfrak{gl}(1)\) In this section, we use 3D Bosons to represent the affine Yangian of \(\mathfrak{gl}(1)\). From the relations in the affine Yangian of \(\mathfrak{gl}(1)\), we only need to represent the operators \(\psi_{3},\ e_{0},\ f_{0}\). **Theorem 4.1**.: _The affine Yangian of \(\mathfrak{gl}(1)\) can be represented by the 3D Bosons \(b_{n,j}\) in the following way:_ \[e_{0} = b_{-1,1}, \tag{37}\] \[f_{0} = -b_{1,1},\] (38) \[\psi_{3} = -\frac{1}{2}b_{0,3}+\frac{3}{\psi_{0}}\sum_{n>0}(b_{-n,1}b_{n,2}+b_ {-n,2}b_{n,1})\] (39) \[+\frac{3}{\psi_{0}^{2}}\sum_{n,m>0}(b_{-n,1}b_{-m,1}b_{n+m,1}+b_ {-n-m,1}b_{n,1}b_{m,1})\] \[+3\sigma_{3}\sum_{n>0}nb_{-n,1}b_{n,1}-\sigma_{3}\psi_{0}\frac{ \psi_{2}}{2},\] _with_ \[\psi_{2}=b_{0,2}+\frac{2}{\psi_{0}}\sum_{n>0}b_{-n,1}b_{n,1}. \tag{40}\] From [18], we know that the operators of the affine Yangian of \(\mathfrak{gl}(1)\) can be represented by a series of 2D Bosons \(a_{j,n}\): \[\psi_{2} = -2h_{1}h_{2}\sum_{j=1}^{N}\sum_{n>0}a_{j,-n}a_{j,n}, \tag{41}\] \[\psi_{3} = 3h_{1}^{2}h_{2}^{2}\sum_{j=1}^{N}\sum_{n,m>0}(a_{j,-n-m}a_{j,n} a_{j,m}+a_{j,-n}a_{j,-m}a_{j,n+m})\] (42) \[+6\sigma_{3}\sum_{j<k}\sum_{n>0}na_{j,-n}a_{k,n}+(-4N+6j-3)\sigma_ {3}\sum_{j=1}^{N}\sum_{n>0}a_{j,-n}a_{j,n}\] \[+3\sigma_{3}\sum_{j=1}^{N}\sum_{n>0}na_{j,-n}a_{j,n},\] and \[e_{0}=\sum_{j=1}^{N}a_{j,-1},\ f_{0}=-\sum_{j=1}^{N}a_{j,1}. \tag{43}\] It is clear that the relations (37) and (38) hold. The relations (39) and (40) can be proved by direct calculation, since \(b_{n,j}\) can be represented by this series of 2D Bosons \(a_{j,n}\). In the following, we obtain (39) and (40) by a simpler way. We need two lemmas. **Lemma 4.2**.: _The representation of \(V_{3,0}\) by a series of 2D Bosons \(a_{j,n}\) is_ \[V_{3,0} = h_{1}^{2}h_{2}^{2}\sum_{j=1}^{N}\sum_{n,m>0}(a_{j,-n-m}a_{j,n}a_ {j,m}+a_{j,-n}a_{j,-m}a_{j,n+m}) \tag{44}\] \[-\sigma_{3}\sum_{j<k}\sum_{n>0}(na_{k,-n}a_{j,n}-na_{j,-n}a_{k,n})\] \[-\sigma_{3}\sum_{j=1}^{N}\sum_{n>0}(N-2j+1)a_{j,-n}a_{j,n}.\] This result can be obtained from (25). **Lemma 4.3**.: _The representation of \(V_{3,0}\) by 3D Bosons \(b_{n,j}\) is_ \[V_{3,0} = -\frac{1}{6}b_{0,3}+\frac{1}{\psi_{0}}\sum_{n>0}(b_{-n,1}b_{n,2}+b_ {-n,2}b_{n,1}) \tag{45}\] \[+\frac{1}{\psi_{0}^{2}}\sum_{n,m>0}(b_{-n,1}b_{-m,1}b_{n+m,1}+b_{- n-m,1}b_{n,1}b_{m,1}).\] This result is obtained from \[V_{3}(z)=-\frac{1}{6}B_{3}(z)+\frac{1}{\psi_{0}}B_{1}B_{2}(z)+\frac{1}{3\psi_{0 }^{2}}B_{1}B_{1}B_{1}(z). \tag{46}\] Note that this relation is slightly different from that in [12] since \(V_{3}(z)\) here equals \(V_{3}(z)\) in [12] multiplied by \(-1\). The proof of (39). 
\[\psi_{3}-3V_{3,0} = 3\sigma_{3}\sum_{j<k}\sum_{n>0}\left(na_{k,-n}a_{j,n}+na_{j,-n}a _{k,n}\right)\] \[+3\sigma_{3}\sum_{j=1}^{N}\sum_{n>0}na_{j,-n}a_{j,n}-N\sigma_{3} \sum_{j=1}^{N}\sum_{n>0}a_{j,-n}a_{j,n}\] \[= 3\sigma_{3}\sum_{n>0}b_{-n,1}b_{n,1}-\sigma_{3}\psi_{0}\frac{ \psi_{2}}{2}.\] The proof of (40). It holds since \[\psi_{2}=2V_{2,0}, \tag{47}\] and \[V_{2}(z)=\frac{1}{2}B_{2}(z)+\frac{1}{2\psi_{0}}B_{1}B_{1}(z). \tag{48}\] This relation is obtained in [12]. We give some examples about the eigenstates of \(\psi_{2}\) and \(\psi_{3}\) by (40) and (39). The eigenstates of \(\psi_{2}\): For 3D Young diagram of one box, \[\psi_{2}b_{-1,1}|0\rangle=\frac{2}{\psi_{0}}b_{-1,1}b_{1,1}b_{-1,1}|0\rangle=2 b_{-1,1}|0\rangle.\] For 3D Young diagram of two boxes, \[\psi_{2}b_{-1,1}^{2}|0\rangle = \frac{2}{\psi_{0}}b_{-1,1}b_{1,1}b_{-1,1}^{2}|0\rangle=4b_{-1,1} ^{2}|0\rangle,\] \[\psi_{2}b_{-2,1}|0\rangle = \frac{2}{\psi_{0}}b_{-2,1}b_{2,1}b_{-2,1}|0\rangle=4b_{-2,1}|0\rangle,\] \[\psi_{2}b_{-2,2}|0\rangle = b_{0,2}b_{-2,2}|0\rangle=4b_{-2,2}|0\rangle.\] For 3D Young diagram of three boxes, \[\psi_{2}b_{-1,1}^{3}|0\rangle = \frac{2}{\psi_{0}}b_{-1,1}b_{1,1}b_{-1,1}^{3}|0\rangle=6b_{-1,1}^{3} |0\rangle,\] \[\psi_{2}b_{-1,1}b_{-2,1}|0\rangle = \frac{2}{\psi_{0}}b_{-1,1}b_{1,1}b_{-1,1}b_{-2,1}|0\rangle+\frac{2 }{\psi_{0}}b_{-2,1}b_{2,1}b_{-1,1}b_{-2,1}|0\rangle\] \[= 6b_{-1,1}b_{-2,1}|0\rangle,\] \[\psi_{2}b_{-1,1}b_{-2,2}|0\rangle = \frac{2}{\psi_{0}}b_{-1,1}b_{1,1}b_{-1,1}b_{-2,2}|0\rangle+b_{0,2} b_{-1,1}b_{-2,2}|0\rangle\] \[= 6b_{-1,1}b_{-2,2}|0\rangle,\] \[\psi_{2}b_{-3,1}|0\rangle = \frac{2}{\psi_{0}}b_{-3,1}b_{3,1}b_{-3,1}|0\rangle=6b_{-3,1}|0\rangle,\] \[\psi_{2}b_{-3,2}|0\rangle = b_{0,2}b_{-3,2}|0\rangle=6b_{-3,2}|0\rangle,\] \[\psi_{2}b_{-3,3}|0\rangle = b_{0,2}b_{-3,3}|0\rangle=6b_{-3,3}|0\rangle.\] The eigenstates of \(\psi_{3}\): For 3D Young diagram of one box, \[\psi_{3}b_{-1,1}|0\rangle=3\sigma_{3}b_{-1,1}b_{1,1}b_{-1,1}|0\rangle-\sigma_{ 3}\psi_{0}\frac{\psi_{2}}{2}b_{-1,1}|0\rangle=2\sigma_{3}\psi_{0}b_{-1,1}|0\rangle.\] For 3D Young diagram of two boxes, \[\psi_{3}b_{-1,1}^{2}|0\rangle = \frac{3}{\psi_{0}^{2}}b_{-2,1}b_{1,1}b_{1,1}b_{-1,1}^{2}|0\rangle+ 3\sigma_{3}b_{-1,1}b_{1,1}b_{-1,1}^{2}|0\rangle-\sigma_{3}\psi_{0}\frac{\psi_ {2}}{2}b_{-1,1}^{2}|0\rangle\] \[= 6b_{-2,1}|0\rangle+3\sigma_{3}\cdot 2\psi_{0}b_{-1,1}^{2}|0 \rangle-2\sigma_{3}\psi_{0}b_{-1,1}^{2}|0\rangle\] \[= 6b_{-2,1}|0\rangle+4\sigma_{3}\psi_{0}b_{-1,1}^{2}|0\rangle,\] \[\psi_{3}b_{-2,1}|0\rangle = \frac{3}{\psi_{0}}b_{-2,2}b_{2,1}b_{-2,1}|0\rangle+\frac{3}{\psi_ {0}^{2}}b_{-1,1}b_{-1,1}b_{2,1}b_{-2,1}|0\rangle\] \[+3\sigma_{3}2b_{-2,1}b_{2,1}b_{-2,1}|0\rangle-\sigma_{3}\psi_{0} \frac{\psi_{2}}{2}b_{-2,1}|0\rangle\] \[= 6b_{-2,2}|0\rangle+\frac{6}{\psi_{0}}b_{-1,1}^{2}|0\rangle+12 \sigma_{3}\psi_{0}b_{-2,1}|0\rangle-2\sigma_{3}\psi_{0}b_{-2,1}|0\rangle\] \[= 6b_{-2,2}|0\rangle+\frac{6}{\psi_{0}}b_{-1,1}^{2}|0\rangle+10 \sigma_{3}\psi_{0}b_{-2,1}|0\rangle,\] \[\psi_{3}b_{-2,2}|0\rangle = \frac{3}{\psi_{0}}b_{-2,1}b_{2,2}b_{-2,2}|0\rangle-\sigma_{3}\psi _{0}\frac{\psi_{2}}{2}b_{-2,2}|0\rangle\] \[= -\frac{6}{\psi_{0}}(1+\sigma_{2}\psi_{0}+\sigma_{3}^{2}\psi_{0}^{ 3})b_{-2,1}|0\rangle-2\sigma_{3}\psi_{0}b_{-2,2}|0\rangle,\] then we have \[\psi_{3}\left((1+h_{2}h_{3})\frac{1}{\psi_{0}}b_{-1,1}^{2}|0 \rangle+(1+h_{2}h_{3}\psi_{0})h_{1}b_{-2,1}|0\rangle+b_{-2,2}|0\rangle\right)\] \[= (6h_{1}+4\sigma_{3}\psi_{0})\left((1+h_{2}h_{3}\psi_{0})\frac{1}{ 
\psi_{0}}b_{-1,1}^{2}|0\rangle+(1+h_{2}h_{3}\psi_{0})h_{1}b_{-2,1}|0\rangle+b_{-2,2}|0\rangle\right),\] which means that, up to normalization, \[\tilde{J}=(1+h_{2}h_{3}\psi_{0})\frac{1}{\psi_{0}}b_{-1,1}^{2}|0\rangle+(1+h_{2}h_{3}\psi_{0})h_{1}b_{-2,1}|0\rangle+b_{-2,2}|0\rangle\] is the 3-Jack polynomial of the two-box 3D Young diagram with boxes at \((0,0,0)\) and \((0,1,0)\): it is an eigenstate of \(\psi_{3}\) with eigenvalue \(6h_{1}+4\sigma_{3}\psi_{0}\), in agreement with (20).

## 5 3D cut-and-join operators and some matrix models

In this section, we construct 3D cut-and-join operators and use 3D Bosons to represent the operators in some matrix models.

### 2D/3D cut-and-join operators

Since Schur functions defined on 2D Young diagrams are the eigenstates of the 2D cut-and-join operators, in this subsection we consider the special case \(N=1,h_{1}=1,h_{2}=-1\) of the \(W_{1+\infty}\) algebra and the affine Yangian of \(\mathfrak{gl}(1)\), where \(N\) is the maximal number of layers of 3D Young diagrams in the \(z\)-axis direction. Under this special case, 3D Young diagrams become 2D Young diagrams, and 3-Jack polynomials become Schur functions. We denote \(a_{1,n}\) by \(a_{n}\); in this special case \[[a_{m},a_{n}]=m\delta_{m+n,0},\] and these operators have a representation on Schur functions given by \[a_{-n}=p_{n},\quad a_{n}=n\frac{\partial}{\partial p_{n}}\] for \(n>0\). The first two examples of the cut-and-join operators in terms of the power sums \(p_{n}\) are [19] \[W_{1} = \sum_{n=1}^{\infty}np_{n}\frac{\partial}{\partial p_{n}},\] \[W_{2} = \frac{1}{2}\sum_{n,m=1}^{\infty}\left((n+m)p_{n}p_{m}\frac{\partial}{\partial p_{n+m}}+nmp_{n+m}\frac{\partial^{2}}{\partial p_{n}\partial p_{m}}\right).\] We use the notation \(\psi_{j}^{2DS}\) to denote \(\psi_{j}\) in the special case \(N=1,h_{1}=1,h_{2}=-1\), and the same for other operators in the affine Yangian of \(\mathfrak{gl}(1)\) and the operators in the \(W_{1+\infty}\) algebra. Clearly, \[\frac{1}{2}\psi_{2}^{2DS}=V_{2,0}^{2DS}=W_{1}, \tag{49}\] \[\frac{1}{3}\psi_{3}^{2DS}=V_{3,0}^{2DS}=W_{2}. \tag{50}\] Then we can treat \(W_{j}\), \(j=1,2\), as the reduction of \(V_{j+1,0}\) or \(\frac{1}{j+1}\psi_{j+1}\) from 3D to 2D. We define the 3D cut-and-join operators from \(\psi_{j}\) for two reasons. The first is that it can be checked that 3-Jack polynomials are not the eigenstates of \(V_{3,0}\), while they are the eigenstates of all \(\psi_{j}\). For the second reason, we consider the special case \(N=1\) with \(h_{1},h_{2}\) arbitrary. We denote \(\psi_{j}\) in this special case by \(\psi_{j}^{2D}\) and similarly for other operators. Then \[V_{3,0}^{2D} = h_{1}^{2}h_{2}^{2}\sum_{n,m>0}(a_{-n-m}a_{n}a_{m}+a_{-n}a_{-m}a_{n+m}),\] \[\psi_{3}^{2D} = 3h_{1}^{2}h_{2}^{2}\sum_{n,m>0}(a_{-n-m}a_{n}a_{m}+a_{-n}a_{-m}a_{n+m})+\sigma_{3}\sum_{n>0}(3n-1)a_{-n}a_{n} \tag{51}\] with \([a_{n},a_{m}]=-\frac{1}{h_{1}h_{2}}n\delta_{n+m,0}\).
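As a quick consistency check of the Schur-function case above (a sketch, not code from the paper), one can apply \(W_{2}\), realized on power sums as in the previous displays, to the two degree-two Schur functions and confirm that it acts diagonally:

```python
# A small sympy check that the 2D cut-and-join operator W_2 acts diagonally on
# Schur functions written in power sums: s_(2) = (p1^2 + p2)/2 and
# s_(1,1) = (p1^2 - p2)/2.  This is an illustrative sketch only.
import sympy as sp

p = {i: sp.Symbol(f'p{i}') for i in range(1, 5)}

def W2(f, nmax=2):
    # W_2 = (1/2) sum_{n,m} [ (n+m) p_n p_m d/dp_{n+m} + n m p_{n+m} d^2/(dp_n dp_m) ]
    out = 0
    for n in range(1, nmax + 1):
        for m in range(1, nmax + 1):
            out += sp.Rational(1, 2) * (n + m) * p[n] * p[m] * sp.diff(f, p[n + m])
            out += sp.Rational(1, 2) * n * m * p[n + m] * sp.diff(f, p[n], p[m])
    return sp.expand(out)

s2 = (p[1]**2 + p[2]) / 2     # Schur function of the partition (2)
s11 = (p[1]**2 - p[2]) / 2    # Schur function of the partition (1,1)

print(sp.simplify(W2(s2) - s2))     # 0: eigenvalue +1
print(sp.simplify(W2(s11) + s11))   # 0: eigenvalue -1
```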
The eigenstates of \(\psi_{3}^{2D}\) are Jack polynomials, and in the deformed Hurwitz-Kontsevich model, the \(W\)-operator \[\frac{1}{2}\sum_{k,l=1}^{\infty}\left(klp_{k+l}\frac{\partial}{ \partial p_{k}}\frac{\partial}{\partial p_{l}}-h_{1}h_{2}(k+l)p_{k}p_{l}\frac{ \partial}{\partial p_{k+l}}\right)\] \[+ \frac{1}{2}\sum_{k=1}^{\infty}((h_{1}+h_{2})(k-1)+2\psi_{0}\sqrt {\beta}N)kp_{k}\frac{\partial}{\partial p_{k}}\] equals \[\frac{1}{6}\psi_{3}^{2D}+\frac{1}{2}(\psi_{0}\sqrt{\beta}N-\frac{1}{3}\psi_{0} \sigma_{3})\psi_{2}^{2D}\] with \(a_{-n}=p_{n},\ a_{n}=-\frac{1}{h_{1}h_{2}}\frac{\partial}{\partial p_{n}}\) for \(n>0\). For this two reasons, we define 3D cut-and-join operators as follows. **Definition 5.1**.: _The 3D cut-and-join operators \(W_{n}^{3D}\) are defined by_ \[W_{n}^{3D}=\frac{1}{n+1}\psi_{n+1}. \tag{52}\] For example, \[W_{1}^{3D} = \frac{1}{2}\psi_{2}=\frac{1}{2}b_{0,2}+\frac{1}{\psi_{0}}\sum_{n> 0}b_{-n,1}b_{n,1},\] \[W_{2}^{3D} = \frac{1}{3}\psi_{3}=-\frac{1}{6}b_{0,3}+\frac{1}{\psi_{0}}\sum_{n >0}(b_{-n,1}b_{n,2}+b_{-n,2}b_{n,1})\] \[+\frac{1}{\psi_{0}^{2}}\sum_{n,m>0}(b_{-n,1}b_{-m,1}b_{n+m,1}+b_{ -n-m,1}b_{n,1}b_{m,1})\] \[+\sigma_{3}\sum_{n>0}nb_{-n,1}b_{n,1}-\frac{1}{6}\sigma_{3}\psi_{ 0}b_{0,2}-\frac{1}{3}\sigma_{3}\sum_{n>0}b_{-n,1}b_{n,1}.\] From the properties of \(\psi_{j}\), we know that every two 3D cut-and-join operators are commutative, and the 3-Jack polynomials are their eigenstates. ### Some matrix models In this subsection, we use 3D Bosons to represent the \(W\)-operators in matrix models. We recall the partition function hierarchy first. The Hurwitz-Kontsevich model [20] \[Z_{0}\{p\}=\int_{\tilde{N}\times\tilde{N}}\sqrt{\det\left(\frac{\sinh(\frac{ \phi\otimes I-\tilde{I}\otimes\phi}{2})}{\frac{\phi\otimes I-\tilde{I}\otimes \phi}{2}}\right)}d\phi e^{-\frac{1}{2t}\mathrm{Tr}\phi^{2}-\frac{\tilde{N}}{2} \mathrm{Tr}\phi-\frac{1}{6}t\tilde{N}^{3}+\frac{1}{24}t\tilde{N}+\mathrm{Tr}(e ^{\phi}\psi)}, \tag{53}\] where \(\psi\) is an \(\tilde{N}\times\tilde{N}\) matrix and the time variables \(p_{k}=\mathrm{Tr}\psi^{k}\). Here \(\tilde{N}\) has no relations with \(N\) before. It is generated by the exponent of the Hurwitz operator \(\hat{W_{0}}\) acting on the function \(e^{p_{1}/e^{t\tilde{N}}}\)[8], \[Z_{0}\{p\}=e^{t\hat{W_{0}}}\cdot e^{p_{1}/e^{t\tilde{N}}}=\sum_{\lambda}e^{tc_ {\lambda}}S_{\lambda}\{p_{k}=e^{-t\tilde{N}}\delta_{k,1}\}S_{\lambda}\{p\}, \tag{54}\] where \(t\) is a deformation parameter and \(S_{\lambda}\) are Schur functions, the Hurwitz operator \(\hat{W_{0}}\) is given by \[\hat{W_{0}}=\frac{1}{2}\sum_{k,l=1}^{\infty}\left((k+l)p_{k}p_{l}\frac{ \partial}{\partial p_{k+l}}+klp_{k+l}\frac{\partial}{\partial p_{k}}\frac{ \partial}{\partial p_{l}}\right)+\tilde{N}\sum_{k=1}^{\infty}kp_{k}\frac{ \partial}{\partial p_{k}}, \tag{55}\] and \(c_{\lambda}=\sum_{(i,j)\in\lambda}(\tilde{N}-i+j)\). Introduce \[E_{1}=[\hat{W}_{0},p_{1}],\quad\hat{W}_{-1}=[\hat{W}_{0},E_{1}], \tag{56}\] the operators \[\hat{W}_{-n}=\frac{1}{(n-1)!}\underbrace{[\hat{W}_{-1},[\hat{W}_{-1},\cdots[ \hat{W}_{-1},E_{1}]\ldots]],\ \ n\geq 2}\] give the partition function hierarchies which include the Gaussian hermitian one-matrix model, \(\tilde{N}\times\tilde{N}\) complex matrix model[8]. In the following, we use 3D Bosons to represent the 3D generations of these \(W\)-operators. 
The 3D Hurwitz-Kontsevich model [17] \[Z_{0}^{3D}\{p\}=e^{t\hat{W}_{0}^{3D}}\cdot e^{\frac{P_{1,1}}{\psi_{0}e^{t\hat{ M}}}}, \tag{57}\] where \[\hat{W}_{0}^{3D} = \frac{1}{2}\sum_{i=1}^{N}\sum_{k,l=1}^{\infty}\left(klp_{i,k+l} \frac{\partial}{\partial p_{i,k}}\frac{\partial}{\partial p_{i,l}}-h_{1}h_{2} (k+l)p_{i,k}p_{i,l}\frac{\partial}{\partial p_{i,k+l}}\right) \tag{58}\] \[+(h_{1}+h_{2})\sum_{i_{1}<i_{2}}\sum_{k>0}k^{2}p_{i_{1},k}\frac{ \partial}{\partial p_{i_{2},k}}\] \[+\frac{1}{2}\sum_{j=1}^{N}\sum_{k=1}^{\infty}\left((h_{1}+h_{2}) (k-2N+2j-1)+2\psi_{0}\sqrt{\beta}\tilde{N}\right)kp_{j,k}\frac{\partial}{ \partial p_{j,k}}.\] This operator \(\hat{W}_{0}^{3D}\) can be represented by 3D Bosons \[\hat{W}_{0}^{3D} = -\frac{1}{12}b_{0,3}+\frac{1}{2\psi_{0}}\sum_{n>0}(b_{-n,1}b_{n, 2}+b_{-n,2}b_{n,1}) \tag{59}\] \[+\frac{1}{2\psi_{0}^{2}}\sum_{n,m>0}(b_{-n,1}b_{-m,1}b_{n+m,1}+b _{-n-m,1}b_{n,1}b_{m,1})+\frac{1}{2}\sigma_{3}\sum_{n>0}nb_{-n,1}b_{n,1}\] \[+\frac{1}{2}\left(\psi_{0}\sqrt{\beta}\tilde{N}-\frac{1}{2}\psi _{0}\sigma_{3}\right)\left(b_{0,2}+\frac{2}{\psi_{0}}\sum_{n>0}b_{-n,1}b_{n,1 }\right).\] The 3D Boson representation of other operators can be obtained from this equation, for example, \[E_{1}^{3D} = [W_{0}^{3D},b_{-1,1}] \tag{60}\] \[= \frac{1}{2}b_{-1,2}+\frac{1}{\psi_{0}}\sum_{n>0}b_{-n-1}b_{n,1}+ \psi_{0}\sqrt{\beta}\tilde{N}b_{-1,1}.\] ## Data availability statement The data that support the findings of this study are available from the corresponding author upon reasonable request. ## Declaration of interest statement The authors declare that we have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements This research is supported by the National Natural Science Foundation of China under Grant No. 12101184 and No. 11871350, and supported by the Key Scientific Research Project in Colleges and Universities of Henan Province No. 22B110003.
2307.00147
Maximal $k$-Edge-Connected Subgraphs in Almost-Linear Time for Small $k$
We give the first almost-linear time algorithm for computing the \emph{maximal $k$-edge-connected subgraphs} of an undirected unweighted graph for any constant $k$. More specifically, given an $n$-vertex $m$-edge graph $G=(V,E)$ and a number $k = \log^{o(1)}n$, we can deterministically compute in $O(m+n^{1+o(1)})$ time the unique vertex partition $\{V_{1},\dots,V_{z}\}$ such that, for every $i$, $V_{i}$ induces a $k$-edge-connected subgraph while every superset $V'_{i}\supset V_{i}$ does not. Previous algorithms with linear time work only when $k\le2$ [Tarjan SICOMP'72], otherwise they all require $\Omega(m+n\sqrt{n})$ time even when $k=3$ [Chechik et al. SODA'17; Forster et al. SODA'20]. Our algorithm also extends to the decremental graph setting; we can deterministically maintain the maximal $k$-edge-connected subgraphs of a graph undergoing edge deletions in $m^{1+o(1)}$ total update time. Our key idea is a reduction to the dynamic algorithm supporting pairwise $k$-edge-connectivity queries [Jin and Sun FOCS'20].
Thatchaphol Saranurak, Wuwei Yuan
2023-06-30T21:39:01Z
http://arxiv.org/abs/2307.00147v1
# Maximal \(k\)-Edge-Connected Subgraphs in Almost-Linear Time for Small \(k\) ###### Abstract We give the first almost-linear time algorithm for computing the _maximal \(k\)-edge-connected subgraphs_ of an undirected unweighted graph for any constant \(k\). More specifically, given an \(n\)-vertex \(m\)-edge graph \(G=(V,E)\) and a number \(k=\log^{o(1)}n\), we can deterministically compute in \(O(m+n^{1+o(1)})\) time the unique vertex partition \(\{V_{1},\ldots,V_{z}\}\) such that, for every \(i\), \(V_{i}\) induces a \(k\)-edge-connected subgraph while every superset \(V_{i}^{\prime}\supset V_{i}\) does not. Previous algorithms with linear time work only when \(k\leq 2\)[Tarjan SICOMP'72], otherwise they all require \(\Omega(m+n\sqrt{n})\) time even when \(k=3\)[Chechik et al. SODA'17; Forster et al. SODA'20]. Our algorithm also extends to the decremental graph setting; we can deterministically maintain the maximal \(k\)-edge-connected subgraphs of a graph undergoing edge deletions in \(m^{1+o(1)}\) total update time. Our key idea is a reduction to the dynamic algorithm supporting pairwise \(k\)-edge-connectivity queries [Jin and Sun FOCS'20]. Introduction We study the problem of efficiently computing the _maximal \(k\)-edge-connected subgraphs_. Given an undirected unweighted graph \(G=(V,E)\) with \(n\) vertices and \(m\) edges, we say that \(G\) is _\(k\)-edge-connected_ if one needs to delete at least \(k\) edges to disconnect \(G\). The maximal \(k\)-edge-connected subgraphs of \(G\) is a unique vertex partition \(\{V_{1},\ldots,V_{z}\}\) of \(V\) such that, for every \(i\), the induced subgraph \(G[V_{i}]\) is \(k\)-edge-connected and there is no strict superset \(V_{i}^{\prime}\supset V_{i}\) where \(G[V_{i}^{\prime}]\) is \(k\)-edge-connected. This fundamental graph problem has been intensively studied. Since the 70's, Tarjan [14] showed an optimal \(O(m)\)-time algorithm when \(k=2\). For larger \(k\), the folklore recursive mincut algorithm takes \(\tilde{O}(mn)\) time1 and there have been significant efforts from the database community in devising faster heuristics [13, 14, 15, 16] but they all require \(\Omega(mn)\) time in the worst case. Eventually in 2017, Chechik et al. [13] broke the \(O(mn)\) bound to \(\tilde{O}(m\sqrt{n}k^{O(k)})\) using a novel approach based on _local_ cut algorithms. Forster et al. [12] then improved the local cut algorithm and gave a faster Monte Carlo randomized algorithm with \(\tilde{O}(mk+n^{3/2}k^{3})\) running time. Very recently, Geogiadis et al. [1] showed a deterministic algorithm with \(\tilde{O}(m+n^{3/2}k^{8})\) time and also how to sparsify a graph to \(O(nk\log n)\) edges while preserving maximal \(k\)-edge-connected subgraphs in \(O(m)\) time. Thus, the factor \(m\) in the running time of all algorithms can be improved to \(O(nk\log n)\) while paying an \(O(m)\) additive term. The \(O(mn)\) bound has also been improved even in more general settings such as directed graphs and/or vertex connectivity [1, 15, 16] as well as weighted undirected graphs [17]. Nonetheless, in the simplest setting of undirected unweighted graphs where \(m=O(n)\) and \(k=O(1)\), the \(\Omega(n\sqrt{n})\) bound remains the state of the art since 2017. Footnote 1: The algorithm computes a global minimum cut \((A,B)\) (using e.g. Karger’s algorithm [10]) and return \(\{V\}\) if the cut size of \((A,B)\) is at least \(k\). Otherwise, recurse on both \(G[A]\) and \(G[B]\) and return the union of the answers of the two recursions. 
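For concreteness, the folklore recursive algorithm from Footnote 1 can be written in a few lines. The sketch below (not one of this paper's algorithms) uses networkx and the deterministic Stoer–Wagner global minimum cut in place of Karger's randomized algorithm; it only illustrates the \(\tilde{O}(mn)\)-time baseline.

```python
# Folklore recursive minimum-cut algorithm for maximal k-edge-connected
# subgraphs (illustrative sketch; Stoer-Wagner stands in for Karger's algorithm).
import networkx as nx

def maximal_kec_subgraphs_folklore(G, k):
    if G.number_of_nodes() <= 1:
        return [set(G.nodes())]            # a single vertex is k-edge-connected by convention
    if not nx.is_connected(G):             # handle pieces created by earlier cuts
        return [part
                for C in nx.connected_components(G)
                for part in maximal_kec_subgraphs_folklore(G.subgraph(C).copy(), k)]
    cut_value, (A, B) = nx.stoer_wagner(G)  # deterministic global minimum cut
    if cut_value >= k:
        return [set(G.nodes())]             # the whole remaining graph is k-edge-connected
    return (maximal_kec_subgraphs_folklore(G.subgraph(A).copy(), k)
            + maximal_kec_subgraphs_folklore(G.subgraph(B).copy(), k))
```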
Let us discuss the closely related problem called _\(k\)-edge-connected components_. The goal of this problem is to compute the unique vertex partition \(\{\hat{V}_{1},\ldots,\hat{V}_{z^{\prime}}\}\) of \(V\) such that, each vertex pair \((s,t)\) is in the same part \(\hat{V}_{i}\) iff the \((s,t)\)-minimum cut in \(G\) (not in \(G[\hat{V}_{i}]\)) is at least \(k\). The partition of the maximal \(k\)-edge-connected subgraphs is always a refinement of the \(k\)-edge-connected components and the refinement can be strict. See Figure 1 for example. Very recently, the Gomory-Hu tree algorithm by Abboud et al. [1] implies that \(k\)-edge-connected components can be computed in \(m^{1+o(1)}\) time in undirected unweighted graphs. This algorithm, however, does not solve nor imply anything to our problem. See Appendix A for a more detailed discussion. It is an intriguing question whether one can also obtain an almost-linear time algorithm for maximal \(k\)-edge-connected subgraphs, or there is a separation between these two closely related problems. **Our results.** In this paper, we show the first almost-linear time algorithm when \(k=\log^{o(1)}n\), answering the above question affirmatively at least for small \(k\). **Theorem 1.1**.: _There is a deterministic algorithm that, given an undirected unweighted graph \(G\) with \(n\) vertices and \(m\) edges, computes the maximal \(k\)-edge-connected subgraphs of \(G\) in \(O(m+n^{1+o(1)})\) time for any \(k=\log^{o(1)}n\)._ Our techniques naturally extend to the decremental graph setting. **Theorem 1.2**.: _There is a deterministic algorithm that, given an undirected unweighted graph \(G\) with \(n\) vertices and \(m\) edges undergoing a sequence of edge deletions, maintains the maximal \(k\)-edge-connected subgraphs of \(G\) in \(m^{1+o(1)}\) total update time for any \(k=\log^{o(1)}n\)._ Dynamic algorithms for maximal \(k\)-edge-connected subgraphs were recently studied in [1]. For comparison, their algorithm can handle both edge insertions and deletions but require \(O(n\sqrt{n}\log n)\) worst-case update time, which is significantly slower than our \(m^{o(1)}\) amortized update time. When \(k=3\), they also gave an algorithm that handles edge insertions only using \(\tilde{O}(n^{2})\) total update time. **Previous Approaches and Our Techniques.** Our approach diverges significantly from the local-cut-based approach in [1, 10]. In these previous approaches, they call the local cut subroutine \(\Omega(n)\) times and each call takes \(\Omega(\sqrt{n})\) time. Hence, their running time is at least \(\Omega(n\sqrt{n})\) and this seems inherent without significant modification. Recently, [1] took a different approach. Their \(\tilde{O}(m+n^{3/2}k^{8})\)-time algorithm efficiently implements the folklore recursive mincut algorithm by feeding \(O(nk)\) updates to the dynamic minimum cut algorithm by Thorup [14]. However, since Thorup's algorithm has \(\Omega(\sqrt{n})\) update time, the final running time of [1] is at least \(\Omega(n\sqrt{n})\) as well. Our algorithm is similar to [1] in spirit but is much more efficient. We instead apply the dynamic \(k\)-edge connectivity algorithms by Jin and Sun [11] that takes only \(n^{o(1)}\) update time when \(k=\log^{o(1)}n\). Our reduction is more complicated than the reduction in [1] to dynamic minimum cut because the data structure by [11] only supports pairwise \(k\)-edge connectivity queries, not a global minimum cut. 
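To see the difference concretely (the graph of Figure 1 itself is not reproduced here), consider the following toy example: two vertices joined by three internally disjoint paths of length two are \(3\)-edge-connected in \(G\), hence lie in the same \(3\)-edge-connected component, yet no induced subgraph containing both is \(3\)-edge-connected, so every maximal \(3\)-edge-connected subgraph is a singleton.

```python
# Toy example separating 3-edge-connected components from maximal
# 3-edge-connected subgraphs (illustrative; not the graph of Figure 1).
import networkx as nx

G = nx.Graph()
for w in ['w1', 'w2', 'w3']:      # three internally disjoint u-v paths of length two
    G.add_edge('u', w)
    G.add_edge(w, 'v')

print(nx.edge_connectivity(G, 'u', 'v'))         # 3: u and v share a 3-edge-connected component
print(G.subgraph(['u', 'v']).number_of_edges())  # 0: G[{u, v}] is not even connected
# Every superset of {u, v} contains some w_i whose induced degree is at most 2,
# so no induced subgraph containing both u and v is 3-edge-connected.
```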
Nonetheless, we show that \(\tilde{O}(nk)\) updates and queries to this "weaker" data structure also suffice. Our approach is quite generic. Our algorithm is carefully designed without the need to check if the graph for which the recursive call is made is \(k\)-edge-connected. This allows us to extend our algorithm to the dynamic case. Figure 1: A graph \(G\) where its maximal \(3\)-edge-connected subgraphs are different from its \(3\)-edge-connected components. **Organization.** We give preliminaries in Section 2. Then, we prove Theorem 1.1 and Theorem 1.2 in Section 3 and Section 4, respectively. ## 2 Preliminaries Let \(G=(V,E)\) be an _unweighted undirected_ graph. Let \(n=|V|\) and \(m=|E|\), and assume \(m=\operatorname{poly}(n)\) and \(k=\log^{o(1)}n\). For any \(S,T\subseteq V\), let \(E(S,T)=\{(u,v)\in E\mid u\in S,v\in T\}\). For every vertex \(u\), the degree of \(u\) is \(\deg(u)=|\{(u,v)\mid(u,v)\in E\}|\). For every subset of vertices \(S\subseteq V\), the volume of \(S\) is \(\operatorname{vol}(S)=\sum_{u\in S}\deg(u)\). Denote \(G[S]\) as the induced graph of \(G\) on a subset of vertices \(S\subseteq V\). Two vertices \(s\) and \(t\) are _\(k\)-edge-connected_ in \(G\) if one needs to delete at least \(k\) edges to disconnect \(s\) and \(t\) in \(G\). A vertex set \(S\) is _\(k\)-edge-connected_ if every pair of vertices in \(S\) is \(k\)-edge-connected. We use the convention that \(S\) is \(k\)-edge-connected when \(|S|=1\). We say that a graph \(G=(V,E)\) is \(k\)-edge-connected if \(V\) is \(k\)-edge-connected. A _\(k\)-edge-connected component_ is an inclusion-maximal vertex set \(S\) such that \(S\) is \(k\)-edge-connected. A whole vertex set can always be partitioned into \(k\)-edge-connected components. We use \(kECC(u)\) to denote the unique \(k\)-edge-connected component containing \(u\). Note that a \(k\)-edge-connected component may not induce a connected graph when \(k>2\). A vertex set \(S\) is a _\(k\)-cut_ if \(|E(S,V\setminus S)|<k\). Note, however, we also count the whole vertex set \(V\) as a trivial \(k\)-cut. We will crucially exploit the following dynamic algorithm in our paper. **Theorem 2.1** (Dynamic pairwise \(k\)-edge connectivity [10]).: _There is a deterministic algorithm that maintains a graph \(G\) with \(n\) vertices undergoing edge insertions and deletions using \(n^{o(1)}\) update time and, given any vertex pair \((s,t)\), reports whether \(s\) and \(t\) are \(k\)-edge-connected in the current graph \(G\) in \(n^{o(1)}\) time where \(k=\log^{o(1)}n\)._ For the maximal \(k\)-edge-connected subgraph problem, we can assume that the graph is sparse using the _forest decomposition_. **Definition 2.2** (Forest decomposition [11]).: _A \(t\)-forest decomposition of a graph \(G\) is a collection of forests \(F_{1},\ldots,F_{t}\), such that \(F_{i}\) is a spanning forest of \(G\setminus\bigcup_{j=1}^{i-1}F_{j}\), for every \(1\leq i\leq t\)._ **Theorem 2.3** (Lemma 8.3 of [1]).: _Any \(O(k\log n)\)-forest decomposition of a graph has the same maximal \(k\)-edge-connected subgraphs as the original graph. Moreover, there is an algorithm for constructing such a \(O(k\log n)\)-forest decomposition in \(O(m)\) time._ ## 3 The Static Algorithm In this section, we prove our main result, Theorem 1.1. 
The key idea is the following reduction: **Lemma 3.1**.: _Suppose there is a deterministic decremental algorithm supporting pairwise \(k\)-edge-connectivity that has \(t_{p}\cdot m\) total preprocessing and update time on an initial graph with \(n\) vertices and \(m\) edges and query time \(t_{q}\)._ _Then, there is a deterministic algorithm for computing the maximal \(k\)-edge-connected subgraphs in \(O(m+(t_{p}+t_{q})\cdot kn\log^{2}n)\) time._ By plugging in theorem 2.1, we get theorem 1.1. The rest of this section is for proving Lemma 3.1. Throughout this section, we let \(t_{q}\) denote the query time of the decremental pairwise \(k\)-edge connectivity data structure that Lemma 3.1 assumes. Recall again that, for any vertex \(u\), \(u\)'s \(k\)-edge-connected component, \(kECC(u)\), might not induce a connected graph. The first tool for proving Lemma 3.1 is a "local" algorithm for finding a connected component of \(G[kECC(u)]\). **Lemma 3.2**.: _Given a graph \(G\) and a vertex \(u\), there is a deterministic algorithm for finding the connected component \(U\) containing \(u\) of \(G[kECC(u)]\) in \(O(t_{q}\cdot\operatorname{vol}(U))\) time._ Proof.: We run BFS from \(u\) to explore every vertex in the connected component \(U\) containing \(u\) of \(kECC(u)\). During the BFS process, we only visit the vertices in \(kECC(u)\) by checking if the newly found vertex is \(k\)-edge-connected to \(u\). Since each edge incident to \(U\) is visited at most twice, the total running time is \(O(t_{q}\cdot\operatorname{vol}(U))\). Below, we describe the algorithm for Lemma 3.1 in Algorithm 1 and then give the analysis. ``` Input: An undirected connected graph \(G=(V,E)\), and a list of vertices \(L\) (initially \(L=V\)). Note that the parameters are passed by value. Output: The maximal \(k\)-edge-connected subgraphs of \(G\). 1\(S\leftarrow\varnothing\). 2while\(|L|>1\)do 3 Choose an arbitrary pair \((u,v)\in L\). 4if\(u\) and \(v\) are \(k\)-edge-connected in \(G\)then 5\(L\gets L\setminus\{v\}\). 6else 7 Simultaneously compute the \(u\)'s connected component of \(G[kECC(u)]\) and the \(v\)'s connected component of \(G[kECC(v)]\), until the one with the smaller volume (denoted by \(U\)) is found. 8\(S\gets S\cup\textsc{Main}(G[U],U)\). 9\(G\gets G\setminus U\). 10\(L\leftarrow(L\setminus U)\cup\{w\notin U\mid(x,w)\in E(U,V(G)\setminus U)\}\). 11 end while 12 13 end while 14\(S\gets S\cup\{V(G)\}\). 15return\(S\). ``` **Algorithm 1**Main\((G,L)\): compute the maximal \(k\)-edge-connected subgraphs **Correctness.** We start with the following structural lemma. **Lemma 3.3** (Lemma 5.6 of [17]).: _Let \(T\) be a \(k\)-cut in \(G[C]\) for some vertex set \(C\). Then, either_ * \(T\) _is a_ \(k\)_-cut in_ \(G\) _as well, or_ * \(T\) _contains an endpoint of_ \(E(C,V(G)\setminus C)\)_._ Next, the crucial observation of our algorithm is captured by the following invariant. **Lemma 3.4**.: _At any step of Algorithm 1, every \(k\)-cut \(T\) in \(G\) is such that \(T\cap L\neq\emptyset\)._ Proof.: The base case is trivial because \(L\gets V\) initially. Next, we prove the inductive step. \(L\) can change in Line 5 or Line 8. In the first case, the algorithm finds that \(u\) and \(v\) are \(k\)-edge-connected and removes \(v\) from \(L\). For any \(k\)-cut \(T\) where \(v\in T\), an important observation is that \(kECC(v)\subseteq T\) as well. But \(kECC(u)=kECC(v)\) and so \(u\in T\) too. So the invariant still holds even after removing \(v\) from \(L\). 
In the second case, the algorithm removes \(U\) from \(G\). Let us denote \(G^{\prime}=G\setminus U\). Since the algorithm adds the endpoints of cut edges crossing \(U\) to \(L\), it suffices to consider a \(k\)-cut \(T\) in \(G^{\prime}\) that is disjoint from the endpoints of the cut edges of \(U\). By Lemma 3.3, \(T\) was a \(k\)-cut in \(G\). Since the changes in \(L\) occur only at \(U\) and neighbors of \(U\), while \(T\) is disjoint from both \(U\) and all neighbors of \(U\), we have \(T\cap L\neq\emptyset\) by the induction hypothesis. **Corollary 3.5**.: _When \(|L|=1\), then \(G\) is \(k\)-edge-connected._ Proof.: Otherwise, there is a partition \((A,B)\) of \(V\) where \(|E(A,B)|<k\). So both \(A\) and \(B\) are \(k\)-cuts in \(G\). By Lemma 3.4, \(A\cap L\neq\emptyset\) and \(B\cap L\neq\emptyset\) which contradicts that \(|L|=1\). We are ready to conclude the correctness of Algorithm 1. At a high level, the algorithm finds the set \(U\) and "cuts along" \(U\) at Lines 7. Then, on one hand, recurse on \(U\) at Line 8 and, on the other hand, continue on \(V(G)\setminus U\). We say that the cut edges \(E(U,V(G)\setminus U)\) are "deleted". Now, since \(U\) is the connected component of \(G[kECC(u)]\) for some vertex \(u\). We have that, for every edge \((x,y)\in E(U,V(G)\setminus U)\), the pair \(x\) and \(y\) are not \(k\)-edge-connected in \(G\). In particular, \(x\) and \(y\) are not \(k\)-edge-connected in \(G[V^{\prime}]\) for every \(V^{\prime}\subseteq V\). Thus, the algorithm never deletes edges inside any maximal \(k\)-edge-connected subgraph \(V_{i}\). Since the algorithm stops only when the remaining graph is \(k\)-edge-connected, the algorithm indeed returns the maximal \(k\)-edge-connected subgraphs of the whole graph. **Running Time.** Consider the time spent on each recursive call. Let \(G^{\prime}\) be the graph for which the recursive call is made and \(m^{\prime}=\operatorname{vol}(G^{\prime})\). Every vertex is inserted to \(L\) initially or as an endpoint of some removed edge, so the total number of vertices added to \(L\) is \(O(m^{\prime})\). In each iteration, either we remove a vertex from \(L\), or remove a subgraph from \(G\). Hence we check pairwise \(k\)-edge-connectivity \(O(m^{\prime})\) times, so the running time of checking pairwise \(k\)-edge-connectivity is \(O(t_{q}\cdot m^{\prime})\). For the time of finding connected components of \(k\)-edge-connected components, since we spend \(O(t_{q}\cdot\operatorname{vol}(U))\) time to find some \(U\) and remove \(U\) from \(G\), the total cost is \(O(t_{q}\cdot m^{\prime})\). Plus, initializing the dynamic pairwise \(k\)-edge connectivity algorithm on \(G^{\prime}\) takes \(O(t_{p}\cdot m^{\prime})\) time. Thus the total running time of each recursive call is \(O((t_{p}+t_{q})\cdot m^{\prime})\). For the recursion depth, since each \(U\) found has the smaller volume of the two, \(\operatorname{vol}(U)\leq m^{\prime}/2\). Hence the recursion depth is \(O(\log m_{0})\), where \(n_{0}\) and \(m_{0}\) are the numbers of vertices and edges of the initial graph. Thus the total running time of Algorithm 1 is \(O((t_{p}+t_{q})\cdot m_{0}\log n_{0})\). By applying theorem 2.3 to the initial graph \(G\) and invoking Algorithm 1 on the resulting graph, the number of edges in the resulting graph is \(O(kn_{0}\log n_{0})\), so the running time is improved to \(O(m_{0}+(t_{p}+t_{q})\cdot kn_{0}\log^{2}n_{0})\). This completes the proof of Lemma 3.1. 
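To make the control flow of Algorithm 1 concrete, the following Python sketch mirrors the pseudocode using networkx. It substitutes brute-force edge-connectivity queries for the decremental data structure of Theorem 2.1 and computes both local searches of Lemma 3.2 in full rather than interleaving them, so it reproduces the output of Algorithm 1 but none of its running-time guarantees.

```python
# Illustrative transcription of Algorithm 1 (Main); brute-force stand-ins replace
# the dynamic pairwise k-edge-connectivity data structure.
import networkx as nx

def k_conn(G, u, v, k):
    # True iff u and v are k-edge-connected in G (brute-force query).
    return u == v or nx.edge_connectivity(G, u, v) >= k

def component_of_kecc(G, u, k):
    # Connected component of G[kECC(u)] containing u (cf. Lemma 3.2):
    # BFS from u that only visits vertices k-edge-connected to u.
    comp, queue = {u}, [u]
    while queue:
        x = queue.pop()
        for w in G.neighbors(x):
            if w not in comp and k_conn(G, u, w, k):
                comp.add(w)
                queue.append(w)
    return comp

def vol(G, S):
    return sum(d for _, d in G.degree(S))

def maximal_kecs(G, L, k):
    G, L, S = G.copy(), set(L), []
    while len(L) > 1:
        it = iter(L)
        u, v = next(it), next(it)                  # an arbitrary pair from L
        if k_conn(G, u, v, k):
            L.discard(v)
            continue
        Cu = component_of_kecc(G, u, k)
        Cv = component_of_kecc(G, v, k)
        # Algorithm 1 grows both searches simultaneously and keeps the one of
        # smaller volume; for simplicity we compute both and compare afterwards.
        U = Cu if vol(G, Cu) <= vol(G, Cv) else Cv
        boundary = {w for x in U for w in G.neighbors(x) if w not in U}
        S += maximal_kecs(G.subgraph(U), U, k)     # recurse on G[U]
        G.remove_nodes_from(U)                     # cut U away from G
        L = (L - U) | boundary                     # add endpoints of cut edges
    S.append(set(G.nodes()))
    return S

# Example: two triangles joined by one edge; for k = 2 the maximal
# 2-edge-connected subgraphs are the two triangles.
G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)])
print(maximal_kecs(G, G.nodes(), 2))   # [{1, 2, 3}, {4, 5, 6}] (in some order)
```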
## 4 The Decremental Algorithm Our static algorithm can be naturally extended to a decremental dynamic algorithm. To prove Theorem 1.2, we prove the following reduction. By combining Lemma 4.1 and Theorem 2.1, we are done. **Lemma 4.1**.: _Suppose there is a deterministic decremental algorithm supporting pairwise \(k\)-edge-connectivity that has \(t_{p}\cdot m\) total preprocessing and update time on an initial graph with \(n\) vertices and \(m\) edges and query time \(t_{q}\)._ _Then there is a deterministic decremental dynamic algorithm for maintaining the maximal \(k\)-edge-connected subgraphs on an undirected graph of \(n\) vertices and \(m\) edges with \(O((t_{p}+t_{q})\cdot m\log n)\) total preprocessing and update time, and \(O(1)\) query time._ The algorithm for Lemma 4.1 as is follows. First, we preprocess the initial graph \(G_{0}\) using Algorithm 1 and obtain the maximal \(k\)-edge-connected subgraphs \(\{V_{1},\ldots,V_{z}\}\) of \(G_{0}\). Next, given an edge \(e\) to be deleted, if \(e\) is in a maximal \(k\)-edge-connected subgraph \(V_{i}\) of \(G\), then we invoke \(\textsc{Update}(G[V_{i}],e)\) and update the set of the maximal \(k\)-edge-connected subgraphs of \(G\); otherwise we ignore \(e\). The subroutine \(\textsc{Update}(H,e)\) is described in Algorithm 2. ``` Input: A \(k\)-edge-connected subgraph \(H\) and an edge \(e=(x,y)\in H\) to be deleted. Output: The \(k\)-edge-connected subgraphs of \(H\) after deletion. 1\(H\gets H\setminus\{(x,y)\}\). return\(\textsc{Main}(H,\{x,y\})\). ``` **Algorithm 2**Update(H, e) **Correctness.** Let \(H=(V^{\prime},E^{\prime})\) be the maximal \(k\)-edge-connected subgraph containing edge \((x,y)\) before deletion. It suffices to prove that Lemma 3.4 holds when we invoke Algorithm 1. Suppose there is a \(k\)-cut \(C\) in \(H\setminus\{(x,y)\}\) such that \(C\cap\{x,y\}=\emptyset\), then \(C\) is also a \(k\)-cut in \(H\), a contradiction. Hence the correctness follows from the correctness of Algorithm 1. **Running Time.** In the case that \(H\setminus\{(x,y)\}\) is still \(k\)-edge-connected, the running time is \(t_{q}\). We charge this time \(t_{q}\) to the deleted edge \((x,y)\). Otherwise, consider the time spend on each recursive call of Main. Assume that the total volume of the subgraphs removed and passed to another recursive call in a recursive call is \(\nu\). The total number of vertices added to \(L\) is \(O(\nu)\). In each iteration, we either remove a vertex from \(L\) or remove a subgraph. Hence we check pairwise \(k\)-edge-connectivity \(O(\nu)\) times, so the running time of checking pairwise \(k\)-edge-connectivity is \(O(t_{q}\cdot\nu)\). Since we spend \(O(t_{q}\cdot\mathrm{vol}(U))\) time to find \(U\), the total cost is \(O(t_{q}\cdot\nu)\). Plus, it takes \(O((t_{p}+t_{q})\cdot m^{\prime})\) time to initialize the dynamic pairwise \(k\)-edge connectivity algorithm and check pairwise \(k\)-edge-connectivity on a graph \(H^{\prime}\) with \(m^{\prime}\) edges for the first time we invoke Main on \(H^{\prime}\). Also, removing all edges from \(H^{\prime}\) takes \(t_{p}\cdot m^{\prime}\) time. We charge \(O(t_{p}+t_{q})\) to each of the removed edges in each recursive call. The recursion depth is \(O(\log m_{0})\) by Lemma 3.1, where \(n_{0}\) and \(m_{0}\) are the numbers of vertices and edges of the initial graph. Hence each edge will be charged \(O(\log m_{0})\) times, so the total preprocessing and update time is \(O((t_{p}+t_{q})\cdot m_{0}\log n_{0})\). 
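A sketch of the decremental wrapper around Algorithm 2, reusing `maximal_kecs` from the sketch in Section 3; again, it only illustrates the bookkeeping (which part is re-partitioned after a deletion), since the stated total update time relies on the dynamic data structure rather than on recomputation.

```python
# Illustrative decremental wrapper: Update(H, e) is invoked only when the
# deleted edge lies inside a current maximal k-edge-connected subgraph.
class DecrementalMaxKEC:
    def __init__(self, G, k):
        self.G, self.k = G.copy(), k
        self.parts = maximal_kecs(self.G, set(self.G.nodes()), k)

    def delete_edge(self, x, y):
        self.G.remove_edge(x, y)
        part = next((P for P in self.parts if x in P and y in P), None)
        if part is None:                 # edge between parts: partition unchanged
            return self.parts
        H = self.G.subgraph(part).copy()         # (x, y) already removed from G
        self.parts.remove(part)
        self.parts += maximal_kecs(H, {x, y}, self.k)   # Update(H, e) = Main(H, {x, y})
        return self.parts
```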
## Appendix A Relationship with \(k\)-Edge-Connected Components The reason why a subroutine for computing \(k\)-edge-connected components is not useful for computing maximal \(k\)-edge-connected subgraphs is as follows. Given a graph \(G=(V,E)\), we can artificially create a supergraph \(G^{\prime}=(V^{\prime}\supseteq V,E^{\prime}\supseteq E)\) where the whole set \(V\) is \(k\)-edge-connected, but the maximal \(k\)-edge-connected subgraphs of \(G^{\prime}\) will reveal the maximal \(k\)-edge-connected subgraphs of \(G\). So given a subroutine for computing the \(k\)-edge-connected components of \(G^{\prime}\), we know nothing about the maximal \(k\)-edge-connected subgraphs of \(G\). The construction of \(G^{\prime}\) is as follows. First, we set \(G^{\prime}\gets G\). Assume \(V=\{1,2,\ldots,n\}\). For every \(1\leq i<n\), add \(k\) parallel dummy length-2 paths \((i,d_{i,1},i+1),\ldots,(i,d_{i,k},i+1)\). Thus \(i\) and \(i+1\) are \(k\)-edge-connected, so \(V\) is \(k\)-edge-connected at the end. When we compute the maximal \(k\)-edge-connected subgraphs of \(G^{\prime}\), we know that we will first remove all dummy vertices \(d_{i,j}\) because they all have degree 2 (assuming that \(k>2\)). We will obtain \(G\) and so we will obtain the maximal \(k\)-edge-connected subgraphs of \(G\) from this process.
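For illustration, the construction of \(G^{\prime}\) can be written as follows (a sketch; the dummy-vertex names are arbitrary):

```python
# Appendix A construction: add k parallel dummy length-2 paths between every
# pair of consecutive vertices so that the whole vertex set becomes one
# k-edge-connected component, while the dummy degree-2 vertices are peeled off
# first (for k > 2) when computing maximal k-edge-connected subgraphs.
import networkx as nx

def augment(G, k):
    Gp = G.copy()
    nodes = sorted(G.nodes())             # assumes V = {1, ..., n}
    for i in range(len(nodes) - 1):
        u, v = nodes[i], nodes[i + 1]
        for j in range(k):
            d = ('d', u, j)               # the dummy vertex d_{i,j}
            Gp.add_edge(u, d)
            Gp.add_edge(d, v)
    return Gp
```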
2309.17197
An Investigation Into Race Bias in Random Forest Models Based on Breast DCE-MRI Derived Radiomics Features
Recent research has shown that artificial intelligence (AI) models can exhibit bias in performance when trained using data that are imbalanced by protected attribute(s). Most work to date has focused on deep learning models, but classical AI techniques that make use of hand-crafted features may also be susceptible to such bias. In this paper we investigate the potential for race bias in random forest (RF) models trained using radiomics features. Our application is prediction of tumour molecular subtype from dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) of breast cancer patients. Our results show that radiomics features derived from DCE-MRI data do contain race-identifiable information, and that RF models can be trained to predict White and Black race from these data with 60-70% accuracy, depending on the subset of features used. Furthermore, RF models trained to predict tumour molecular subtype using race-imbalanced data seem to produce biased behaviour, exhibiting better performance on test data from the race on which they were trained.
Mohamed Huti, Tiarna Lee, Elinor Sawyer, Andrew P. King
2023-09-29T12:45:53Z
http://arxiv.org/abs/2309.17197v1
An Investigation Into Race Bias in Random Forest Models Based on Breast DCE-MRI Derived Radiomics Features ###### Abstract Recent research has shown that artificial intelligence (AI) models can exhibit bias in performance when trained using data that are imbalanced by protected attribute(s). Most work to date has focused on deep learning models, but classical AI techniques that make use of hand-crafted features may also be susceptible to such bias. In this paper we investigate the potential for race bias in random forest (RF) models trained using radiomics features. Our application is prediction of tumour molecular subtype from dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) of breast cancer patients. Our results show that radiomics features derived from DCE-MRI data do contain race-identifiable information, and that RF models can be trained to predict White and Black race from these data with 60-70% accuracy, depending on the subset of features used. Furthermore, RF models trained to predict tumour molecular subtype using race-imbalanced data seem to produce biased behaviour, exhibiting better performance on test data from the race on which they were trained. Keywords:Bias AI Fairness Radiomics Breast DCE-MRI. ## 1 Introduction The potential for artificial intelligence (AI) models to exhibit bias, or disparate performance for different protected groups, has been demonstrated in a range of computer vision and more recently medical imaging applications. For example, biased performance has been reported in AI models for diagnostic tasks from chest X-rays [9, 21], cardiac magnetic resonance (MR) image segmentation [19, 18, 10], brain MR image analysis [7, 16, 24, 22] and dermatology image analysis [1, 6]. In response, the field of _Fair AI_ has emerged to address the challenge of making AI more trustworthy and equitable in its performance for protected groups [14]. A common cause of bias in AI model performance is the combination of a distributional shift between the data of different protected groups and demographic imbalance in the training set. For example, in chest X-rays there is a distributional shift between sexes due to the presence of breast tissue lowering the signal-to-noise ratio of images acquired from female subjects [9]. However, more subtle distributional shifts can also exist which cannot be perceived by human experts, and recent work has shown that race-based distributional shifts are present in a range of medical imaging modalities, including breast mammography [5]. This raises the possibility of race bias in AI models trained using imbalanced data from these modalities. Most work on AI bias to date has focused on deep learning techniques, in which the features used for the target task are optimised as part of the training process. In the presence of distributional shift and training set imbalance this learning process can lead to bias in the features and potentially in model performance. Classical AI approaches are trained using fixed hand-crafted features such as radiomics, and so might be considered to be less susceptible to bias. However, despite these approaches still being widely applied, little experimental work has been performed to assess the potential for, and presence of, bias in these features and the resulting models. In this paper, we investigate the potential for bias in a classical AI model (Random Forest) based on radiomics features. 
Our chosen application is potential race bias in Random Forest models trained using radiomics features derived from dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) of breast cancer patients. This application is of interest because there have been reported differences in breast density and composition between races [12, 15], as well as tumour biology [11], indicating a possible distributional shift in (imaging) data acquired from different races, and hence the possibility of bias in AI models trained using these data. Our target task is the prediction of tumour molecular subtype from the radiomics features. This is a clinically useful task because different types of tumour are commonly treated in different ways (e.g. surgery, chemotherapy), and tumour molecular subtype is normally determined through an invasive biopsy. Therefore, development and validation of an AI model to perform this task from imaging data would obviate the need for such biopsies. This paper makes two key contributions to the field of Fair AI. First, we present the first thorough investigation into possible bias in AI models based on radiomics features. Second, we perform the first investigation of bias in AI models based on features derived from breast DCE-MRI imaging. ## 2 Materials In our experiments we employ the dataset described in [20]3, which features pre-operative DCE-MRI images acquired from 922 female patients with invasive breast cancer at Duke Hospital, USA, together with demographic, clinical, pathology, genomic, treatment, outcome and other data. From the DCE-MRI images, 529 radiomics features have been derived which are split into three (partially overlapping) categories: whole breast, fibrolandular tissue (FGT) only and tumour only. The full dataset consists of approximately 70% White sub jects, 22% Black subjects and 8% other races. We refer the reader to [20] for a full summary of patient characteristics and the data provided. ## 3 Methods For all experiments we employed a Random Forest (RF) classifier as our AI model, similar to the work described in [20]. For each model, we performed a grid search hyperparameter optimisation using a 5-fold cross validation on the training set. Following this, the final model was trained with the selected hyperparameter values using all training data and applied to the test set. The hyperparameters optimised were the number of trees (50, 100, 200, 250), the maximum depth of the trees (10, 15, 30, 45) and the splitting criterion (entropy, Gini). Our model training differed from that described in [20] in three important ways: 1. We used only Black and White subjects to enable us to analyse bias in a controlled environment. Data from all other races were excluded from both the training and test sets. This meant that our dataset comprised 854 subjects (651 White and 203 Black). 2. To simplify our analysis, we focused on just one of the binary classification problems reported in [20]: prediction of _Luminal A_ vs _non-Luminal A_ tumour molecular subtype. Based on these labels, the numbers of positive (_Luminal A_) and negative (_non-Luminal A_) subjects for each race are summarised in Table 1. As can be seen, there is a higher proportion of _non-Luminal A_ tumours in the Black patients, which is consistent with prior studies on relative incidence of tumour subtypes by race [2, 8]. 3. We did not perform feature selection prior to training and evaluating the RF classifiers. 
We chose to omit this step because one of our objectives was to analyse which specific radiomics features (if any) could lead to bias in the trained models, so we did not want to exclude any features prior to this analysis. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Label** & **White** & **Black** & **All** \\ \hline Positive (_Luminal A_) & 442 & 107 & 549 \\ \hline Negative (_non-Luminal A_) & 209 & 96 & 305 \\ \hline \end{tabular} \end{table} Table 1: Summary of positive (_Luminal A_) and negative (_non-Luminal A_) labels in the dataset overall and broken down by race. ## 4 Experiments and Results ### Race Classification In the first experiment, our aim was to determine if the radiomics features contain race-identifiable information. The presence of such information is a known potential cause of bias in trained models as it would be indicative of a distributional shift in the data between races, not just in the imaging data but in the derived (hand-crafted) radiomics features. To investigate this, we trained RF classifiers to predict race (White or Black) from the entire radiomics feature set, and also for the whole breast, FGT and tumour features individually. For these experiments, to eliminate the effect of class (i.e. race) imbalance, we randomly sampled from the dataset to create race-balanced training and test sets, each consisting of 100/100 White/Black subjects. Results are reported as percentage classification accuracy in Table 2 for all subjects in the test set and also separately for each race. We can see that it is possible to predict race from radiomics features with around 60-70% accuracy. The results are similar for both White and Black subjects and do not differ significantly for the category of radiomics features used. It should be noted that the whole breast, FGT and tumour categories are partially overlapping, hence the similar performance for the different radiomics categories. Specifically, a set of features related to breast and FGT volume is included in both the whole breast and FGT categories, and another set related to FGT and tumour enhancement is present in both the FGT and tumour categories [20]. ### Bias Analysis Having established one of the key conditions for the presence of bias in AI models, i.e. a distributional shift between the data of different protected groups, we next investigated whether training with highly imbalanced training sets can lead to bias in performance. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Radiomics features** & **Whole test set** & **White subjects only** & **Black subjects only** \\ \hline All & 63\% & 64\% & 66\% \\ \hline Whole breast only & 62\% & 70\% & 57\% \\ \hline FGT only & 61\% & 65\% & 60\% \\ \hline Tumour only & 62\% & 62\% & 66\% \\ \hline \end{tabular} \end{table} Table 2: Race classification accuracy from radiomics features derived from breast DCE-MRI. Results are presented as percentage classification accuracy and reported for whole test set as well as broken down by race. Classification was performed from all radiomics features as well as just those derived from the whole breast, fibroglandular tissue (FGT) and tumour only. For these experiments we split the dataset into a training set of 426 subjects and a test set of 428 subjects. The split was random under the constraints that the White and Black subjects and the _Luminal A_ and _non-Luminal A_ subjects were evenly distributed between train and test sets. 
The training set consisted of 325/101 White/Black subjects and 274/152 _Luminal A_/_non-Luminal A_ subjects, and the test set consisted of 326/102 White/Black subjects and 275/153 _Luminal A_/_non-Luminal A_ subjects. We also curated two further training sets, consisting of only the White subjects and only the Black subjects from the combined training set described above. Due to the racial imbalance in the database, the White-only and Black-only training sets consisted of 325 and 101 subjects, respectively. Using all three training sets (i.e. all, White-only and Black-only), we trained RF models for the task of classifying _Luminal A_ vs _non-Luminal A_ tumour molecular subtype and evaluated their performance for the entire test set as well as for the White subjects and the Black subjects in the test set individually. Class (i.e. molecular subtype) imbalance was addressed by applying a weighting to training samples that was inversely proportional to the class frequency. Results are presented in Table 3, in which performance is quantified using the percentage classification accuracy. We performed this experiment using all radiomics features, just the whole breast features, just the FGT features and just the tumour features. We can see that in terms of overall performance, the models trained using all data and the White-only data had higher accuracy than the models trained using Black-only data, reflecting the impact of different training set sizes. Regarding race-specific performance, the models trained using all training data (i.e. 325/101 White/Black subjects) performed slightly better on White subjects, likely reflecting the effect of training set imbalance. The difference in performance in favour of White subjects varied from 3% to 11% (mean 6.25%), depending on the subset of features used. The models trained using White-only data had a larger performance disparity in favour of White subjects, varying between 6% and 11% (mean 9%). The models trained using Black-only data showed generally better performance on Black subjects (mean 3.5% difference), although the model trained using all radiomics features was 1% better for White subjects. In contrast, the model trained using whole breast radiomics features performed 10% better for Black subjects. With the exception of this last result, in general there was not a noticeable difference in bias between the models trained using all radiomics features, just whole breast features, just FGT features and just tumour features, which is consistent with the similar race classification results reported in Section 4.1. ### Covariate Analysis Next we investigated a range of covariates to test for the presence of confounding variable(s) that could be leading to the observed bias. From the full set of patient data available within the dataset we selected those variables that could most plausibly have associations with both race and model performance. These variables are summarised in Table 4. 
\begin{table} \begin{tabular}{|c|l|c|c|c|} \hline **ALL FEATURES** & \multicolumn{4}{|c|}{**Train**} \\ \hline \multirow{4}{*}{**Test**} & & **All** & **White** & **Black** \\ \cline{2-5} & **All** & 65\% & 65\% & 60\% \\ \cline{2-5} & **White** & 68\% & 67\% & 60\% \\ \cline{2-5} & **Black** & 57\% & 58\% & 59\% \\ \hline \hline **WHOLE BREAST** & \multicolumn{4}{|c|}{**Train**} \\ \hline \multirow{4}{*}{**Test**} & & **All** & **White** & **Black** \\ \cline{2-5} & **All** & 61\% & 62\% & 53\% \\ \cline{2-5} & **White** & 62\% & 63\% & 51\% \\ \cline{2-5} & **Black** & 57\% & 57\% & 61\% \\ \hline \hline **FGT** & \multicolumn{4}{|c|}{**Train**} \\ \hline \multirow{4}{*}{**Test**} & & **All** & **White** & **Black** \\ \cline{2-5} & **All** & 67\% & 64\% & 61\% \\ \cline{2-5} & **White** & 68\% & 67\% & 60\% \\ \cline{2-5} & **Black** & 62\% & 56\% & 62\% \\ \hline \hline **TUMOUR** & \multicolumn{4}{|c|}{**Train**} \\ \hline \multirow{4}{*}{**Test**} & & **All** & **White** & **Black** \\ \cline{2-5} & **All** & 67\% & 65\% & 59\% \\ \cline{2-5} & **White** & 68\% & 67\% & 58\% \\ \cline{2-5} & **Black** & 65\% & 57\% & 61\% \\ \hline \end{tabular} \end{table} Table 3: Tumour molecular subtype classification accuracy for _Luminal A_ vs. _non-Luminal A_ task. Results presented as percentage accuracy and reported for training/testing using all subjects, White subjects only and Black subjects only. From top-to-bottom: results computed using all radiomics features, just whole breast features, just fibroglandular tissue (FGT) features and just tumour features. For the continuous variable (i.e. age), the table shows the median and lower/upper quartiles for White and Black patients separately. For categorical variables (i.e. all other variables), counts and percentages are provided. The \(p\)-values were computed using a Mann-Whitney U test for age and Chi-square tests for independence for all other variables. We can see that three of the covariates showed significant differences (at 0.05 significance) in their distributions between White and Black subjects: age, estrogen receptor status and neoadjuvant chemotherapy. As stated earlier, non-luminal breast cancer, which is generally estrogen receptor negative, is more commonly seen in Black subjects than White subjects [2, 8]. In addition, this cancer is more commonly treated with neoadjuvant chemotherapy, whereas luminal breast cancer is treated with surgery, followed by endocrine therapy and chemotherapy [23, 4]. This may contribute to the statistically significant differences seen in the covariates. 
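The covariate comparison just described can be reproduced in outline with SciPy. The sketch below is illustrative rather than the authors' analysis code: the age arrays are synthetic placeholders, while the contingency table uses the estrogen receptor counts listed in Table 4.

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

# Synthetic stand-ins for the per-subject ages of White and Black patients.
rng = np.random.default_rng(0)
age_white = rng.normal(loc=53.3, scale=10.0, size=651)
age_black = rng.normal(loc=50.5, scale=10.0, size=203)

# Continuous covariate (age): Mann-Whitney U test.
_, p_age = mannwhitneyu(age_white, age_black)

# Categorical covariate: Chi-square test of independence on a
# race-by-status contingency table (counts from Table 4).
er_counts = np.array([[510, 141],   # White: ER positive, ER negative
                      [123, 80]])   # Black: ER positive, ER negative
chi2, p_er, dof, _ = chi2_contingency(er_counts)

print(f"age p-value (synthetic data): {p_age:.3f}")
print(f"estrogen receptor status p-value: {p_er:.2e}")
```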
\begin{table} \begin{tabular}{|l l|c|c|c|} \hline **Covariate** & & **White** & **Black** & \(p\)**-value** \\ \hline Age at diagnosis (years, M(L,U)) & & 53.3 (45.9, 61.8) & 50.5 (44.0, 58.5) & 0.012 \\ \hline Scanner (N / \%): & GE & 451 / 69.3\% & 134 / 66.0\% & \\ & Siemens & 200 / 30.7\% & 69 / 34.0\% & 0.430 \\ \hline Field strength (N / \%): & 1.5T & 315 / 48.4\% & 111 / 54.7\% & \\ & 2.89T & 1 / 0.1\% & 0 / 0.0\% & 0.258 \\ & 3T & 335 / 51.5\% & 92 / 45.3\% & \\ \hline Menopause at diagnosis & Pre & 276 / 42.4\% & 94 / 46.3\% & \\ (N / \%): & Post & 364 / 55.9\% & 105 / 51.7\% & 0.574 \\ & N/A & 11 / 1.7\% & 4 / 2.0\% & \\ \hline Estrogen receptor status & Positive & 510 / 78.3\% & 123 / 60.6\% & \\ (N / \%): & Negative & 141 / 21.7\% & 80 / 39.4\% & \\ \hline Human epidermal growth & Positive & 111 / 17.1\% & 36 / 17.7\% & \\ factor 2 receptor status (N / \%): & Negative & 540 / 82.9\% & 167 / 82.3\% & \\ \hline Adjuvant radiation & Yes & 434 / 67.7\% & 144 / 71.0\% & 0.341 \\ therapy (N / \%): & No & 210 / 32.3\% & 58 / 29.0\% & \\ \hline Neoadjuvant radiation & Yes & 13 / 2.0\% & 7 / 3.4\% & \\ therapy (N / \%): & No & 632 / 98.0\% & 196 / 96.6\% & \\ \hline Adjuvant chemotherapy & Yes & 391 / 63.1\% & 108 / 57.1\% & \\ (N / \%): & No & 229 / 36.9\% & 81 / 42.9\% & \\ \hline Neoadjuvant chemotherapy & Yes & 178 / 28.1\% & 91 / 46.9\% & \\ (N / \%): & No & 455 / 71.9\% & 103 / 53.1\% & 1.593e-06 \\ \hline \end{tabular} \end{table} Table 4: Distributions of covariates in the dataset by race (White and Black subjects only). Continuous variables are reported as median (M), lower (L) and upper (U) quartiles. Categorical variables are reported as count (N) and percentage (%). \(p\)-values calculated using Mann-Whitney U tests for continuous variables and Chi-square tests for independence for categorical variables. ## 5 Discussion and Conclusions The main contribution of this paper has been to present the first investigation focused on potential bias in AI models trained using radiomics features. The work described in [20] also reported performance of their AI models based on radiomics features broken down by race. However, in our work we have performed a more controlled analysis to investigate the potential for bias and its possible causes. As a second key contribution, our paper represents the first investigation into bias in AI models based on breast DCE-MRI imaging. Our key findings are that: (i) radiomics features derived from breast DCE-MRI data contain race-identifiable information, leading to the potential for bias in AI models trained using such data, and (ii) RF models trained to predict tumour molecular subtype seem to exhibit biased behaviour when trained using race-imbalanced training data. These findings show that the process of producing hand-crafted features such as radiomics features does not _remove_ the potential for bias from the imaging data, and so further investigation of the performances of other similar models is warranted. However, an unanswered question is whether the production of hand-crafted features _reduces_ the potential for bias. To investigate this, in future work we will compare bias in radiomics-based AI models to similar image-based AI models. Our analysis of covariates did highlight several possible confounders, so we emphasise that the cause of the bias we have observed remains to be established. In future work we will perform further analysis of these potential confounders, including interactions between multiple variables, to help determine this cause. 
Interestingly, the work described in [20], which included the same _Luminal A_ vs. _non-Luminal A_ classification task using the same dataset, did not find a statistically significant difference in performance between races. However, there are a number of differences between our work and [20]. First, [20] used all training data (half of the full dataset) when training their RF models, i.e. they did not create deliberately imbalanced training sets as we did. Therefore, their race distribution was presumably similar to that of the full dataset (i.e. 70% White, 22% Black, 8% other races). It may be that this was not a sufficient level of imbalance to result in biased performances, and/or that the presence of other races (apart from White and Black) in the training and test sets reduced the bias effect. Second, we note that the comparison performed in [20] was between White and other races, whereas we compared White and Black races. Third, in [20] a feature selection step was employed to optimise performance of their models. It is possible that this reduced the potential for bias by removing features that contained race-specific information, although our race classification results (see Section 4.1) suggest that this information is present across all categories of feature. In this work we have focused on distributional shift in imaging data (and derived features) as a cause of bias, but bias can also arise from other sources, such as bias in data acquisition, annotations, and use of the models after deployment [13, 3]. We emphasise that by focusing on this specific cause of bias we do not believe that others should be neglected, and we argue for the importance of considering possible bias in all parts of the healthcare AI pipeline. Finally, this paper has focused on highlighting the _presence_ of bias, and we have not addressed the important issue of what should be _done_ about it. Bias mitigation techniques have been proposed and investigated in a range of medical imaging problems [19, 25, 26], and approaches such as these may have a role to play in addressing the bias we have uncovered in this work. However, when attempting to mitigate bias one should bear in mind that the classification tasks of different protected groups may have different levels of difficulty, making it challenging to eliminate bias completely. Furthermore, one should take care to ensure that the performances of the protected groups are 'levelled up' rather than 'levelled down' [17] to avoid causing harm to some protected groups. #### Acknowledgements This work was supported by the National Institute for Health Research (NIHR) Biomedical Research Centre at Guy's and St Thomas' NHS Foundation Trust and King's College London, United Kingdom. Additionally, this research was funded in whole, or in part, by the Wellcome Trust, United Kingdom WT203148/Z/16/Z. The views expressed in this paper are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.
2309.15958
Review on $f(Q)$ Gravity
Recent years have witnessed a rise in interest in the geometrical trinity of General Relativity and its extensions. This interest has been fuelled by novel insights into the nature of gravity, the possibility to address computational and conceptual questions -- such as the determination of black hole entropy or the definition of gravitational energy-momentum -- from a new perspective. In particular, $f(Q)$ gravity has also inspired numerous works on black holes, wormholes, and cosmology. In the latter case, $f(Q)$ models have the potential to elucidate phenomena in both early and late-time cosmology without necessitating the inclusion of dark energy, the inflaton field, or dark matter. Particularly noteworthy is the role of $f(Q)$ theories in addressing cosmological tensions, presenting exciting possibilities for reshaping our understanding of gravity and its manifestations in cosmology. The emergence of intriguing new black hole solutions and the potential existence of wormhole solutions suggest the presence of novel physics within the realm of strong gravity. These phenomena have become increasingly measurable only in recent times, opening up exciting avenues for further exploration and discovery. This review is tailored to students and researchers alike. It offers a self-contained and pedagogical introduction to metric-affine geometry--The mathematical foundation and indispensable tool upon which the geometrical trinity of General Relativity as well as its various extensions are built.
Lavinia Heisenberg
2023-09-27T19:16:19Z
http://arxiv.org/abs/2309.15958v1
# Review on \(f(\mathbb{Q})\) Gravity ###### Abstract Recent years have witnessed a rise in interest in the geometrical trinity of General Relativity and its extensions. This interest has been fuelled by novel insights into the nature of gravity, the possibility to address computational and conceptual questions--such as the determination of black hole entropy or the definition of gravitational energy-momentum--from a new perspective. In particular, \(f(\mathbb{Q})\) gravity has also inspired numerous works on black holes, wormholes, and cosmology. In the latter case, \(f(\mathbb{Q})\) models have the potential to elucidate phenomena in both early and late-time cosmology without necessitating the inclusion of dark energy, the inflaton field, or dark matter. Particularly noteworthy is the role of \(f(\mathbb{Q})\) theories in addressing cosmological tensions, presenting exciting possibilities for reshaping our understanding of gravity and its manifestations in cosmology. The emergence of intriguing new black hole solutions and the potential existence of wormhole solutions suggest the presence of novel physics within the realm of strong gravity. These phenomena have become increasingly measurable only in recent times, opening up exciting avenues for further exploration and discovery. This review is tailored to students and researchers alike. It offers a self-contained and pedagogical introduction to metric-affine geometry-The mathematical foundation and indispensable tool upon which the geometrical trinity of General Relativity as well as its various extensions are built. ###### Contents * List of Symbols * Acronyms * 1 Introduction * 2 Fundamentals of Metric-Affine Geometries * 2.1 Manifolds, Diffeomorphisms, Curves, and Scalar Fields * 2.2 Vector Fields, Tensor Fields, and Densities * 2.3 The Flow of a Vector Field and the Lie Derivative * 2.4 Covariant Derivatives and the Connection * 2.5 The Metric Tensor and the Geodesic Equation * 3 Curvature Torsion, Non-Metricity: The Fundamental Objects of Metric-Affine Geometries * 3.1 Parallel Transport * 3.2 Curvature * 3.3 Torsion * 3.4 Non-Metricity * 3.5 Classification of Metric-Affine Geometries and the Decomposition of the Connection * 3.6 The Lie Derivative Revisited: Symmetries of Metric-Affine Geometries * 3.7 Integration in the Presence of Torsion and Non-Metricity: The Generalized Gauss Theorem * 3.8 Collection of Geometric Identities * 4 The Geometrical Trinity of General Relativity * 4.1 Einstein's Original Formulation of General Relativity * 4.2 The Teleparallel Equivalent of General Relativity (TEGR) * 4.3 The Symmetric Teleparallel Equivalent of General Relativity (STEGR) * 4.4 Coincident General Relativity (CGR) * 4.5 The General Teleparallel Equivalent of General Relativity (GTEGR) * 4.6 Non-flat combinations in the edges and in the dot * 4.7 Matter Coupling * 5 The Geometrical Trinity of Modified Gravity Theories * 5.1 Quadratic Actions for Torsion Theories * 5.2 Quadratic Actions for Non-Metricity Theories * 5.3 Non-Linear Extensions: \(f(\mathcal{R})\), \(f(\mathbb{T})\), \(f(\mathbb{Q})\) and \(f(\mathbb{G})\) Theories * 6 \(f(\mathbb{Q})\) Gravity * 6.1 Cosmology in \(f(\mathbb{Q})\) * 6.2 Black Holes in \(f(\mathbb{Q})\) * 6.3.1 Hamiltonian Analysis and Degrees of Freedom of \(f(\mathbb{Q})\) Gravity * 7 Summary List of Symbols Manifolds and other Spaces \(\mathcal{M}\) Spacetime manifold (connected, usually \(4\)-dimensional) \(\partial\mathcal{M}\) Boundary of the manifold \(\mathcal{M}\) \(T_{p}\mathcal{M}\), 
\(T_{p}^{*}\mathcal{M}\) Tangent and cotangent space at point \(p\in\mathcal{M}\) \(T\mathcal{M}\), \(T^{*}\mathcal{M}\) Tangent and cotangent bundle to \(\mathcal{M}\) \(\Sigma\) Co-dimension one hypersurface which is embedded in \(\mathcal{M}\) (usually \(3\)-dimensional and spacelike) \(\mathbb{R}^{n}\) Euclidean space of dimension \(n\) \(\mathbb{S}^{2}\) Topological \(2\)-sphere Maps \(\gamma\) Curve on \(\mathcal{M}\) defined as \(\gamma:[0,1]\to\mathcal{M}\) \(\phi\) Diffeomorphism, \(\phi:\mathcal{M}\to\mathcal{M}\) \(\phi_{s}\) 1-parameter family of diffeomorphisms \(\phi_{s}:\mathbb{R}\times\mathcal{M}\to\mathcal{M}\) with \(\phi_{0}=\mathrm{id}\) Real scalar field, \(f:\mathcal{M}\to\mathbb{R}\) Connections General affine connection (can have curvature, torsion, and non-metricity) Levi-Civita connection (unique torsionless and metric-compatible connection) Matrix belonging to the real four-dimensional general linear group, \(GL(4,\mathbb{R})\) (used to parametrize flat connections) Collection of four arbitrary functions (Not a vector! Used to parametrize flat and torsionless connections) Metrics \(\eta_{\mu\nu}\) Minkowski metric, signature \((-,+,+,+)\) \(g_{\mu\nu}\) Spacetime metric, signature \((-,+,+,+)\) \(g\), \(|g|\) Determinant of \(g_{\mu\nu}\); Modulus of \(g\) \(h_{ab}\) Intrinsic metric of \(\partial\mathcal{M}\) or \(\Sigma\), signature \((+,+,+)\) if \(\partial\mathcal{M}\) (or \(\Sigma\)) is spacelike, \((-,+,+)\) if \(\partial\mathcal{M}\) (or \(\Sigma\)) is timelike Determinant of \(h_{ab}\); Modulus of \(h\) ### Derivative Operators \(\partial_{\mu}\) \(\mathcal{L}_{v}\) \(\nabla_{\mu}\) \(\mathcal{D}_{\mu}\) \(\delta\) Coordinate derivative, does not transform covariantly Lie derivative along the vector field \(v\) Covariant derivative with respect to a general affine connection \(\Gamma^{\alpha}{}_{\mu\nu}\) Covariant derivative with respect to the Levi-Civita connection \(\left\{{}^{\alpha}_{\mu\nu}\right\}\) Variational derivative ### Tensor Fields \(R^{\alpha}{}_{\mu\nu\rho}\) \(\mathcal{R}^{\alpha}{}_{\mu\nu\rho}\) \(\mathcal{R}^{\alpha}{}_{\mu\nu\rho}\) \(\mathcal{H}_{\alpha}{}^{\mu\nu}\) \(K^{\alpha}{}_{\mu\nu}\) \(L^{\alpha}{}_{\mu\nu}\) \(P^{\alpha}{}_{\mu\nu}\), \(\hat{P}^{\alpha}{}_{\mu\nu}\) \(Q_{\alpha\mu\nu}\) \(S_{\alpha}{}^{\mu\nu}\), \(\hat{S}_{\alpha}{}^{\mu\nu}\) \(T^{\alpha}{}_{\mu\nu}\) \(G_{\mu\nu}\) \(\mathcal{K}_{\mu\nu}\) \(q_{\mu\nu}\), \(\hat{q}_{\mu\nu}\) \(q_{\mu\nu}\), \(\hat{q}_{\mu\nu}\) \(R_{\mu\nu}\) **Scalar Fields** Norm of the normal vector \(n^{\mu}\) to a co-dimension one hypersurface, \(\varepsilon\coloneqq n_{\mu}n^{\mu}\) and \(\varepsilon=+1\) if the hypersurface is timelike, \(\varepsilon=-1\) if the hypersurface is spacelike \(N\) Lapse function, assumed to be nowhere zero \(\mathcal{K}\) Trace of the extrinsic curvature tensor, \(\mathcal{K}\coloneqq g^{\mu\nu}\mathcal{K}_{\mu\nu}\) \(\mathbb{Q}\), \(\hat{\mathbb{Q}}\) Non-metricity scalar of STEGR; Most general scalar which is quadratic in the non-metricity tensor (appears in STG and \(\mathbb{Q}\) is a special case of \(\hat{\mathbb{Q}}\)) \(R\), \(\mathcal{R}\) Ricci scalar of the affine connection, \(R\coloneqq g^{\mu\nu}R_{\mu\nu}\) (sometimes also denoted by \(R(\Gamma)\)); Ricci scalar of the Levi-Civita connection, \(\mathcal{R}\coloneqq g^{\mu\nu}\mathcal{R}_{\mu\nu}\) (sometimes also denoted by \(\mathcal{R}(g)\)) T, \(\hat{\mathbb{T}}\) Torsion scalar of TEGR; Most general scalar which is quadratic in the torsion tensor (appears in TG and \(\mathbb{T}\) 
is a special case of \(\hat{\mathbb{T}}\)) **Other Symbols** \(\Lambda\) Cosmological constant Left side defined by right side; Right side defined by left side \(\equiv\) Identity Commutator or Lie bracket Poisson bracket \(\tilde{T}\) A tilde \(\tilde{\ }\) on top indicates that \(T\) is a tensor density of weight \(w=+1\) ## Acronyms CG Coincident Gauge CGR Coincident General Relativity EH Einstein-Hilbert FCC First Class Constraint(s) GHY Gibbons-Hawking-York GR General Relativity GTEGR General Teleparallel Equivalent of General Relativity NGR Newer General Relativity SCC Second Class Constraint(s) STEGR Symmetric Teleparallel Equivalent of General Relativity STG Symmetric Teleparallel Gravity TEGR Teleparallel Equivalent of General Relativity TG Teleparallel Gravity ## 1 Introduction In 1912, Einstein's study of static gravitational fields had led him to a bold hypothesis. A simple application of his equivalence principle, in conjunction with basic results of special relativity, suggested that the gravitational field is described by the metric tensor. He conjectured that this is also true beyond the static limit and thus embarked on a three-year-long journey, which culminated in November 1915 with the field equations of his General Relativity (GR). This feat was only possible after having learned what we nowadays call Riemannian geometry. Back then, this branch of mathematics was relatively new and many concepts we now take for granted were either not as clear-cut as they are now, or they were not even conceived yet. One such example is the concept of an affine connection, which was in part developed by mathematicians in response to the advent and success of GR. It is therefore not surprising that Einstein's original theory is based on the Riemann curvature tensor. This tensor is in fact fully determined by the metric and does not require the introduction of an independent affine connection. In later years, Einstein would famously attempt the unification of GR and electromagnetism. By then, the concept of an affine connection had been introduced by mathematicians such as Weyl, and Einstein made use of these new tools. Even though his unification attempts were ultimately not successful, he developed the first theory where gravity is mediated by torsion, rather than by curvature []. This culminated in a whole class of so-called metric teleparallel theories of gravity [2, 3]. Only decades later was it realized that teleparallel theories of gravity can also be formulated in flat, torsionless geometries, if one attributes gravitational phenomena to the so-called non-metricity tensor [4]. Postulating that curvature vanishes, but allowing for torsion, or non-metricity, or both, leads to what we now call the geometric trinity of GR [5, 6, 7]: Three distinct but equivalent descriptions of General Relativity. All these theories are rooted in the mathematical framework of metric-affine geometry [8, 9]. The geometric trinity, as well as its various extensions and modifications, has witnessed a rising interest and a flurry of research activities. Their popularity is due to two factors. First of all, having different but physically equivalent formulations of GR sheds new light on its foundations. It also allows us to address old problems from a new perspective. For instance, issues regarding the definition of gravitational energy-momentum have gained new momentum due to developments in teleparallel theories of gravity [10, 11, 12, 13]. 
So have questions regarding the computation of black hole entropy [14, 10, 15]. Secondly, the geometrical trinity has given rise to different extensions and modifications of gravity. There is a growing number of cosmological observations and tensions, which hint at physics beyond the standard \(\Lambda\)CDM model. While GR has passed every empirical test it has been subjected to, there remain phenomena which cannot be explained on the basis of GR alone. Most notably, the early- and late-time expansion of the universe requires the introduction of an inflaton field and dark energy, respectively. Furthermore, several observations strongly suggest the existence of dark matter. Rather than introducing new matter fields or exotic forms of energy, one can also attempt to explain these phenomena using modified theories of gravity. Indeed, a model known as \(f(\mathbb{Q})\) gravity has gained considerable popularity in the past couple of years and the bulk of the research efforts have been concentrated on cosmological applications [16, 17, 18, 19, 20]. This model has also been applied to large-scale structure formation [21], the development of relativistic versions of Modified Newtonian Dynamics (MOND) [22, 23], bouncing cosmologies [24, 25, 26], and even quantum cosmology [27, 28]. A lot of effort has also gone into constraining or testing \(f(\mathbb{Q})\) models [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44]. Extensions that involve incorporating boundary terms [42, 43, 44] or non-minimally coupled scalar fields [45, 46] have also been explored. Other very active areas of research are black holes within \(f(\mathbb{Q})\) gravity [47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58], modified stellar solutions [58, 59, 60, 61, 62, 63, 64, 65, 66, 67], and wormholes [47, 68, 69]. Also in this regard, some thought has been given to how observational data could be used to constrain \(f(\mathbb{Q})\) gravity [69]. The beyond-GR stellar solutions could play an important role here. However, \(f(\mathbb{Q})\) gravity, or teleparallel theories of gravity in general, have also stirred up new challenges. As a particular example we mention the Hamiltonian analysis of \(f(\mathbb{Q})\) gravity, which needs to overcome certain technical challenges that may require new techniques [70, 71, 72]. 
This review is dedicated to a pedagogical introduction to the subject of teleparallel theories of gravity and their extensions. The next two sections cover the necessary mathematical foundations, which are needed to formulate, understand, and work with teleparallel theories of gravity. Section 4 discusses the geometrical trinity of gravity in detail. In particular, we cover Einstein's original formulation of GR, the Teleparallel Equivalent of GR (TEGR), the Symmetric Teleparallel Equivalent of GR (STEGR), Coincident GR (CGR), the General Teleparallel Equivalent of GR (GTEGR), theories of gravity which renounce the flatness condition, and finally we also discuss matter coupling. In section 5 we turn to modified theories of gravity, focusing mostly on general quadratic extensions of TEGR and STEGR. Non-linear extensions such as \(f(\mathcal{R})\), \(f(\mathbb{T})\), and \(f(\mathbb{G})\) are discussed only tangentially. An exception is made for \(f(\mathbb{Q})\) gravity, which is the main subject of section 6. A particular focus is laid on cosmology, black holes, and the Hamiltonian analysis as well as the open question regarding how many degrees of freedom the theory propagates. Finally, we conclude with a summary in section 7. ## 2 Fundamentals of Metric-Affine Geometries This review is dedicated to the geometrical trinity of gravity and its extensions, with a particular focus on \(f(\mathbb{Q})\) gravity. It is therefore indispensable to first talk about the geometric foundations which underpin these different descriptions of gravity. Our objective is to provide a didactical overview of the basic concepts of metric-affine geometry needed to formulate, understand, and work with the geometric trinity of gravity and its various extensions. We do not strive for mathematical rigour or completeness and refer readers interested in mathematical aspects to the literature [73, 74, 75, 76, 77]. Our approach is to start from the most basic structure--a bare manifold with neither a metric nor a connection nor any other field defined on it--and to introduce step by step concepts and structures. The aim is to illustrate the meaning and physical relevance of each concept. This step-by-step approach also serves the purpose of highlighting at which point it is necessary to introduce new structures--such as a metric or a connection--in order to deepen our description of the physical world. ### Manifolds, Diffeomorphisms, Curves, and Scalar Fields The world we inhabit seems to be four-dimensional and what we call "spacetime" is, at least in classical physics, well-described by a "four-dimensional continuum" in the sense that we need four numbers to label events. In pre-relativistic physics as well as in special relativistic physics, it is assumed that there is indeed a one-to-one correspondence between spacetime events and the topological space \(\mathbb{R}^{4}\). Thus, a fixed spacetime topology is _postulated_. However, General Relativity (GR) teaches us that spacetime is dynamical and governed by its own field equations, in stark contrast to the absolute space and time of Newtonian physics or the rigid Minkowski spacetime of special relativity. Assuming any global properties of spacetime, such as its topology, would thus severely limit the possible solutions to Einstein's field equations and hide a wealth of interesting physical phenomena from us. No black hole or cosmological solutions could be found under such restrictive assumptions. To overcome this obstacle, we introduce the concept of a **manifold**. 
As will be familiar to most readers, a real \(n\)-dimensional manifold \(\mathcal{M}\) can be thought of as a space which "locally looks like Euclidean space \(\mathbb{R}^{n}\)". To be slightly more precise, \(\mathcal{M}\) is a **topological space** which is _locally homeomorphic_ to the _topological space_\(\mathbb{R}^{n}\). It is important to distinguish the topological space \(\mathbb{R}^{n}\) from the vector space \(\mathbb{R}^{n}\). In the former we can talk about points \(p\) and their neighbourhoods, while in the latter we have a space with points in it which also satisfy certain axioms, namely the axioms governing how to do "computations" with these points, such as adding them together and multiplying them by scalars. In other words, supplementing the topological space \(\mathbb{R}^{n}\) with vector space axioms turns points \(p\) into vectors \(\vec{p}\), loosely speaking. In our current context, however, we are only interested in the topological aspects. Vectors will concern us in the next subsection. What our loose definition of a manifold means is therefore the following: The manifold \(\mathcal{M}\) is a space inhabited by **points**\(p\). Since \(\mathcal{M}\) is a topological space, the notion of neighbourhood is well-defined, which allows us to talk about what is happening "locally", i.e., in "close proximity" of the point \(p\). By definition, there is a local homeomorphism, i.e., a map from one topological space to another topological space, which maps \(p\) and the points in its neighbourhood to a point and its neighbourhood in \(\mathbb{R}^{n}\). Since in \(\mathbb{R}^{n}\) we have a standard way of labelling each point unambiguously by \(n\) numbers by laying out a coordinate grid, we now have a method to assign **coordinates** to the points in \(\mathcal{M}\). Put simply: A bare manifold \(\mathcal{M}\), i.e., a manifold _without_ any additional structure, allows us first and foremost to assign coordinates to points \(p\). These points have the physical interpretation of **spacetime events**. Of course, the assignment of coordinates is not unique. Even though we have a standard way of labelling points with \(n\) numbers in \(\mathbb{R}^{n}\), two different persons might choose two different coordinate grids to do so. Let us denote a **coordinate system** by \(\{x^{\mu}\}\). In order to relate one coordinate system to another one, we introduce the concept of a **change of coordinates**. This is really a special case of the more general concept of a **diffeomorphism**\(\phi\), which is a smooth (i.e., infinitely differentiable) map \(\phi:\mathcal{N}\rightarrow\mathcal{M}\) between the manifolds \(\mathcal{N}\) and \(\mathcal{M}\). A change of coordinates is then a diffeomorphism \(\phi:\mathcal{M}\rightarrow\mathcal{M}\) between \(\mathcal{M}\) and itself, which also has a smooth inverse and which maps \(\{x^{\mu}\}\) onto \(\{x^{\prime\mu}\}\coloneqq\{\phi(x^{\mu})\}\). Two points are worth emphasizing: In general, and in contrast with Newtonian or special relativistic physics, we need more than one coordinate system to cover a manifold \(\mathcal{M}\). In fact, this is already the case for simple manifolds such as the example of \(\mathbb{S}^{2}\) shown in Figure 1. In the technical jargon we say that we need an **atlas** in order to cover all of \(\mathcal{M}\) with coordinates. However, this technical point will play no role for us and we will always simply talk about the coordinate system \(\{x^{\mu}\}\). 
The second point is that coordinates have no intrinsic physical meaning and they only serve the purpose of labelling spacetime events. Ultimately, however, all physical observables have to be independent of the choice of coordinate system. A common example of a manifold is the sphere \(\mathbb{S}^{2}\). Figure 1 shows a picture of our world modelled as a two-dimensional sphere. By introducing longitude and latitude we can label points, i.e., locations on the \(2\)-sphere. However, longitude and latitude are just one particular example of a coordinate system, which is based on arbitrary choices2. Other coordinate systems could be chosen without having any substantial effect, since coordinate systems are a mere matter of convenience and convention. Given that coordinate transformations are generated by diffeomorphisms, which possess smooth inverses by definition, we can transform back and forth between coordinate systems without losing information. Thus, all coordinate systems are on equal footing, reinforcing the notion that there are no preferred coordinate systems. Footnote 2: The prime meridian, which defines \(0^{\circ}\) longitude, is defined as the one which passes through a certain point near the Royal Observatory in Greenwich, England. The equator is chosen to represent \(0^{\circ}\) latitude. So far, we only have the bare manifold \(\mathcal{M}\) at our disposal, without any additional structure or fields defined on it, and the concept of a diffeomorphism. There are two more concepts which are completely intrinsic Figure 1: The world modelled as the manifold \(\mathbb{S}^{2}\). Longitude and latitude, which are just a specific choice of coordinate system, allow us to label points (i.e., locations) on the \(2\)-sphere. (i.e., which do not require us to introduce any new structure) to \(\mathcal{M}\) and which can be constructed using maps: Curves and scalar fields. Curves provide us with a good model for **observers** and **test particles**. Mathematically, a **curve** is defined as a map \(\gamma:I\to\mathcal{M}\) from an interval \(I\subseteq\mathbb{R}\) into the manifold \(\mathcal{M}\). We say that a curve is **parametrized by \(\mathbf{s}\in\mathbf{I}\)**. What the map \(\gamma\) ultimately does is assign a point \(\gamma(s)\) in \(\mathcal{M}\) to every value of the parameter \(s\). Again, this concept is completely intrinsic to \(\mathcal{M}\). Figure 2(a) illustrates this concept and we emphasize that we cannot yet talk about "the shortest path between two points" (aka geodesics) since we have not yet introduced a metric. The concept of a metric is relegated to subsection 2.5 since, as we will see, many things can be done without having to resort to metrics. We can translate the rather abstract notion of a curve as a map from \(I\) to \(\mathcal{M}\) into the more familiar component language. All we need is the fact that (a) a coordinate system assigns to every point \(p\) a set of \(n\) numbers \(x^{\mu}(p)\) and that (b) a curve assigns to every parameter value \(s\) a point \(\gamma(s)\) in \(\mathcal{M}\). Thus, we can define the components of \(\gamma\) with respect to the coordinate system \(\{x^{\mu}\}\) as \[\gamma^{\mu}(s)\coloneqq x^{\mu}(\gamma(s))\,. \tag{2.1}\] For all our purposes we can always assume that the curve \(\gamma\) in question is differentiable. Therefore, we introduce for later convenience the shorthand notation \[\dot{\gamma}^{\mu}(s)\coloneqq\frac{\mathrm{d}\gamma^{\mu}(s)}{ \mathrm{d}s}\,. 
\tag{2.2}\] Now we turn to the second and last concept we can introduce on \(\mathcal{M}\) using a map: A **scalar field**\(\mathbf{f}\) is a map \(f:\mathcal{M}\to\mathbb{R}\). In simple words, the scalar field assigns to every point of \(\mathcal{M}\) a real number. The temperature field of Earth shown in Figure 2(b) is an example of a scalar field. Again, the concept is intrinsic to \(\mathcal{M}\) since we did not introduce any new structure. In the next subsection we show how scalar fields and curves help us in defining vector fields, 1-forms, and the spaces they live in, namely the tangent and co-tangent spaces. Since these spaces are derived from \(\mathcal{M}\) and other concepts intrinsic to \(\mathcal{M}\), we ultimately find that tensor fields are concepts purely intrinsic to \(\mathcal{M}\). Figure 2: _Curves and scalar fields are two important examples of maps between manifolds (here between \(\mathbb{R}\) and \(\mathcal{M}\)) which are of direct physical relevance. Curves are used to model observers and test particles, while scalar fields play an important role in inflationary cosmology, for instance._ We emphasize this point because in subsection 2.4 we will be forced for the first time to introduce a new structure which is not intrinsic or naturally present in \(\mathcal{M}\). This refers to the concept of a connection. Similarly, in subsection 2.5 we will be forced to recognize that also the metric is a concept which is not intrinsic or naturally present in \(\mathcal{M}\). The affine structure described by the connection and the metric structure of a manifold described by the metric tensor are both concepts which have to be stipulated separately. ### Vector Fields, Tensor Fields, and Densities Vector fields are omnipresent in physics and every physicist has an intuitive understanding as well as ample mental pictures of them. For instance, one picture that could come to mind is the one of a wind field on the surface of the Earth. Figure 3 shows such a wind field, represented by an arrow at every point on \(\mathbb{S}^{2}\). How do we translate this intuitive mental picture of a vector field into mathematical language? How can we give meaning to these arrows in a way which is intrinsic to the manifold \(\mathcal{M}\), i.e., in a way which does not refer to any structure that lies outside of \(\mathcal{M}\)? The key is to realize that a vector allows us to define the directional derivative of scalar fields. This idea combines the intuitive notion that a vector has a direction with an object which is intrinsically defined on the manifold, namely the scalar field \(f:\mathcal{M}\to\mathbb{R}\). In a given coordinate system, say \(\{x^{\mu}\}\), we can write the directional derivative of \(f\) (in this coordinate system) as \[v^{\mu}\partial_{\mu}f\,, \tag{2.3}\] where \(v^{\mu}\in C^{\infty}(\mathcal{M})\) are \(n\) smooth functions of the coordinates. It is common to introduce the notations \[v(f)\coloneqq v^{\mu}\partial_{\mu}f\] and \[v\coloneqq v^{\mu}\partial_{\mu}\,. \tag{2.4}\] The notion of directional derivative gives us the correct intuition to define vector fields. Let us temporarily forget the explicit coordinate-dependent expression (2.4). 
Rather, we focus on the key properties of the directional derivative and distill a set of axioms from it in order to define what we mean by a vector field on the manifold \(\mathcal{M}\): A **vector field**\(v\) on \(\mathcal{M}\) is a map which takes \(f\in C^{\infty}(\mathcal{M})\) as input, produces \(v(f)\in C^{\infty}(\mathcal{M})\) as output, and which satisfies Figure 3: The wind field on Earth is an example of a vector field. At each point on \(\mathbb{S}^{2}\) we can assign a little arrow, with magnitude and direction, which is tangential to \(\mathbb{S}^{2}\). \[\begin{array}{llll}\text{A1}&v(c\,f_{1}+f_{2})=c\,v(f_{1})+v(f_{2})&\text{ (Linearity)}\\ \text{A2}&v(f_{1}f_{2})=v(f_{1})\,f_{2}+f_{1}\,v(f_{2})&\text{(Leibniz rule)}\\ \text{A3}&(g\,v_{1}+v_{2})(f)=g\,v_{1}(f)+v_{2}(f)&\text{(Vector addition and scalar multiplication)}\end{array}\] for all scalar fields \(f,f_{1},f_{2},g\in C^{\infty}(\mathcal{M})\) and constants \(c\in\mathbb{R}\). Observe that the definition is independent of any coordinate system! The vector field is simply a linear map between smooth functions. The Leibniz rule captures the notion of differentiation inherent to the directional derivative. For concrete computations, it is nevertheless useful to have a coordinate-representation of a vector field. To that end, we define the **components of the vector field**\(v\) with respect to a coordinate system \(\{x^{\mu}\}\) as \[v^{\mu}\coloneqq v(x^{\mu})\,, \tag{2.5}\] where \(x^{\mu}\) is the \(\mu\)-th coordinate. If we remember the coordinate expression of the directional derivative (2.4) again, we see that the above definition of vector component is consistent with (2.4), which implies \[v(x^{\mu})=v^{\alpha}\partial_{\alpha}x^{\mu}=v^{\alpha}\delta^{\mu}{}_{ \alpha}=v^{\mu}\,. \tag{2.6}\] In physics, we often call \(v^{\mu}\) the vector field, rather than component of a vector field. However, it is important to remember that (i) vector fields are defined in a way which is independent of any coordinate system and (ii) the vector field \(v\) can have different components with respect to different coordinate systems (more on this below). The definition of vector field given above has the advantage of being coordinate-independent, but it is not clear how this idea relates to the intuitive conception that a vector field assigns an arrow to every point of \(\mathcal{M}\). To remedy that, we introduce the notion of **tangent vector**. Let \(\gamma:[0,1]\to\mathcal{M}\) be a curve on \(\mathcal{M}\) which is parametrized by \(s\) and let \(f\in C^{\infty}(\mathcal{M})\) be a smooth function (scalar field). Then consider the derivative \[\frac{\mathrm{d}}{\mathrm{d}s}f(\gamma(s))=\frac{\mathrm{d}\gamma^{\mu}}{ \mathrm{d}s}\frac{\partial}{\partial\gamma^{\mu}}f\equiv\dot{\gamma}^{\mu} \partial_{\mu}f\,, \tag{2.7}\] where we used the product rule and the fact that given a coordinate system \(x^{\mu}\), the curve \(\gamma\) has components \(\gamma^{\mu}\coloneqq x^{\mu}(\gamma)\). Looking at the right hand side of (2.7), it is clear that the derivative we have just computed satisfies our abstract definition of vector field. Moreover, it is intuitively clear (see Figure 4) that \(\frac{\mathrm{d}\gamma^{\mu}}{\mathrm{d}s}\) is a vector which is tangent to \(\gamma\). 
Thus, the left hand side of (2.7) simply gives us the directional Figure 4: On the process of generating tangent vectors: The smaller \(\epsilon\) becomes, the better the separation between \(\gamma(s)\) and \(\gamma(s+\epsilon)\) is described by the vector \(\epsilon\,\dot{\gamma}(s)\) tangential to the curve. derivative of \(f\) in the direction which is tangent to the curve \(\gamma\). The advantage of this computation is that it gives us a clear relation with arrows and thus with the intuitive notion of vectors we know from \(\mathbb{R}^{n}\). It is clear that (2.7) defines a vector field for every \(f\in C^{\infty}(\mathcal{M})\) and every curve \(\gamma\). Moreover, one can show [75] that every vector field \(v\in\mathcal{X}(\mathcal{M})\) can be represented as in (2.7). Before proceeding, it is useful to introduce some terminology and define tangent vectors in slightly more abstract terms. Recall that a vector field \(v\) is a map from \(C^{\infty}(\mathcal{M})\) to \(C^{\infty}(\mathcal{M})\). We define a **tangent vector at \(\mathbf{p}\)** to be a map \(v_{p}\) from \(C^{\infty}(\mathcal{M})\) to \(\mathbb{R}\). This is achieved by evaluating the vector field \(v\) at the point \(p\in\mathcal{M}\), \[v_{p}:C^{\infty}(\mathcal{M})\to\mathbb{R}\] \[v_{p}(f)\coloneqq\left.v(f)\right|_{p}\,. \tag{2.8}\] In other words, a tangent vector \(v_{p}\) is simply obtained by evaluating the smooth function \(v(f)\) at the point \(p\), thus giving us a real number. The **set of all tangent vectors at \(\mathbf{p}\)** is called **tangent space at \(\mathbf{p}\)** and denoted by \(T_{p}\mathcal{M}\). A two dimensional visualisation of this concept is given in Figure 5 below. The space \[T\mathcal{M}\coloneqq\bigsqcup_{p\in\mathcal{M}}T_{p}\mathcal{M} \equiv\bigcup_{p\in\mathcal{M}}\{p\}\times T_{p}\mathcal{M} \tag{2.9}\] is called the **tangent bundle** and can be thought of as the collection of all tangent spaces at every point of \(\mathcal{M}\). We sometimes distinguish this from the **set of all smooth vector fields of \(\mathcal{M}\)** which we denote by \(\mathcal{V}(\mathcal{M})\). The distinction is not of great importance to us. What is important, however, is that the tangent space at \(p\), i.e., \(T_{p}\mathcal{M}\) is a **real**, \(n\)**-dimensional vector space**. This means that given two elements, say \(v_{p}\) and \(u_{p}\) of \(T_{p}\mathcal{M}\), we can do everything we can do with regular vectors in \(\mathbb{R}^{n}\). All elements of \(T_{p}\mathcal{M}\) follow the rules of vector addition, multiplication by scalars, etc. However, what we have _not_ yet defined, is a notion of scalar product. In Euclidean geometry, a scalar product Figure 5: _The tangent space \(T_{p}\mathcal{M}\) at \(p\) is obtained by considering all linear independent vectors which are tangent to curves \(\gamma\) passing through \(p\in\mathcal{M}\)._ takes two vectors as input and produces a real number as output. This real number comes attached with geometric meaning, since it provides us with a measure of angles between vectors and a measure for the magnitude of vectors. Generalizing this notion requires us to introduce a metric, which we will do in subsection 2.5. Recall, however, that we know a second procedure from linear algebra which allows us to produce a number out of a vector. Namely, we can apply a linear functional, or, in other terms, pair a vector with a dual vector. This leads us to the concept of **1-form**, which are sometimes also referred to as **co-vectors**. 
Since \(T_{p}\mathcal{M}\) is a real, \(n\)-dimensional vector space, it automatically possesses a real, \(n\)-dimensional **dual space \(\mathbf{T^{*}\mathcal{M}}\)**. This space is also called **co-tangent space at \(\mathbf{p}\)** and it consists of **linear functionals**. We recall that a linear functional \(\omega\) is a map which takes a vector as input and produces a real number as output. More formally we can define it as the linear map \[\omega:T_{p}\mathcal{M} \to\mathbb{R}\] \[v \mapsto\left\langle\omega,v\right\rangle\in\mathbb{R}\,, \tag{2.10}\] where \(\omega\) is called a **1-form** and the bracket \(\left\langle\cdot,\cdot\right\rangle\) symbolizes the pairing of a \(1\)-form with a vector. Given a coordinate system \(\{x^{\mu}\}\), we can define the components of \(\omega\) as \[\omega_{\mu}\coloneqq\left\langle\omega,\partial_{\mu}\right\rangle, \tag{2.11}\] i.e., we obtain the components by evaluating the linear functional on the basis elements of \(T_{p}\mathcal{M}\). Since \(\omega\) is really a _linear_ map, we obtain for the pairing of \(v\) with \(\omega\) the following coordinate expression: \[\left\langle\omega,v\right\rangle =\left\langle\omega,v^{\mu}\partial_{\mu}\right\rangle\] \[=v^{\mu}\left\langle\omega,\partial_{\mu}\right\rangle\] \[=v^{\mu}\omega_{\mu}\,. \tag{2.12}\] In the first line we simply expanded \(v\) in its basis, then we used the linearity of \(\left\langle\cdot,\cdot\right\rangle\), and finally the definition of \(1\)-form components we have just given. Notice that the contraction \(\omega_{\mu}v^{\mu}\) does _not_ require a metric: The components of \(v\) are naturally defined with an upper index, while the components of \(\omega\) are naturally defined with a lower index. Given a coordinate system \(\{x^{\mu}\}\), we can define the basis co-vectors of \(T_{p}^{*}\mathcal{M}\) as \(\mathrm{d}x^{\mu}\) and write the \(1\)-form as \[\omega\coloneqq\omega_{\mu}\mathrm{d}x^{\mu}\,. \tag{2.13}\] These basis elements have to satisfy \[\left\langle\mathrm{d}x^{\mu},\partial_{\nu}\right\rangle=\delta^{\mu}{}_{\nu} \tag{2.14}\] in order to be able to reproduce (2.12) and be consistent with the definitions we have given so far. Observe that we have defined vector fields as well as \(1\)-forms as _linear maps_. This fact allows us to define more general tensors as **multilinear maps**. To do so, we define a **tensor of type \((\mathbf{p},\mathbf{q})\)** to be a multilinear map \[S:\underbrace{T\mathcal{M}\otimes\cdots\otimes T\mathcal{M}}_{p-\text{times}} \otimes\underbrace{T^{*}\mathcal{M}\otimes\cdots\otimes T^{*}\mathcal{M}}_{q- \text{times}}\to\mathbb{R} \tag{2.15}\] which takes \(p\) vectors and \(q\) co-vectors as input and produces a real number. This is sometimes written as \[S(v_{1},\ldots,v_{p},\omega_{1},\ldots,\omega_{q})\,. 
\tag{2.16}\] Since \(S\) is a _multilinear map_, i.e., since \(S\) is linear in every one of its \(p+q\) slots, it follows that in a coordinate system \(\{x^{\mu}\}\) we can write \[S(v_{1},\ldots,v_{p},\omega_{1},\ldots,\omega_{q}) =v_{1}^{\mu_{1}}\cdots v_{p}^{\mu_{p}}(\omega_{1})_{\nu_{1}} \cdots(\omega_{q})_{\nu_{q}}S(\partial_{\mu_{1}},\ldots,\partial_{\mu_{p}}, \mathrm{d}x^{\nu_{1}},\ldots,\mathrm{d}x^{\nu_{q}})\] \[=v_{1}^{\mu_{1}}\cdots v_{p}^{\mu_{p}}(\omega_{1})_{\nu_{1}} \cdots(\omega_{q})_{\nu_{q}}S_{\mu_{1}\cdots\mu_{p}}^{\nu_{1}\cdots\nu_{q}}\,, \tag{2.17}\] where in the last line we defined the components of \(S\) as \[S_{\mu_{1}\cdots\mu_{p}}^{\nu_{1}\cdots\nu_{q}}\coloneqq S(\partial_{\mu_{1}}, \ldots,\partial_{\mu_{p}},\mathrm{d}x^{\nu_{1}},\ldots,\mathrm{d}x^{\nu_{q}})\,. \tag{2.18}\] Due to their multilinearity, tensors have a very characteristic behaviour under changes of coordinates. We define a **change of coordinates** as a diffeomorphism which maps the coordinates \(x^{\mu}\) to the new coordinates \(x^{\prime\mu}(x)\). We will sometimes use the shorthand notation \(x^{\mu}\mapsto x^{\prime\mu}(x)\). One can easily deduce that under such a change of coordinates partial derivatives transform as \[\frac{\partial}{\partial x^{\mu}}=\frac{\partial x^{\prime\lambda}}{\partial x ^{\mu}}\frac{\partial}{\partial x^{\prime\lambda}}\eqqcolon J^{\lambda}{}_{ \mu}\frac{\partial}{\partial x^{\prime\lambda}}\,, \tag{2.19}\] where in the last equation we have introduced the **Jacobian matrix**\(J^{\mu}{}_{\nu}\), defined as \[J^{\mu}{}_{\nu}\coloneqq\frac{\partial x^{\prime\mu}}{\partial x^{\nu}}\,. \tag{2.20}\] Since \(x^{\prime\mu}\) is generated from \(x^{\mu}\) via a diffeomorphism, the Jacobian is never degenerate. This means it always possesses a well-defined inverse \[(J^{-1})^{\mu}{}_{\nu}\coloneqq\frac{\partial x^{\mu}}{\partial x^{\prime\nu} }\,. \tag{2.21}\] Now recall that we defined a vector field in a manner which is manifestly coordinate independent. Thus, we should have \[v=v^{\mu}\partial_{\mu}\overset{!}{=}v^{\prime\mu}\partial_{\mu}^{\prime}\,, \tag{2.22}\] where \(v^{\prime\mu}\) and \(\partial_{\mu}^{\prime}\) are the vector components and basis elements in the coordinate system \(\{x^{\prime\mu}\}\). Since we know how partial derivatives transform under changes of coordinates, it follows that \[v^{\prime\nu}\partial_{\nu}^{\prime}=v^{\mu}J^{\nu}{}_{\mu}\partial_{\nu}^{ \prime}=v^{\mu}\frac{\partial x^{\prime\nu}}{\partial x^{\mu}}\partial_{\nu}^ {\prime}\qquad\implies\qquad v^{\prime\nu}=v^{\mu}\frac{\partial x^{\prime\nu }}{\partial x^{\mu}}\,. \tag{2.23}\] In other words, the components of the vector field in the new coordinate system are obtained by multiplying the old components by the Jacobian matrix, \[v^{\prime\nu}=J^{\nu}{}_{\mu}v^{\mu}\,. \tag{2.24}\] The transformation behaviour of \(1\)-forms follows now from simple considerations. Since we defined \(1\)-forms in a coordinate independent manner, and since they map vectors to real numbers, we have \[\langle\omega,v\rangle=\omega_{\mu}v^{\mu}\overset{!}{=}\omega_{\mu}^{\prime} v^{\prime\mu}\,. \tag{2.25}\] Using (2.24), it then follows that \[\omega_{\mu}(J^{-1})^{\mu}{}_{\nu}v^{\prime\nu}=\omega_{\mu}^{\prime}v^{ \prime\mu}\qquad\implies\qquad\omega_{\nu}^{\prime}=(J^{-1})^{\mu}{}_{\nu} \omega_{\mu}\,. \tag{2.26}\] Knowing the transformation behaviour of vectors and \(1\)-forms immediately allows us to work out the transformation behaviour of tensors. 
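As a concrete illustration of the transformation rules (2.24) and (2.26), consider the familiar change from Cartesian coordinates \((x,y)\) to polar coordinates \((r,\varphi)\) on \(\mathbb{R}^{2}\), with \(x=r\cos\varphi\) and \(y=r\sin\varphi\). This is a standard example, added here purely for concreteness. The Jacobian matrix and its inverse read \[J^{\mu}{}_{\nu}=\frac{\partial x^{\prime\mu}}{\partial x^{\nu}}=\begin{pmatrix}\cos\varphi&\sin\varphi\\ -\frac{\sin\varphi}{r}&\frac{\cos\varphi}{r}\end{pmatrix}\,,\qquad(J^{-1})^{\mu}{}_{\nu}=\frac{\partial x^{\mu}}{\partial x^{\prime\nu}}=\begin{pmatrix}\cos\varphi&-r\sin\varphi\\ \sin\varphi&r\cos\varphi\end{pmatrix}\,.\] According to (2.24), a vector with Cartesian components \((v^{x},v^{y})\) has polar components \[v^{\prime r}=\cos\varphi\,v^{x}+\sin\varphi\,v^{y}\,,\qquad v^{\prime\varphi}=\frac{1}{r}\left(-\sin\varphi\,v^{x}+\cos\varphi\,v^{y}\right)\,,\] while, according to (2.26), a \(1\)-form picks up the inverse Jacobian, \[\omega^{\prime}_{r}=\cos\varphi\,\omega_{x}+\sin\varphi\,\omega_{y}\,,\qquad\omega^{\prime}_{\varphi}=-r\sin\varphi\,\omega_{x}+r\cos\varphi\,\omega_{y}\,.\] Each upper index thus contributes one factor of \(J\) and each lower index one factor of \(J^{-1}\); this is precisely the pattern that generalizes to tensors of arbitrary type.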
All we have to do is exploit their multilinearity in order to find \[S^{\prime\mu_{1}\dots\mu_{p}}{}_{\nu_{1}\dots\nu_{q}}=J^{\mu_{1}}{}_{ \alpha_{1}}\cdots J^{\mu_{p}}{}_{\alpha_{p}}(J^{-1})^{\beta_{1}}{}_{\nu_{1}} \cdots(J^{-1})^{\beta_{q}}{}_{\nu_{q}}\,S^{\alpha_{1}\cdots\alpha_{p}}{}_{\beta _{1}\cdots\beta_{q}}\,. \tag{2.27}\] As a last concept, we introduce **tensor densities.** A tensor density is a tensor (this includes scalar fields, which are tensors of type \((0,0)\)) which does _not_ transform according to (2.27), because it picks up an even or odd power of the determinant of the Jacobian. Concretely, a **tensor density \(\tilde{\mathbf{S}}\) of weight \(w\)** transforms as \[\tilde{S}^{\alpha_{1}\cdots\alpha_{p}}{}_{\beta_{1}\cdots\beta_{q} }=(\det(J))^{w}\;(J^{-1})^{\alpha_{1}}{}_{\mu_{1}}\cdots(J^{-1})^{\alpha_{p}}{ }_{\mu_{p}}J^{\nu_{1}}{}_{\beta_{1}}\cdots J^{\nu_{q}}{}_{\beta_{q}}\,\tilde{ S}^{\prime\mu_{1}\dots\mu_{p}}{}_{\nu_{1}\dots\nu_{q}}\,, \tag{2.28}\] where \(w\) is called the **density weight**. Notice our convention for defining the weight: The untransformed tensor density \(\tilde{S}\) is on the left of this equation, while on the right we have the transformed tensor density \(\tilde{S}^{\prime}\) together with the Jacobian matrices and, importantly, the Jacobian determinant. Only in this form do we read off the density weight \(w\). Notice that the weight can be positive, negative, or zero. A tensor density of weight zero is simply an ordinary tensor. Also, our convention is to denote tensor densities with a tilde on top, in order to highlight their special transformation behaviour. We only make an exception for Lagrangian densities \(\mathcal{L}\), Hamiltonian densities \(\mathcal{H}\), and the determinant of the metric, \(g\). Tensor densities play an important role when it comes to integration on manifolds. In order to guarantee that an integral constructed from tensorial objects is independent of the coordinate system we chose to represent these quantities in (and which we chose to perform the integration), the integrand has to transform as a scalar density of weight \(w=+1\). We will later see that the square root of the metric determinant, \(\sqrt{|g|}\), transforms as a tensor density of weight \(w=+1\). Thus, integrals of the form \[\int_{\mathcal{M}}\sqrt{|g|}\,f\,\mathrm{d}^{n}x\,, \tag{2.29}\] where \(f\) is a scalar field, are coordinate-independent. Moreover, we will also encounter other tensor densities when we construct action functionals for teleparallel theories of gravity in section 4. ### The Flow of a Vector Field and the Lie Derivative In the previous subsection we mentioned that every vector field can be understood as the tangent vector to some curve. Indeed, if we are given a vector field \(v\), we can find the corresponding curve by solving the first order differential equation \[\dot{\gamma}(t)=v_{\gamma(t)}\,, \tag{2.30}\] where \(v_{\gamma(t)}\) is the vector \(v\) evaluated at the point \(\gamma(t)\). Since this is a first order ordinary differential equation, a solution always exists (at least locally). We call the curve \(\gamma\) the **integral curve of \(v\)**. Globally, the integral curve may not exist because \(\gamma\) can diverge in a finite amount of time. Nevertheless, locally we can visualize the vector field with its integral curves as in Figure 6. Qualitatively speaking, the integral curves describe the flow of some fluid, while \(v\) assigns a velocity vector to each point in that fluid. 
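For instance (a simple illustrative example, added for concreteness), on \(\mathbb{R}^{2}\) the vector field \(v=-y\,\partial_{x}+x\,\partial_{y}\) turns equation (2.30) into \(\dot{x}(t)=-y(t)\), \(\dot{y}(t)=x(t)\), whose integral curve through the point \((x_{0},y_{0})\) is \[\gamma(t)=\left(x_{0}\cos t-y_{0}\sin t,\;x_{0}\sin t+y_{0}\cos t\right)\,,\] i.e., a circle around the origin. The associated flow simply rotates every point of the plane by the angle \(t\), exactly like a fluid being stirred around by \(v\).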
This idea of flow can also be expressed as a diffeomorphism which takes a point \(p\in\mathcal{M}\) as input, and maps it to some other point in \(\mathcal{M}\). This represents the movement or flow of \(p\) on the manifold. To make this idea concrete, we introduce a \(1\)-parameter family of diffeomorphisms \[\phi_{t}:\mathbb{R}\times\mathcal{M} \to\mathcal{M}\] \[(t,p) \mapsto\phi_{t}(p) \tag{31}\] with \(\phi_{0}=\mathrm{id}\) and \(\phi_{s}\circ\phi_{t}=\phi_{s+t}\) for all \(s,t\in\mathbb{R}\). Importantly, if we work in the components language which is common in the physicist's literature, we have for a point \(p\) with coordinates \(x^{\mu}\) \[\phi_{t}^{\mu}(x)\coloneqq\phi_{t}(x^{\mu}(p))\] and \[\phi_{t=0}^{\mu}(x)=x^{\mu}(p)\,. \tag{2.32}\] Using this \(1\)-parameter family of diffeomorphisms, we can re-write equation (2.30) as \[\frac{\mathrm{d}}{\mathrm{d}t}\phi_{t}(p)=v_{\phi_{t}(p)}\,. \tag{2.33}\] The family \(\phi_{t}\) is called the **flow generated by \(\mathbf{v}\)**. Since we based the concept of flow on diffeomorphisms, it is easy to see that a flow not only affects the points on a manifold, but also tensors defined on it. The simplest example is the one of a scalar field \(f\), which is carried along by the flow \(\phi_{t}\) generated by the vector field \(v\). The carried-along \(f\), which we denote3 by \(\phi_{t}^{*}f\), is defined as Footnote 3: Technically speaking, \(\phi_{t}^{*}f\) is the **pull-back of \(\mathbf{f}\) by \(\mathbf{\phi}_{t}\)**. \[(\phi_{t}^{*}f)(p)\coloneqq f(\phi_{t}(p))\,. \tag{2.34}\] In practice, we are often interested in the infinitesimal action of a flow on a vector field. Thus, we may expand \(\phi_{t}^{*}f\) around \(t=0\) up to first order. The first order derivative in this expansion is given by \[\frac{\mathrm{d}}{\mathrm{d}t}(\phi_{t}^{*}f)(p)\bigg{|}_{t=0} =\left.\frac{\mathrm{d}}{\mathrm{d}t}f(\phi_{t}(p))\right|_{t=0}= \left.\frac{\partial f}{\partial\phi_{t}^{\mu}}\frac{\mathrm{d}\phi_{t}^{\mu} }{\mathrm{d}t}\right|_{t=0}\] \[=v_{p}^{\mu}\partial_{\mu}f=v_{p}(f)\,, \tag{2.35}\] where we used equations (2.33) and (2.4), as well as \(\left.\frac{\partial f}{\partial\phi_{t}^{\mu}}\right|_{t=0}=\frac{\partial f} {\partial x^{\mu}}\), which follows directly from (2.32). Thus, to first order, a scalar field which is carried along by a flow changes by the directional derivative generated by the vector field \(v\). More concretely: \[(\phi_{t}^{*}f)(p)=f(p)+t\,v_{p}(f)+\mathcal{O}(t^{2})\,. \tag{2.36}\] Similarly, we may ask how a vector field \(u\) changes when it is carried along by the flow generated by \(v\). We could give an abstract definition of the carried-along vector field \(\phi_{t}^{*}u\) in terms of pull-backs. However, to keep the discussion lighter, we point out that in the components language, this essentially amounts to performing a change of coordinates: \[(\phi_{t}^{*}u)(p)\quad\leadsto\quad\frac{\partial\phi_{-t}^{\mu}(p)}{\partial x ^{\nu}}u^{\nu}(\phi_{t}(p))\;. \tag{2.37}\] If we consider an infinitesimal change of coordinates, we can expand the above expression to first order in \(t\) around \(t=0\). 
The first order term in this expansion is then given by \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial\phi_{-t}^{\mu }(p)}{\partial x^{\nu}}u^{\nu}(\phi_{t}(p))\right)\bigg{|}_{t=0} =\left.\frac{\mathrm{d}}{\mathrm{d}t}u^{\nu}(\phi_{t}(p))\right| _{t=0}\delta^{\mu}{}_{\nu}+\left.\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial \phi_{-t}^{\mu}(p)}{\partial x^{\nu}}\right|_{t=0}u^{\nu}(p)\] \[=\left.\frac{\partial u^{\mu}(p)}{\partial x^{\lambda}}\left. \frac{\mathrm{d}\phi^{\lambda}(p)}{\mathrm{d}t}\right|_{t=0}-\frac{\partial v ^{\mu}(p)}{\partial x^{\nu}}u^{\nu}(p)\] \[=\frac{\partial u^{\mu}(p)}{\partial x^{\lambda}}v^{\lambda}(p)- \frac{\partial v^{\mu}(p)}{\partial x^{\nu}}u^{\nu}(p)\] \[=v^{\lambda}\partial_{\lambda}u^{\mu}-u^{\nu}\partial_{\nu}v^{ \mu}\,. \tag{2.38}\] In components free notation, we can introduce the **Lie bracket**\(\boldsymbol{[v,u]}\) to express the above result more concisely: \[[v,u]\coloneqq vu-uv\coloneqq v^{\mu}\partial_{\mu}u^{\nu} \partial_{\nu}-u^{\nu}\partial_{\nu}v^{\mu}\partial_{\mu}\,. \tag{2.39}\] If this expression seems cryptic, remember that vector fields act on scalars and they produce their directional derivative. Thus, a more precise way to write the Lie bracket would be \[[v,u](f)\coloneqq v(u(f))-u(v(f))=v^{\mu}\partial_{\mu}u^{\nu} \partial_{\nu}f-u^{\nu}\partial_{\nu}v^{\mu}\partial_{\mu}f\,. \tag{2.40}\] This allows us to interpret the Lie bracket in a neat geometric fashion: First of all, notice that \(v(f)\) is a scalar. In fact, \(v(f)=v^{\mu}\partial_{\mu}f\) is just the directional derivative of \(f\) along \(v\). Because \(v(f)\) is a scalar, the operation \(u(v(f))\) is well-defined. It simply means taking the directional derivative of the scalar \(v(f)\) along the direction \(u\). Now recall that if \(f\) is dragged along by a flow \(\phi_{t}\) for an infinitesimal amount of "time" \(t\), it changes to first order by \(t\,v(f)\). Thus, \(u(v(f))\) tells us by how much \(f\) changes when we first flow along \(v\) for a little while and then along \(u\). Conversely, \(v(u(f))\) tells us about the change in \(f\) if we first flow it along \(u\) and then along \(v\) for a small amount of time. The Lie bracket is then simply a measure for the discrepancy between the two procedures. Since \(v(u(f))\) and \(u(v(f))\) land on different points, we can visualize the situation by a parallelogram which does not close (cf. Figure 7). That parallelograms built in this way do not close has nothing to do with curvature or torsion and is true even in Euclidean geometry. The concept of determining how much a tensor field changes to first order when it is dragged along by a flow generated by a vector field is sufficiently important that it deserves its own name: It is called a **Lie derivative \(\boldsymbol{\mathcal{L}_{v}}\) along \(\boldsymbol{v}\)**. Formally, the Lie derivative is defined by pulling-back tensor fields by the flow \(\phi_{t}\) generated by \(v\). 
Since this is again essentially tantamount to considering an infinitesimal change of coordinates, one can work out that the coordinate expression for the Lie derivative of a \((p,q)\) tensor reads \[\mathcal{L}_{v}T^{\mu_{1}\cdots\mu_{p}}{}_{\nu_{1}\cdots\nu_{q}} =v^{\lambda}\partial_{\lambda}T^{\mu_{1}\cdots\mu_{p}}{}_{\nu_{1}\cdots\nu_{q}}-\partial_{\lambda}v^{\mu_{1}}T^{\lambda\cdots\mu_{p}}{}_{\nu_{1}\cdots\nu_{q}}-\cdots-\partial_{\lambda}v^{\mu_{p}}T^{\mu_{1}\cdots\lambda}{}_{\nu_{1}\cdots\nu_{q}}\] \[\quad+\partial_{\nu_{1}}v^{\lambda}T^{\mu_{1}\cdots\mu_{p}}{}_{\lambda\cdots\nu_{q}}+\cdots+\partial_{\nu_{q}}v^{\lambda}T^{\mu_{1}\cdots\mu_{p}}{}_{\nu_{1}\cdots\lambda}\,. \tag{2.41}\] The Lie derivatives of scalar and vector fields are contained as special cases: \[\mathcal{L}_{v}f =v(f)=v^{\mu}\partial_{\mu}f\] \[\mathcal{L}_{v}u =[v,u]=v^{\mu}\partial_{\mu}u^{\nu}-u^{\mu}\partial_{\mu}v^{\nu}\,. \tag{2.42}\] The Lie derivative can also be generalized to tensor densities \(\tilde{T}\). Since the transformation law of tensor densities differs slightly from the one of regular tensors, one finds that the Lie derivative in components language acquires an additional term compared to (2.41): \[\mathcal{L}_{v}\tilde{T}^{\mu_{1}\cdots\mu_{p}}{}_{\nu_{1}\cdots\nu_{q}} =v^{\lambda}\partial_{\lambda}\tilde{T}^{\mu_{1}\cdots\mu_{p}}{}_{\nu_{1}\cdots\nu_{q}}-\partial_{\lambda}v^{\mu_{1}}\tilde{T}^{\lambda\cdots\mu_{p}}{}_{\nu_{1}\cdots\nu_{q}}-\cdots-\partial_{\lambda}v^{\mu_{p}}\tilde{T}^{\mu_{1}\cdots\lambda}{}_{\nu_{1}\cdots\nu_{q}}\] \[\quad+\partial_{\nu_{1}}v^{\lambda}\tilde{T}^{\mu_{1}\cdots\mu_{p}}{}_{\lambda\cdots\nu_{q}}+\cdots+\partial_{\nu_{q}}v^{\lambda}\tilde{T}^{\mu_{1}\cdots\mu_{p}}{}_{\nu_{1}\cdots\lambda}+w\,\big(\partial_{\lambda}v^{\lambda}\big)\,\tilde{T}^{\mu_{1}\cdots\mu_{p}}{}_{\nu_{1}\cdots\nu_{q}}\,, \tag{2.43}\] where, as we recall, \(w\) is the weight of the tensor density. Notice that (2.43) contains (2.41) as the \(w=0\) special case. Later, in subsection 3.6, we will see that the Lie derivative can also be defined for connections and, importantly, that the coordinate expression is _not_ given by either (2.41) or (2.43). Rather, it is given by (3.52).

Figure 7: Dragging \(f\) a little bit along \(v\) and then along \(u\) results in a different outcome than dragging it along \(u\) for a short while and then along \(v\). The Lie bracket measures the failure of the parallelogram obtained in this way to close.

### Covariant Derivatives and the Connection

For a scalar field \(f\), we defined the directional derivative as \(v(f)=v^{\mu}\partial_{\mu}f\). Since the field is directly defined on the smooth manifold \(\mathcal{M}\), there is no issue in giving a precise meaning to \(\partial_{\mu}f\). It simply amounts to the usual definition from multivariable calculus: \[\partial_{\mu}f\coloneqq\frac{\mathrm{d}f}{\mathrm{d}x^{\mu}}=\lim_{\epsilon\to 0}\frac{f(x^{1},\ldots,x^{\mu}+\epsilon,\ldots,x^{n})-f(x^{1},\ldots,x^{\mu},\ldots,x^{n})}{\epsilon}\,. \tag{2.44}\] Now that we have introduced vector fields and other tensors, we would like to define a similar notion of taking the derivative of a tensor in the direction of a vector field. A prime candidate for such a derivative is the Lie derivative, which we discussed in the previous subsection. It tells us how a tensor field changes when dragged infinitesimally along a flow generated by a vector field. However, at closer inspection it does not truly behave like a directional derivative.
For instance, if \(u_{p}\) and \(v_{p}\) are two vectors at \(p\) and \(w_{p}\coloneqq u_{p}+v_{p}\) their vector sum, then the directional derivative of a scalar field satisfies \[u_{p}(f)+v_{p}(f)=\left(u_{p}+v_{p}\right)(f)=w_{p}(f)\,, \tag{2.45}\] whereas the Lie derivative of a tensor \(T_{p}\) at \(p\) fails to have this property, \[\mathcal{L}_{u_{p}}T_{p}+\mathcal{L}_{v_{p}}T_{p}\neq\mathcal{L}_{u_{p}+v_{p}} T_{p}=\mathcal{L}_{w_{p}}T_{p}\,. \tag{2.46}\] Thus, the Lie derivative is not linear in this sense. In other words, in the case of a scalar field we can take the derivative in the direction \(v_{p}\), add the derivative in the direction \(u_{p}\) to it, and we are guaranteed that this is the same as if we had taken the derivative in the direction \(w_{p}=u_{p}+v_{p}\) to begin with. The Lie derivative does not behave in this way. Furthermore, the directional derivative \(v_{p}(f)\) only depends on the properties of \(v_{p}\) and \(f\) at the point \(p\). It is thus local in this sense. The Lie derivative, on the other hand, depends on the properties of \(v_{p}\) and \(T_{p}\) in a _neighbourhood_ of \(p\) and is, in this sense, slightly "non-local". To illustrate this point, we take and slightly adapt the nice example from [78]: Let us work in a coordinate chart \(\{x,y\}\) and consider the scalar field \(f(x,y)\) together with the vector fields \(u=\partial_{x}\), \(v=(y+1)\partial_{x}\), and \(w=\partial_{y}\). Clearly, if we evaluate \(u\) and \(v\) at the point \(p\) with coordinates \((x_{0},0)\), they agree: \[u|_{y=0}=\left.\partial_{x}\right|_{y=0}=\left.(y+1)\partial_{x}\right|_{y=0} =\left.v\right|_{y=0}\,. \tag{2.47}\] For the directional derivative of \(f\) in the directions \(u\) and \(v\) evaluated at \((x_{0},0)\) we thus find \[u(f)|_{y=0}=\partial_{x}f(x,0)\qquad\qquad\qquad\text{and}\qquad\qquad\left.v( f)\right|_{y=0}=\left.\partial_{x}f(x,0)\,. \tag{2.48}\] That is, even though we take the derivatives in different directions, they agree with each other because the vector fields happen to agree in that particular point. For the Lie derivative we find instead a disagreement: \[\mathcal{L}_{u}w|_{y=0} =\left.u^{\mu}\partial_{\mu}w^{\nu}\partial_{\nu}\right|_{y=0}- \left.w^{\nu}\partial_{\nu}u^{\mu}\partial_{\mu}\right|_{y=0}\] \[=\left.\delta^{\mu}x\partial_{\mu}\delta^{\nu}y\partial_{\nu} \right|_{y=0}-\left.\delta^{\nu}y\partial_{\nu}\delta^{\mu}x\partial_{\mu} \right|_{y=0}\] \[=0 \tag{2.49}\] versus \[\mathcal{L}_{v}w|_{y=0} =\left.v^{\mu}\partial_{\mu}w^{\nu}\partial_{\nu}\right|_{y=0}- \left.w^{\nu}\partial_{\nu}v^{\mu}\partial_{\mu}\right|_{y=0}\] \[=\left.(y+1)\delta^{\mu}x\partial_{\mu}\delta^{\nu}y\partial_{ \nu}\right|_{y=0}-\left.\delta^{\nu}y\partial_{\nu}\left(\left(y+1\right) \delta^{\mu}x\right)\partial_{\mu}\right|_{y=0}\] \[=-\partial_{x}\,. \tag{2.50}\] Hence, even though the vector fields \(u\) and \(v\) coincide at the point \((x_{0},0)\), the Lie derivatives do not agree! Since the Lie derivative has not the desired linearity and locality properties we look for in a directional derivative, we might be tempted to mimic the definition of derivative for scalar functions instead. Concretely, we might try to define the directional derivative of a vector field \(\nabla_{u}v\) at a point \(p\), as follows: \[\nabla_{u}v|_{p}\coloneqq\lim_{\epsilon\to 0}\frac{v_{p+\epsilon\,u}-v_{p}}{ \epsilon}\,. \tag{2.51}\] This poses two problems: 1. 
The point \(p\) lives in \(\mathcal{M}\), while the vector \(u_{p}\) lives in the tangent space \(T_{p}\mathcal{M}\). The manifold is just a topological space, i.e., a space in which addition of points is not even defined, while the tangent space is a vector space. Thus, the expression "\(p+\epsilon\,u\)" has no mathematical meaning. It is as if we are trying to add apples and oranges. 2. The second problem is that even if we could give meaning to "\(p+\epsilon\,u\)", we would be subtracting vectors which live in two different spaces. Let's say "\(p+\epsilon\,u\)" represents the point \(q\). Then we are effectively asking to compute \(v_{q}-v_{p}\). But \(v_{q}\) lives in \(T_{q}\mathcal{M}\), while \(v_{p}\) is a vector in \(T_{p}\mathcal{M}\). Again, it is like subtracting apples from oranges. It is important to highlight that the above definition of directional derivative of a vector field _does_ make sense in Euclidean geometry. The reason is that points in the _manifold_\(\mathbb{R}^{n}\) can be identified with vectors in the _vector space_\(\mathbb{R}^{n}\). Thus, the operation \(p+\epsilon\,u\), i.e., adding a vector to a point, becomes meaningful. Furthermore, the tangent space \(T_{p}\mathbb{R}^{n}\) is isomorphic to \(\mathbb{R}^{n}\) itself. Since this is true for any point in \(\mathbb{R}^{n}\), we find that all tangent spaces are isomorphic to each other. This gives rise to the usual notion of Euclidean geometry that we can add and subtract vectors at different points of space. Thus, \(v(q)-v(p)\) is a meaningful operation in Euclidean geometry because there is a canonical way of transporting vectors from point to point. In subsection 3.1 we will see how both issues can be solved by a direct approach. This will lead us to introduce the notion of **parallel transport**. In the present subsection, we shall follow a different strategy to overcome the obstacles. We emulate the axiomatic approach we already used to define the directional derivative of scalar fields. To begin with, we change terminology: Instead of referring to \(\nabla\) as directional derivative, we shall call it the **covariant derivative**\(\nabla\) from now on. We define the covariant derivative as map from the space of all vector fields on \(\mathcal{M}\), \(\mathcal{V}(\mathcal{M})\), times the space of all tensor fields on \(\mathcal{M}\), \(\mathcal{TM}\), into that same space. Symbolically, we want to define a map \[\nabla:\mathcal{V}(\mathcal{M})\times\mathcal{T}(\mathcal{M}) \rightarrow\mathcal{T}(\mathcal{M})\] \[(v,T) \mapsto\nabla_{v}T \tag{2.52}\] This map takes a vector field \(v\in\mathcal{V}(\mathcal{M})\) together with a tensor field \(T\in\mathcal{T}(\mathcal{M})\) as input and produces the tensor field \(\nabla_{v}T\in\mathcal{T}(\mathcal{M})\) as output. It has to do so obeying the following set of axioms: 1. \(\nabla_{v}f=v(f)\) for all \(v\in\mathcal{V}(\mathcal{M})\) and \(f\in C^{\infty}(\mathcal{M})\) 2. \(\nabla_{v}(c\,T_{1}+T_{2})=c\,\nabla_{v}T_{1}+\nabla_{v}T_{2}\) for all \(v\in\mathcal{V}(\mathcal{M})\), \(T_{1},T_{2}\in\mathcal{T}(\mathcal{M})\), and \(c\in\mathbb{R}\) 3. \(\nabla_{v}(T_{1}\otimes T_{2})=(\nabla_{v}T_{1})\otimes T_{2}+T_{1}\otimes( \nabla_{v}T_{2})\) for all \(v\in\mathcal{V}(\mathcal{M})\) and \(T_{1},T_{2}\in\mathcal{T}(\mathcal{M})\) 4. \(\nabla_{c\,v_{1}+v_{2}}T=c\,\nabla_{v_{1}}T+\nabla_{v_{2}}T\) for all \(v_{1},v_{2}\in\mathcal{V}(\mathcal{M})\), \(T\in\mathcal{T}(\mathcal{M})\), and \(c\in\mathbb{R}\). 
The first axiom simply makes sure that the covariant derivative, which is supposed to generalize the notion of directional derivative acting on scalars, agrees with the definition we have given in subsection 2.2. Axiom A2 captures the linearity of the directional derivative, while axiom A3 is the general version of the Leibniz rule. This axiom truly captures the essence of \(\nabla\) being a _differential_ operator. Notice that if in A3 we have \(T_{1}=f\), then because of \(f\otimes T_{2}\equiv f\,T_{2}\), we find as special case \[\nabla_{v}(fT)=\left(\nabla_{v}f\right)T+f\,\nabla_{v}T=v(f)\,T+f\, \nabla_{v}T\,, \tag{2.53}\] where we also made use of Al. Finally, axiom A4 captures the idea we have discussed further above. Namely, that the covariant derivative along \(c\,v_{1}+v_{2}\) should be the same as when computing it along \(c\,v_{1}\) and \(v_{2}\) separately and then summing the results. Notice that the Lie derivative also satisfies axioms AI (action on scalars), A2 (linearity in \(\mathcal{T}(\mathcal{M})\) argument), and A3 (Leibniz rule). What sets the covariant derivative apart from the Lie derivative is axiom A4, which is _not_ satisfied by the Lie derivative. Working with axioms might seem overly abstract, but it is actually quite simple to work out in a coordinate chart how the covariant derivative acts on vectors and \(1\)-forms. Once this is understood, it is straightforward to generalize its action to any tensor (density). Let us begin with deriving a coordinate expression for the covariant derivative of vector fields. To do so, we work with coordinates \(\{x^{\mu}\}\) and we introduce a basis \(\{e_{\mu}\}\coloneqq\{\partial/\partial x^{\mu}\}\) for \(T\mathcal{M}\). The covariant derivative of \(v=v^{\nu}e_{\nu}\) in the direction of \(u=u^{\mu}e_{\mu}\) can then be written as \[\nabla_{u}v \stackrel{{\text{\sc A4}}}{{=}}u^{\mu}\nabla_{e_{ \mu}}(v^{\nu}e_{\nu})\stackrel{{\text{\sc A3},\text{\sc A2}}}{{=} }u^{\mu}\left(\nabla_{e_{\mu}}(v^{\nu})e_{\nu}+v^{\nu}\nabla_{e_{\mu}}(e_{\nu} )\right)\] \[\stackrel{{\text{\sc A1}}}{{=}}u^{\mu}\left(\partial _{\mu}v^{\nu}e_{\nu}+v^{\nu}\nabla_{e_{\mu}}e_{\nu}\right)\,. \tag{2.54}\] In the first line we made use of axiom A4 and the Leibniz rule A3. We have also made implicit use of A2 when applying the Leibniz rule, since \(v^{\nu}e_{\nu}\) really represents a linear combination and the derivative acts on each term in that sum. Finally, we used that the components \(v^{\nu}\) of a vector field are just smooth functions, which allows us to apply AI in order to write \(\nabla_{e_{\mu}}(v^{\nu})=e_{\mu}(v^{\nu})=\partial_{\mu}v^{\nu}\). That is, the covariant derivative just becomes the directional derivative along \(e_{\mu}\) and since \(e_{\mu}=\partial/\partial x^{\mu}\) this simply gives us the coordinate derivatives of the scalar functions \(v^{\nu}\). Recall that we defined the covariant derivative as a map from \(\mathcal{V}(\mathcal{M})\times\mathcal{T}(\mathcal{M})\) to \(\mathcal{T}(\mathcal{M})\). However, the last line of (2.54) does not look like an element of \(\mathcal{T}(\mathcal{M})\) since the term \(v^{\nu}\nabla_{e_{\mu}}e_{\nu}\) is not a linear combination of basis elements \(e_{\mu}\). We have already used up all axioms to arrive at (2.54). Therefore, to remedy the situation, we have to introduce a new concept: The **affine connection**\(\Gamma\). 
Concretely, we demand that in a coordinate chart \(\{x^{\mu}\}\) and with respect to a basis \(\{e_{\mu}\}\) of \(T\mathcal{M}\) the \(n\times n\times n\) components of the affine connection satisfy \[\boxed{\nabla_{e_{\mu}}e_{\nu}=\Gamma^{\alpha}_{\phantom{\alpha}\mu\nu}e_{\alpha}} \tag{2.55}\] We can take this as the defining equation for the affine connection and use it to simplify equation (2.54) to \[\nabla_{u}v=u^{\mu}\left(\partial_{\mu}v^{\alpha}+\Gamma^{\alpha}_{\phantom{\alpha}\mu\nu}v^{\nu}\right)e_{\alpha}\,. \tag{2.56}\] This is manifestly an element of \(\mathcal{T}(\mathcal{M})\) and it has a recognizable form. In fact, we can simply read off the **component expression** for the covariant derivative of a vector field, which is \[\boxed{\nabla_{\mu}v^{\nu}=\partial_{\mu}v^{\nu}+\Gamma^{\nu}_{\phantom{\nu}\mu\lambda}v^{\lambda}} \tag{2.57}\] Let us briefly pause and comment on the role of equation (2.55). The salient point to notice is that the axioms A1-A4 do _not_ specify a unique covariant derivative operator \(\nabla\)! Rather, if someone hands us a concrete differential operator, we can check whether it satisfies the axioms and, if it does, we can use equation (2.55) to determine the coefficients of the affine connection. However, the logic can also be turned around, and the whole paradigm of teleparallel gravity hinges on this mathematical fact: If someone hands us an affine connection \(\Gamma^{\alpha}_{\phantom{\alpha}\mu\nu}\), we can _define_ a covariant derivative operator \(\nabla\) which satisfies the axioms. In fact, it suffices to specify \(\Gamma^{\alpha}{}_{\mu\nu}\) and to declare that equation (2.57) holds. This unambiguously defines the meaning of the operator \(\nabla\), and we know how to apply it to _any_ tensor field. Actually, we have not yet shown that the last part is true, i.e., we still need to show that saying how \(\nabla\) acts on a vector field is sufficient in order to know how it acts on all tensor fields. To do so, we also need to work out how \(\nabla\) acts on \(1\)-forms. Recall that \(1\)-forms live in the dual space \(T^{*}\mathcal{M}\) and thus define a linear map which maps vector fields to scalar fields according to \(\langle\omega,v\rangle=\omega_{\mu}v^{\mu}\). For the directional derivative of this particular scalar field we find \[u(\langle\omega,v\rangle)\overset{\text{A1}}{=}\nabla_{u}(\langle\omega,v\rangle)\overset{\text{A3}}{=}\langle\nabla_{u}\omega,v\rangle+\langle\omega,\nabla_{u}v\rangle\,, \tag{2.58}\] where we first made use of axiom A1 and then A3, the Leibniz rule. We can solve for the first term on the right hand side: \[\langle\nabla_{u}\omega,v\rangle=u(\langle\omega,v\rangle)-\langle\omega,\nabla_{u}v\rangle\,. \tag{2.59}\] Notice that we have reduced the task of finding the covariant derivative of \(\omega\) to computing the directional derivative of a scalar and the covariant derivative of a vector. Working again in a coordinate chart \(\{x^{\mu}\}\) and using (2.57), we can complete our task as follows: \[(\nabla_{u}\omega)_{\alpha}\,v^{\alpha} =u^{\mu}\partial_{\mu}\left(\omega_{\alpha}v^{\alpha}\right)-\omega_{\alpha}u^{\mu}\left(\partial_{\mu}v^{\alpha}+\Gamma^{\alpha}{}_{\mu\nu}v^{\nu}\right)\] \[=u^{\mu}(\partial_{\mu}\omega_{\alpha})v^{\alpha}+u^{\mu}\omega_{\alpha}\partial_{\mu}v^{\alpha}-\omega_{\alpha}u^{\mu}\left(\partial_{\mu}v^{\alpha}+\Gamma^{\alpha}{}_{\mu\nu}v^{\nu}\right)\] \[=u^{\mu}\left(\partial_{\mu}\omega_{\alpha}-\Gamma^{\lambda}{}_{\mu\alpha}\omega_{\lambda}\right)v^{\alpha}\,. \tag{2.60}\]
From this we can finally read off that the covariant derivative of a \(1\)-form, expressed in the component language, reads \[\boxed{\nabla_{\mu}\omega_{\alpha}=\partial_{\mu}\omega_{\alpha}-\Gamma^{\lambda}{}_{\mu\alpha}\omega_{\lambda}} \tag{2.61}\] All we have used to arrive at this result are the axioms A1-A4 and equation (2.57). Thus, the covariant derivative of a \(1\)-form is completely determined once we know what the covariant derivative of a vector field is. Once we know these two covariant derivatives, we can work out the coordinate expression for the covariant derivative of any tensor field, simply by application of the Leibniz rule: every upper index contributes a \(+\Gamma\) term and every lower index a \(-\Gamma\) term. The general result reads \[\boxed{\nabla_{\alpha}T^{\mu_{1}\dots\mu_{p}}{}_{\nu_{1}\dots\nu_{q}}=\partial_{\alpha}T^{\mu_{1}\dots\mu_{p}}{}_{\nu_{1}\dots\nu_{q}}+\Gamma^{\mu_{1}}{}_{\alpha\lambda}T^{\lambda\dots\mu_{p}}{}_{\nu_{1}\dots\nu_{q}}+\dots+\Gamma^{\mu_{p}}{}_{\alpha\lambda}T^{\mu_{1}\dots\lambda}{}_{\nu_{1}\dots\nu_{q}}-\Gamma^{\lambda}{}_{\alpha\nu_{1}}T^{\mu_{1}\dots\mu_{p}}{}_{\lambda\dots\nu_{q}}-\dots-\Gamma^{\lambda}{}_{\alpha\nu_{q}}T^{\mu_{1}\dots\mu_{p}}{}_{\nu_{1}\dots\lambda}} \tag{2.62}\] Let us stress again at this point that the axioms do not specify a unique operator \(\nabla\). Infinitely many covariant derivative operators exist and it is ultimately our choice which one we use. From a mathematical point of view, this also means that we have added a new structure. So far, everything we did could be defined on the manifold \(\mathcal{M}\) (curves, scalar fields) or on spaces derived from the manifold itself (vectors, \(1\)-forms, general tensors). However, defining a covariant derivative requires us to add something new by hand. Once we have selected an affine connection, we are working in the framework of an **affine geometry** \((\boldsymbol{\mathcal{M}},\boldsymbol{\Gamma})\). In the next subsection, where we introduce the metric tensor, we will finally arrive at metric-affine geometries. However, before we do so, we clarify a last point which sometimes causes confusion. The affine connection \(\Gamma^{\alpha}{}_{\mu\nu}\) carries three indices, but it should not be mistaken for a tensor! A connection is a different type of object than a \((1,2)\) tensor. To see this more explicitly, one should recall that tensors have a very simple transformation behaviour under changes of coordinates. For instance, a \((1,2)\) tensor \(S^{\alpha}{}_{\mu\nu}\) transforms as \[S^{\alpha}{}_{\mu\nu}\quad\mapsto\quad\tilde{S}^{\alpha}{}_{\mu\nu}=\frac{\partial\tilde{x}^{\alpha}}{\partial x^{\beta}}\frac{\partial x^{\rho}}{\partial\tilde{x}^{\mu}}\frac{\partial x^{\sigma}}{\partial\tilde{x}^{\nu}}S^{\beta}{}_{\rho\sigma} \tag{2.63}\] under the change of coordinates \(x^{\mu}\mapsto\tilde{x}^{\mu}(x)\). In contrast, an affine connection transforms under the same change of coordinates as \[\boxed{\Gamma^{\alpha}{}_{\mu\nu}\quad\mapsto\quad\tilde{\Gamma}^{\alpha}{}_{\mu\nu}=\frac{\partial\tilde{x}^{\alpha}}{\partial x^{\beta}}\frac{\partial x^{\rho}}{\partial\tilde{x}^{\mu}}\frac{\partial x^{\sigma}}{\partial\tilde{x}^{\nu}}\Gamma^{\beta}{}_{\rho\sigma}+\frac{\partial\tilde{x}^{\alpha}}{\partial x^{\lambda}}\frac{\partial^{2}x^{\lambda}}{\partial\tilde{x}^{\mu}\partial\tilde{x}^{\nu}}} \tag{2.64}\] We can distinguish between two pieces in this transformation law: a term which transforms homogeneously, like a tensor would, and an inhomogeneous term. The necessity for this second, inhomogeneous piece in the transformation behaviour can be seen by noticing that, for instance, \(\nabla_{\mu}v^{\nu}\) is a tensor by definition.
As we have seen in (57), this can be written as a partial derivative plus the connection. However, the partial derivative of vector field components does _not_ transform in a tensorial way. It transforms in an inhomogeneous fashion and the connection compensates for this behaviour, rendering \(\nabla_{\mu}v^{\nu}\) indeed a proper tensor. As a final comment, we remark that this non-tensorial transformation behaviour implies that (a) adding a \((1,2)\) tensor \(S^{\alpha}{}_{\mu\nu}\) to a connection \(\Gamma^{\alpha}{}_{\mu\nu}\) gives us an equally valid but completely new connection \(\tilde{\Gamma}^{\alpha}{}_{\mu\nu}\coloneqq\Gamma^{\alpha}{}_{\mu\nu}+S^{ \alpha}{}_{\mu\nu}\) and (b) a connection which is not zero in one coordinate system can be made to vanish by a clever change of coordinates. We will re-encounter this fact in 4.5 when we discuss the coincident gauge. ### The Metric Tensor and the Geodesic Equation Up to this point, we mostly worked with the manifold \(\mathcal{M}\). This is sufficient to talk about events, curves (to model observers and test particles), scalar fields, vector fields, general tensor fields of type \((p,q)\), and tensor densities. This structure alone is also sufficient to introduce flows of tensor fields and define the Lie derivative. Only in the last subsection did we encounter the necessity to introduce a new structure: An affine connection \(\Gamma^{\alpha}{}_{\mu\nu}\). This necessity arose in order to define a covariant derivative for vectors and other tensor fields. The connection is an object which we can freely choose and it defines the covariant derivative of any tensor via the equation (57). A manifold together with an affine connection is referred to as **affine geometry**\((\mathbf{\mathcal{M}},\mathbf{\Gamma})\). This pair is sufficient to describe all concepts introduced so far. However, our description is incomplete. For instance, even tough we can define curves, we cannot answer the question which curve is the shortest one between two points. More in general, we do not know how to measure the length of curves or even the magnitude of vectors. To remedy that, we now introduce the **metric tensor**\(\mathbf{g}\) and we extend the affine geometry \((\mathbf{\mathcal{M}},\Gamma)\) to the **metric-affine geometry**\((\mathbf{\mathcal{M}},\mathbf{g},\mathbf{\Gamma})\). The idea behind the metric is to generalize the notion of scalar product between vectors from Euclidean geometry to any kind of geometry. We proceed again in an axiomatic fashion and define \(g\) as a map \[g:T\mathcal{M}\times T\mathcal{M} \to\mathbb{R}\] \[(u,v) \mapsto g(u,v) \tag{65}\] which satisfies the following axioms: * Linearity in both slots: \(g(f\,v_{1}+v_{2},w)=f\,g(v_{1},w)+g(v_{2},w)\) \[g(v,fw_{1}+w_{2})=f\,g(v,w_{1})+g(v,w_{2})\] * Symmetry: \(g(v,w)=g(w,v)\) * Non-degeneracy: If \(g(v,w)=0\) for all \(w\), then \(v=0\). Thus, the metric tensor is a map which takes two vectors as input and produces a real number. Given a coordinate chart \(\{x^{\mu}\}\) and a basis \(\{e_{\mu}\}\) of \(T\mathcal{M}\), this allows us to define the components of the metric tensor \(g\) with respect to that chart and that basis as \[g_{\mu\nu}\coloneqq g(e_{\mu},e_{\nu})\,. \tag{2.66}\] Together with axiom AI it then follows that \[g(v,w) =g(v^{\mu}e_{\mu},w^{\nu}e_{\nu})\overset{\text{AI}}{=}v^{\mu}g(e_ {\mu},w^{\nu}e_{\nu})\] \[\overset{\text{AI}}{=}v^{\mu}w^{\nu}g(e_{\mu},e_{\nu}) =g_{\mu\nu}v^{\mu}w^{\nu}\,. 
\tag{2.67}\] From axiom A2 it follows that \(g_{\mu\nu}=g_{\nu\mu}\), while axiom A3 implies the existence of an **inverse metric**, which we denote by \(g^{\mu\nu}\). Importantly, the metric and its inverse satisfy the identity \(g_{\mu\lambda}g^{\lambda\nu}=\delta_{\mu}{}^{\nu}\). This generalizes the familiar scalar product between vectors from Euclidean to more general geometries. Consequently, once we are given a metric tensor \(g\), we can define the norm of vectors, angles between vectors, areas, volumes, and so on. For instance, we define the norm of a vector as \[\|v\|^{2}\coloneqq g(v,v)=g_{\mu\nu}v^{\mu}v^{\nu}\,. \tag{2.68}\] It should be emphasized that this norm is not always positivel In fact, depending on the **signature of the metric**, there can be non-zero vectors for which the norm is positive, zero, or even negative. Concretely, the signature \((p,n)\) is defined by the number of positive (\(p\)) and negative (\(n\)) eigenvalues of \(g\). A Euclidean metric has signature \((n,0)\) and the norm of non-zero vectors is always positive. A Lorentzian metric on the other hand has signature \((n-1,1)\) and the norm of non-zero vectors can be positive, negative, or zero. Here we are mostly interested in metrics with Lorentzian signature and we can classify vectors as being **spacelike**, **timelike**, and **null**. The definition goes as follows: \[\text{A vector $v$ is called }\left\{\begin{array}{c}\text{\bf spacelike}\\ \text{\bf null}\\ \text{\bf timelike}\end{array}\right\}\text{ if }g(v,v)\text{ is }\left\{\begin{array}{c}>0\\ =0\\ <0\end{array}\right\}\,. \tag{2.69}\] We emphasize that this definition relies on the convention that the signature of \(g\) is mostly plus. We could also have defined signature of a Lorentzian metric as \((1,n-1)\), which would invert \(>\) to \(<\) and vice versa for even \(n\) in the above definition. The definitions coincide for both conventions if the number of dimensions \(n\) is odd. This classification can also be extended to curves and hypersurfaces: \[\text{A curve $\gamma$ is }\left\{\begin{array}{c}\text{spacelike}\\ \text{null}\\ \text{timelike}\end{array}\right\}\text{ if its **tangent** vector is everywhere }\left\{\begin{array}{c}\text{spacelike}\\ \text{null}\\ \text{timelike}\end{array}\right\}\,. \tag{2.70}\] \[\text{A hypersurface is }\left\{\begin{array}{c}\text{spacelike}\\ \text{null}\\ \text{timelike}\end{array}\right\}\text{ if its **normal** vector is everywhere }\left\{\begin{array}{c}\text{timelike}\\ \text{null}\\ \text{spacelike}\end{array}\right\}\,. \tag{2.71}\] Notice the reversed order in the second bracket! Using this terminology and the concept of a metric, we can define the **length of a spacelike curve**\(\gamma\) as \[L[\gamma]\coloneqq\int_{I}\sqrt{g_{\mu\nu}(\gamma)\dot{\gamma}^{\mu}\dot{ \gamma}^{\nu}}\text{d}s \tag{2.72}\] where \(s\) is the parameter along the curve and \(\dot{\gamma}^{\mu}\) its (spacelike) tangent vector. Similarly, we can define the proper time** along a timelike curve as \[T[\gamma]\coloneqq\int_{I}\sqrt{-g_{\mu\nu}(\gamma)\dot{\gamma}^{\mu}\dot{\gamma}^ {\nu}}\mathrm{d}s\,. \tag{2.73}\] The minus sign under the square root is necessary since the scalar product is negative for timelike tangent vectors. 
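As a small numerical aside (our own toy example, not part of the original discussion), the classification (2.69) is easy to play with for the mostly-plus Minkowski metric \(\eta=\mathrm{diag}(-1,+1,+1,+1)\):

```python
# A quick numerical illustration (our own example) of the classification (2.69),
# using the mostly-plus Minkowski metric eta = diag(-1, +1, +1, +1) on R^4.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def classify(v):
    norm2 = v @ eta @ v          # g(v, v) = g_{mu nu} v^mu v^nu, cf. (2.68)
    if norm2 > 0:
        return "spacelike"
    if norm2 < 0:
        return "timelike"
    return "null"

print(classify(np.array([1.0, 0.0, 0.0, 0.0])))  # timelike
print(classify(np.array([1.0, 1.0, 0.0, 0.0])))  # null
print(classify(np.array([0.0, 1.0, 0.0, 0.0])))  # spacelike
```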
Also, up to a dimensionful constant, the proper time gives us the **action of massive point particles**, namely \[\mathcal{S}[\gamma]\coloneqq m\,T[\gamma]=m\int_{I}\sqrt{-g_{\mu\nu}(\gamma) \dot{\gamma}^{\mu}\dot{\gamma}^{\nu}}\mathrm{d}s\,, \tag{2.74}\] where \(m>0\) is the mass of the particle. By varying this functional with respect to the (timelike) particle trajectory \(\gamma\), one obtains the so-called **geodesic equation** \[\frac{\delta S[\gamma]}{\delta\gamma^{\alpha}} \stackrel{{!}}{{=}}0 \Longrightarrow \ddot{\gamma}^{\alpha}+\left\{\begin{array}{c}\alpha\\ \mu\nu\end{array}\right\}\dot{\gamma}^{\mu}\dot{\gamma}^{\nu}=0\,, \tag{2.75}\] where we have introduced the **Christoffel symbols** (aka Levi-Civita connection) \[\left\{\begin{array}{c}\alpha\\ \mu\nu\end{array}\right\}\coloneqq\frac{1}{2}g^{\alpha\lambda}\left(\partial _{\mu}g_{\nu\lambda}+\partial_{\nu}g_{\mu\lambda}-\partial_{\lambda}g_{\mu\nu }\right)\,. \tag{2.76}\] As is well-known, these symbols do **not** transform as tensors, despite appearances. In fact, this is a first concrete example for a connection and we can use it to define a covariant derivative \(\mathcal{D}\) via the equation \[\mathcal{D}_{\mu}v^{\nu}\coloneqq\partial_{\mu}v^{\nu}+\left\{ \begin{array}{c}\nu\\ \mu\alpha\end{array}\right\}v^{\alpha}\,. \tag{2.77}\] We will return to this special connections and its properties in subsection 3.5. The crucial point we wish to emphasize here is the following: Manifolds \(\mathcal{M}\) are just mere topological spaces which do not come equipped with metrics. We are free to choose one. Once we have made a choice, we can automatically define a covariant derivative, namely the derivative defined by the Levi-Civita connection (2.76), without any further choices. A geometry based on \(\mathcal{M}\) and \(g\) is called a **Riemannian geometry \((\mathcal{M},\boldsymbol{g})\)**. Before concluding this subsection, we point out that the determinant of the metric, which we denote by \(g\), is a tensor density of weight \(w=+2\). This follows easily from simple linear algebra considerations and the transformation behaviour of a \((0,2)\) tensor. First of all, we note that the metric can be thought of as a \(n\times n\) square matrix with components \(g_{\mu\nu}\). Thus, the tools of linear algebra certainly apply. Moreover, under a change of coordinates \(x^{\mu}\mapsto x^{\prime\mu}(x)\) the metric transforms as \[g^{\prime}_{\mu\nu}=\frac{\partial x^{\alpha}}{\partial x^{\prime\mu}}\frac{ \partial x^{\beta}}{\partial x^{\prime\nu}}g_{\alpha\beta}\,. \tag{2.78}\] The right hand side is simply the product of three matrices: The metric and two copies of the inverse Jacobian matrix \((J^{-1})^{\mu}{}_{\nu}=\frac{\partial x^{\alpha}}{\partial x^{\prime\nu}}\). In terms of matrices, we can thus write \[g^{\prime}=J^{-1}\,g\,J^{-1}\,. \tag{2.79}\] From linear algebra we further know that \[\det(A\,B)=\det(A)\,\det(B) \tag{2.80}\] for any two square matrices \(A\) and \(B\). Therefore, by solving (2.79) for the untransformed metric and applying the identity (2.80) twice, we find for the determinant of the metric \[\det(g)=\det(J\,g^{\prime}\,J)=\det(J)^{2}\,\det(g^{\prime})\,. \tag{2.81}\] According to the convention of 2.2, this means that the determinant is a scalar density of weight \(w=+2\). This result is important because it implies that the square root of the determinant is a scalar density of weight \(w=+1\). 
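Before moving on, a short `sympy` sketch (our own worked example; the polar-coordinate metric and all symbol names are chosen by us) illustrates the two computational statements of this subsection: the Christoffel symbols (2.76) for the flat metric written in polar coordinates, and the fact that \(\det(g)\) picks up \(\det(J)^{2}\) under a change of coordinates, i.e., that it has density weight \(w=+2\) as in (2.81).

```python
# A sympy sketch (our own example) of the Christoffel symbols (2.76) and of the
# density weight w = +2 of det(g), i.e. det(g) = det(J)^2 det(g'), cf. (2.81).
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

coords = [r, theta]
g = sp.diag(1, r**2)          # ds^2 = dr^2 + r^2 dtheta^2 (flat plane, polar chart)
ginv = g.inv()

def christoffel(a, m, n):
    """{a, mn} = 1/2 g^{a l} (d_m g_{n l} + d_n g_{m l} - d_l g_{m n}), eq. (2.76)."""
    return sp.Rational(1, 2) * sum(
        ginv[a, l] * (sp.diff(g[n, l], coords[m])
                      + sp.diff(g[m, l], coords[n])
                      - sp.diff(g[m, n], coords[l]))
        for l in range(2))

print(christoffel(0, 1, 1))   # {r, theta theta} = -r
print(christoffel(1, 0, 1))   # {theta, r theta} = 1/r

# Density weight of det(g): change to Cartesian coordinates x'^mu = (x, y).
cart = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta)])  # x'^mu as functions of (r, theta)
J = cart.jacobian(sp.Matrix(coords))                  # J^mu_nu = d x'^mu / d x^nu
g_cart = sp.eye(2)                                    # the same flat metric in (x, y)
print(sp.simplify(g.det() - J.det()**2 * g_cart.det()))  # 0, consistent with w = +2
```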
If we multiply a scalar field \(f\) by \(\sqrt{|g|}\) and we integrate it, we are guaranteed that the resulting integral is independent of the choice of coordinates. This plays an important role in the construction of action functionals. ## 3 Curvature, Torsion, Non-Metricity: The Fundamental Objects of Metric-Affine Geometries Metric-affine geometries are characterized by having curvature, torsion, non-metricity, or any combination of these three properties. These properties are all defined in terms of the connection \(\Gamma\) and, in the case of non-metricity, in terms of the connection and the metric tensor \(g\). In this section we will first properly define these terms by using the concept of parallel transport. This will aid us in gaining an intuitive understanding of curvature, torsion, and non-metricity. Once these concepts have been clarified, we will deepen our understanding of metric-affine geometries and discuss many important results. These results will come in handy when we formulate and analyze teleparallel theories of gravity in sections 4, 5, and 6. ### Parallel Transport As we discussed in subsection 2.4, there is no canonical way to compare vectors (or tensors in general) at different points on a manifold. This posed an obstacle for defining the directional derivative of vectors and other tensors. We resolved the problem by introducing a connection \(\Gamma\). However, we could just as well have chosen an alternative route. Namely, we could have introduced the concept of **parallel transport**. This notion will prove useful in better understanding metric-affine geometries \((\mathcal{M},g,\Gamma)\) and it will lead us to a sort of classification of these geometries. Recall that we faced two problems in defining a covariant derivative for tensor fields: The first problem was that expressions of the form "\(p+\epsilon\,u\)" are nonsensical from a mathematical point of view, since \(p\) lives in \(\mathcal{M}\), while \(u\) lives in \(T_{p}\mathcal{M}\). In general, these are two completely different spaces. The second problem was that in generalizing the difference quotient of ordinary calculus, we would have to compute the difference \(v_{q}-v_{p}\). That is, we are asked to compute the difference of a vector living in \(T_{p}\mathcal{M}\) and one living in \(T_{q}\mathcal{M}\). This is again a meaningless operation. The concept of parallel transport resolves both of these problems: The first problem is resolved by replacing the nonsensical expression "\(p+\epsilon\,u\)" by \(\gamma(s)\), where \(\gamma\) is a curve passing through \(p\) and \(q\) with tangent vector \(u\). The second problem is resolved by _choosing a prescription_ of how to move a given vector \(v\) from one point \(p\) to another point \(q\) along the curve \(\gamma\). Importantly, we find again that there is no canonical way of providing such a prescription. We simply have to _choose_ one. This is reminiscent of the fact that the covariant derivative is not uniquely determined by the axioms we formulated in 2.4. Infinitely many covariant derivative operators can be chosen which satisfy all the axioms. 
To implement the two solutions described above, we define a **parallel transport map** \[P(\gamma)_{s}^{t}:T_{\gamma(s)}\mathcal{M} \to T_{\gamma(t)}\mathcal{M}\] \[v_{\gamma(s)} \mapsto P(\gamma)_{s}^{t}v_{\gamma(s)}\,, \tag{3.1}\] where \(\gamma(s)=p\), \(\gamma(t)=q\) and which satisfies the following axioms * \(P(\gamma)_{t}^{t}=\text{id}\); * \(P(\gamma)_{u}^{t}\circ P(\gamma)_{s}^{u}=P(\gamma)_{s}^{t}\); * \(P(\gamma)_{s}^{t}\) is smooth in \(s\), \(t\), and \(\gamma\). Given a vector \(v_{p}\) at \(p\) (i.e., a vector which lives in \(T_{p}\mathcal{M}\)), we can now transport this vector to \(q\) along the curve \(\gamma\) and we obtain the new vector \[P(\gamma)_{s}^{t}v_{p}\,, \tag{3.2}\] which lives in the tangent space \(T_{q}\mathcal{M}\). We emphasize that the choice of the map \(P(\gamma)^{t}_{s}\) is completely arbitrary, as long as it satisfies the above axioms. Its introduction merely serves the purpose to compare a vector at \(p\) to a vector at \(q\). This is exactly what is needed when talking about the derivative of a vector and we are therefore led to define the covariant derivative as \[\nabla_{w}v\coloneqq\lim_{s\to 0}\frac{P(\gamma)^{0}_{s}v_{\gamma(s)}-v_{ \gamma(0)}}{s}=\frac{\mathrm{d}}{\mathrm{d}s}\left.\left[P(\gamma)^{0}_{s}v_{ \gamma(s)}\right]\right|_{s=0}\,, \tag{3.3}\] where \(w\coloneqq\dot{\gamma}\) is the tangent vector to the curve \(\gamma\). Recall that the connection \(\Gamma\), which we introduced in subsection 2.4 in order to define the covariant derivative, can be freely chosen. This suggests that there is a relation between the parallel transport map \(P(\gamma)^{t}_{s}\) and the connection \(\Gamma\). To see this relation, we choose to work in a coordinate chart \(\{x^{\mu}\}\) and we shall compare the components of (3.3) to the components of (2.57). Equation (3.2), which describes the transported vector, reads in component language \[[P(\gamma)^{0}_{s}]^{\alpha}_{\phantom{\alpha}\mu}v^{\mu}_{\gamma(s)}\,. \tag{3.4}\] Taylor expanding up to first order around \(s=0\), and using \(\gamma(0)=p\), we find \[[P(\gamma)^{0}_{s}]^{\alpha}_{\phantom{\alpha}\mu}v^{\mu}_{\gamma(s)}=[P( \gamma)^{0}_{0}]^{\alpha}_{\phantom{\alpha}\mu}v^{\mu}_{p}+s\left.[P(\gamma)^{ 0}_{0}]^{\alpha}_{\phantom{\alpha}\mu}\frac{\mathrm{d}}{\mathrm{d}s}v^{\mu}_{ \gamma(s)}\right|_{s=0}+s\,v^{\mu}_{p}\frac{\mathrm{d}}{\mathrm{d}s}[P(\gamma )^{0}_{s}]^{\alpha}_{\phantom{\alpha}\mu}\bigg{|}_{s=0}+\mathcal{O}(s^{2})\,. \tag{3.5}\] Notice that here we made use of the differentiability of \(P(\gamma)^{0}_{s}\) in \(s\), which is guaranteed by axiom A3. Using \([P(\gamma)^{0}_{0}]^{\alpha}_{\phantom{\alpha}\mu}=\delta^{\alpha}_{\phantom{ \alpha}\mu}\) (axiom AI in components language) together with \(\left.\frac{\mathrm{d}}{\mathrm{d}s}v^{\mu}_{\gamma(s)}\right|_{s=0}=w^{\nu}_{ p}\partial_{\nu}v^{\mu}_{p}\), this reduces to \[[P(\gamma)^{0}_{s}]^{\alpha}_{\phantom{\alpha}\mu}v^{\mu}_{\gamma(s)}=v^{ \alpha}_{p}+s\,\left(w^{\nu}_{p}\partial_{\nu}v^{\alpha}_{p}+v^{\mu}_{p}\frac{ \mathrm{d}}{\mathrm{d}s}[P(\gamma)^{0}_{s}]^{\alpha}_{\phantom{\alpha}\mu} \bigg{|}_{s=0}\right)+\mathcal{O}(s^{2})\,. 
\tag{3.6}\] Next, we make again use of axiom A3, which assures us of the differentiability of \(P(\gamma)^{0}_{s}\) with respect to \(\gamma\), in order to compute \[\left.\frac{\mathrm{d}}{\mathrm{d}s}[P(\gamma)^{0}_{s}]^{\alpha}_{\phantom{ \alpha}\mu}\right|_{s=0}=\frac{\mathrm{d}[P(\gamma)^{0}_{s}]^{\alpha}_{\phantom {\alpha}\mu}}{\mathrm{d}\gamma^{\nu}}\left.\frac{\mathrm{d}}{\mathrm{d}s} \gamma^{\nu}\right|_{s=0}=\frac{\mathrm{d}[P(\gamma)^{0}_{s}]^{\alpha}_{ \phantom{\alpha}\mu}}{\mathrm{d}\gamma^{\nu}}\bigg{|}_{s=0}\,w^{\nu}_{p}\,. \tag{3.7}\] By plugging (3.7) into (3.6) we finally find that the covariant derivative (3.3) is equal to \[(\nabla_{w}v)^{\alpha}=w^{\nu}_{p}\left(\partial_{\nu}v^{\alpha}_{p}+\left. \frac{\mathrm{d}[P(\gamma)^{0}_{s}]^{\alpha}_{\phantom{\alpha}\mu}}{\mathrm{ d}\gamma^{\nu}}\right|_{s=0}v^{\mu}_{p}\right)\,. \tag{3.8}\] We left the subscript \(p\) on the right hand side to emphasize that everything is defined locally at \(p\), as one has to expect from the covariant derivative (see the discussion in subsection 2.4). From the same subsection we recall that the covariant derivative of a vector field (in components) is given by \[w^{\nu}\nabla_{\nu}v^{\alpha}=w^{\nu}\left(\partial_{\nu}v^{\alpha}+\Gamma^{ \alpha}_{\phantom{\alpha}\nu\mu}v^{\mu}\right)\,. \tag{3.9}\] By comparing the last two equations, we find the relation \[\boxed{\frac{\mathrm{d}[P(\gamma)^{0}_{s}]^{\alpha}_{\phantom{\alpha}\mu}}{ \mathrm{d}\gamma^{\nu}}\bigg{|}_{s=0}=\Gamma^{\alpha}_{\phantom{\alpha}\nu\mu}} \tag{3.10}\] This means that connection and parallel transport are equivalent to each other! Or, more precisey: * Given a connection \(\Gamma\), it induces a notion of parallel transport \(P(\gamma)^{t}_{s}\) which can be deduced from integrating equation (3.10); * Given a parallel transport map \(P(\gamma)^{t}_{s}\), we can determine a connection \(\Gamma\) associated with it by computing its derivative according to (3.10). We point out that because of the axioms listed above, it follows that \[P(\gamma)^{0}_{s}\circ P(\gamma)^{s}_{0}=P(\gamma)^{s}_{s}=\operatorname{id}. \tag{3.11}\] In other words, to every \(P(\gamma)^{0}_{s}\) there exists an inverse map \(P(\gamma)^{s}_{0}\). We can therefore view \(P(\gamma)^{0}_{s}\) as an element of \(GL(n,\mathbb{R})\), the group of real-valued, non-degenerate \(n\times n\) matrices. Now that the relation between the covariant derivative, the connection, and the map \(P(\gamma)^{t}_{s}\) is clarified, we introduce the following definition: A vector \(v\) is said to be **parallel transported along \(\gamma\)** if \[P(\gamma)^{t}_{s}v_{\gamma(s)}\overset{!}{=}v_{\gamma(t)}\,. \tag{3.12}\] Observe what this equation is saying: Given a vector field \(v\), we say that it has been parallel transported if the vector \(v_{\gamma(s)}\) at the point \(\gamma(s)\) is equal to the vector \(v_{\gamma(t)}\) at the point \(\gamma(t)\) after having been transported by \(P(\gamma)^{t}_{s}\) along the curve \(\gamma\). Since this **parallel transport condition** has to hold for any \(s\) and \(t\), we can also consider an infinitesimal version of it with \(t=s+\epsilon\). Expanding in \(\epsilon\) and using the definition (3.3), we find that this can equivalently be formulated as \[\boxed{\nabla_{\dot{\gamma}(s)}v_{\gamma(s)}\overset{!}{=}0} \tag{3.13}\] We call this the **parallel transport equation**. 
In components, this equation reads \[\boxed{\dot{\gamma}^{\mu}\left(\partial_{\mu}v^{\nu}+\Gamma^{\nu}_{\ \mu\lambda}v^{\lambda}\right)\overset{!}{=}0} \tag{3.14}\] This helps us in understanding how a vector changes when it is being infinitesimally parallel transported. Consider the vector \(v\) at the point \(\gamma(s+\epsilon)\). For small \(\epsilon\) we can Taylor expand and obtain \[v^{\nu}_{\gamma(s+\epsilon)} =v^{\nu}_{\gamma(s)}+\epsilon\left.\frac{\mathrm{d}}{\mathrm{d}\epsilon}v^{\nu}_{\gamma(s+\epsilon)}\right|_{\epsilon=0}+\mathcal{O}(\epsilon^{2})\] \[=v^{\nu}_{\gamma(s)}+\epsilon\,\dot{\gamma}^{\mu}\partial_{\mu}v^{\nu}_{\gamma(s)}+\mathcal{O}(\epsilon^{2})\,. \tag{3.15}\] Let us now assume that \(v_{\gamma(s+\epsilon)}\) has been generated by _parallel transporting_ \(v_{\gamma(s)}\). Let us also assume that \(\gamma(s)=p\) and \(\gamma(s+\epsilon)=q\) are infinitesimally close. Then, using the parallel transport condition (3.14), we find that the last equation can be written as \[\boxed{v^{\nu}_{q}=v^{\nu}_{p}-\epsilon\,\Gamma^{\nu}_{\ \mu\lambda}(p)\,\dot{\gamma}^{\mu}\,v^{\lambda}_{p}} \tag{3.16}\] We can read this equation as saying that starting from \(v_{p}\), parallel transport generates a vector \(v_{q}\) at the infinitesimally close point \(q\) by subtracting a term which depends on the connection at \(p\), the vector \(v_{p}\), and the infinitesimal displacement vector \(\epsilon\,\dot{\gamma}^{\mu}\), also defined at \(p\). As we will see in what follows, this infinitesimal version of parallel transport and its interpretation allow us to better understand metric-affine geometries \((\mathcal{M},g,\Gamma)\). The idea is very simple: Now that we know how to parallel transport vectors, we can ask how the characteristic properties of vectors are affected by the transport. The characteristic properties of vectors are the following.

1. Every vector has a direction.
2. Vectors can be added together "tip to tail". In particular, adding together two vectors which point in different directions results in a new vector which points in yet another direction.
3. Provided the manifold is endowed with a metric, we can assign a magnitude to every vector.

Our intuition about vectors is largely rooted in Euclidean geometry, which makes use of a very particular notion of parallel transport and a very particular metric. It is therefore deeply ingrained in our minds that vectors can be moved around at will without affecting either their direction or their magnitude. Also, it is irrelevant whether we add the tail of one vector to the tip of the other or vice versa; both operations result in the same vector and we can visualize this using a parallelogram. However, all of this can change when the notion of parallel transport (which, as we remind the reader, is tantamount to a choice of connection) is more general than the one used in Euclidean geometry. In fact, a generic connection has an effect on all three properties listed above when a vector is being parallel transported. In what follows, we look at each property in turn.

### Curvature

The first property to be considered is how the direction of a vector changes when it is parallel transported. Clearly, to get a meaningful notion of "the direction of the vector has changed due to parallel transport", we have to somehow compare the vector to itself before and after parallel transport. This can only be achieved if we consider a closed curve.
Figure 8: _Parallel transporting a vector around a closed loop results in a change of direction for that vector. This change is measured by the curvature tensor._

In order to make the computation manageable, we shall consider an _infinitesimal_ loop consisting of four curve segments, as shown in Figure 8. There are four points, \(p\), \(q\), \(s\), and \(r\), all connected by curve segments. Let \(p\) be connected to \(q\) via the curve \(\gamma_{u}\), which has a tangent vector \(u\). Let \(v_{p}\) be the vector which we shall parallel transport around the loop shown in Figure 8. To begin with, we transport \(v_{p}\) from \(p\) to \(q\) along \(\gamma_{u}\). Since the loop is infinitesimal, equation (3.16) applies and we can think of this process as displacing \(v_{p}\) by \(\epsilon\,u\), where \(\epsilon\ll 1\). This results in \[v_{q}^{\nu}=v_{p}^{\nu}-\Gamma^{\nu}{}_{\mu\lambda}(p)\,\delta u^{\mu}\,v_{p}^{\lambda}\,, \tag{3.17}\] where we defined \(\delta u^{\mu}\coloneqq\epsilon\,u^{\mu}\) for ease of notation. The subscript \(p\) shall remind us that everything on the right hand side is defined at \(p\). This becomes important once we parallel transport \(v_{q}\) to \(s\) along the curve \(\gamma_{w}\), which has a tangent vector \(w\). Again, \(q\) and \(s\) are infinitesimally close and we are essentially just displacing \(v_{q}\) by the infinitesimal vector \(\delta w\coloneqq\lambda\,w\), where \(\lambda\ll 1\). By applying again equation (3.16) we obtain \[v_{s}^{\nu} =v_{q}^{\nu}-\Gamma^{\nu}{}_{\mu\lambda}(q)\,\delta w^{\mu}\,v_{q}^{\lambda}\] \[=\underbrace{v_{p}^{\nu}-\Gamma^{\nu}{}_{\mu\lambda}(p)\,\delta u^{\mu}\,v_{p}^{\lambda}}_{=v_{q}^{\nu}}-\underbrace{\left[\Gamma^{\nu}{}_{\mu\lambda}(p)+\partial_{\rho}\Gamma^{\nu}{}_{\mu\lambda}(p)\,\delta u^{\rho}\right]}_{=\Gamma^{\nu}{}_{\mu\lambda}(q)}\times\underbrace{\left[v_{p}^{\lambda}-\Gamma^{\lambda}{}_{\alpha\beta}(p)\,\delta u^{\alpha}\,v_{p}^{\beta}\right]}_{=v_{q}^{\lambda}}\,\delta w^{\mu}\,. \tag{3.18}\] In the second line we expressed \(v_{q}\) by quantities defined at \(p\), according to the infinitesimal parallel transport equation. Also, we expanded \(\Gamma(q)\) up to first order around the point \(p\). If we keep only terms which are at most second order in \(\delta u\) and \(\delta w\), the above equation simplifies to \[v_{s}^{\nu} =v_{p}^{\nu}-\Gamma^{\nu}{}_{\mu\lambda}(p)\,\delta u^{\mu}\,v_{p}^{\lambda}-\Gamma^{\nu}{}_{\mu\lambda}(p)\,\delta w^{\mu}\,v_{p}^{\lambda}\] \[\quad-v_{p}^{\beta}\left[\partial_{\alpha}\Gamma^{\nu}{}_{\mu\beta}(p)-\Gamma^{\nu}{}_{\mu\lambda}(p)\Gamma^{\lambda}{}_{\alpha\beta}(p)\right]\,\delta u^{\alpha}\,\delta w^{\mu}\,. \tag{3.19}\] The next step would be to displace \(v_{s}\) to \(r\) by the infinitesimal amount \(-\delta u\) and then to apply an infinitesimal displacement \(-\delta w\) to arrive back at \(p\). This would require us to perform many more expansions where we then only keep terms which are at most second order in \(\delta u\) and \(\delta w\). Since this would make the computations rather cumbersome, we choose a cleverer route. In fact, we can simply displace \(v_{p}\) from \(p\) to \(r\) along \(\delta w\) and then from \(r\) to \(s\) along \(\delta u\). This results in a vector \(v_{s}^{\prime}\) and the computations are virtually the same as the ones we already did.
Thus, we find \[v_{s}^{\prime\nu} =v_{p}^{\nu}-\Gamma^{\nu}{}_{\mu\lambda}(p)\,\delta w^{\mu}\,v_{p}^{\lambda}-\Gamma^{\nu}{}_{\mu\lambda}(p)\,\delta u^{\mu}\,v_{p}^{\lambda}\] \[\quad-v_{p}^{\beta}\left[\partial_{\mu}\Gamma^{\nu}{}_{\alpha\beta}(p)-\Gamma^{\nu}{}_{\alpha\lambda}(p)\Gamma^{\lambda}{}_{\mu\beta}(p)\right]\,\delta u^{\alpha}\,\delta w^{\mu}\,. \tag{3.20}\] Now we can compare \(v_{s}\) to \(v_{s}^{\prime}\). First of all, we notice that the zeroth and first order terms are all the same. The two vectors only differ in their second order terms and we find \[v_{s}^{\nu}-v_{s}^{\prime\nu} =v_{p}^{\beta}\left[\partial_{\mu}\Gamma^{\nu}{}_{\alpha\beta}(p)-\partial_{\alpha}\Gamma^{\nu}{}_{\mu\beta}(p)+\Gamma^{\nu}{}_{\mu\lambda}(p)\Gamma^{\lambda}{}_{\alpha\beta}(p)-\Gamma^{\nu}{}_{\alpha\lambda}(p)\Gamma^{\lambda}{}_{\mu\beta}(p)\right]\,\delta u^{\alpha}\,\delta w^{\mu}\] \[\eqqcolon v_{p}^{\beta}\,R^{\nu}{}_{\beta\mu\alpha}\,\delta u^{\alpha}\,\delta w^{\mu}\,, \tag{3.21}\] where in the last line we introduced the **curvature tensor** \[\boxed{R^{\alpha}{}_{\mu\nu\rho}\coloneqq 2\partial_{[\nu}\Gamma^{\alpha}{}_{\rho]\mu}+2\Gamma^{\alpha}{}_{[\nu|\lambda|}\Gamma^{\lambda}{}_{\rho]\mu}=\partial_{\nu}\Gamma^{\alpha}{}_{\rho\mu}-\partial_{\rho}\Gamma^{\alpha}{}_{\nu\mu}+\Gamma^{\alpha}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\rho\mu}-\Gamma^{\alpha}{}_{\rho\lambda}\Gamma^{\lambda}{}_{\nu\mu}} \tag{3.22}\] From the way we obtained this tensor, it is clear that it measures the change in orientation of \(v_{p}\) when we parallel transport it along a closed loop. Furthermore, we observe that the curvature tensor is antisymmetric in its last two lower indices: \[R^{\alpha}{}_{\mu(\nu\rho)}=0\,. \tag{3.23}\] Notice that the curvature tensor is solely constructed from the connection \(\Gamma\). No metric was necessary for its construction. A connection for which the curvature tensor vanishes is called a **flat connection**. Given that the curvature tensor has four indices and given that it is anti-symmetric in its last two lower indices, there are two traces one can build without invoking a metric. The first one is called the **Ricci tensor** \[\boxed{R_{\mu\nu}\coloneqq R^{\lambda}{}_{\mu\lambda\nu}} \tag{3.24}\] The second trace goes by the name of **homothetic tensor** and is given by \[\boxed{H_{\mu\nu}\coloneqq R^{\lambda}{}_{\lambda\mu\nu}} \tag{3.25}\] As we will see in subsection 3.4, the homothetic tensor can be expressed in terms of the Ricci tensor and the torsion tensor. If a metric is present, there are two more traces that can be built. First, we can raise the second index of the curvature tensor and then define the **co-Ricci tensor** \[\boxed{P^{\mu}{}_{\nu}\coloneqq g^{\rho\lambda}R^{\mu}{}_{\rho\nu\lambda}=R^{\mu\lambda}{}_{\nu\lambda}} \tag{3.26}\] The co-Ricci tensor can also be expressed in terms of the Ricci tensor and the non-metricity tensor. Finally, the last trace that can be built (we ignore the traces of the homothetic and co-Ricci tensor since these tensors are not independent) is the **Ricci scalar** \[\boxed{R\coloneqq g^{\mu\nu}R_{\mu\nu}=R^{\mu}{}_{\mu}=R^{\lambda\mu}{}_{\lambda\mu}} \tag{3.27}\] This completes our discussion of the curvature tensor and its various traces.

### Torsion

Let us now turn to the second of the three properties listed above. In Euclidean geometry, the sum of two vectors can be visualized as a parallelogram.
Let's say we have two vectors at \(p\in\mathcal{M}\), which we call \(u_{p}\in T_{p}\mathcal{M}\) and \(v_{p}\in T_{p}\mathcal{M}\), respectively. Then, moving \(v_{p}\) along \(u_{p}\) until the vectors are tip to tail is the same as moving \(u_{p}\) along \(v_{p}\) until the vectors are tip to tail. More precisely, we can think of both vectors having their tail in \(p\). The tip of \(u_{p}\) points to \(q\), while the tip of \(v_{p}\) is at \(r\). Then, moving \(v_{p}\) to \(q\) results in a new vector pointing at \(s\). The same vector is obtained by moving \(u_{p}\) along \(v_{p}\) to \(r\). The total displacement from \(p\) to \(s\) is given by \(w_{p}\coloneqq u_{p}+v_{p}\) (see Figure 9). In non-Euclidean geometry, it is conceivable that this will no longer be the case. To analyze the situation, we restrict ourselves to infinitesimal vectors \(\delta u_{p}\coloneqq\epsilon\,u_{p}\) and \(\delta v_{p}\coloneqq\lambda\,v_{p}\), with \(\epsilon\ll 1\) and \(\lambda\ll 1\). An absolutely necessary assumption is that \(\delta u_{p}\) and \(\delta v_{p}\) are linearly independent. For if they are either parallel or anti-parallel, we would never obtain something resembling the parallelogram shown in Figure 9. According to equation (3.16), the infinitesimal parallel transport of \(\delta v_{p}\) to the tip of \(\delta u_{p}\) (i.e., to the point \(q\)) is given by \[\delta v_{q}^{\alpha}=\delta v_{p}^{\alpha}-\Gamma^{\alpha}{}_{ \mu\nu}(p)\,\delta u_{p}^{\mu}\,\delta v_{p}^{\nu} \tag{3.28}\] and the total displacement from \(p\) to \(s_{1}\) is \[\delta w_{p}^{\alpha}\coloneqq\delta u_{p}^{\alpha}+\delta v_{p}^{\alpha}- \Gamma^{\alpha}{}_{\mu\nu}(p)\,\delta u_{p}^{\mu}\,\delta v_{p}^{\nu}\,. \tag{3.29}\] Conversely, if we first transport \(\delta u_{p}\) to \(r\) (i.e., the tip of \(\delta v_{p}\)) and then consider the total displacement from \(p\) to \(s_{2}\) we obtain \[\delta w_{p}^{\prime\alpha}\coloneqq\delta v_{p}^{\alpha}+\delta u_{p}^{\alpha}- \Gamma^{\alpha}{}_{\nu\mu}(p)\,\delta u_{p}^{\mu}\,\delta v_{p}^{\nu}\,. \tag{3.30}\] To measure whether the parallelogram actually closes, i.e., in order to see whether the total displacement results in \(s_{1}=s_{2}\), we have to compare \(\delta w_{p}^{\alpha}\) to \(\delta w_{p}^{\prime\alpha}\): \[\delta w_{p}^{\prime\alpha}-\delta w_{p}^{\alpha} =\left(\Gamma^{\alpha}{}_{\mu\nu}(p)-\Gamma^{\alpha}{}_{\nu\mu}(p)\right)\, \delta u_{p}^{\mu}\,\delta v_{p}^{\nu}\] \[\eqqcolon T^{\alpha}{}_{\mu\nu}\,\delta u_{p}^{\mu}\,\delta v_{p}^{\nu}\,, \tag{3.31}\] where in the last line we introduced the **torsion tensor** \[\boxed{T^{\alpha}{}_{\mu\nu}\coloneqq 2\Gamma^{\alpha}{}_{[\mu\nu]}=\Gamma^{\alpha}{}_{\mu\nu}- \Gamma^{\alpha}{}_{\nu\mu}} \tag{3.32}\] Notice that the parallelogram closes if and only if \(T^{\alpha}{}_{\mu\nu}=0\). That is, it closes precisely when torsion vanishes. If torsion is not zero, it provides us with a measure for the failure of the infinitesimal parallelogram to close. Also, notice that the torsion is anti-symmetric in its lower two indices. This implies that if \(\delta u_{p}\) and \(\delta v_{p}\) are linearly dependent, i.e., if \(\delta u_{p}=f\,\delta v_{p}\) for some non-zero scalar \(f\), then \(T^{\alpha}{}_{\mu\nu}\,\delta u_{p}^{\mu}\,\delta v_{p}^{\nu}=f\,T^{\alpha}{}_{\mu\nu}\, \delta v_{p}^{\mu}\,\delta v_{p}^{\nu}=0\), since we are contracting something anti-symmetric with something symmetric.
This agrees with our intuition that two linearly dependent vectors do not span a parallelogram, hence there is no "failure to close" to be measured. On a more technical note, the torsion tensor is simply the anti-symmetric part of the connection. Recall that the connection does not transform in a tensorial manner under coordinate transformations due to the inhomogeneous piece. However, this inhomogeneity cancels when we compute the difference \(\Gamma^{\alpha}{}_{\mu\nu}-\Gamma^{\alpha}{}_{\nu\mu}\), making \(T^{\alpha}{}_{\mu\nu}\) a genuine tensor. A connection which is symmetric in the lower indices, i.e., a connection for which \(T^{\alpha}{}_{\mu\nu}\) is zero, is called **torsion-free**. Finally, we remark that we can construct the trace of the torsion tensor, even in the absence of a metric, by contracting indices as \[\boxed{T_{\mu}\coloneqq T^{\alpha}{}_{\mu\alpha}} \tag{3.33}\]

Figure 9: Left panel: The notion of parallel transport used in Euclidean geometry implies that a vector \(v_{p}\) parallel transported along \(u_{p}\) to its tip at \(q\) has the same effect as parallel transporting \(u_{p}\) along \(v_{p}\) up to the point \(r\). That is, both operations result in a new vector pointing from \(p\) to \(s\). Right panel: When a generic connection is being used to define parallel transport, the same operation described above (at an infinitesimal level) no longer leads to a closed parallelogram. Rather, the two parallel transports end on different points, \(s_{1}\) and \(s_{2}\). The difference between \(\delta w_{p}\) and \(\delta w_{p}^{\prime}\) is measured by the torsion tensor \(T^{\alpha}{}_{\mu\nu}\).

One has to be mindful of the order of the contracted indices, since \[T^{\alpha}{}_{\alpha\mu}=-T^{\alpha}{}_{\mu\alpha}=-T_{\mu}\,. \tag{3.34}\] Moreover, this is the only trace that can be built. If there is a metric, one could be tempted to construct \(g^{\mu\nu}T^{\alpha}{}_{\mu\nu}\). However, this contraction is identically zero, since the metric is symmetric in \(\mu\) and \(\nu\), while the torsion tensor is anti-symmetric.

### Non-Metricity

Finally, we consider the third property associated with vectors: Their magnitude. Let \(v\in T\mathcal{M}\) with components \(v^{\mu}\) in a given coordinate chart. In order to define the magnitude of a vector, we need a metric. Let that metric be \(g\) and denote its components in the same coordinate chart as before by \(g_{\mu\nu}\). Then, we define the magnitude4 of the vector \(v\) as Footnote 4: We recall that if the signature of the metric is Lorentzian, the magnitude can be positive, negative, or even zero for \(v\neq 0\). \[\|v\|^{2}\coloneqq g(v,v)=g_{\mu\nu}v^{\mu}v^{\nu}\,. \tag{3.35}\] How does the magnitude of a vector change if we parallel transport it along some curve \(\gamma\), as illustrated in Figure 10? To answer this question, we assume that \(u=u^{\mu}\partial_{\mu}\) is the tangent vector to the curve \(\gamma\). Furthermore, we assume that \(v\) is parallel transported along \(\gamma\), which means it satisfies the parallel transport equation \[u^{\alpha}\nabla_{\alpha}v^{\mu}=0 \tag{3.36}\] with respect to the covariant derivative induced by \(\Gamma^{\alpha}{}_{\mu\nu}\). This allows us to determine how the magnitude changes under parallel transport.
Using Leibniz's rule we find \[\frac{\mathrm{d}}{\mathrm{d}t}P(\gamma)_{0}^{t}\|v\|^{2} =u^{\alpha}\nabla_{\alpha}\left(g_{\mu\nu}v^{\mu}v^{\nu}\right)= \left(u^{\alpha}\nabla_{\alpha}g_{\mu\nu}\right)v^{\mu}v^{\nu}+2g_{\mu\nu} \underbrace{\left(u^{\alpha}\nabla_{\alpha}v^{\mu}\right)v^{\nu}}_{=0}\] \[=:Q_{\alpha\mu\nu}u^{\alpha}v^{\mu}v^{\nu}\,, \tag{3.37}\] where the last term on the first line vanishes due to the parallel transport equation and where we have introduced the **non-metricity tensor** \[\boxed{Q_{\alpha\mu\nu}\coloneqq\nabla_{\alpha}g_{\mu\nu}} \tag{3.38}\]

Figure 10: A vector field \(v\) is being parallel transported along a curve \(\gamma\). The magnitude \(\|v\|^{2}\) changes from point to point along \(\gamma\). For two infinitesimally close points \(p\) and \(q\), the change in magnitude is measured by the non-metricity tensor, \(\|v_{q}\|^{2}-\|v_{p}\|^{2}=Q_{\alpha\mu\nu}(p)u_{p}^{\alpha}v_{p}^{\mu}v_{p}^ {\nu}\), where \(u_{p}\) is the tangent vector of \(\gamma\) at the point \(p\).

If the connection \(\Gamma^{\alpha}{}_{\mu\nu}\) is such that this tensor is not zero, \(Q_{\alpha\mu\nu}\) can be interpreted as a measure for how the magnitude of a vector changes under parallel transport. The condition \[\nabla_{\alpha}g_{\mu\nu}\stackrel{{!}}{{=}}0 \tag{3.39}\] is called the **metricity condition** and a connection \(\Gamma^{\alpha}{}_{\mu\nu}\) which satisfies (3.39) is called **metric-compatible**. A point which is sometimes overlooked or which causes a bit of confusion is that the non-metricity tensor with its last two indices raised is _not_ equal to the covariant derivative of the inverse metric, \[Q_{\alpha}{}^{\mu\nu}\neq\nabla_{\alpha}g^{\mu\nu}\,. \tag{3.40}\] By computing the covariant derivative of the identity \(g_{\mu\lambda}g^{\lambda\nu}=\delta_{\mu}{}^{\nu}\), one can easily show that the correct expression is \[\boxed{Q_{\alpha}{}^{\mu\nu}=-\nabla_{\alpha}g^{\mu\nu}}\,, \tag{3.41}\] i.e., with a minus sign in front of the derivative. For completeness we also remark that non-metricity as a measure for the change of magnitude under parallel transport is the simplest geometric interpretation of \(Q_{\alpha\mu\nu}\). In more generality we can say that the non-metricity tensor measures how quantities which depend on the metric change when they are parallel transported. For instance, \(Q_{\alpha\mu\nu}\) is also a measure for how the angle5 between two vectors changes. Another example is the \(n\)-dimensional volume of a region \(\Omega\subset\mathcal{M}\), which is defined as Footnote 5: For a definition of angles in Lorentzian geometries of arbitrary dimension see for instance [79]. \[\mathsf{Vol}(\Omega)\coloneqq\int_{\Omega}\sqrt{|g|}\,\mathrm{d}^{n}x\,. \tag{3.42}\] If we parallel transport \(\mathsf{Vol}(\Omega)\) along \(\gamma\) with tangent vector \(u\), we obtain \[\frac{\mathrm{d}}{\mathrm{d}t}\mathsf{Vol}(\Omega) =\int_{\Omega}u^{\alpha}\left(\nabla_{\alpha}\sqrt{|g|}\right)\, \mathrm{d}^{n}x=\frac{1}{2}\int_{\Omega}\sqrt{|g|}\,u^{\alpha}g^{\mu\nu} \nabla_{\alpha}g_{\mu\nu}\,\mathrm{d}^{n}x\] \[=\frac{1}{2}\int_{\Omega}\sqrt{|g|}u^{\alpha}g^{\mu\nu}Q_{\alpha \mu\nu}\,\mathrm{d}^{n}x\,. \tag{3.43}\] In the last line, the trace \(g^{\mu\nu}Q_{\alpha\mu\nu}\) of the non-metricity tensor appears. It is convenient to properly introduce a symbol for this trace, as it will appear quite frequently. Since the non-metricity tensor has three indices and the last two are symmetric, we can define two independent traces: \[\boxed{Q_{\alpha}\coloneqq g^{\mu\nu}Q_{\alpha\mu\nu}=Q_{\alpha}{}^{\mu}{}_{ \mu}\qquad\text{and}\qquad\bar{Q}_{\alpha}\coloneqq g^{\mu\nu}Q_{\mu\nu\alpha}= Q^{\mu}{}_{\mu\alpha}} \tag{3.44}\]
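A simple special case worth keeping in mind (an illustrative example we add here, not part of the main development) is a Weyl-type non-metricity, \(Q_{\alpha\mu\nu}=A_{\alpha}\,g_{\mu\nu}\) for some \(1\)-form \(A_{\alpha}\). Equation (3.37) then gives \[\frac{\mathrm{d}}{\mathrm{d}t}\|v\|^{2}=Q_{\alpha\mu\nu}\,u^{\alpha}v^{\mu}v^{\nu}=\left(A_{\alpha}u^{\alpha}\right)\|v\|^{2}\,,\] so parallel transport rescales all magnitudes by a common, path-dependent factor while leaving the angles between vectors unchanged. For this special case the two traces defined in (3.44) are \(Q_{\alpha}=4A_{\alpha}\) (in four dimensions) and \(\bar{Q}_{\alpha}=A_{\alpha}\).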
### Classification of Metric-Affine Geometries and the Decomposition of the Connection

Given that the connection is not a tensor, it cannot have an intrinsic geometric meaning. That is to say, the connection \(\Gamma\) by itself cannot be a measure of some geometric property of a metric-affine geometry \((\mathcal{M},g,\Gamma)\). However, we have seen that the connection does give rise to true tensorial objects: Curvature, torsion, and non-metricity. Therefore, the connection and the metric _do_ carry intrinsic geometric information about a given metric-affine geometry. In fact, we can distinguish between different types of geometries:

1. **Bare manifold:** The simplest type consists simply of a manifold \(\mathcal{M}\) without any metric nor connection. This is sufficient to talk about curves, scalar fields, vector fields, and other tensor fields. However, no notion of length or distance or covariant differentiation (except for the scalar field) is defined. From a physics perspective, this is the least useful type of geometry.
2. **Affine geometry:** An affine geometry consists of the couple \((\mathcal{M},\Gamma)\). One can do all the things one can do with a bare manifold and, on top of that, a covariant derivative can be defined. Given a connection \(\Gamma\), one can also compute the curvature and torsion tensors. Thus, affine geometries can have curvature, or torsion, or both. However, because no metric is defined, one lacks a notion of distance or length and, consequently, of geodesics.
3. **Riemannian geometry:** A Riemannian geometry consists of the pair \((\mathcal{M},g)\). This type of geometry has the advantage that it comes equipped with a notion of length and distance. Thus, it is possible to talk about geodesics, magnitudes of vectors, as well as areas and volumes and so on. Given a metric, one can compute its Christoffel symbols, aka its Levi-Civita connection. Thus, a Riemannian manifold comes naturally equipped with a covariant derivative. Namely the derivative \(\mathcal{D}\) induced by the Levi-Civita connection of the metric. It turns out that this connection is torsion-free and metric-compatible. Therefore, Riemannian geometries are characterized by having curvature, but no torsion and no non-metricity.
4. **Metric-affine geometry:** The most general type is the metric-affine geometry, consisting of the triple \((\mathcal{M},g,\Gamma)\). All geometric concepts discussed so far are defined for this type of geometry.

Furthermore, one can subdivide metric-affine geometries as follows (see also Figure 1):

1. \(R^{\alpha}{}_{\mu\nu\rho}=0\), \(T^{\alpha}{}_{\mu\nu}=0\), \(Q_{\alpha\mu\nu}=0\): When all three geometric tensors vanish, one is left with Euclidean space or Minkowski space (depending on the metric signature).
2. \(R^{\alpha}{}_{\mu\nu\rho}\neq 0\), \(T^{\alpha}{}_{\mu\nu}=0\), \(Q_{\alpha\mu\nu}=0\): Curvature is the only non-vanishing tensor. This means, unsurprisingly, that Riemannian geometry is a special case of a metric-affine geometry. This is also the mathematical framework within which General Relativity is formulated.
3. \(R^{\alpha}{}_{\mu\nu\rho}=0\), \(T^{\alpha}{}_{\mu\nu}\neq 0\), \(Q_{\alpha\mu\nu}=0\): Torsion is the only non-vanishing tensor. This will be the geometry on which we build TEGR, the Teleparallel Equivalent of General Relativity, and its various extensions.
4. \(R^{\alpha}{}_{\mu\nu\rho}=0\), \(T^{\alpha}{}_{\mu\nu}=0\), \(Q_{\alpha\mu\nu}\neq 0\): Non-metricity is the only non-vanishing tensor. This is the basis on which we construct STEGR, the Symmetric Teleparallel Equivalent of General Relativity, and its extensions.
5. \(R^{\alpha}{}_{\mu\nu\rho}=0\), \(T^{\alpha}{}_{\mu\nu}\neq 0\), \(Q_{\alpha\mu\nu}\neq 0\): Torsion and non-metricity are both non-vanishing. This geometry can also be used to construct theories of gravity, namely the General Teleparallel Equivalent of General Relativity, or GTEGR for short.
6. \(R^{\alpha}{}_{\mu\nu\rho}\neq 0\), \(T^{\alpha}{}_{\mu\nu}\neq 0\), \(Q_{\alpha\mu\nu}=0\): Curvature and torsion are non-zero. This is a possible geometry, but not one that will be further discussed in this review.
7. \(R^{\alpha}{}_{\mu\nu\rho}\neq 0\), \(T^{\alpha}{}_{\mu\nu}=0\), \(Q_{\alpha\mu\nu}\neq 0\): It is also possible to obtain geometries with non-vanishing curvature and non-metricity. This type of geometry will also not be of any interest to us.
8. \(R^{\alpha}{}_{\mu\nu\rho}\neq 0\), \(T^{\alpha}{}_{\mu\nu}\neq 0\), \(Q_{\alpha\mu\nu}\neq 0\): Clearly, the most general type of geometry is the one where none of the characteristic tensors vanish.

Even though the connection does not have a _direct_ geometric meaning which is invariant under changes of coordinates (and thus intrinsic to the geometry), geometric information can nevertheless be extracted from it. This begs the question whether the connection can be decomposed in a form which makes the geometric information it carries more evident. That is, can it be brought into a form which shows us whether it gives rise to non-vanishing torsion or non-metricity? The strategy is as follows: We work in a generic metric-affine geometry \((\mathcal{M},g,\Gamma)\) and we compute the covariant derivative of the metric, perform cyclic permutations of the indices, and finally isolate the connection coefficients \(\Gamma^{\alpha}{}_{\mu\nu}\). The final result should definitely know about torsion (since torsion is the anti-symmetric part of the connection), it should know about the Levi-Civita connection, since the Levi-Civita connection is usually obtained in precisely this fashion, and it should also know about non-metricity, since we never imposed the vanishing of \(\nabla_{\alpha}g_{\mu\nu}\). The covariant derivative and its cyclic permutations read \[\partial_{\alpha}g_{\mu\nu}-\Gamma^{\beta}{}_{\alpha\nu}g_{\mu \beta}-\Gamma^{\beta}{}_{\alpha\mu}g_{\beta\nu}=Q_{\alpha\mu\nu}\] \[\partial_{\mu}g_{\nu\alpha}-\Gamma^{\beta}{}_{\mu\alpha}g_{\nu \beta}-\Gamma^{\beta}{}_{\mu\nu}g_{\beta\alpha}=Q_{\mu\nu\alpha}\] \[\partial_{\nu}g_{\alpha\mu}-\Gamma^{\beta}{}_{\nu\alpha}g_{\beta \mu}-\Gamma^{\beta}{}_{\nu\mu}g_{\alpha\beta}=Q_{\nu\alpha\mu}\,. \tag{3.45}\] After adding together the first two equations and subtracting the last one, we obtain \[T^{\beta}{}_{\nu\mu}g_{\alpha\beta}+T^{\beta}{}_{\nu\alpha}g_{\mu\beta}+T^{ \beta}{}_{\alpha\mu}g_{\nu\beta}+\partial_{\alpha}g_{\mu\nu}+\partial_{\mu}g _{\nu\alpha}-\partial_{\nu}g_{\alpha\mu}-2\Gamma^{\beta}{}_{\alpha\mu}g_{\nu \beta}=Q_{\alpha\mu\nu}+Q_{\mu\nu\alpha}-Q_{\nu\alpha\mu}\,. \tag{3.46}\]
This finally allows us to solve for the connection and we find (after re-labelling some indices) \[\boxed{\Gamma^{\alpha}{}_{\mu\nu}=\left\{\begin{array}{c}\alpha\\ \mu\nu\end{array}\right\}+K^{\alpha}{}_{\mu\nu}+L^{\alpha}{}_{\mu\nu}} \tag{3.47}\] In this compact form of the decomposed connection we have introduced the **contorsion tensor** \(K^{\alpha}{}_{\mu\nu}\) and the **deformation tensor** \(L^{\alpha}{}_{\mu\nu}\), \[\boxed{K^{\alpha}{}_{\mu\nu}\coloneqq\frac{1}{2}T^{\alpha}{}_{\mu\nu}+T_{(\mu }{}^{\alpha}{}_{\nu)}\qquad\text{and}\qquad L^{\alpha}{}_{\mu\nu}\coloneqq\frac{1}{2}Q^{\alpha}{}_{\mu\nu}-Q_{(\mu}{}^{\alpha}{}_{\nu)}} \tag{3.48}\] Observe that the contorsion tensor is constructed from the torsion tensor alone, while the deformation tensor only depends on the non-metricity. From this decomposition, one recovers very quickly the well-known fact that a torsion-free (i.e., \(T^{\alpha}{}_{\mu\nu}=0\)) and metric-compatible (i.e., \(Q_{\alpha\mu\nu}=0\)) connection is _uniquely_ given by the Levi-Civita connection: \[\boxed{T^{\alpha}{}_{\mu\nu}=0\,\,\,\text{and}\,\,\,Q_{\alpha\mu\nu}=0\qquad \Longrightarrow\qquad\Gamma^{\alpha}{}_{\mu\nu}=\left\{\begin{array}{c} \alpha\\ \mu\nu\end{array}\right\}} \tag{3.49}\] For later use, we also recall how the Lie derivative of the metric along a vector field \(v\) can be rewritten in terms of the covariant derivative \(\mathcal{D}_{\mu}\) induced by the Levi-Civita connection: \[\mathcal{L}_{v}g_{\mu\nu} =v^{\lambda}\partial_{\lambda}g_{\mu\nu}+g_{\lambda\nu}\partial_{ \mu}v^{\lambda}+g_{\mu\lambda}\partial_{\nu}v^{\lambda}\] \[=v^{\lambda}\mathcal{D}_{\lambda}g_{\mu\nu}+g_{\lambda\nu} \mathcal{D}_{\mu}v^{\lambda}+g_{\mu\lambda}\mathcal{D}_{\nu}v^{\lambda}\] \[=2\mathcal{D}_{(\mu}v_{\nu)}\,. \tag{3.50}\] From the first to the second line we replaced \(\partial_{\mu}\) by \(\mathcal{D}_{\mu}\), since this does not alter the result. Then we used the fact that \(\mathcal{D}_{\mu}\) is metric-compatible, \(\mathcal{D}_{\lambda}g_{\mu\nu}=0\), in order to eliminate the first derivative and commute the metric past \(\mathcal{D}_{\mu}\) in order to lower the index of the vector field. This gives us the compact result on the last line.
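As a quick illustration of (3.50) (a small worked example added here), consider flat space in Cartesian coordinates, \(g_{\mu\nu}=\eta_{\mu\nu}\), and the rotation generator \(v=x\,\partial_{y}-y\,\partial_{x}\). In these coordinates the Christoffel symbols vanish, so \(\mathcal{D}_{\mu}=\partial_{\mu}\), and with \(v_{x}=-y\), \(v_{y}=x\) (all other components zero) one finds \[\mathcal{D}_{(\mu}v_{\nu)}=\partial_{(\mu}v_{\nu)}=0\qquad\Longrightarrow\qquad\mathcal{L}_{v}g_{\mu\nu}=0\,,\] i.e., rotations are isometries of the flat metric and \(v\) is a Killing vector field. The symmetry conditions discussed in the following paragraphs generalize precisely this kind of statement to the full metric-affine setting.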
It is sometimes useful, for instance when discussing symmetries of metric-affine geometries, to compute the Lie derivative in terms of a general affine connection. Of particular interest are the Lie derivatives of the metric and the connection, which can be written as [80] \[\boxed{\begin{aligned}\mathcal{L}_{v}g_{\mu\nu}&=2g_{\lambda(\mu}\nabla_{\nu )}v^{\lambda}+\left(Q_{\lambda\mu\nu}-2T_{(\mu\nu)\lambda}\right)v^{\lambda}=2 \mathcal{D}_{(\mu}v_{\nu)}\\ \mathcal{L}_{v}\Gamma^{\alpha}{}_{\mu\nu}&=\nabla_{\mu}\nabla_{\nu}v^{\alpha}- T^{\alpha}{}_{\nu\lambda}\nabla_{\mu}v^{\lambda}-\left(R^{\alpha}{}_{\nu \mu\lambda}+\nabla_{\mu}T^{\alpha}{}_{\nu\lambda}\right)v^{\lambda}\,, \end{aligned}} \tag{3.51}\] where \(\nabla\) is a general affine connection. When computing the Lie derivative of the connection, one has to make sure to use the correct formula. Namely, \[\mathcal{L}_{v}\Gamma^{\alpha}{}_{\mu\nu}=v^{\lambda}\partial_{\lambda}\Gamma ^{\alpha}{}_{\mu\nu}-\Gamma^{\lambda}{}_{\mu\nu}\partial_{\lambda}v^{\alpha}+ \Gamma^{\alpha}{}_{\lambda\nu}\partial_{\mu}v^{\lambda}+\Gamma^{\alpha}{}_{ \mu\lambda}\partial_{\nu}v^{\lambda}+\partial_{\mu}\partial_{\nu}v^{\alpha}\,. \tag{3.52}\] This follows directly from the fact that the connection does _not_ transform like a tensor and instead obeys the inhomogeneous transformation law (2.64). Indeed, one recognizes the first four terms to be related to the homogeneous piece in the coordinate transformation of the connection, while the last term is produced by the inhomogeneous piece. Also, one should note that even though the connection is not a tensor, its Lie derivative _is_ a \((1,2)\) tensor! This is also nicely evident from equation (3.51), where the right hand side is completely constructed from tensorial quantities. The formulas (3.51) play a role in characterizing symmetries of metric-affine geometries, as alluded to before. In the works [81, 82, 83, 84, 85, 86], symmetries of metric-affine geometries were defined as follows: Let \(\phi_{s}:\mathbb{R}\times\mathcal{M}\rightarrow\mathcal{M}\) be a \(1\)-parameter family of diffeomorphisms which satisfies \(\phi_{0}=\mathsf{id}\), \(\phi_{s}\circ\phi_{t}=\phi_{s+t}\), and which is smooth in the parameter \(s\). This \(1\)-parameter family is a **symmetry of a metric-affine geometry \((\mathcal{M},g,\Gamma)\)** if \[\begin{cases}\phi_{s}^{*}g_{\mu\nu}\stackrel{!}{=}g_{\mu\nu}\\ \phi_{s}^{*}\Gamma^{\alpha}{}_{\mu\nu}\stackrel{!}{=}\Gamma^{\alpha}{}_{\mu\nu}\end{cases} \tag{3.53}\] This is called the **symmetry condition**. It demands that neither the metric nor the connection change under the action of the diffeomorphism. It is important that the connection appears in this definition. This ensures that all objects constructed from the connection, such as curvature, torsion, and non-metricity, respect the symmetry generated by \(\phi_{s}\). Since \(\phi_{s}\) is smooth in \(s\), we can also consider the **infinitesimal symmetry condition**, obtained by expanding the original symmetry condition to first order in \(s\) around \(s=0\). It reads \[\begin{cases}\mathcal{L}_{\xi}g_{\mu\nu}\stackrel{!}{=}0\\ \mathcal{L}_{\xi}\Gamma^{\alpha}{}_{\mu\nu}\stackrel{!}{=}0\end{cases}\qquad\qquad\text{with}\qquad\qquad\xi\coloneqq \frac{\mathrm{d}}{\mathrm{d}s}\phi_{s}\bigg{|}_{s=0}\,, \tag{3.54}\] where \(\xi\) is the vector field which generates the flow \(\phi_{s}\). It is often called the **generating vector field**. These symmetry conditions and the Lie derivatives of metric and connection will reappear when we discuss cosmology and black holes in subsections 6.1 and 6.2.

### Integration in the Presence of Torsion and Non-Metricity: The Generalized Gauss Theorem

Integration on manifolds \(\mathcal{M}\) is a subject usually covered in courses on calculus of several variables or differential geometry. The theorems of Gauss and Stokes, into which this subject culminates, can be assumed to be familiar to all readers due to their widespread use in physics. What concerns us here, however, is how non-trivial geometric features, characterized by torsion and non-metricity, affect Gauss' theorem. The importance of this investigation lies in the fact that Gauss' theorem appears in variational principles or discussions of conserved charges [87]. Let us begin by recalling that on a Riemannian manifold \((\mathcal{M},g)\), where torsion and non-metricity both vanish, we are uniquely left with the Levi-Civita connection \(\left\{\begin{smallmatrix}\alpha\\ \mu\nu\end{smallmatrix}\right\}\) and the covariant derivative operator \(\mathcal{D}_{\mu}\) it induces.
Using the easily verified identity \[\left\{\begin{matrix}\lambda\\ \lambda\mu\end{matrix}\right\}=\partial_{\mu}\log\sqrt{|g|}\,, \tag{3.55}\] one can re-express the divergence of a vector field \(v^{\mu}\) as \[\mathcal{D}_{\mu}v^{\mu}=\frac{1}{\sqrt{|g|}}\partial_{\mu}\left(\sqrt{|g|}v^{ \mu}\right)\,. \tag{3.56}\] Gauss' theorem, in its familiar form, emerges from this simple identity [88] \[\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,\mathcal{D}_{\mu}v^{\mu}=\int_ {\mathcal{M}}\mathrm{d}^{4}x\,\partial_{\mu}\left(\sqrt{|g|}v^{\mu}\right)= \oint_{\partial\mathcal{M}}\mathrm{d}^{3}y\,\sqrt{|h|}\,\varepsilon\,n_{\mu}v ^{\mu}\,. \tag{3.57}\] Here, \(\partial\mathcal{M}\) denotes the boundary of \(\mathcal{M}\), \(n^{\mu}\) is the outward-pointing normal vector to \(\partial\mathcal{M}\), \(\varepsilon:=n_{\mu}n^{\mu}=\pm 1\), and \(h\) is the determinant of the induced metric on \(\partial\mathcal{M}\) (obtained by pulling back \(g_{\mu\nu}\) to \(\partial\mathcal{M}\)). Clearly, this result crucially depends on the identity (3.55) for the Levi-Civita connection. Hence, if we change the connection, we can no longer expect to find the same form of Gauss' theorem as the one given in (3.57). However, an analogous theorem holds for a generic metric-affine geometry \((\mathcal{M},g,\Gamma)\). To see that, we make use of the decomposition (3.47) of the connection which we encountered in the previous subsection: \[\Gamma^{\alpha}{}_{\mu\nu}=\left\{\begin{matrix}\alpha\\ \mu\nu\end{matrix}\right\}+L^{\alpha}{}_{\mu\nu}+K^{\alpha}{}_{\mu\nu}\,. \tag{3.58}\] This allows us to write the divergence of the vector field \(v^{\mu}\) as \[\nabla_{\mu}v^{\mu}=\mathcal{D}_{\mu}v^{\mu}+L^{\lambda}{}_{\lambda\mu}v^{\mu }+K^{\lambda}{}_{\lambda\mu}v^{\mu}\,. \tag{3.59}\] Next, we use the easily provable relations \[L^{\lambda}{}_{\lambda\mu}=-\frac{1}{2}Q_{\mu}\qquad\text{and}\qquad K^{\lambda}{}_{\lambda\mu}=-T_{\mu}\,, \tag{3.60}\] and combine these with the identity (3.56) to finally obtain \[\nabla_{\mu}v^{\mu}=\frac{1}{\sqrt{|g|}}\partial_{\mu}\left(\sqrt{|g|}v^{\mu} \right)-\left(\frac{1}{2}Q_{\mu}+T_{\mu}\right)v^{\mu}\,. \tag{3.61}\] This form of the divergence of a vector field lends itself to formulating the analogue of Gauss' theorem for a generic metric-affine geometry. We find that the generalized Gauss' theorem for a generic metric-affine geometry \((\mathcal{M},g,\Gamma)\) takes the form \[\boxed{\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,\nabla_{\mu}v^{\mu}= \oint_{\partial\mathcal{M}}\mathrm{d}^{3}y\,\sqrt{|h|}\,\varepsilon\,n_{\mu}v ^{\mu}-\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,\left(\frac{1}{2}Q_{\mu }+T_{\mu}\right)v^{\mu}} \tag{3.62}\] This can also be re-cast into a slightly different form by noticing that the covariant derivative of the square root of the metric determinant is given by \[\nabla_{\mu}\sqrt{|g|}=\frac{1}{2}\sqrt{|g|}\,Q_{\mu}\,. \tag{3.63}\] This allows us to phrase Gauss' theorem as a statement about the divergence of a vector _density_, \(\nabla_{\mu}\left(\sqrt{|g|}v^{\mu}\right)\), rather than about the divergence of a vector field, \(\nabla_{\mu}v^{\mu}\). Concretely, the generalized Gauss theorem can be equivalently stated as \[\boxed{\int_{\mathcal{M}}\mathrm{d}^{4}x\,\nabla_{\mu}\left(\sqrt{|g|}\,v^{\mu }\right)=\oint_{\partial\mathcal{M}}\mathrm{d}^{3}y\,\sqrt{|h|}\,\varepsilon \,n_{\mu}v^{\mu}-\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,T_{\mu}v^{\mu}} \tag{3.64}\]
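Before moving on, it is worth recording (as a small worked consequence we add here) what (3.62) implies for integration by parts in a metric-affine geometry. Applying it to the rescaled vector field \(f\,v^{\mu}\), where \(f\) is a scalar field, and using \(\nabla_{\mu}(f\,v^{\mu})=v^{\mu}\partial_{\mu}f+f\,\nabla_{\mu}v^{\mu}\), one finds \[\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,f\,\nabla_{\mu}v^{\mu}=\oint_{ \partial\mathcal{M}}\mathrm{d}^{3}y\,\sqrt{|h|}\,\varepsilon\,n_{\mu}\,f\,v^{ \mu}-\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\left[v^{\mu}\partial_{\mu}f +\left(\frac{1}{2}Q_{\mu}+T_{\mu}\right)f\,v^{\mu}\right]\,.\] Moving a covariant derivative from one factor onto another thus produces torsion and non-metricity trace terms in addition to the familiar boundary term; this is precisely the mechanism we will encounter again when derivatives are moved off variations in the variational computations of section 4.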
This concludes our discussion of the generalized Gauss theorem and we proceed with presenting important and useful identities for metric-affine geometries.

### Collection of Geometric Identities

In concrete computations involving the covariant derivative \(\nabla_{\mu}\) with respect to a generic affine connection \(\Gamma^{\alpha}{}_{\mu\nu}\), it is often necessary to commute two such operators in order to obtain simpler expressions or expressions with a more transparent geometric meaning. Recall that the covariant derivative can act on scalar, vector, and tensor fields (or densities). Thus, the simplest case to consider is the action of the commutator \([\nabla_{\mu},\nabla_{\nu}]\coloneqq\nabla_{\mu}\nabla_{\nu}-\nabla_{\nu} \nabla_{\mu}\) on a scalar field \(f\). Simply by using the basic definitions given in (3.32), one finds \[\boxed{[\nabla_{\mu},\nabla_{\nu}]f=-T^{\lambda}{}_{\mu\nu}\partial_{\lambda} f} \tag{3.65}\] Thus, the commutator acting on scalar fields vanishes if and only if the torsion tensor vanishes. Next, we consider the commutator \([\nabla_{\mu},\nabla_{\nu}]\) acting on a vector field \(v^{\alpha}\). It is again only necessary to use the basic definitions given in (3.22) and (3.32), but the computations become longer. What they boil down to is the identity \[\boxed{[\nabla_{\mu},\nabla_{\nu}]v^{\alpha}=R^{\alpha}{}_{\lambda\mu\nu}v^{ \lambda}-T^{\lambda}{}_{\mu\nu}\nabla_{\lambda}v^{\alpha}} \tag{3.66}\] Observe that when torsion vanishes, the above identity reduces to the form familiar from Riemannian geometry: \[[\nabla_{\mu},\nabla_{\nu}]v^{\alpha}=R^{\alpha}{}_{\lambda\mu\nu}v^{\lambda}\,. \tag{3.67}\] However, \(R^{\alpha}{}_{\lambda\mu\nu}\) is _not_ the curvature tensor with respect to the Levi-Civita connection, since the connection could be metric-incompatible, i.e., it could have a non-zero non-metricity tensor. It is also useful to prove an analogous identity for the commutator acting on a \(1\)-form \(\omega_{\alpha}\). In this case, one finds \[\boxed{[\nabla_{\mu},\nabla_{\nu}]\omega_{\alpha}=-R_{\alpha}{}^{\lambda}{}_{ \mu\nu}\omega_{\lambda}-T^{\lambda}{}_{\mu\nu}\nabla_{\lambda}\omega_{\alpha}} \tag{3.68}\] Note the appearance of a minus sign in front of the curvature tensor! With the identities (3.66) and (3.68) at our disposal, we can easily prove that the commutator \([\nabla_{\mu},\nabla_{\nu}]\) acting on a tensor field of type \((p,q)\) is given by \[\boxed{\begin{array}{ll}[\nabla_{\mu},\nabla_{\nu}]S^{\mu_{1}\dots\mu_{p}}{}_{ \nu_{1}\dots\nu_{q}}=&R^{\mu_{1}}{}_{\lambda\mu\nu}S^{\lambda\mu_{2}\dots\mu_{ p}}{}_{\nu_{1}\dots\nu_{q}}+\dots+R^{\mu_{p}}{}_{\lambda\mu\nu}S^{\mu_{1}\dots \mu_{p-1}\lambda}{}_{\nu_{1}\dots\nu_{q}}\\ &-R_{\nu_{1}}{}^{\lambda}{}_{\mu\nu}S^{\mu_{1}\dots\mu_{p}}{}_{\lambda\nu_{2} \dots\nu_{q}}-\dots-R_{\nu_{q}}{}^{\lambda}{}_{\mu\nu}S^{\mu_{1}\dots\mu_{p}}{ }_{\nu_{1}\dots\nu_{q-1}\lambda}\\ &-T^{\lambda}{}_{\mu\nu}\nabla_{\lambda}S^{\mu_{1}\dots\mu_{p}}{}_{\nu_{1} \dots\nu_{q}}\end{array}} \tag{3.69}\] This identity follows from the previous two mentioned above together with the fact that a \((p,q)\) tensor lives in the tensor product space \(T\mathcal{M}^{\otimes p}\otimes T^{*}\mathcal{M}^{\otimes q}\).
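For instance (a special case we spell out here for convenience), for a \((1,1)\) tensor field \(S^{\alpha}{}_{\beta}\) identity (3.69) reduces to \[[\nabla_{\mu},\nabla_{\nu}]S^{\alpha}{}_{\beta}=R^{\alpha}{}_{\lambda\mu\nu}S ^{\lambda}{}_{\beta}-R_{\beta}{}^{\lambda}{}_{\mu\nu}S^{\alpha}{}_{\lambda}-T ^{\lambda}{}_{\mu\nu}\nabla_{\lambda}S^{\alpha}{}_{\beta}\,,\] which makes the sign pattern transparent: every upper index contributes a curvature term with a plus sign, every lower index one with a minus sign, and torsion enters only through the single term containing the first derivative of \(S\).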
Identity (3.69) is very general and covers almost every case of interest. The cases not covered by identity (3.69) only involve tensor _densities_. To remedy that, we need to understand how the commutator acts on the metric tensor and, more importantly, on its determinant. For the metric tensor, it follows from the definition of the non-metricity tensor and from identity (3.69) that we can express the commutator as \[\boxed{[\nabla_{\mu},\nabla_{\nu}]g_{\alpha\beta}=2\nabla_{[\mu}Q_{\nu]\alpha \beta}=-2R_{(\alpha\beta)\mu\nu}-T^{\lambda}{}_{\mu\nu}Q_{\lambda\alpha\beta}} \tag{3.70}\] Using \(\nabla_{\mu}|g|^{\frac{w}{2}}=\frac{w}{2}|g|^{\frac{w}{2}}g^{\alpha\beta} \nabla_{\mu}g_{\alpha\beta}\), where the integer \(w\geq 0\) is the density weight introduced in subsection 2.2, together with the above identity, one finds that the commutator acting on \(|g|^{\frac{w}{2}}\) is given by \[\boxed{[\nabla_{\mu},\nabla_{\nu}]|g|^{\frac{w}{2}}=\frac{w}{2}|g|^{\frac{w}{ 2}}g^{\alpha\beta}[\nabla_{\mu},\nabla_{\nu}]g_{\alpha\beta}=w\,|g|^{\frac{w}{ 2}}g^{\alpha\beta}\nabla_{[\mu}Q_{\nu]\alpha\beta}=w\,|g|^{\frac{w}{2}}\nabla_ {[\mu}Q_{\nu]}} \tag{3.71}\] In the last step we used \(Q_{\nu}:=g^{\alpha\beta}Q_{\nu\alpha\beta}\). This finally allows us to determine the action of the commutator on a tensor density of type \((p,q)\) and weight \(w\): \[\boxed{[\nabla_{\mu},\nabla_{\nu}]\left(|g|^{\frac{w}{2}}S^{\mu_{1}\dots\mu_{ p}}{}_{\nu_{1}\dots\nu_{q}}\right)=|g|^{\frac{w}{2}}\left([\nabla_{\mu}, \nabla_{\nu}]S^{\mu_{1}\dots\mu_{p}}{}_{\nu_{1}\dots\nu_{q}}+w\,S^{\mu_{1} \dots\mu_{p}}{}_{\nu_{1}\dots\nu_{q}}\,\nabla_{[\mu}Q_{\nu]}\right)} \tag{3.72}\] where the first commutator on the right hand side is of course given by identity (3.69). Observe that all previous identities follow from this one. By setting \(w=0\), we recover the identities for tensor fields (as opposed to tensor densities), including the one for scalar fields, which corresponds to \(w=0\) together with \(p=q=0\). Finally, we remark that the covariant derivative \(\nabla_{\mu}\) with respect to a generic affine connection \(\Gamma^{\alpha}{}_{\mu\nu}\) satisfies the Jacobi identity: \[\boxed{[\nabla_{\alpha},[\nabla_{\beta},\nabla_{\gamma}]]+[\nabla_{\beta},[ \nabla_{\gamma},\nabla_{\alpha}]]+[\nabla_{\gamma},[\nabla_{\alpha},\nabla_{ \beta}]]=0} \tag{3.73}\] We now turn to important identities involving the curvature tensor. Some of these identities will play an important role in subsections 4.2 and 4.3, where they greatly simplify and illuminate the definition of the Teleparallel Equivalent of GR (TEGR) and the Symmetric Teleparallel Equivalent of GR (STEGR), respectively. To begin with, we remark that it follows from the definitions of the curvature tensor, the torsion tensor, and the covariant derivative, that the following identities hold \[\boxed{\begin{array}{c}R^{\alpha}{}_{\mu(\nu\rho)}=0\\ R^{\mu}{}_{[\alpha\beta\gamma]}-\nabla_{[\alpha}T^{\mu}{}_{\beta\gamma]}+T^{ \lambda}{}_{[\alpha\beta}T^{\mu}{}_{\gamma]\lambda}=0\\ \nabla_{[\alpha}R^{\mu}{}_{|\nu|\beta\gamma]}-T^{\lambda}{}_{[\alpha\beta}R^{ \mu}{}_{|\nu|\gamma]\lambda}=0\end{array}} \tag{3.74}\] When we discuss teleparallel theories of gravity, it will prove to be useful to know how to relate the curvature tensor \(R^{\alpha}{}_{\mu\nu\rho}(\Gamma)\) of the affine connection to the curvature tensor \(\mathcal{R}^{\alpha}{}_{\mu\nu\rho}(g)\) of the Levi-Civita connection.
To establish a relationship between the two, we point out that adding a \((1,2)\) tensor to a given connection \(\Gamma^{\alpha}{}_{\mu\nu}\) results in a new and equally valid connection \[\hat{\Gamma}^{\alpha}{}_{\mu\nu}=\Gamma^{\alpha}{}_{\mu\nu}+\Omega^{\alpha}{}_{ \mu\nu}\,. \tag{3.75}\] This follows directly from the transformation behaviour of a connection under changes of coordinates (cf. equation (2.64)). We can then compute the curvature tensor of the connection \(\hat{\Gamma}^{\alpha}{}_{\mu\nu}\) and express it in terms of the curvature of \(\Gamma^{\alpha}{}_{\mu\nu}\) as well as contributions coming from the tensor \(\Omega^{\alpha}{}_{\mu\nu}\). One finds \[\boxed{\hat{R}^{\alpha}{}_{\beta\mu\nu}=R^{\alpha}{}_{\beta\mu\nu}+T^{\lambda }{}_{\mu\nu}\Omega^{\alpha}{}_{\lambda\beta}+2\mathcal{D}_{[\mu}\Omega^{\alpha }{}_{\nu]\beta}+2\Omega^{\alpha}{}_{[\mu|\lambda|}\Omega^{\lambda}{}_{\nu] \beta}} \tag{3.76}\] where \(\mathcal{D}\) is the covariant derivative with respect to the Levi-Civita connection, as usual. This identity allows us to easily find a relation between \(R^{\alpha}{}_{\mu\nu\rho}(\Gamma)\) and \(\mathcal{R}^{\alpha}{}_{\mu\nu\rho}(g)\). We simply assume that the original connection was the Levi-Civita one, i.e., \(\Gamma^{\alpha}{}_{\mu\nu}=\left\{{\alpha\atop\mu\nu}\right\}\), while the tensor \(\Omega^{\alpha}{}_{\mu\nu}\) is the sum of contorsion and deformation tensor, \(\Omega^{\alpha}{}_{\mu\nu}=K^{\alpha}{}_{\mu\nu}+L^{\alpha}{}_{\mu\nu}\). Thus, we find that the two curvature tensors are related to each other via \[\boxed{\begin{aligned}R^{\alpha}{}_{\mu\nu\rho}(\Gamma)=\;&\mathcal{R}^{ \alpha}{}_{\mu\nu\rho}(g)+T^{\lambda}{}_{\nu\rho}K^{\alpha}{}_{\lambda\mu}+2 \mathcal{D}_{[\nu}K^{\alpha}{}_{\rho]\mu}+T^{\lambda}{}_{\nu\rho}L^{\alpha}{} _{\lambda\mu}+2\mathcal{D}_{[\nu}L^{\alpha}{}_{\rho]\mu}\\ &+2K^{\alpha}{}_{[\nu|\lambda}K^{\lambda}{}_{\rho]\mu}+2L^{\alpha}{}_{[\nu| \lambda}K^{\lambda}{}_{\rho]\mu}+2K^{\alpha}{}_{[\nu|\lambda}L^{\lambda}{}_{ \rho]\mu}+2L^{\alpha}{}_{[\nu|\lambda}L^{\lambda}{}_{\rho]\mu}\end{aligned}} \tag{3.77}\] The actual identity which will prove to be useful in teleparallel theories of gravity is the one which relates the Ricci scalars of the two connections. It reads \[\boxed{R(\Gamma)=\mathcal{R}(g)+\mathbb{T}+\mathbb{Q}+T^{\rho\mu\nu}Q_{\mu \nu\rho}-T^{\mu}Q_{\mu}+T^{\mu}\bar{Q}_{\mu}+\mathcal{D}_{\alpha}\left(Q^{ \alpha}-\bar{Q}^{\alpha}+2T^{\alpha}\right)} \tag{3.78}\] where we have introduced the **torsion scalar** and **non-metricity scalar**, respectively defined by \[\mathbb{T} \coloneqq\frac{1}{2}\left(\frac{1}{4}T_{\alpha\mu\nu}+\frac{1}{2} T_{\mu\alpha\nu}-g_{\alpha\mu}T_{\nu}\right)T^{\alpha\mu\nu}\] \[\mathbb{Q} \coloneqq\frac{1}{4}Q_{\alpha\mu\nu}Q^{\alpha\mu\nu}-\frac{1}{2} Q_{\alpha\mu\nu}Q^{\mu\alpha\nu}-\frac{1}{4}Q_{\alpha}Q^{\alpha}+\frac{1}{2}Q_{ \alpha}\bar{Q}^{\alpha}\,. \tag{3.79}\] Two special cases of this identity which play a role in TEGR and STEGR, respectively, are \[\boxed{R(\Gamma)=\mathcal{R}(g)+\mathbb{T}+2\mathcal{D}_{\alpha}T^{\alpha}} \tag{3.80}\] where we set non-metricity to zero, and \[\boxed{R(\Gamma)=\mathcal{R}(g)+\mathbb{Q}+\mathcal{D}_{\alpha}\left(Q^{ \alpha}-\bar{Q}^{\alpha}\right)} \tag{3.81}\] where torsion vanishes.
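As a quick consistency check of (3.78) (a remark we add here), note that for a torsion-free and metric-compatible connection every term built from \(T\) or \(Q\) drops out, and the identity collapses to \(R(\Gamma)=\mathcal{R}(g)\), as it must, since such a connection is uniquely the Levi-Civita one. Conversely, for a connection with vanishing curvature and vanishing non-metricity, equation (3.80) gives \[\mathcal{R}(g)=-\mathbb{T}-2\mathcal{D}_{\alpha}T^{\alpha}\,,\] so the Einstein-Hilbert Lagrangian density \(\sqrt{|g|}\,\mathcal{R}(g)\) differs from \(-\sqrt{|g|}\,\mathbb{T}\) only by a total derivative. This observation underlies the teleparallel formulations discussed in section 4.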
Finally, we recall that for a general connection the curvature tensor is not antisymmetric in the first two indices, so one can form the non-zero homothetic tensor \(H_{\mu\nu}=R^{\lambda}{}_{\lambda\mu\nu}\). However, by taking traces of the second Bianchi identity above, one can show \[H_{\mu\nu}=2R_{[\mu\nu]}+2\nabla_{[\mu}T_{\nu]}+\nabla_{\lambda}T^{\lambda}{}_ {\mu\nu}+T_{\lambda}T^{\lambda}{}_{\mu\nu}\,.\] It thus follows that the homothetic tensor is _not_ an independent trace of the curvature tensor. It can be expressed with the help of other, already defined tensors. Another trace of the curvature tensor is the co-Ricci tensor \(P^{\mu}{}_{\nu}=R^{\mu\lambda}{}_{\nu\lambda}\). However, using the straightforward identity \(\nabla_{[\mu}Q_{\nu]\rho\sigma}=-R_{(\rho\sigma)\mu\nu}\), one can show \[P^{\mu}{}_{\nu}=R^{\mu}{}_{\nu}-2\nabla_{[\nu}Q_{\lambda]}{}^{\mu\lambda}\,.\] So this trace is also not independent. For the Levi-Civita connection one has by metric-compatibility that \(P_{\mu\nu}=R_{\mu\nu}\) as well as \(H_{\mu\nu}=0\); from the latter it follows then that the Ricci tensor is symmetric, as one is used to from Riemannian geometry.

## 4 The Geometrical Trinity of General Relativity

In 1915, Einstein completed his General Theory of Relativity and he based it on Riemannian geometry. He found this, at the time, relatively new branch of mathematics to be an adequate language to _(a)_ develop a field theoretic description of gravity which cures the action-at-a-distance problem of Newtonian gravity, _(b)_ fully explore the consequences of the equivalence principle, and _(c)_ implement the idea that the laws of Nature do not depend on our arbitrary choice of coordinate systems. The latter was an idea which, at the time, was unheard of and revolutionary. Today, we call this the principle of general covariance. Even though it was never Einstein's intention to "geometrize gravity" [89], as it is sometimes phrased, the theory he developed lends itself to an interpretation of the phenomena of gravity as the manifestation of the curvature of spacetime. This has been the prevalent interpretation of gravity for the past 100 years. However, as we saw in sections 2 and 3, Riemannian geometry is a special case of the much more general theory of metric-affine geometry. There is no physical principle that we know of which unequivocally selects Riemannian geometry as the only viable description of gravity. In fact, there are three distinct and yet physically equivalent descriptions of gravity, which are rooted in the mathematical framework of metric-affine geometry. These formulations ascribe gravitational phenomena either to non-vanishing curvature, torsion, or non-metricity. These descriptions form the **geometric trinity of General Relativity** [5, 6]. The next three subsections are dedicated to the corners of the triangle shown in Figure 12: GR, TEGR, and STEGR. We also discuss CGR as a gauge fixed version of STEGR, as well as GTEGR from which TEGR and STEGR emerge and which can be thought of as the lower edge of the triangle in Figure 12.

Figure 12: Three different but equivalent representations of gravity: General Relativity (based on curvature), the Symmetric Teleparallel Equivalent of GR (based on non-metricity), and the Teleparallel Equivalent of GR (based on torsion).

First, we review Einstein's original formulation in order to establish some basic facts and notations. This will also facilitate the comparison with the other two formulations, which we present in subsections 4.2 (TEGR) and 4.3 (STEGR). In all three formulations, the starting point is a generic metric-affine geometry and we follow a strict structure in order to construct the theory and work out its main features:
_a. The Geometric Postulates_; _b. Form of the Connection_; _c. Construction of the Action Functional_; _d. The Metric and Connection Field Equations_; _e. The Palatini Formulation of the Action Principle_; _f. The Bianchi Identities_; and finally _g. Counting Degrees of Freedom_. We deviate from this basic structure when we discuss CGR as the gauge fixed version of STEGR in subsection 4.4. Similarly, a different approach is used in subsection 4.5, where the focus is on GTEGR, its special properties, and how TEGR and STEGR emerge from this more general theory.

### Einstein's Original Formulation of General Relativity

#### 4.1.1 The Geometric Postulates

We start with a metric-affine geometry \((\mathcal{M},g,\Gamma)\) and stress that at this stage, neither \(\mathcal{M}\) nor \(g\) are fixed. This means that we do not choose a particular manifold nor do we choose a particular metric on that manifold. Both entities, \(\mathcal{M}\) and \(g\), will be determined later as solutions of Einstein's field equations. However, we need to select a connection \(\Gamma\) (or, as we will see later, select at least a class of connections) in order to formulate the theory. Thus, we postulate that \(\Gamma\) satisfies \[T^{\alpha}{}_{\mu\nu} \overset{!}{=}0 \qquad\text{and}\qquad Q_{\alpha\mu\nu} \overset{!}{=}0\,. \tag{4.1}\] These two postulates leave the curvature tensor \(\mathcal{R}^{\alpha}{}_{\mu\nu\rho}\) as the only non-zero tensor which characterizes the spacetime geometry. It will be the main building block for GR.

#### 4.1.2 Form of a Torsionless, Metric-Compatible Connection

It is well-known, and it can also be checked using (3.47), that these two geometric postulates are satisfied if and only if the connection is given by the Levi-Civita connection, \[\Gamma^{\alpha}{}_{\mu\nu}\equiv\left\{\begin{matrix}\alpha\\ \mu\nu\end{matrix}\right\}=\frac{1}{2}g^{\alpha\lambda}\left( \partial_{\mu}g_{\nu\lambda}+\partial_{\nu}g_{\mu\lambda}-\partial_{\lambda} g_{\mu\nu}\right)\,. \tag{4.2}\] Because the connection is completely determined by the metric, we will omit \(\Gamma\) from the triple \((\mathcal{M},g,\Gamma)\) and simply say that spacetime is modelled by the pair \((\mathcal{M},g)\), where it is silently understood that \(\Gamma\) is given by (4.2).

#### 4.1.3 Construction of the Action Functional

The action functional which defines GR is the famous Einstein-Hilbert (EH) action plus the equally famous Gibbons-Hawking-York (GHY) boundary term [90, 91]. Including a cosmological term and a matter action for completeness, we can define GR by the functional \[\mathcal{S}_{\mathrm{GR}}[g,\Psi] \coloneqq\mathcal{S}_{\mathrm{EH}}[g]+\mathcal{S}_{\mathrm{GHY}}[ h]+\mathcal{S}_{\mathrm{matter}}[g,\Psi]\] \[\coloneqq\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\, \sqrt{|g|}\,(\mathcal{R}-2\Lambda)+\frac{1}{\kappa}\oint_{\partial\mathcal{M} }\mathrm{d}^{3}y\,\sqrt{|h|}\,\varepsilon\,\mathcal{K}+\mathcal{S}_{\mathrm{ matter}}[g,\Psi]\,, \tag{4.3}\] with \(\kappa\coloneqq 8\pi G\). The first integral on the second line is the aforementioned EH action (including a cosmological constant \(\Lambda\)), the second integral is the GHY boundary term, and \(\mathcal{S}_{\mathrm{matter}}[g,\Psi]\) is the action of (tensorial) matter fields \(\Psi\) which are minimally coupled to the gravitational field \(g_{\mu\nu}\) (a concrete example is given below). Spinorial fields are not described by the action (4.3). To describe Fermions, we would have to describe gravity in terms of a tetrad field, rather than a metric tensor.
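As a concrete example of such a minimally coupled matter action (an illustration we add here, written assuming a mostly-plus signature), one may take a real scalar field \(\phi\) of mass \(m\), \[\mathcal{S}_{\mathrm{matter}}[g,\phi]=-\frac{1}{2}\int_{\mathcal{M}}\mathrm{d }^{4}x\,\sqrt{|g|}\left(g^{\mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\phi+m^{2 }\phi^{2}\right)\,.\] Minimal coupling means that the flat-space action is covariantized simply by replacing \(\eta_{\mu\nu}\to g_{\mu\nu}\) and \(\mathrm{d}^{4}x\to\mathrm{d}^{4}x\,\sqrt{|g|}\); no explicit curvature couplings such as \(\xi\,\mathcal{R}\,\phi^{2}\) are included.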
As we will discuss shortly, the GHY term is necessary whenever the manifold \(\mathcal{M}\) has a boundary \(\partial\mathcal{M}\), otherwise the variational principle is ill-defined and would not yield any field equations. In the above boundary integral, \(\varepsilon\) is defined as \(\varepsilon\coloneqq n_{\mu}n^{\mu}=\pm 1\), where \(n^{\mu}\) is the normal vector to \(\partial\mathcal{M}\). This vector is normalized to either \(\varepsilon=+1\) (when \(\partial\mathcal{M}\) is timelike) or \(\varepsilon=-1\) (when \(\partial\mathcal{M}\) is spacelike). Furthermore, \(h\) denotes the determinant of the metric intrinsic to \(\partial\mathcal{M}\), while \(\mathcal{K}\) is the trace of the extrinsic curvature of \(\partial\mathcal{M}\) viewed as hypersurface embedded into \(\mathcal{M}\). For a didactical discussion of hypersurfaces, embeddings, and the concept of extrinsic curvature, we refer the reader to Poisson's book [88]. #### The Field Equations Our next task is to find field equations for the metric which contain at most second order derivatives of \(g\) and which are of the form \[\mathcal{E}(g)_{\mu\nu} =8\pi G\,\mathcal{T}_{\mu\nu} \text{with} \mathcal{D}_{\mu}\mathcal{E}^{\mu}{}_{\nu} =0\,, \tag{4.4}\] and where \(\mathcal{T}_{\mu\nu}\) stands for the energy-momentum tensor of matter fields. This form of the field equations can be motivated by considering the Newtonian limit. The requirement of second order field equations is necessary for having a well-posed initial value problem and the divergence-freeness of the tensor \(\mathcal{E}_{\mu\nu}\) ensures that the covariant conservation of the matter energy-momentum tensor is a consequence of the field equations. Given only these requirements, Lovelock showed [92] that the left hand side of the field equations has to be \[\mathcal{E}(g)_{\mu\nu} =a\,G_{\mu\nu}+\Lambda\,g_{\mu\nu}\,, \tag{4.5}\] where \(a\) and \(\Lambda\) are real constants and \(G_{\mu\nu}\) is the so-called Einstein tensor, which is explicitly given by \[G_{\mu\nu} \coloneqq\mathcal{R}_{\mu\nu}-\frac{1}{2}\mathcal{R}\,g_{\mu\nu}\,. \tag{4.6}\] This tensor indeed only contains first and second order derivatives of the metric. Taking again into consideration the Newtonian limit, we find that the constant \(a\) is equal to \(1\) and \(\Lambda\), the so-called cosmological constant, has to be very small. Measurements performed by the Planck collaboration [93, 94] revealed that the cosmological constant is positive and of the order of \(\Lambda\sim 10^{-52}\mathrm{m}^{-2}\), in SI units. Thus, the Einstein field equations take on the form \[\boxed{\mathcal{R}_{\mu\nu}-\frac{1}{2}\mathcal{R}\,g_{\mu\nu}+ \Lambda\,g_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}} \tag{4.7}\] Given a set of initial data on a three-dimensional Cauchy surface, these equations determine the metric and the manifold, \((\mathcal{M},g)\), up to diffeomorphisms [73, 74]. This is analogous to Maxwell's equations, which determine the vector potential \(A^{\mu}\) up to gauge transformations. For more details and a mathematically robust formulation of the initial value problem of GR, see for instance [73, 74]. The field equations (4.7) follow from the action functional (4.3) by taking a variation with respect to the inverse metric \(g^{\mu\nu}\) and demanding that this variation vanishes. As mentioned above, if \(\mathcal{M}\) has a boundary the variational principle is ill-defined unless we add the GHY boundary term. 
The necessity of this term was first realized by York [90] and shortly afterwards also by Gibbons and Hawking [91]. Its origin is easy to understand if we consider the variation of the Einstein-Hilbert action with respect to \(g^{\mu\nu}\), which results in \[\delta_{g}\mathcal{S}_{\rm EH}[g]= \frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\, \left(\mathcal{R}_{\mu\nu}-\frac{1}{2}\mathcal{R}\,g_{\mu\nu}+\Lambda\,g_{\mu \nu}\right)\delta g^{\mu\nu}\] \[-\frac{1}{2\kappa}\oint_{\partial\mathcal{M}}\mathrm{d}^{3}y\, \sqrt{|h|}\,\varepsilon\,h^{\mu\nu}n^{\alpha}\partial_{\alpha}\left(\delta g_{\mu\nu}\right)\,. \tag{4.8}\] Recall that in the calculus of variations it is assumed that the variation \(\delta g^{\mu\nu}\) is fixed at the boundary, \(\left.\delta g^{\mu\nu}\right|_{\partial\mathcal{M}}=0\), but otherwise arbitrary. The condition \(\left.\delta g^{\mu\nu}\right|_{\partial\mathcal{M}}=0\) implies that derivatives of \(\delta g^{\mu\nu}\) in directions tangential to \(\partial\mathcal{M}\) vanish, but it does _not_ imply that derivatives in the direction normal to \(\partial\mathcal{M}\) vanish. In particular, one can conclude that \[n^{\alpha}\partial_{\alpha}\left(\delta g^{\mu\nu}\right)|_{ \partial\mathcal{M}}\neq 0\,, \tag{4.9}\] which in turn implies that the boundary integral in (4.8) does not vanish in general. Hence, the variation of the EH action with respect to \(g^{\mu\nu}\) under the boundary condition \(\left.\delta g^{\mu\nu}\right|_{\partial\mathcal{M}}=0\) does _not_ imply the Einstein field equations, unless one also imposes the _additional_ boundary condition \(\left.n^{\alpha}\partial_{\alpha}\left(\delta g^{\mu\nu}\right)\right|_{ \partial\mathcal{M}}=0\). Gibbons, Hawking, and York realized that this problem can be circumvented by the introduction of a boundary integral, whose variation precisely cancels the boundary integral in (4.8). Indeed, the variation of the GHY functional reads \[\delta_{g}\mathcal{S}_{\rm GHY}[h]=\frac{1}{2\kappa}\oint_{ \partial\mathcal{M}}\mathrm{d}^{3}y\,\sqrt{|h|}\,\varepsilon\,h^{\mu \nu}n^{\alpha}\partial_{\alpha}\left(\delta g_{\mu\nu}\right)\,, \tag{4.10}\] which then implies that the total variation of the GR action is given by \[\delta_{g}\mathcal{S}_{\rm GR}[g,\Psi]=\frac{1}{2\kappa}\int_{ \mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,\left(\mathcal{R}_{\mu\nu}-\frac{1}{ 2}\mathcal{R}\,g_{\mu\nu}+\Lambda\,g_{\mu\nu}\right)\delta g^{\mu\nu}-\frac{ 1}{2}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,\mathcal{T}_{\mu\nu} \delta g^{\mu\nu}\,, \tag{4.11}\] where we have defined the energy-momentum tensor of matter fields as \[\boxed{\mathcal{T}_{\mu\nu}\coloneqq-\frac{2}{\sqrt{|g|}}\frac{ \delta\mathcal{S}_{\rm matter}}{\delta g^{\mu\nu}}} \tag{4.12}\] Thus, only once one has supplemented the action by an appropriate boundary term does one reproduce the celebrated Einstein field equations. As we will see, neither in TEGR nor STEGR are boundary terms needed.

#### The Palatini Formulation of the Action Principle

Einstein's General Relativity can also be formulated in the framework of a _general_ metric-affine geometry \((\mathcal{M},g,\Gamma)\), where \(\Gamma\) a priori possesses non-trivial curvature, torsion, and non-metricity. What is needed is a slight adaptation of the action principle. In the so-called **Palatini formalism**, metric and connection are regarded as two independent fields and the action is varied with respect to both.
As we will see below, even if we start with a completely general \(\Gamma\), the connection field equations turn out to be purely algebraic equations which fix \(\Gamma\) to be the Levi-Civita connection up to a projective symmetry. This gives us back Einstein's original connection and its original field equations for the metric. The Palatini action functional for GR in absence of a cosmological constant is defined as \[\mathcal{S}_{\rm GR}[g,\Gamma]\coloneqq\frac{1}{2\kappa}\int_{ \mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}g^{\mu\nu}R_{\mu\nu}(\Gamma)+\mathcal{S }_{\rm matter}\,, \tag{4.13}\] where we recognize the first integral as being the EH action but written in terms of the Ricci scalar of the general affine connection \(\Gamma\), rather than the Ricci scalar \(\mathcal{R}\) of the Levi-Civita connection. Notice also that it is not necessary to include a boundary term a la Gibbons-Hawking-York, since there are no second order derivatives. The variational principle is thus well-defined. In fact, the metric field equations are determined along the same lines as in the standard GR case, but without the complication of boundary terms. By performing the variation of \(\mathcal{S}_{\rm GR}[g,\Gamma]\) with respect to the inverse metric--while keeping the connection fixed--we find \[\delta_{g}\mathcal{S}_{\rm GR}[g,\Gamma]\big{|}_{\Gamma}=\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\left(\delta\sqrt{|g|}\,g^{\mu\nu}R_{\mu\nu}+\sqrt{|g|}\,R_{(\mu\nu)}\,\delta g^{\mu\nu}\right)+\delta_{g}\mathcal{S}_{\rm matter}\big{|}_{\Gamma}\,. \tag{4.14}\] The Ricci tensor is not varied with respect to the metric, since it is constructed exclusively from the affine connection. Also, note that the Ricci tensor is in general not symmetric but that, due to the contraction with \(\delta g^{\mu\nu}\), only its symmetric part contributes. Using the well-known identity \[\delta\sqrt{|g|}=-\frac{1}{2}\sqrt{|g|}\,g_{\mu\nu}\,\delta g^{\mu\nu} \tag{4.15}\] together with definition (4.12) we find that the metric field equations can be written as \[R_{(\mu\nu)}(\Gamma)-\frac{1}{2}R(\Gamma)\,g_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}\,. \tag{4.16}\] These equations have the same form as Einstein's field equations, but the Ricci tensor and Ricci scalar depend on the affine connection, rather than on the Levi-Civita connection. Next, we turn our attention to the variation with respect to \(\Gamma^{\alpha}{}_{\mu\nu}\), while keeping the metric fixed: \[\delta_{\Gamma}\mathcal{S}_{\rm GR}[g,\Gamma]\big{|}_{g}=\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,g^{\mu\nu}\,\delta R_{\mu\nu}(\Gamma)+\delta_{\Gamma}\mathcal{S}_{\rm matter}\big{|}_{g}\,. \tag{4.17}\] The variation of the Ricci tensor is given by the Palatini identity, \[\delta R_{\mu\nu}=\nabla_{\lambda}\delta\Gamma^{\lambda}{}_{\nu\mu}-\nabla_{\nu}\delta\Gamma^{\lambda}{}_{\lambda\mu}+T^{\rho}{}_{\lambda\nu}\,\delta\Gamma^{\lambda}{}_{\rho\mu}\,. \tag{4.18}\] With the help of the Palatini identity and an integration by parts in order to move the covariant derivative off \(\delta\Gamma\), we find that the variation with respect to the connection can be written as \[\delta_{\Gamma}\mathcal{S}_{\rm GR}[g,\Gamma]\big{|}_{g}= \frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\left[\nabla_{\lambda}\left(\sqrt{|g|}\,g^{\mu\nu}\,\delta\Gamma^{\lambda}{}_{\nu\mu}\right)-\nabla_{\nu}\left(\sqrt{|g|}\,g^{\mu\nu}\,\delta\Gamma^{\lambda}{}_{\lambda\mu}\right)\right]\] \[\quad-\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\left[\nabla_{\lambda}\left(\sqrt{|g|}\,g^{\mu\nu}\right)\delta\Gamma^{\lambda}{}_{\nu\mu}-\nabla_{\nu}\left(\sqrt{|g|}\,g^{\mu\nu}\right)\delta\Gamma^{\lambda}{}_{\lambda\mu}-\sqrt{|g|}\,g^{\mu\nu}\,T^{\rho}{}_{\lambda\nu}\,\delta\Gamma^{\lambda}{}_{\rho\mu}\right]+\delta_{\Gamma}\mathcal{S}_{\rm matter}\big{|}_{g}\,, \tag{4.19}\] where we have kept the total divergences. We cannot simply drop these, as we would usually do, since in a general metric-affine geometry they do _not_ give rise to pure boundary terms. In fact, the generalized Gauss theorem of subsection 3.6 tells us that \[\int_{\mathcal{M}}\mathrm{d}^{4}x\,\nabla_{\lambda}\left(\sqrt{|g|}\,g^{\mu\nu}\,\delta\Gamma^{\lambda}{}_{\nu\mu}\right) =\oint_{\partial\mathcal{M}}\mathrm{d}^{3}y\,\sqrt{|h|}\,\varepsilon\,n_{\lambda}\,g^{\mu\nu}\,\delta\Gamma^{\lambda}{}_{\nu\mu}-\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,T_{\lambda}\,g^{\mu\nu}\,\delta\Gamma^{\lambda}{}_{\nu\mu}\] \[\int_{\mathcal{M}}\mathrm{d}^{4}x\,\nabla_{\nu}\left(\sqrt{|g|}\,g^{\mu\nu}\,\delta\Gamma^{\lambda}{}_{\lambda\mu}\right) =\oint_{\partial\mathcal{M}}\mathrm{d}^{3}y\,\sqrt{|h|}\,\varepsilon\,n_{\nu}\,g^{\mu\nu}\,\delta\Gamma^{\lambda}{}_{\lambda\mu}-\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,T_{\nu}\,g^{\mu\nu}\,\delta\Gamma^{\lambda}{}_{\lambda\mu}\,. \tag{4.20}\] Both boundary integrals vanish because of the standard boundary condition. That is, the boundary integrals vanish because the variations are being kept fixed at the boundary of the integration region.
However, the bulk integrals on the right side contribute to the variation of the action and we therefore find \[\delta_{\Gamma}\mathcal{S}_{\rm GR}[g,\Gamma]\big{|}_{g} =-\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\left[\nabla_{ \alpha}\left(\sqrt{|g|}\,g^{\mu\nu}\right)\delta\Gamma^{\alpha}{}_{(\mu\nu)}- \nabla_{(\mu}\left(\sqrt{|g|}\,g^{\mu\nu}\right)\delta\Gamma^{\alpha}{}_{\alpha |\nu)}-\sqrt{|g|}\,g^{\mu\nu}T^{\alpha}{}_{\beta(\mu}\delta\Gamma^{\beta}{}_{ \alpha|\nu)}\right]\] \[\quad+\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\sqrt{|g|} \,T_{(\mu}g^{\mu\nu}\,\delta\Gamma^{\alpha}{}_{\alpha|\nu)}-\frac{1}{2\kappa} \int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,T_{\alpha}g^{\mu\nu}\delta \Gamma^{\alpha}{}_{(\mu\nu)}+\delta_{\Gamma}\mathcal{S}_{\rm matter}\big{|}_{g} \tag{4.21}\] After some index reshuffling, we can factor out the common factor \(\delta\Gamma\) and read off the connection field equations \[\nabla_{\alpha}\left(\sqrt{|g|}\,g^{\mu\nu}\right)-\delta^{\mu}{}_{\alpha} \nabla_{\beta}\left(\sqrt{|g|}\,g^{\beta\nu}\right)=\sqrt{|g|}\left[g^{\mu\nu} T_{\alpha}+g^{\beta\nu}T^{\mu}{}_{\alpha\beta}-\delta^{\mu}{}_{\alpha}g^{\beta \nu}T_{\beta}\right]+\tilde{\mathcal{H}}_{\alpha}{}^{\mu\nu}\,, \tag{4.22}\] where we have also introduced the **hypermomentum of matter6**, Footnote 6: Notice that it follows from the definition that \(\tilde{\mathcal{H}}^{\alpha}{}_{\mu\nu}\) is a tensor density of weight \(w=+1\) and that equation (4.22) is thus self-consistent. \[\tilde{\mathcal{H}}_{\alpha}{}^{\mu\nu}\coloneqq 2\kappa\frac{\delta \mathcal{S}_{\rm matter}}{\delta\Gamma^{\alpha}{}_{\mu\nu}}\,. \tag{4.23}\] These are the connection field equations of GR. Fermionic fields naturally couple to torsion and do therefore contribute to the hypermomentum. Perhaps against expectation, if torsion is present, bosonic fields also contribute to the hypermomentum. A detailed discussion of matter coupling in metric-affine geometries can be found in [80]. For simplicity, we shall first assume that torsion and hypermomentum both vanish. Then, the connection field equations (4.22) reduce to \[\nabla_{\alpha}\left(\sqrt{|g|}\,g^{\mu\nu}\right)-\delta^{\mu}{}_{\alpha} \nabla_{\beta}\left(\sqrt{|g|}\,g^{\beta\nu}\right)=0\,. \tag{4.24}\] By taking the \(\alpha=\mu\) trace of this equation, we obtain \[\nabla_{\beta}\left(\sqrt{|g|}\,g^{\beta\nu}\right)=0\qquad \Longleftrightarrow\qquad Q^{\nu}-2\bar{Q}^{\nu}=0\,, \tag{4.25}\] where we have written the equation also in terms of the non-metricity tensor and its two traces, \(Q_{\nu}=g^{\alpha\beta}Q_{\nu\alpha\beta}\) and \(\bar{Q}_{\nu}=g^{\alpha\beta}Q_{\alpha\beta\nu}\). Plugging this result back into equation (4.24) yields \[\nabla_{\alpha}\left(\sqrt{|g|}\,g^{\mu\nu}\right)=0\qquad \Longleftrightarrow\qquad g^{\mu\nu}\,Q_{\alpha}-2Q_{\alpha}{}^{\mu\nu}=0\,. \tag{4.26}\] After contracting this equation with \(g_{\mu\nu}\), it follows that \[4Q_{\alpha}-2g_{\mu\nu}Q_{\alpha}{}^{\mu\nu}=2Q_{\alpha}=0\,. \tag{4.27}\] Hence, equation (4.26) finally tells us that \[Q_{\alpha}{}^{\mu\nu}=0\,. \tag{4.28}\] Recall that we assumed \(T^{\alpha}{}_{\mu\nu}=0\) and this simplifying assumption led us to uncover that the connection field equation reduces to \(Q_{\alpha\mu\nu}=0\). We can therefore conclude that the connection is torsionless and metric-compatible, which uniquely fixes it to be the Levi-Civita connection. This means that the metric field equations (4.16) become the standard Einstein equations.
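To make the projective ambiguity mentioned in the next paragraph explicit (a short computation we add here for illustration), consider shifting the connection by \(\Gamma^{\alpha}{}_{\mu\nu}\mapsto\Gamma^{\alpha}{}_{\mu\nu}+\delta^{\alpha}{}_{\nu}\,\xi_{\mu}\) for an arbitrary \(1\)-form \(\xi_{\mu}\). Inserting this shift into the definition (3.22) of the curvature tensor gives \[R^{\alpha}{}_{\mu\nu\rho}\;\mapsto\;R^{\alpha}{}_{\mu\nu\rho}+\delta^{\alpha}{}_{\mu}\left(\partial_{\nu}\xi_{\rho}-\partial_{\rho}\xi_{\nu}\right)\,,\qquad R_{\mu\nu}\;\mapsto\;R_{\mu\nu}+\partial_{\mu}\xi_{\nu}-\partial_{\nu}\xi_{\mu}\,,\] so the symmetric part \(R_{(\mu\nu)}\), and hence the metric field equations (4.16), are left untouched. This is why the connection field equations can only determine \(\Gamma\) up to such a projective transformation.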
The simplifying assumptions that torsion and hypermomentum vanish can be lifted and even in full generality it is found that the connection field equations are purely algebraic equations for the connection which, at the end of the day, can be completely solved. It is found that the connection is given by the Levi-Civita connection, up to a projective transformation \(\Gamma^{\alpha}{}_{\mu\nu}\mapsto\Gamma^{\alpha}{}_{\mu\nu}+\delta^{\alpha}{}_{ \nu}\xi_{\mu}\). For the general case, we refer the reader to [95]. The key lesson here is the following: As long as the action has the same form as the EH action, changing the geometric framework will _not_ lead to a new formulation of GR. General Relativity arises naturally if the dynamics is described by an action of the EH form, even if the variational principle is formulated a la Palatini. Hence, if we wish to develop teleparallel theories of gravity, we not only have to change the geometric framework, we also have to change the action such that it has a genuinely _different_ form, but is nevertheless equivalent to the EH action. Equivalent means that the field equations for the metric are the same and that the theories all propagate the same number of physical degrees of freedom.

#### The Bianchi Identities

The integrand of the Einstein-Hilbert action is \(\sqrt{|g|}\,\mathcal{R}\), which is a scalar density of weight \(w=+1\). Recall that under a change of coordinates \(x^{\mu}\mapsto x^{\prime\mu}(x)\) a scalar density of weight one transforms as (see equation (28)) \[\sqrt{|g(x)|}\,\mathcal{R}(x)=\det\left(J\right)\,\sqrt{|g(x^{ \prime})|}\,\mathcal{R}(x^{\prime})\,, \tag{4.29}\] where \(J\) is the Jacobian matrix with components \(J^{\mu}{}_{\nu}=\frac{\partial x^{\prime\mu}}{\partial x^{\nu}}\). It therefore follows from the change of integration variables formula of calculus that \[\int_{\mathcal{M}}\mathrm{d}^{4}x^{\prime}\,\sqrt{|g(x^{\prime})| }\,\mathcal{R}(x^{\prime})\det\left(J\right)=\int_{\mathcal{M}}\mathrm{d}^{4} x\sqrt{|g(x)|}\,\mathcal{R}(x)\,. \tag{4.30}\] In other words, the Einstein-Hilbert action is invariant under diffeomorphisms. Since this is true for _any_ diffeomorphism, we can just as well consider a 1-parameter family of diffeomorphisms: Let \(\phi_{s}:\mathbb{R}\times\mathcal{M}\to\mathcal{M}\) be such a 1-parameter family of diffeomorphisms with \(\phi_{s=0}=\mathrm{id}\) and with generating vector field \(v\coloneqq\frac{\mathrm{d}\phi_{s}}{\mathrm{d}s}\Big{|}_{s=0}\). This family of diffeomorphisms can be read as a family of changes of coordinates, i.e., \(x^{\mu}\mapsto\phi_{s}^{\mu}(x)\) for every value of \(s\). The EH action is invariant under all these diffeomorphisms. Recall from subsection 2.3 that a 1-parameter family of diffeomorphisms generates a flow and that the infinitesimal change of a tensor under a flow is measured by the Lie derivative. In the case of the metric we have \[\phi_{s}^{*}g_{\mu\nu}=g_{\mu\nu}+s\,\mathcal{L}_{v}g_{\mu\nu}\,, \tag{4.31}\] where the parameter \(|s|\ll 1\) is infinitesimally small and \(\phi_{s}^{*}g_{\mu\nu}\) shall be understood as saying "we applied the diffeomorphism to the metric". Applying this 1-parameter family of diffeomorphisms to the EH action is tantamount to considering \[\mathcal{S}_{\mathrm{EH}}[\phi_{s}^{*}g]\,. 
\tag{4.32}\] Due to the invariance we have \[\mathcal{S}_{\mathrm{EH}}[\phi_{s}^{*}g]=\mathcal{S}_{\mathrm{EH }}[g] \tag{4.33}\] and if we expand this equation in \(s\) we find \[\mathcal{S}_{\mathrm{EH}}[g]+s\,\delta_{v}\mathcal{S}_{\mathrm{EH }}[g]=\mathcal{S}_{\mathrm{EH}}[g]\qquad\Longrightarrow\qquad\delta_{v} \mathcal{S}_{\mathrm{EH}}[g]=0\,, \tag{4.34}\] where the variation \(\delta_{v}\mathcal{S}_{\rm EH}[g]\) is defined as \[2\kappa\,\delta_{v}\mathcal{S}_{\rm EH}[g] =\int_{\mathcal{M}}\frac{\delta}{\delta g_{\mu\nu}}\left(\sqrt{|g|} \,\mathcal{R}\right)\,\delta_{v}g_{\mu\nu}\mathrm{d}^{4}x\] \[=\int_{\mathcal{M}}\frac{\delta}{\delta g_{\mu\nu}}\left(\sqrt{|g| }\,\mathcal{R}\right)\,\mathcal{L}_{v}g_{\mu\nu}\mathrm{d}^{4}x\,. \tag{4.35}\] Of course we already know the variation of \(\sqrt{|g|}\,\mathcal{R}\) with respect to \(g_{\mu\nu}\). Up to boundary terms, this is simply the Einstein tensor with raised indices multiplied by the square root of \(|g|\), i.e., \(\sqrt{|g|}\,G^{\mu\nu}\). By recalling from equation (3.50) that \[\mathcal{L}_{v}g_{\mu\nu}=2\mathcal{D}_{(\mu}v_{\nu)}\,, \tag{4.36}\] we can further simplify the form of the variation \(\delta_{v}\mathcal{S}_{\rm EH}[g]\) and we find \[2\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,G^{\mu\nu} \mathcal{D}_{(\mu}v_{\nu)}=2\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\, \mathcal{D}_{\mu}G^{\mu\nu}v_{\nu}=0\,, \tag{4.37}\] where we integrated by parts and dropped the boundary term. Since this has to hold for _any_ diffeomorphism (i.e., for any generating vector field \(v^{\mu}\)), we finally find \[\boxed{\mathcal{S}_{\rm EH}[g]\text{ invariant under diffeomorphisms}}\quad \Longrightarrow\quad\mathcal{D}_{\mu}G^{\mu\nu}=0 \tag{4.38}\] These are the **Bianchi identities** and they are a consequence of the diffeomorphism invariance of the theory. These equations imply that not all of Einstein's field equations are dynamical. This affects the counting of degrees of freedom, as we will now see. #### 4.2.6 Counting Degrees of Freedom The basic variable considered in GR is the metric. It has a total of ten independent components and thus we have an upper bound of ten physical degrees of freedom for the gravitational field. There is an equal number of second order partial differential equations, namely \(G_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}\). However, since Einstein's equations are generally covariant, we are free to perform diffeomorphisms. Each diffeomorphism provides us with four choices, which in turn grants us the freedom to fix four components of the metric. Moreover, the Bianchi identities tell us that four of Einstein's field equations are actually constraints, rather than dynamical equations. This follows from expanding the Bianchi identities as \[\mathcal{D}_{\mu}G^{\mu\nu} =\mathcal{D}_{0}G^{0\nu}+\mathcal{D}_{i}G^{i\nu}\] \[=\partial_{0}G^{0\nu}+\left\{\begin{array}{c}\mu\\ \mu\lambda\end{array}\right\}G^{\lambda\nu}+\left\{\begin{array}{c}\nu\\ \mu\lambda\end{array}\right\}G^{\mu\lambda}+\partial_{i}G^{i\nu}=0\,, \tag{4.39}\] where the index \(i\) stands for spatial derivatives and is summed over the numbers \(1,2,3\). Let us now determine the order of spatial and temporal derivatives of the metric in each term. To that end, we recall that \(G_{\mu\nu}\) is second order in all derivatives, while the Levi-Civita connection is only first order in all derivatives. 
Hence, it follows that \[\left\{\begin{array}{c}\mu\\ \mu\lambda\end{array}\right\}G^{\lambda\nu}+\left\{\begin{array}{c}\nu\\ \mu\lambda\end{array}\right\}G^{\mu\lambda}\qquad\leadsto\qquad\text{Highest derivatives contained: }\partial_{0}^{2}g_{\mu\nu},\,\partial_{0}\partial_{i}g_{\mu\nu},\,\text{and }\partial_{i}\partial_{j}g_{\mu\nu}\,. \tag{4.40}\] Furthermore, since \(\partial_{i}\) increases the order of spatial derivatives, we find that \[\partial_{i}G^{i\nu}\qquad\leadsto\qquad\text{Highest derivatives contained: }\partial_{0}^{2}\partial_{i}g_{\mu\nu},\,\partial_{0}\partial_{i}\partial_{j}g_{ \mu\nu},\,\partial_{i}\partial_{j}\partial_{k}g_{\mu\nu},\,\text{etc.} \tag{4.41}\] In other words, these two terms contain third order derivatives. However, the temporal derivatives are at most second order! This is an important realization because \(\partial_{0}G^{0\nu}\) contains third order time derivatives, _provided that \(G^{0\nu}\) contains second order time derivatives_. However, the Bianchi identities tell us that this cannot be the case. None of the other terms we analyzed contains third order time derivatives. The highest order is two. Hence, there is nothing which could cancel the presumed third order time derivatives in \(\partial_{0}G^{0\nu}\), which is necessary for the Bianchi identities to hold. It follows that the assumption that \(G^{0\nu}\) contains second order time derivatives is wrong! At most, it can contain first order time derivatives (and indeed it does). The important conclusion is that the four equations \(G^{0\nu}=\kappa\,{\cal T}^{0\nu}\) constitute constraints on the initial data, rather than dynamical equations for the metric. So we are finally left with \(10\) metric components \(-4\) diffeomorphisms \(-4\) constraints \(=2\) physical degrees of freedom. ### The Teleparallel Equivalent of General Relativity (TEGR) In the previous subsection we saw how GR emerges from an action principle in conjunction with the geometric postulates of vanishing torsion and vanishing non-metricity. We also saw that the postulates can be dropped and that GR emerges from a Palatini variational principle, where the connection \(\Gamma\) is assumed to be an independent field, but which is ultimately fixed by the field equations to be precisely the Levi-Civita connection. This fact hinges on the form of the action: As long as the action has the EH form, GR emerges naturally. It follows that in order to obtain an equivalent but _different_ geometric formulation of GR, we need to change the geometric framework as well as the action principle. In the following, we show how the so-called Teleparallel Equivalent of GR, or TEGR for short, achieves this. #### The Geometric Postulates TEGR attributes the effects of gravity to a non-vanishing torsion tensor. The starting point is a metric-affine geometry \(({\cal M},g,\Gamma)\), where the connection is postulated to satisfy \[R^{\alpha}{}_{\mu\nu\rho} \stackrel{{!}}{{=}}0 \text{and} Q_{\alpha\mu\nu} \stackrel{{!}}{{=}}0\,. \tag{4.42}\] This may raise the question, in what sense GR and TEGR could even be equivalent to each other, given that curvature is postulated to vanish in the latter one. The key observation to resolve this apparent tension is the following: What is postulated to vanish in TEGR is the curvature tensor \(R^{\alpha}{}_{\mu\nu\rho}\) with respect to the affine connection \(\Gamma\), not the curvature tensor \(\mathcal{R}^{\alpha}{}_{\mu\nu\rho}\) with respect to the Levi-Civita connection. 
Moreover, recall that the two curvature tensors are related to each other by the identity (3.77). Starting from this identity, we have seen that an important special case emerges when non-metricity is set to zero. Namely, equation (3.80), which relates the two Ricci scalars and which we repeat here for convenience: \[R(\Gamma)=\mathcal{R}(g)+\mathbb{T}+2\mathcal{D}_{\alpha}T^{\alpha}\,. \tag{4.43}\] The torsion scalar \(\mathbb{T}\) is explicitly given by \[\mathbb{T}\coloneqq\frac{1}{2}\left(\frac{1}{4}T_{\alpha\mu\nu}+\frac{1}{2}T_ {\mu\alpha\nu}-g_{\alpha\mu}T_{\nu}\right)T^{\alpha\mu\nu}\,, \tag{4.44}\] and \(\mathcal{D}_{\mu}\) still denotes the covariant derivative with respect to the Levi-Civita connection, _not_ the covariant derivative \(\nabla_{\mu}\) with respect to the more general connection \(\Gamma\). This equation will prove to be the key to formulate GR in terms of torsion, rather than curvature. But before that, we study the form of the connection more closely.

#### Form of a Flat, Metric-Compatible Connection

The geometric postulates demand that the connection be flat and metric-compatible. These requirements do not completely fix the connection, unlike the postulates used in GR. Rather, we end up with a whole class of connections which satisfy the postulates of flatness and metric-compatibility. To see how this comes about, we start with the observation that the trivial connection, i.e., the connection \(\hat{\Gamma}^{\alpha}{}_{\mu\nu}=0\), is obviously flat. Now recall that under a change of coordinates a connection transforms inhomogeneously (cf. equation (2.64)). For our trivial connection, we find that a change of coordinates from \(\hat{x}^{\mu}\) to \(x^{\mu}(\hat{x})\) leads to \[\hat{\Gamma}^{\alpha}{}_{\mu\nu}\qquad\mapsto\qquad\Gamma^{\alpha}{}_{\mu\nu}= \frac{\partial x^{\alpha}}{\partial\hat{x}^{\beta}}\frac{\partial\hat{x}^{ \rho}}{\partial x^{\mu}}\frac{\partial\hat{x}^{\sigma}}{\partial x^{\nu}} \underbrace{\hat{\Gamma}^{\beta}{}_{\rho\sigma}}_{=0}+\frac{\partial x^{ \alpha}}{\partial\hat{x}^{\lambda}}\frac{\partial^{2}\hat{x}^{\lambda}}{ \partial x^{\mu}\partial x^{\nu}}=\frac{\partial x^{\alpha}}{\partial\hat{x}^ {\lambda}}\partial_{\mu}\frac{\partial\hat{x}^{\lambda}}{\partial x^{\nu}}\,. \tag{4.45}\] If we read \(\frac{\partial\hat{x}^{\mu}}{\partial x^{\nu}}\) as the components of a matrix \(\Lambda\), we can write the last equation as \[\boxed{\Gamma^{\alpha}{}_{\mu\nu}=\left(\Lambda^{-1}\right)^{\alpha}{}_{\lambda }\partial_{\mu}\Lambda^{\lambda}{}_{\nu}} \tag{4.46}\] This is a key equation for all teleparallel theories of gravity, due to the following two facts:

1. If the curvature tensor is zero in one coordinate system, it is zero in any other coordinate system. Since it is zero for the trivial connection \(\hat{\Gamma}^{\alpha}{}_{\mu\nu}=0\), it is also zero for the connection \(\Gamma^{\alpha}{}_{\mu\nu}\) in equation (4.46), since this connection has been obtained by a change of coordinates.
2. The change of coordinates is completely arbitrary and the vanishing of \(R^{\alpha}{}_{\mu\nu\rho}\) for the connection in (4.46) does _not_ depend on the details of the transformation. Thus, we may as well "forget" the origin of (4.46). That is to say, we can conclude that any connection of the form (4.46), where \(\Lambda^{\mu}{}_{\nu}\) is a matrix belonging to the general linear group \(GL(4,\mathbb{R})\), is flat.

Now we turn to the second postulate, which demands metric-compatibility. 
By plugging the flat connection (4.46) into \(Q_{\alpha}{}^{\mu\nu}=0\), we find \[g^{\lambda(\mu}\partial_{\alpha}\Lambda^{\nu)}{}_{\rho}(\Lambda^{-1})^{\rho}{ }_{\lambda}=\frac{1}{2}\partial_{\alpha}g^{\mu\nu}\,. \tag{4.47}\] This equation allows us to eliminate the metric and express it in terms of the \(\Lambda^{\mu}{}_{\nu}\). Observe that the metric has ten components while the matrix \(\Lambda^{\mu}{}_{\nu}\) has \(4\times 4=16\) components. This redundancy in the description is well-understood [6, 96, 97]: Six of the components of \(\Lambda^{\mu}{}_{\nu}\) reflect the freedom to perform local Lorentz transformations. This is a symmetry of the theory. We reach the following conclusions: Choose a matrix \(\Lambda\in GL(4,\mathbb{R})\) and write the connection as in (4.46). The connection so defined is guaranteed to be flat. Furthermore, impose (4.47) in order to obtain a metric-compatible connection. This turns the metric into an auxiliary field. For completeness, we remark that the torsion tensor is given by \[T^{\alpha}{}_{\mu\nu}=2\left(\Lambda^{-1}\right)^{\alpha}{}_{\lambda}\,\partial _{[\mu}\Lambda^{\lambda}{}_{\nu]}\,, \tag{4.48}\] for any flat connection parametrized by \(\Lambda\in GL(4,\mathbb{R})\).

#### Construction of the Action Functional

The construction of an action functional which is equivalent to, but not equal to, the one of GR is fairly straightforward. As alluded to above, the key observation is that the Ricci scalar of a metric-compatible connection is related to the Ricci scalar of the Levi-Civita connection via equation (4.43). If we also impose flatness, which amounts to \(R(\Gamma)=0\), we find \[\mathcal{R}(g)=-\mathbb{T}(\Lambda)-2\mathcal{D}_{\alpha}T^{\alpha}(\Lambda)\,. \tag{4.49}\] The notation \(\mathbb{T}(\Lambda)\) emphasizes that \(\mathbb{T}\) depends on \(\Lambda\) and that the metric has been integrated out from the metricity condition. This equation now allows us to simply replace the Ricci scalar in the EH action by the right hand side of equation (4.49). However, such an action would be strictly _equal_ to the original EH action, because the connection carries a Levi-Civita piece and a torsion piece. The scalars \(\mathbb{T}\) and \(\mathcal{D}_{\alpha}T^{\alpha}\) conspire in such a way that the torsion piece drops out and only the Levi-Civita part contributes to the action. However, by dropping \(\mathcal{D}_{\alpha}T^{\alpha}\), which amounts to a mere boundary term, we obtain an action which is genuinely _different_ from the EH action, but which leads to the same field equations. Thus, we define the action of TEGR as \[\boxed{\mathcal{S}_{\text{TEGR}}[\Lambda]\coloneqq-\frac{1}{2\kappa}\int_{ \mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,\mathbb{T}(\Lambda)+\mathcal{S}_{ \text{matter}}} \tag{4.50}\] It is silently understood that the metric can be expressed in terms of \(\Lambda\). However, observe that \(\Lambda\) has \(16\) components, while the metric has only ten. Consequently, we find _more_ field equations than in GR. As we will see later, ten of these equations are precisely the Einstein field equations. The remaining six are Bianchi identities related to the local Lorentz symmetry expressed through six of the components of \(\Lambda\). A further consequence of dropping \(\mathcal{D}_{\alpha}T^{\alpha}\) is that the action functional only depends on first order derivatives. Thus, the variational principle is well-defined without having to add boundary terms a la Gibbons-Hawking-York. 
This is one of the features we had anticipated in the previous subsection. The action for TEGR could also have been constructed using a different strategy, which does not rely on the geometric identity (4.49). Rather, the strategy which we shall briefly sketch relies on counting degrees of freedom: Given the postulates of vanishing curvature and vanishing non-metricity, the only remaining tensor which can play a fundamental role is the torsion tensor. Thus, our task is to construct a scalar from the torsion tensor, which is then used to define the action. Clearly, this scalar cannot be linear in the torsion tensor. It has to be at least quadratic. As it turns out (we will discuss this point in more detail in subsection 5.1), there are precisely three independent scalars one can build from contractions of the torsion tensor with itself and with the help of the metric. Thus, the most general scalar assumes the form \[\hat{\mathbb{T}}\coloneqq c_{1}\,T_{\alpha\mu\nu}T^{\alpha\mu\nu}+c_{2}\,T_{ \mu\alpha\nu}T^{\alpha\mu\nu}+c_{3}\,T_{\mu}T^{\mu}\,, \tag{4.51}\] where \(c_{1}\), \(c_{2}\), and \(c_{3}\) are arbitrary, real constants. Using this scalar, it is easy to derive field equations and perform a counting of degrees of freedom around a Minkowski background. In order to obtain precisely two degrees of freedom, as in GR, one finds that the parameters have to be chosen as \[c_{2}=2c_{1}\hskip 72.27pt\text{and}\hskip 72.27ptc_{3}=-4c_{1}\,. \tag{4.52}\] Up to an overall normalization, this reproduces the torsion scalar (4.44) and thus the action (4.50).

#### 4.4 The Palatini Formulation of the Action Principle

This action can also be written in a manifestly covariant form, which highlights which type of metric-affine geometry is being considered, i.e., which geometric postulates are being implemented. This action functional reads \[\mathcal{S}_{\text{TEGR}}[g,\Gamma;\tilde{\Pi},\tilde{\chi}]\coloneqq-\int_{ \mathcal{M}}\mathrm{d}^{4}x\,\left(\frac{1}{2\kappa}\sqrt{|g|}\,\mathbb{T}+ \tilde{\Pi}_{\alpha}{}^{\mu\nu\rho}\,R^{\alpha}{}_{\mu\nu\rho}+\tilde{\chi}^{ \alpha}{}_{\mu\nu}\,Q_{\alpha}{}^{\mu\nu}\right)+\mathcal{S}_{\text{matter}}\,, \tag{4.53}\] where \(\tilde{\Pi}_{\alpha}{}^{\mu\nu\rho}\) and \(\tilde{\chi}^{\alpha}{}_{\mu\nu}\) are tensor densities of weight \(w=+1\) which act as Lagrange multipliers. These multipliers enforce the postulates of vanishing curvature and vanishing non-metricity. It should also be noted that the Lagrange multipliers possess the symmetries \(\tilde{\Pi}_{\alpha}{}^{\mu\nu\rho}=\tilde{\Pi}_{\alpha}{}^{\mu[\nu\rho]}\) and \(\tilde{\chi}^{\alpha}{}_{\mu\nu}=\tilde{\chi}^{\alpha}{}_{(\mu\nu)}\), which they inherit from the curvature tensor and the non-metricity tensor, respectively. Just as in the Palatini formulation of GR, the connection \(\Gamma\) refers to a generic affine connection. A priori, it has nothing to do with the previous connection which is parametrized by the matrix \(\Lambda\). When it comes to working out the field equations, the Palatini formalism offers some advantages.

#### The Metric and Connection Field Equations

Based on the Palatini action given in (4.53), we can perform four independent variations. 
These are \[\frac{\delta\mathcal{S}_{\rm TEGR}}{\delta g^{\mu\nu}} \stackrel{{!}}{{=}}0\,,\qquad \frac{\delta\mathcal{S}_{\rm TEGR}}{\delta\tilde{\Pi}_{\alpha}{}^{ \mu\nu\rho}}\stackrel{{!}}{{=}}0\] \[\frac{\delta\mathcal{S}_{\rm TEGR}}{\delta\Gamma^{\alpha}{}_{ \mu\nu}}\stackrel{{!}}{{=}}0\,,\qquad \frac{\delta\mathcal{S}_{\rm TEGR}}{\delta\tilde{\chi}^{\alpha}{}_{ \mu\nu}}\stackrel{{!}}{{=}}0\,. \tag{4.54}\] The variations with respect to the Lagrange multipliers are the most straightforward ones. The variations with respect to the metric and the connection require more work. Also, the field equations that follow from the variational principle are highly coupled in the sense that the Lagrange multipliers appear with covariant derivatives acting on them. Untangling the field equations, cleanly implementing the conditions of vanishing curvature and vanishing non-metricity, and bringing the equations into a simple form requires some effort. For a detailed derivation we refer the reader to [14]. The end result is \[\boxed{\text{Metric field equations:}\qquad\qquad\left(\nabla_{\alpha}+T_{ \alpha}\right)S_{(\mu\nu)}{}^{\alpha}+t_{\mu\nu}-\frac{1}{2}\mathbb{T}\,g_{ \mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}}\] \[\boxed{\text{Connection field equations:}\qquad\qquad\qquad\left( \nabla_{\alpha}+T_{\alpha}\right)\left[\sqrt{|g|}S_{[\mu}{}^{\alpha}{}_{\nu]} \right]=0}\] where we have introduced the **torsion conjugate** \(\boldsymbol{S}_{\alpha}{}^{\mu\nu}\) and the symmetric tensor \(t_{\mu\nu}\), \[S_{\alpha}{}^{\mu\nu} \coloneqq\frac{\partial\mathbb{T}}{ \partial T^{\alpha}{}_{\mu\nu}}=\frac{1}{4}\,T_{\alpha}{}^{\mu\nu}+\frac{1}{2} \,T^{[\mu}{}_{\alpha}{}^{\nu]}-\delta_{\alpha}{}^{[\mu}T^{\nu]}\] \[t_{\mu\nu} \coloneqq\frac{\partial\mathbb{T}}{ \partial g^{\mu\nu}}=\frac{1}{2}S_{(\mu}{}^{\lambda\kappa}\,T_{\nu)\lambda \kappa}-T^{\lambda\kappa}{}_{(\mu}\,S_{|\lambda\kappa|\nu)}\,. \tag{4.55}\] It is also assumed that the hypermomentum density, which enters as \((\nabla_{\alpha}+T_{\alpha})\tilde{\mathcal{H}}_{[\mu}{}^{\alpha}{}_{\nu]}\) in the field equations, is either identically zero (i.e., the matter content and the matter couplings have been chosen such that there is no contribution to this tensor density) or that it is conserved in the sense that \[(\nabla_{\alpha}+T_{\alpha})\,\tilde{\mathcal{H}}_{\mu}{}^{\alpha}{}_{\nu}=0\,. \tag{4.56}\] This conservation law holds by virtue of the gauge invariance of the matter sector and is confirmed in all standard cases where a non-trivial hypermomentum arises from coupling matter fields to the connection. For more details, see [14, 80]. As one can show, the metric field equations are simply the Einstein field equations in disguise, while the connection field equations arise as Bianchi identities. As shown in [14], this is a consequence of the curvature tensor being the curvature of a \(GL(4,\mathbb{R})\) connection. This has important consequences: The connection field equations carry no dynamical information. Put differently, these equations do not determine the metric nor the connection. In fact, they are just trivially satisfied. The dynamics of the theory is solely determined by the metric field equations, which are the Einstein equations.

#### The Bianchi Identities

Bianchi identities arise quite naturally whenever an action is invariant under a certain local symmetry. Since the actions (4.50) and (4.53) are generally covariant, it is no surprise that Bianchi identities can be found. 
Starting from an action of the form \(\mathcal{S}[g,\Gamma]\), where \(\Gamma\) is a generic affine connection, and assuming that the action is diffeomorphism invariant, it was shown in [80] that the Bianchi identities in a metric-affine geometry take the form \[\boxed{2\mathcal{D}_{\mu}\tilde{\mathcal{M}}^{\mu}{}_{\lambda}-\hat{\nabla}_{ \nu}\hat{\nabla}_{\mu}\tilde{\mathcal{C}}_{\lambda}{}^{\mu\nu}+T^{\alpha}{}_{ \lambda\nu}\hat{\nabla}_{\mu}\tilde{\mathcal{C}}_{\alpha}{}^{\mu\nu}+\left(R^ {\alpha}{}_{\nu\mu\lambda}-T_{\mu}T^{\alpha}{}_{\nu\lambda}\right)\tilde{ \mathcal{C}}_{\alpha}{}^{\mu\nu}\equiv 0} \tag{4.57}\] where \[\tilde{\mathcal{M}}^{\mu\nu} \coloneqq\frac{\delta\mathcal{S}[g,\Gamma]}{\delta g_{\mu\nu}} \text{and} \tilde{\mathcal{C}}_{\alpha}{}^{\mu\nu} \coloneqq\frac{\delta\mathcal{S}[g,\Gamma]}{\delta\Gamma^{\alpha}{}_{\mu \nu}}\,. \tag{4.58}\] are placeholders for the metric and connection field equations (these are tensor densities of weight one) and where \[\hat{\nabla}_{\mu} \coloneqq\nabla_{\mu}+T_{\mu}\,. \tag{4.59}\] The Bianchi identities of GR follow from this general identity as a special case. Moreover, if we fix the connection to be flat and metric-compatible, we find \[2\mathcal{D}_{\mu}\tilde{\mathcal{M}}^{\mu}{}_{\lambda}-\hat{\nabla}_{\nu}\hat {\nabla}_{\mu}\tilde{\mathcal{C}}_{\lambda}{}^{\mu\nu}+T^{\alpha}{}_{\lambda \nu}\hat{\nabla}_{\mu}\tilde{\mathcal{C}}_{\alpha}{}^{\mu\nu}-T_{\mu}T^{ \alpha}{}_{\nu\lambda}\tilde{\mathcal{C}}_{\alpha}{}^{\mu\nu}\equiv 0\,. \tag{4.60}\] Since in TEGR the metric field equations are the same as Einstein's, i.e., since \(\tilde{\mathcal{M}}_{\mu\nu}=\sqrt{|g|}\,G_{\mu\nu}\), this simplifies further to \[\boxed{\hat{\nabla}_{\nu}\hat{\nabla}_{\mu}\tilde{\mathcal{C}}_{\lambda}{}^{ \mu\nu}-T^{\alpha}{}_{\lambda\nu}\hat{\nabla}_{\mu}\tilde{\mathcal{C}}_{ \alpha}{}^{\mu\nu}+T_{\mu}T^{\alpha}{}_{\nu\lambda}\tilde{\mathcal{C}}_{ \alpha}{}^{\mu\nu}\equiv 0} \tag{4.61}\] due to \(\mathcal{D}_{\mu}(\sqrt{|g|}\,G^{\mu\nu})=0\). Thus, only the connection field equations remain and they have to satisfy the above Bianchi identity.

#### 4.6.2 Counting Degrees of Freedom

If we start with a metric \(g_{\mu\nu}\) and a general affine connection \(\Gamma^{\alpha}{}_{\mu\nu}\) we have a total of \(10+64\) fields. The flatness condition \(R^{\alpha}{}_{\mu\nu\rho}=0\) drastically reduces this number. Since any flat connection can be written as \(\Gamma^{\alpha}{}_{\mu\nu}=(\Lambda^{-1})^{\alpha}{}_{\lambda}\partial_{\mu} \Lambda^{\lambda}{}_{\nu}\), where \(\Lambda^{\mu}{}_{\nu}\) is a \(GL(4,\mathbb{R})\) matrix, we find that the connection carries at most \(4\times 4=16\) degrees of freedom, rather than \(64\). Finally, we also have to take into account the postulate of vanishing non-metricity, \[2(\Lambda^{-1})^{\lambda}{}_{\kappa}\partial_{\alpha}\Lambda^{\kappa}{}_{(\mu }g_{\nu)\lambda}=\partial_{\alpha}g_{\mu\nu}\,, \tag{4.62}\] which relates the metric and the matrix \(\Lambda\). This equation is solved by \[g_{\mu\nu}=\Lambda^{\alpha}{}_{\mu}\Lambda^{\beta}{}_{\nu}c_{\alpha\beta}\,, \tag{4.63}\] where \(c_{\alpha\beta}\) is a symmetric, constant tensor. It is only natural to choose \(c_{\alpha\beta}=\eta_{\alpha\beta}\), where the latter denotes the Minkowski metric, since we are interested in metrics with Lorentzian signature. 
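For completeness, here is a short check, using nothing beyond the symmetry of \(c_{\alpha\beta}\) and the relation \((\Lambda^{-1})^{\lambda}{}_{\kappa}\Lambda^{\sigma}{}_{\lambda}=\delta^{\sigma}{}_{\kappa}\), that the parametrization (4.63) indeed solves the metricity condition (4.62): differentiating \(g_{\mu\nu}=\Lambda^{\rho}{}_{\mu}\Lambda^{\sigma}{}_{\nu}c_{\rho\sigma}\) gives

\[\partial_{\alpha}g_{\mu\nu}=\partial_{\alpha}\Lambda^{\rho}{}_{\mu}\,\Lambda^{\sigma}{}_{\nu}\,c_{\rho\sigma}+\Lambda^{\rho}{}_{\mu}\,\partial_{\alpha}\Lambda^{\sigma}{}_{\nu}\,c_{\rho\sigma}=2\left(\Lambda^{-1}\right)^{\lambda}{}_{\kappa}\,\partial_{\alpha}\Lambda^{\kappa}{}_{(\mu}\,g_{\nu)\lambda}\,,\]

where the last step uses \(g_{\nu\lambda}=\Lambda^{\rho}{}_{\nu}\Lambda^{\sigma}{}_{\lambda}c_{\rho\sigma}\) to rewrite each of the two terms. This confirms (4.63) as a solution of (4.62).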
Notice that this also means that instead of potentially \(10+16\) degrees of freedom, we now have at most \(16\), since the connection as well as the metric can be parametrized with the \(16\) components of \(\Lambda^{\mu}{}_{\nu}\). At this point it should be noted that flat connections possess a gauge symmetry. Namely, transformations of the form \(\Lambda\mapsto\Lambda\mathcal{U}\), where \(\mathcal{U}\in GL(4,\mathbb{R})\), leave the curvature tensor and thus the flatness condition invariant. If it also has to respect the postulate of vanishing non-metricity, then it has to leave the metric invariant, which means \[\mathcal{U}^{\alpha}{}_{\kappa}\mathcal{U}^{\beta}{}_{\lambda} \Lambda^{\kappa}{}_{\mu}\Lambda^{\lambda}{}_{\nu}\eta_{\alpha\beta}\stackrel{{!}}{{=}}\Lambda^{\alpha}{}_{\mu}\Lambda^{\beta}{}_{\nu}\eta_{\alpha\beta}\,. \tag{4.64}\] This implies that \(\mathcal{U}\) belongs to the proper orthochronous Lorentz group, since this guarantees that the Minkowski metric is left invariant and it is the part of the Lorentz group which is connected to the identity. We therefore learn that six components of \(\Lambda^{\mu}{}_{\nu}\) simply represent Lorentz transformations and that these transformations are pure gauge. That is, they do not change the form of the metric, nor do they affect the flatness postulate. We therefore have a maximal number of \(16-6=10\) degrees of freedom. However, TEGR is a generally covariant theory and diffeomorphisms remove \(2\times 4\) degrees of freedom. Hence, we are finally left with only two degrees of freedom, as we expected.

### The Symmetric Teleparallel Equivalent of General Relativity (STEGR)

#### 4.3.1 The Geometric Postulates

We now turn to the third geometric formulation of GR, which ascribes gravitational phenomena to non-metricity [4]: The so-called Symmetric Teleparallel Equivalent of GR (STEGR) [98]. The starting point is again a metric-affine geometry \((\mathcal{M},g,\Gamma)\), but this time restricted by the geometric postulates \[R^{\alpha}{}_{\mu\nu\rho}\stackrel{{!}}{{=}}0 \text{and} T^{\alpha}{}_{\mu\nu}\stackrel{{!}}{{=}}0\,. \tag{4.65}\] The postulate of vanishing curvature may, just as in TEGR, raise the question of how the theory we seek to construct can possibly be equivalent to standard GR, where curvature plays an essential role. However, the resolution to this apparent tension is the same as in TEGR: What is postulated to vanish is the curvature of the affine connection \(\Gamma\), not the curvature of the Levi-Civita connection on which GR is based.

#### 4.3.2 Form of a Flat, Torsionless Connection and the Coincident Gauge

Before constructing an action functional for STEGR, let us work out what a flat and torsionless connection looks like. From the previous subsection, we recall that a flat connection can always be written as \[\Gamma^{\alpha}{}_{\mu\nu}=\left(\Lambda^{-1}\right)^{\alpha}{}_{ \lambda}\partial_{\mu}\Lambda^{\lambda}{}_{\nu}\,, \tag{4.66}\] where \(\Lambda^{\mu}{}_{\nu}\) are the components of a matrix belonging to the general linear group \(GL(4,\mathbb{R})\). The postulate of vanishing torsion can then be rephrased as \[T^{\alpha}{}_{\mu\nu}\stackrel{{!}}{{=}}0 \implies \partial_{[\mu}\Lambda^{\alpha}{}_{\nu]}\stackrel{{!}}{{= }}0\,. \tag{4.67}\] This last condition implies that the matrix \(\Lambda^{\mu}{}_{\nu}\) can be written as \(\Lambda^{\mu}{}_{\nu}=\partial_{\nu}\xi^{\mu}\), where \(\xi^{\mu}\) denotes a collection of four arbitrary functions of the coordinates \(x^{\mu}\), _not a vector field_! 
We conclude that a flat, torsionless connection can be written as \[\Gamma^{\alpha}{}_{\mu\nu}=\frac{\partial x^{\alpha}}{\partial \xi^{\lambda}}\partial_{\mu}\partial_{\nu}\xi^{\lambda}\,, \tag{4.68}\] where \(\partial x^{\alpha}/\partial\xi^{\lambda}\) should be understood as the inverse of the Jacobian matrix \(\partial\xi^{\lambda}/\partial x^{\alpha}\). This result means that in any given coordinate system \(\{x^{0},x^{1},x^{2},x^{3}\}\) we can choose four independent functions \(\{\xi^{0},\xi^{1},\xi^{2},\xi^{3}\}\), such that the Jacobian matrix \(\partial\xi^{\mu}/\partial x^{\nu}\) is invertible (i.e., has a non-zero determinant) and this allows us then to construct a flat and torsionless connection via equation (4.68). Moreover, equation (4.68) reveals that flat and torsionless connections have a remarkable property: They can be set to zero globally by an appropriate choice of coordinates. In fact, given any flat and torsionless connection, it necessarily has the form (4.68) with some functions \(\xi^{\mu}\). Therefore, if we choose our coordinates such that \(x^{\mu}=\xi^{\mu}\), the connection is exactly equal to zero because \(\partial_{\mu}\partial_{\nu}\xi^{\lambda}=0\). This is known as the **coincident gauge7**. Footnote 7: More generally, one could also choose the functions \(\xi^{\mu}\) to be of the form \(\xi^{\mu}=M^{\mu}{}_{\nu}\,x^{\nu}+\xi^{\mu}_{0}\), where \(M^{\mu}{}_{\nu}\) is a non-degenerate matrix with constant entries and \(\xi^{\mu}_{0}\) are constants [14]. This is also known as **coincident gauge**. We emphasize that the coincident gauge can always be chosen and that it has nothing to do with an action principle. It is available as long as the postulates of vanishing curvature and vanishing torsion are in place. However, we also stress that there are caveats one has to be aware of when it comes to working in a fixed coordinate system and wanting to use the coincident gauge. We will discuss these caveats in subsections 6.1 and 6.2.

#### Construction of the Action Functional

To construct an action functional for STEGR, we follow the same strategy as in the case of TEGR. The key observation is that the curvature tensor of the affine connection is related to the curvature tensor of the Levi-Civita connection by the identity (3.8) we discussed in subsection 3.1. We rewrite this identity here for convenience: \[R(\Gamma)=\mathcal{R}(g)+\mathbb{Q}+\mathcal{D}_{\alpha}\left(Q^{\alpha}- \bar{Q}^{\alpha}\right)\,, \tag{4.69}\] where \(\mathcal{D}\) is the covariant derivative with respect to the Levi-Civita connection, the two traces of the non-metricity tensor are given by \[Q_{\alpha} \coloneqq Q_{\alpha\lambda}{}^{\lambda} \text{and} \bar{Q}_{\alpha} \coloneqq Q^{\lambda}{}_{\lambda\alpha}\,, \tag{4.70}\] and the non-metricity scalar \(\mathbb{Q}\) is defined as \[\mathbb{Q} \coloneqq\frac{1}{4}Q_{\alpha\mu\nu}Q^{\alpha\mu\nu}-\frac{1}{2}Q _{\alpha\mu\nu}Q^{\mu\alpha\nu}-\frac{1}{4}Q_{\alpha}Q^{\alpha}+\frac{1}{2}Q_{ \alpha}\bar{Q}^{\alpha}\,. \tag{4.71}\] The latter can also be expressed in terms of the disformation tensor \(L^{\alpha}{}_{\mu\nu}\coloneqq\frac{1}{2}Q^{\alpha}{}_{\mu\nu}-Q_{(\mu}{}^{ \alpha}{}_{\nu)}\) as \[\mathbb{Q} =g^{\mu\nu}\left(L^{\alpha}{}_{\alpha\beta}L^{\beta}{}_{\mu\nu}-L ^{\alpha}{}_{\beta\mu}L^{\beta}{}_{\nu\alpha}\right)\,. \tag{4.72}\] Recall that the identity (4.69) is valid only when torsion vanishes. Thus, one of the geometric postulates is already implemented. 
The second postulate of STEGR, which demands that the curvature of the affine connection vanishes, then implies that \[\mathcal{R}(g) =-\mathbb{Q}-\mathcal{D}_{\alpha}\left(Q^{\alpha}-\bar{Q}^{\alpha }\right)\,. \tag{4.73}\] In other words, the Ricci scalar of the Levi-Civita connection can be expressed in terms of the non-metricity scalar and a divergence term. This allows us to replace \(\mathcal{R}(g)\) in the Einstein-Hilbert action by the right hand side of the identity (4.73). Thus, in the Symmetric Teleparallel Equivalent of GR, gravity is described by the action functional \[\mathcal{S}_{\rm STEGR}[g,\xi] =-\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|} \,\mathbb{Q}(g,\xi)+\mathcal{S}_{\rm matter}\,. \tag{4.74}\] We have dropped the divergence \(\mathcal{D}_{\alpha}\left(Q^{\alpha}-\bar{Q}^{\alpha}\right)\) since, by the generalized Gauss theorem which we discussed in subsection 3.2, this term amounts to a mere boundary term which can thus have no influence on the field equations. Moreover, as we have discussed in the GR and TEGR sections, changing the action is a necessary step in order to arrive at a genuinely new formulation. Notice that the action is a functional of the metric _and_ the four functions which parametrize the flat, torsionless connection (4.68). The candidate for the STEGR action could also have been constructed without knowing the geometric identity (4.73). Just as in TEGR, one can start with the most general Lagrangian which is quadratic in the non-metricity tensor. Due to the symmetry of the non-metricity tensor, one finds that there are precisely five independent scalars one can construct from contractions of the non-metricity tensor (this will be discussed in more detail in subsection 5.2). The most general Lagrangian then reads (4.75), where the five coefficients are arbitrary, real constants. By expanding the theory around a Minkowski background and demanding that it propagates two degrees of freedom, which is tantamount to demanding that the linearized theory is invariant under linearized diffeomorphisms, one finds that the parameters have to satisfy (4.76). These relations are satisfied by the parameter values which reproduce the STEGR action, and they leave two of the coefficients free. The linearized theory cannot fix these parameters, but other considerations can reproduce the STEGR action up to an overall normalization. For instance, demanding that the full, non-linear theory satisfies the contracted Bianchi identity for its metric field equations requires one of the two remaining coefficients to vanish. Thus, we find (4.77), which indeed reproduces the action (4.74) up to an overall normalization constant.

#### 4.4.2 The Palatini Formulation of the Action Principle

Just as in TEGR, we can employ the Palatini formalism in order to express the action principle in a manifestly covariant way which also highlights which type of metric-affine geometry is being considered. This action is defined as (4.78), where the two Lagrange multipliers are tensor densities of weight \(w=+1\). These Lagrange multipliers inherit their index symmetries from the curvature and torsion tensor, respectively. Notice that the action is a functional of the metric and a generic affine connection. By varying the above action with respect to the Lagrange multiplier densities, one obtains two constraints on the connection. Namely, the connection is restricted to be flat and torsionless. 
Since these constraints do not completely fix the connection, we still have the freedom to choose four arbitrary functions in order to parametrize the connection in agreement with equation (4.68). As a final comment we add that the curvature tensor measures the change in direction of a vector which is being parallel transported around a closed loop (cf. subsection 3.2). Hence, when curvature vanishes, there is no change in direction and the vector remains, in this sense, parallel to itself. This justifies the use of the term **teleparallel**. Moreover, the vanishing of torsion implies that the connection is symmetric in its lower two indices. Hence the use of the word **symmetric** in _Symmetric Teleparallel Equivalent of GR_.

#### The Metric and Connection Field Equations

To obtain the field equations of STEGR we can either take the action (4.74) as the starting point or the action (4.78). In either case we make the observation that the non-metricity tensor is linear in first order derivatives of the metric and that the non-metricity scalar is consequently quadratic in first order derivatives. Due to the absence of second order derivatives in either action principle, there is no need to add boundary terms a la Gibbons-Hawking-York. Both variational principles are well-defined as they stand. If we choose to work with the Palatini formalism, we have to vary the action (4.78) with respect to the metric, the general affine connection, as well as the two Lagrange multiplier densities. If instead we work with the action (4.74), we only need to perform variations with respect to the metric and the four functions \(\xi^{\alpha}\). The first approach turns out to be simpler, despite the additional variations one has to perform. The computations have been carried out in great detail in [14]. For the variation with respect to the inverse metric, one obtains the metric field equations (4.79). As always, \(\mathcal{T}_{\mu\nu}\) denotes the energy-momentum tensor of matter fields and we have introduced the **non-metricity conjugate** \(P^{\alpha}{}_{\mu\nu}\) and the symmetric tensor \(q_{\mu\nu}\), respectively defined by (4.80). It should be noted that the non-metricity scalar can be expressed with the help of the non-metricity tensor and its conjugate as (4.81). The variations with respect to the Lagrange multipliers and the general affine connection boil down to a connection field equation of the form (4.82). Here, just as in TEGR, we have assumed that the hypermomentum density (4.23) vanishes. Alternatively, we could have demanded that it is conserved, in the sense of equation (4.56). In both field equations the connection is flat and torsionless, as required by the geometric postulates. It is thus parametrized by the four functions \(\xi^{\alpha}\). Moreover, observe that we have ten metric field equations and four connection field equations. These numbers match the number of fields in the theory, namely ten metric components and four \(\xi\)'s. However, just as in the case of GR and TEGR, not all equations are independent due to the diffeomorphism invariance of the theory.

#### The Bianchi Identities

The action (4.78) is manifestly invariant under diffeomorphisms. This follows from the fact that \(\sqrt{|g|}\,\mathbb{Q}\) is a scalar density of weight \(w=+1\). Also, since the Lagrange multipliers have density weight \(w=+1\) and they are fully contracted, the curvature and torsion constraints are also scalar densities with the correct weight. Correct means that the integrand transforms in such a way under diffeomorphisms that the integral remains invariant. 
Following the same considerations as in GR, but now also taking the transformation behaviour of the connection into account, we find the following identities for STEGR: \[\mathcal{D}_{\mu}\mathcal{M}^{\mu}{}_{\nu}+\mathcal{C}_{\nu}\equiv 0\,, \tag{4.83}\] where we have defined \[\mathcal{M}_{\mu\nu} \coloneqq\frac{2}{\sqrt{|g|}}\nabla_{\alpha}\left[\sqrt{|g|}\,P^{ \alpha}{}_{\mu\nu}\right]+q_{\mu\nu}-\frac{1}{2}\mathbb{Q}\,g_{\mu\nu}\] \[\mathcal{C}_{\alpha} \coloneqq\nabla_{\mu}\nabla_{\nu}\left(\sqrt{|g|}\,P^{\mu\nu}{}_ {\alpha}\right)\,. \tag{4.84}\] The tensor \(\mathcal{M}_{\mu\nu}\) is simply the expression that appears on the left side of the metric field equations, while \(\mathcal{C}_{\alpha}\) represents the left side of the connection field equations. As a consequence of these Bianchi identities, it follows that if the metric field equations are satisfied, i.e., if \(\mathcal{M}_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}\), then \[\kappa\,\mathcal{D}_{\mu}\mathcal{T}^{\mu}{}_{\nu}+\mathcal{C}_{\nu}\equiv 0\,. \tag{4.85}\] By invoking the covariant conservation of energy-momentum of matter fields, i.e., \(\mathcal{D}_{\mu}\mathcal{T}^{\mu}{}_{\nu}=0\), we find \[\text{If }\mathcal{M}_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}\text{ is satisfied, then }\mathcal{C}_{\nu}\equiv 0\,. \tag{4.86}\] In other words, if the metric field equations are satisfied, then the connection field equations become mere identities. That is, the connection field equations are trivially satisfied and carry no dynamical information. Since one can show that \(\mathcal{M}_{\mu\nu}=G_{\mu\nu}\), where the right hand side is the Einstein tensor without cosmological constant, one can reach an even stronger conclusion: The Einstein tensor satisfies the Bianchi identity \(\mathcal{D}_{\mu}G^{\mu}{}_{\nu}=0\) also off-shell, i.e., when the Einstein equations are not satisfied. By combining this fact with the Bianchi identity of STEGR, one reaches the conclusion that \[\boxed{\mathcal{C}_{\nu}\equiv 0} \tag{4.87}\] is _always_ true! Since \(\mathcal{M}_{\mu\nu}=G_{\mu\nu}\) implies that \(\mathcal{M}_{\mu\nu}\) is _independent_ of \(\xi^{\alpha}\) (it only knows about the Levi-Civita part of the connection and nothing else), it follows that \(\mathcal{M}_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}\) are purely equations for the metric. Furthermore, since \(\mathcal{C}_{\nu}=0\) is always identically satisfied, there are no equations which determine the four functions \(\xi^{\alpha}\)! They remain completely arbitrary. What these considerations show is the following:

* STEGR is equivalent to GR in the sense that both theories possess the same field equations and consequently the same solution space. They are nevertheless rooted in different mathematical frameworks, they use different fields in their formulation, and this opens the door to conceptual and philosophical differences between the two theories.
* There is a sense in which STEGR is invariant under two copies of the diffeomorphism group. First, its action is manifestly diffeomorphism invariant and its field equations are manifestly generally covariant. Thus, performing a diffeomorphism which changes the metric and the connection simultaneously does not affect the theory. Secondly, we have the freedom to choose the four functions \(\xi^{\alpha}\) at will. The metric field equations do not depend upon this choice and there are no dynamical equations which determine the \(\xi\)'s. Thus, this constitutes a second freedom, as the short illustration below makes explicit.
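To illustrate the second freedom in a concrete way (assuming the sign convention \(Q_{\alpha\mu\nu}\coloneqq\nabla_{\alpha}g_{\mu\nu}\); the argument does not depend on this choice of sign), compare the coincident gauge \(\xi^{\alpha}=x^{\alpha}\), for which the connection (4.68) vanishes, with an arbitrary choice \(\hat{\xi}^{\alpha}\). The non-metricity tensor clearly depends on which functions one picks,

\[Q_{\alpha\mu\nu}=\partial_{\alpha}g_{\mu\nu}\qquad\text{versus}\qquad Q_{\alpha\mu\nu}=\partial_{\alpha}g_{\mu\nu}-2\,\Gamma^{\lambda}{}_{\alpha(\mu}\,g_{\nu)\lambda}\quad\text{with}\quad\Gamma^{\alpha}{}_{\mu\nu}=\frac{\partial x^{\alpha}}{\partial\hat{\xi}^{\lambda}}\,\partial_{\mu}\partial_{\nu}\hat{\xi}^{\lambda}\,,\]

yet \(\mathcal{M}_{\mu\nu}=G_{\mu\nu}\) is built from the metric alone, so the metric field equations are completely insensitive to this choice.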
As we will see later, the independence of \(\mathcal{M}_{\mu\nu}\) from the \(\xi\)'s hinges on a carefully balanced cancellation. If one considers the most general non-metricity scalar \(\hat{\mathbb{Q}}\), as we do in subsection 5.2, this independence is lost unless one chooses certain parameters in the theory in a careful way. Also, we will see that in \(f(\mathbb{Q})\) gravity the \(\xi\)'s are no longer arbitrary. They come with their own dynamical field equations.

#### 4.4.2 Counting Degrees of Freedom

After the discussion of the Bianchi identities it comes as no surprise that STEGR propagates two physical degrees of freedom. Concretely, the counting goes as follows: The theory is formulated in terms of a metric \(g_{\mu\nu}\) with ten components and a general affine connection \(\Gamma^{\alpha}{}_{\mu\nu}\) with \(4\times 4\times 4=64\) components. By either postulating the vanishing of curvature and torsion or by solving the constraints that arise from the Palatini formulation of the theory, one finds that the connection carries four potential degrees of freedom (the \(\xi\)'s). This leaves us with \(10+4=14\) variables and an equal number of field equations. However, the metric field equations only contain the metric and no \(\xi\)'s. Also, there are no dynamical equations for the \(\xi\)'s. They remain completely arbitrary and do not constitute anything physical. In fact, as we will see in the next subsection, they play the role of Stuckelberg fields, which ensure that the theory is generally covariant. This leaves us with at most ten dynamical variables, namely the metric components. However, since the metric field equations are simply the Einstein field equations, the same counting as in GR assures us that only two of these components represent physical degrees of freedom. Alternatively, one could also argue as follows: STEGR is a diffeomorphism invariant theory and it is also invariant under the replacement \(\xi^{\alpha}\mapsto\hat{\xi}^{\alpha}\), where \(\hat{\xi}^{\alpha}\) is a new set of four functions which parametrize the flat, torsionless connection. Thus, the \(\xi\)'s play no dynamical role and since diffeomorphisms remove \(2\times 4\) degrees of freedom one finds again \(14-4-2\times 4=2\) physical degrees of freedom. Thus, in either case, we conclude that STEGR propagates the same two degrees of freedom as GR, as had to be expected. This will no longer be true when we consider generalizations of STEGR in subsections 5.2 and 5.3 and in particular in section 6.

### Coincident General Relativity (CGR)

Coincident General Relativity, or CGR for short, refers to a special case of STEGR. In fact, CGR is simply STEGR in coincident gauge. Even though this might seem trivial, CGR has played an important role in applications such as \(f(\mathbb{Q})\) cosmology [23, 24, 30, 32, 99, 100, 101, 102]. Furthermore, by comparing and contrasting CGR with full STEGR and what is nowadays called the Einstein action, one is led to a deeper understanding of the role played by the flat and torsionless connection (or, equivalently, by the \(\xi\)'s). Let us begin by introducing the action of CGR. As mentioned above, CGR makes use of the coincident gauge, which means the flat and torsionless connection \(\Gamma^{\alpha}{}_{\mu\nu}\) vanishes globally. 
Upon using the decomposition (4.47), we find that this implies \[\Gamma^{\alpha}{}_{\mu\nu}=\left\{\begin{aligned} \alpha \\ \mu\nu\end{aligned}\right\}+L^{\alpha}{}_{\mu\nu}\stackrel{{ \mathclap{\text{CGR}}}}{{=}}0\qquad\qquad\qquad\implies\qquad\qquad L^{ \alpha}{}_{\mu\nu}\stackrel{{\mathclap{\text{$\ast$}}}}{{=}}- \left\{\begin{aligned} \alpha\\ \mu\nu\end{aligned}\right\}\,. \tag{4.88}\] The star on top of the equal sign shall remind us that this relation only holds in the coincident gauge. This last equality is particularly useful if we recall that the STEGR action can be written in terms of the disformation tensor \(L^{\alpha}{}_{\mu\nu}\) alone (cf. equation (4.72)). Hence, the CGR action takes the form \[\mathcal{S}_{\text{CGR}}[g]\equiv\mathcal{S}_{\text{STEGR}}[g, \Gamma=0] =\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,g^{ \mu\nu}\left(L^{\alpha}{}_{\alpha\beta}L^{\beta}{}_{\mu\nu}-L^{\alpha}{}_{ \beta\mu}L^{\beta}{}_{\nu\alpha}\right)\] \[\stackrel{{\mathclap{\text{$\ast$}}}}{{=}}\frac{1}{2 \kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,g^{\mu\nu}\left(\left\{ \begin{aligned} \alpha\\ \alpha\beta\end{aligned}\right\}\left\{\begin{aligned} \beta\\ \mu\nu\end{aligned}\right\}-\left\{\begin{aligned} \alpha\\ \beta\mu\end{aligned}\right\}\left\{\begin{aligned} \beta\\ \nu\alpha\end{aligned}\right\}\right)\,. \tag{4.89}\] The first integral is valid in complete generality, while the second one holds only in the coincident gauge. Observe further that the action is a functional of the metric alone. This is one of the features which make CGR attractive for studying applications: The connection has been dealt with and even globally trivialized, such that one only has to work with the metric. However, when using CGR as the starting point for defining non-linear modifications such as in \(f(\mathbb{Q})\) gravity, one is prone to encounter subtleties related to having fixed the coincident gauge. These subtleties have to do with assuming some background symmetries (spherical symmetry, homogeneity and isotropy,...) and we will discuss them in more detail in subsections 6.1 and 6.2. Here, we wish to highlight another feature of CGR. Namely, that its action is exactly equal to the so-called Einstein action, which in turn is simply the Einstein-Hilbert action _without_ the second order derivatives. This can easily be seen by recalling that the Ricci tensor is given by \[\mathcal{R}_{\mu\nu}=\partial_{\alpha}\left\{\begin{matrix}\alpha \\ \nu\mu\end{matrix}\right\}-\partial_{\nu}\left\{\begin{matrix}\alpha\\ \alpha\mu\end{matrix}\right\}+\left\{\begin{matrix}\alpha\\ \alpha\beta\end{matrix}\right\}\left\{\begin{matrix}\beta\\ \mu\nu\end{matrix}\right\}-\left\{\begin{matrix}\alpha\\ \beta\mu\end{matrix}\right\}\left\{\begin{matrix}\beta\\ \nu\alpha\end{matrix}\right\}\,. \tag{4.90}\] By comparing (4.90) to the action (4.89) it follows that the Einstein action is given by the Ricci scalar \(\mathcal{R}=g^{\mu\nu}\mathcal{R}_{\mu\nu}\) minus the term \[g^{\mu\nu}\left(\partial_{\alpha}\left\{\begin{matrix}\alpha\\ \nu\mu\end{matrix}\right\}-\partial_{\nu}\left\{\begin{matrix}\alpha\\ \alpha\mu\end{matrix}\right\}\right) \tag{4.91}\] which contains the second order derivatives of the metric. Hence, neither the CGR nor the Einstein action requires the GHY boundary term. Both of them give rise to a well-defined variational principle. However, despite looking the same, there is a crucial difference between the Einstein action and the CGR action. 
The former is _not_ diffeomorphism invariant. This follows from the fact that the connection does not transform like a tensor under coordinate transformations (see equation (2.64)) and one can check that the Einstein action picks up boundary terms under such transformations. The CGR action, on the other hand, is the gauge-fixed version of a perfectly covariant functional, namely the STEGR action. Hence, we can interpret the connection as a Stuckelberg field which restores the general covariance of the Einstein action.

### The General Teleparallel Equivalent of General Relativity (GTEGR)

So far we have seen that gravity can be described from three different perspectives: Following Einstein's original path, we can encode gravity in the curvature of spacetime while setting torsion and non-metricity to zero. Or we can describe gravity using torsion in a flat and metric-compatible spacetime. The third option is to work in a flat and torsionless spacetime, but with non-zero non-metricity. We can think of these three descriptions as the three corners of a triangle, as illustrated in Figure 12. We can also give meaning to the edges of the triangle. Of particular interest to us is the lower edge which connects TEGR and STEGR. In fact, there exists yet another teleparallel theory of gravity, called the General Teleparallel Equivalent of GR (GTEGR) [7, 85, 103], which subsumes TEGR and STEGR in the sense that these two theories are the gauge-fixed offsprings of a more general parent theory. To construct the theory, we start again with a general metric-affine geometry \((\mathcal{M},g,\Gamma)\) and a single geometric postulate, \[R^{\alpha}{}_{\mu\nu\rho}\overset{!}{=}0\,. \tag{4.92}\] The geometric identity (3.78) we encountered in subsection 3.2 then allows us to write the Ricci scalar of the Levi-Civita connection as \[-\mathcal{R}(g) =\mathbb{T}+\mathbb{Q}+T^{\rho\mu\nu}Q_{\mu\nu\rho}-T^{\mu}Q_{\mu }+T^{\mu}\bar{Q}_{\mu}+\mathcal{D}_{\alpha}\left(Q^{\alpha}-\bar{Q}^{\alpha} +2T^{\alpha}\right)\] \[=\mathbb{G}+\mathcal{D}_{\alpha}\left(Q^{\alpha}-\bar{Q}^{\alpha} +2T^{\alpha}\right)\,, \tag{4.93}\] where in the last equation we have introduced the scalar \[\mathbb{G}\coloneqq\mathbb{T}+\mathbb{Q}+T^{\rho\mu\nu}Q_{\mu\nu\rho}-T^{\mu} Q_{\mu}+T^{\mu}\bar{Q}_{\mu}\,. \tag{4.94}\] We then define the following action \[\mathcal{S}_{\rm GTEGR}[g,\Lambda]\coloneqq-\frac{1}{2\kappa}\int_{\mathcal{M}} \mathrm{d}^{4}x\,\sqrt{|g|}\,\mathbb{G}(g,\Lambda)+\mathcal{S}_{\rm matter}\,, \tag{4.95}\] where \(\Lambda\in GL(4,\mathbb{R})\) is the matrix used to parametrize the flat connection. Since the EH action and the action of GTEGR only differ by a total derivative, it comes as no surprise that both actions describe the same theory. However, notice that while GR only makes use of a metric, GTEGR also involves the matrix \(\Lambda\in GL(4,\mathbb{R})\) in its definition. This mismatch in the number of fields is no reason for concern, since GTEGR enjoys an additional symmetry. In fact, \(\delta_{\Lambda}\mathcal{S}_{\rm GTEGR}=0\) is satisfied off-shell, which means that the connection is not dynamical [7]. Put differently, this means that only the metric carries physical degrees of freedom while the connection is pure gauge. Furthermore, since the metric field equations obtained from (4.95) have to be the Einstein field equations, the metric propagates exactly the same two degrees of freedom as GR. 
Observe further that in the absence of non-metricity the scalar \(\mathbb{G}\) reduces to \(\mathbb{T}\) and TEGR is recovered. Similarly, when torsion is absent, \(\mathbb{G}\) reduces to the non-metricity scalar \(\mathbb{Q}\) and STEGR emerges. Demanding either the vanishing of torsion or the vanishing of non-metricity amounts to imposing additional conditions on the connection, as we have seen in previous subsections. It is in this sense that we can think of TEGR and STEGR as partially gauge-fixed versions of the more general theory GTEGR: The pure gauge connection of GTEGR can be partially fixed by either imposing \(Q_{\alpha\mu\nu}=0\) or \(T^{\alpha}{}_{\mu\nu}=0\), which simply amounts to working with TEGR or STEGR, respectively.

### Non-flat combinations in the edges and in the dot

So far we have discussed the three corners of Figure 12 as well as one edge. This corresponds to four different formulations of General Relativity: Standard GR based on curvature and the teleparallel theories TEGR, STEGR, and GTEGR which are all based on the postulate of vanishing curvature. It is only natural to ask whether other equivalent formulations are possible. In particular, there are two more edges present in Figure 12. These would correspond to non-flat geometries with either torsion or non-metricity (but not both at the same time). Finally, we can also imagine a dot in the center of the triangle, which represents a theory based on non-vanishing curvature, torsion, and non-metricity. A modified version of Figure 12 could look like Figure 13. Notice that in all three new cases we wish to discuss, the postulate of teleparallelism (i.e., the condition \(R^{\alpha}{}_{\mu\nu\rho}=0\)) is _not_ imposed. This has far-reaching consequences. Recall that in TEGR, STEGR, and GTEGR the crucial step was to impose \(R^{\alpha}{}_{\mu\nu\rho}=0\), which immediately implies that the connection has the form \(\Gamma=\Lambda^{-1}\partial\Lambda\). Since the Lagrangians which define these three theories are all quadratic in \(T\) and \(Q\), we find that they all possess something akin to a "kinetic term", \(T^{2}\sim(\partial\Lambda)^{2}\) and \(Q^{2}\sim(\partial\Lambda)^{2}\). However, if the flatness condition is _not_ imposed, we lose this "kinetic term". In particular, the actions \[\mathcal{S}_{\rm Einstein-Cartan}[g,\Gamma] =\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|} \,(R+\mathbb{T})+\mathcal{S}_{\rm matter}[g,\Gamma,\Psi]\] \[\mathcal{S}[g,\Gamma] =\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|} \,(R+\mathbb{Q})+\mathcal{S}_{\rm matter}[g,\Gamma,\Psi]\] \[\mathcal{S}[g,\Gamma] =\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|} \,(R+\mathbb{T}+\mathbb{Q})+\mathcal{S}_{\rm matter}[g,\Gamma,\Psi] \tag{4.96}\] are all deprived of this "kinetic term". The first one corresponds to the left edge in Figure 13 and is also known as **the Einstein-Cartan action**. The action in the middle represents the theory living on the right edge, while the action on the bottom corresponds to the dot in Figure 13. As it turns out, in all three cases the connection is a mere auxiliary field which can be integrated out. After integrating out the connection, the resulting actions are _not_ equivalent to GR! Rather, one obtains three modified gravity theories. Furthermore, one can also show that integrating out the connection changes the way matter fields couple in these theories, leading again to non-GR behaviour. 
This is in the same spirit as what was shown in [95] for more general Lagrangians based on the Palatini formalism. Figure 13: Of the metric-affine theories of gravity which live on the edges and in the center of the triangle, only the General Teleparallel theory is equivalent to GR. ### Matter Coupling Our discussion of the geometric trinity and the equivalence between teleparallel theories of gravity and GR was so far limited to the pure gravity sector. Does the equivalence between teleparallel theories and GR also hold in the presence of matter fields? In GR, the coupling of the gravitational field to matter fields follows the so-called **minimal coupling principle**. It states that a matter theory formulated in Minkowski space is promoted to a matter theory coupled to the gravitational field \(g_{\mu\nu}\) by replacing \(\eta_{\mu\nu}\mapsto g_{\mu\nu}\) and \(\partial_{\mu}\mapsto\mathcal{D}_{\mu}\), provided that the matter fields only couple to \(g_{\mu\nu}\), \(g^{\mu\nu}\), and \(\sqrt{|g|}\), but not derivatives of the metric. Is the minimal coupling principle preserved in TEGR and STEGR? Let us first consider TEGR and naively apply the minimal coupling principle in the form \(\eta_{\mu\nu}\mapsto g_{\mu\nu}\) and \(\partial_{\mu}\mapsto\nabla_{\mu}\), where \(\nabla_{\mu}\) is the covariant derivative operator with respect to the connection \(\Gamma^{\alpha}{}_{\mu\nu}\). As a specific example, we consider the electromagnetic potential \(A_{\mu}\) and its associated Maxwell \(2\)-form \(F_{\mu\nu}\coloneqq\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\). According to the minimal coupling principle, the Maxwell \(2\)-form becomes \[F_{\mu\nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}-T^{\alpha}{}_{\mu\nu}A_{\alpha}\,. \tag{4.97}\] We immediately conclude that the minimal coupling principle fails, since the Maxwell action picks up terms proportional to the torsion tensor, thus spoiling the equivalence between TEGR and GR. For fermionic fields, one obtains a similar failure of the minimal coupling principle. The Dirac Lagrangian is directly affected by the connection in the presence of an axial torsion. In STEGR the situation is quite different: The minimal coupling principle is preserved even in the presence of non-metricity. In the case of the electromagnetic field \(A_{\mu}\) it is straightforward to verify that non-metricity does not contribute to \(F_{\mu\nu}\) due to its symmetry and thus one finds \[F_{\mu\nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\,, \tag{4.98}\] just as in GR. For fermions, this property remains unchanged. The non-metricity drops out completely from the Dirac Lagrangian due to the symmetry of the non-metricity tensor. For a more detailed analysis of matter couplings in TEGR and STEGR we refer the reader to [80]. The key message here is that in the presence of matter fields, the equivalence is only maintained between STEGR and GR. TEGR coupled to matter fields is no longer equivalent to GR. ## 5 The Geometrical Trinity of Modified Gravity Theories In section 4 we introduced three different geometric approaches to formulate the theory of General Relativity. This so-called geometric trinity of GR has conceptual advantages. For instance, the teleparallel theories TEGR and STEGR possess well-defined variational principles [6, 14] without the need to add a GHY boundary term. 
Furthermore, STEGR and CGR have inspired new approaches to defining the elusive gravitational energy-momentum [10, 98, 104], black hole entropy can be computed without adding counterterms to the action [10, 14, 5], and the coincident gauge might open a new avenue toward the quantization of the gravitational field. However, since the field equations of TEGR and STEGR are identical to the Einstein field equations, these theories cannot address any phenomenological questions which elude GR, such as the accelerated expansion of the Universe or the shape of galactic rotation curves. Such questions are typically addressed by theories of **modified gravity**, and the geometric trinity of GR presented in the previous section can be used as a starting point for developing such modifications. There are two approaches which are commonly considered in the literature: 1. The actions of TEGR and STEGR are quadratic in the torsion and the non-metricity tensors, respectively. One can thus try to construct the most general scalar which is quadratic in the torsion or non-metricity tensor and take this scalar to define an action functional. In the case of torsion, one finds a three-parameter family of theories described by an action which is quadratic in the torsion tensor. In the case of non-metricity, one finds a five-parameter family of quadratic Lagrangians. These generalizations are discussed in subsections 5.1 and 5.2, respectively. 2. Another popular direction is to consider non-linear extensions of the form \(f(\mathbb{T})\) and \(f(\mathbb{Q})\), where \(f\) is some function which is only subject to the condition that its first derivative does not vanish. Non-linear extensions of this type are the subject of subsection 5.3. In section 6 we will have a closer look at \(f(\mathbb{Q})\), its application to cosmology, black hole physics, and the question of how many degrees of freedom the theory propagates. Since these modifications of GR are based on the framework of metric-affine geometry, we will sometimes refer to them as the **geometrical trinity of modified gravity theories**. ### Quadratic Actions for Torsion Theories Recall from subsection 4.2 that the action of TEGR is constructed solely from quadratic contractions of the torsion tensor. Concretely, we defined the so-called torsion scalar as \[\mathbb{T}\coloneqq\frac{1}{2}\left(\frac{1}{4}T_{\alpha\mu\nu}+\frac{1}{2}T_{\mu\alpha\nu}-g_{\alpha\mu}T_{\nu}\right)T^{\alpha\mu\nu}\,. \tag{5.1}\] Now we are interested in constructing the _most general scalar_ which is quadratic in the torsion tensor. To that end, we need to consider the symmetries of \(T^{\alpha}{}_{\mu\nu}\). A priori, a tensor with three indices can be contracted in six different ways with itself; one just has to perform all possible permutations of indices. However, because \(T^{\alpha}{}_{\mu\nu}\) is antisymmetric in its lower indices, only two of these contractions are independent: \[T_{\alpha\mu\nu}T^{\alpha\mu\nu}\] and \[T_{\mu\alpha\nu}T^{\alpha\mu\nu}\,. \tag{5.2}\] The next thing to consider is the trace of the torsion tensor. Due to its antisymmetry, the torsion tensor possesses only one trace, \(T_{\mu}\coloneqq T^{\alpha}{}_{\mu\alpha}\). Thus, the only other quadratic contraction we can build out of the torsion tensor is \[T_{\mu}T^{\mu}\,. 
\tag{5.3}\] With this we have exhausted all options and we conclude that the most general scalar which is quadratic in the torsion tensor is a linear combination of the three terms discussed above: \[\hat{\mathbb{T}}\coloneqq c_{1}\,T_{\alpha\mu\nu}T^{\alpha\mu\nu}+c_{2}\,T_{\mu\alpha\nu}T^{\alpha\mu\nu}+c_{3}\,T_{\mu}T^{\mu}\,, \tag{5.4}\] where \(c_{1}\), \(c_{2}\), and \(c_{3}\) are _arbitrary_, real constants. It is easy to see that the scalar \(\hat{\mathbb{T}}\) reduces to \(\mathbb{T}\) for the parameter choice \(c_{1}=\frac{1}{4}\), \(c_{2}=\frac{1}{2}\), \(c_{3}=-1\). Using the general torsion scalar defined in (5.4), we can now write the action functional of **Teleparallel Gravity (TG)8** as Footnote 8: The theory defined by this action is sometimes referred to as New General Relativity in the literature (for instance in [98, 105, 106]). \[\mathcal{S}_{\text{TG}}[g,\Gamma,\Psi]\coloneqq-\int_{\mathcal{M}}\mathrm{d}^{4}x\,\left(\frac{1}{\kappa}\sqrt{|g|}\,\hat{\mathbb{T}}+\tilde{\Pi}_{\alpha}{}^{\mu\nu\rho}\,R^{\alpha}_{\ \mu\nu\rho}+\tilde{\chi}^{\alpha}_{\ \mu\nu}\,Q_{\alpha}{}^{\mu\nu}\right)+\mathcal{S}_{\text{matter}}[g,\Psi]\,, \tag{5.5}\] where the matter fields \(\Psi\) are assumed to be minimally coupled and to have a vanishing hypermomentum. This action looks deceptively similar to the action of TEGR, since we have only replaced \(\mathbb{T}\) by the more general \(\hat{\mathbb{T}}\). Indeed, even the field equations have the same structure as those of TEGR, with the torsion scalar and its conjugate replaced by the hatted quantities derived from \(\hat{\mathbb{T}}\). What does change is the constraint structure of the theory: a Hamiltonian analysis of the action (5.5) shows that, depending on the values of \(c_{1}\), \(c_{2}\), and \(c_{3}\), some of the momenta are subject to primary constraints. The analysis of these primary constraints revealed that the three-parameter family of theories described by the action (5.5) compartmentalizes into nine different sectors (which we dub the **primary sectors**, following the nomenclature of [109]). Each sector is characterized by a different number of primary constraints (cf. Table 1). Primary constraints reduce the number of degrees of freedom. Hence, the more primary constraints, the fewer degrees of freedom there are. However, the exact number of physical degrees of freedom within each sector has not yet been determined. In fact, the Hamiltonian analysis could not be carried out to completion and it is not even known whether there are secondary constraints. In [72] it was argued that the standard Hamiltonian method for constrained systems (the so-called Dirac-Bergmann algorithm) is in general not applicable to teleparallel theories of gravity. We briefly touch upon this subject in subsection 6.3 and refer the reader to [72] for more details on this important open question. 
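As a quick, self-contained sanity check of the index-counting arguments used here and in the next subsection (our own illustration, which relies only on the antisymmetry of the torsion tensor and the symmetry of the non-metricity tensor in the relevant index pair), one can contract a random tensor with all six permutations of itself and count the distinct values that appear:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
dim = 4
A = rng.standard_normal((dim, dim, dim))

# Torsion-like tensor: antisymmetric in its last two indices.
T = A - A.transpose(0, 2, 1)
# Non-metricity-like tensor: symmetric in its last two indices.
Q = A + A.transpose(0, 2, 1)

def independent_contractions(X):
    """Contract X with all six index permutations of itself and count the
    distinct values up to sign (Euclidean 'metric', so index placement is
    irrelevant for this counting)."""
    vals = [np.einsum('abc,' + ''.join('abc'[i] for i in sigma), X, X)
            for sigma in permutations(range(3))]
    return len({round(abs(v), 8) for v in vals})

print(independent_contractions(T))   # -> 2, as claimed for the torsion tensor
print(independent_contractions(Q))   # -> 2, as claimed for the non-metricity tensor
```

Together with the single trace \(T_{\mu}T^{\mu}\) in the torsion case, and the three trace combinations in the non-metricity case, this reproduces the three- and five-parameter families of quadratic Lagrangians quoted in the text.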
Before concluding this subsection, we emphasize that TEGR has a special place among the theories described by the action (5.5). In fact, one has to ask what distinguishes the particular choice of parameters which turns TG into TEGR from all other possible choices. The answer: Enhanced symmetries. Perturbation theory around Minkowski space shows [6, 110] that a self-consistent theory requires \(2c_{1}+c_{2}+c_{3}=0\). If this condition is satisfied, one is left with a \(1\)-parameter family of theories (up to an overall normalization) which propagate one additional degree of freedom besides the graviton. Among this \(1\)-parameter family of theories, the one which satisfies \(2c_{1}-c_{2}=0\) enjoys an additional symmetry and it loses the additional degree of freedom. One is then left with TEGR. Removing either one of these parameter conditions leads to a loss of symmetry accompanied by an increase in degrees of freedom, not all of which are healthy. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline **Primary** & \multicolumn{4}{|c|}{**Parameter combinations**} & **\# of primary** \\ **sector** & \(\mathbf{2c_{1}+c_{2}+c_{3}}\) & \(\mathbf{2c_{1}-c_{2}}\) & \(\mathbf{2c_{1}+c_{2}}\) & \(\mathbf{2c_{1}+c_{2}+3c_{3}}\) & **constraints** \\ \hline \hline **0** & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(0\) \\ \hline **I** & \(=0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(3\) \\ \hline **II** & \(\neq 0\) & \(=0\) & \(\neq 0\) & \(\neq 0\) & \(3\) \\ \hline **III** & \(\neq 0\) & \(\neq 0\) & \(=0\) & \(\neq 0\) & \(5\) \\ \hline **IV** & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(=0\) & \(1\) \\ \hline **V** (TEGR) & \(=0\) & \(=0\) & \(\neq 0\) & \(\neq 0\) & \(6\) \\ \hline **VI** & \(\neq 0\) & \(=0\) & \(=0\) & \(\neq 0\) & \(8\) \\ \hline **VII** & \(\neq 0\) & \(=0\) & \(\neq 0\) & \(=0\) & \(4\) \\ \hline **VIII** & \(=0\) & \(\neq 0\) & \(=0\) & \(=0\) & \(9\) \\ \hline \end{tabular} \end{table} Table 1: Each primary sector is defined by the vanishing of certain parameter combinations. The vanishing of one or more of these parameter combinations leads to the appearance of primary constraints. The number of independent primary constraints is listed in the last column. TEGR is contained in sector V. This table was inspired by [108], but we adopt a slightly different nomenclature in order to be consistent with [109]. ### Quadratic Actions for Non-Metricity Theories In subsection 4.4 we constructed STEGR's action functional from the non-metricity scalar \[\mathbb{Q}=\frac{1}{4}Q_{\alpha\mu\nu}Q^{\alpha\mu\nu}-\frac{1}{2}Q_{\alpha\mu\nu}Q^{\mu\alpha\nu}-\frac{1}{4}Q_{\alpha}Q^{\alpha}+\frac{1}{2}Q_{\alpha}\bar{Q}^{\alpha}\,. \tag{5.8}\] Now we want to define a new class of theories, which we subsume under the umbrella term **Symmetric Teleparallel Gravity (STG)**, by using the most general scalar which is quadratic in the non-metricity tensor. To that end, we need to consider all possible independent contractions of \(Q_{\alpha\mu\nu}\) with itself. There are six contractions one can build this way, since we have six possible index permutations. However, because \(Q_{\alpha\mu\nu}\) is symmetric in its last two indices, this cuts down the number to three. Using again the symmetry of \(Q_{\alpha\mu\nu}\), one can then show that only the contractions \[Q_{\alpha\mu\nu}Q^{\alpha\mu\nu}\] and \[Q_{\mu\alpha\nu}Q^{\alpha\mu\nu} \tag{5.9}\] are independent. Next, we consider the traces of the non-metricity tensor. 
Because of its symmetry, there are two such traces: \[Q_{\alpha} \coloneqq Q_{\alpha\lambda}{}^{\lambda}\] and \[\bar{Q}_{\alpha} \coloneqq Q^{\lambda}{}_{\lambda\alpha}\,. \tag{5.10}\] Using these traces, we can build three more contractions which are quadratic in the non-metricity tensor: \[Q_{\mu}Q^{\mu}\,, \bar{Q}_{\mu}\bar{Q}^{\mu}\,,\] and \[Q_{\mu}\bar{Q}^{\mu}\,. \tag{5.11}\] With this we have exhausted all possibilities and we finally conclude that the most general scalar which is quadratic in the non-metricity tensor is \[\hat{\mathbb{Q}} \coloneqq c_{1}\,Q_{\alpha\mu\nu}Q^{\alpha\mu\nu}+c_{2}\,Q_{\mu \alpha\nu}Q^{\alpha\mu\nu}+c_{3}\,Q_{\mu}Q^{\mu}+c_{4}\,\bar{Q}_{\mu}\bar{Q}^{ \mu}+c_{5}\,Q_{\mu}\bar{Q}^{\mu}\,, \tag{5.12}\] where \(c_{1}\), \(c_{2}\), \(c_{3}\), \(c_{4}\), and \(c_{5}\) are arbitrary, real constants. Just as in the case of TG, the action of STG is obtained by replacing \(\mathbb{Q}\) with \(\hat{\mathbb{Q}}\) in the action of STEGR. This results in the functional9 Footnote 9: The theory described by this action is sometimes referred to as Newer General Relativity. See for instance [98]. \[\mathcal{S}_{\text{STG}}[g,\Gamma,\Psi] \coloneqq-\int_{\mathcal{M}}\mathrm{d}^{4}x\,\left(\frac{1}{ \kappa}\sqrt{|g|}\,\hat{\mathbb{Q}}+\tilde{\Pi}_{\alpha}{}^{\mu\nu\rho}\,R^{ \alpha}{}_{\mu\nu\rho}+\tilde{\chi}_{\alpha}{}^{\mu\nu}\,T^{\alpha}{}_{\mu \nu}\right)+\mathcal{S}_{\text{matter}}[g,\Psi]\,, \tag{5.13}\] where eventual matter fields \(\Psi\) are assumed to be minimally coupled and having a vanishing hypermomentum \(\mathcal{H}_{\alpha}{}^{\mu\nu}\). Not only the action looks deceptively similar to the one of STEGR, also the field equations have virtually the same form: Metric field equations: \[\frac{2}{\sqrt{|g|}}\nabla_{\alpha}\left[\sqrt{|g|}\,\hat{P}^{ \alpha}{}_{\mu\nu}\right]+\hat{q}_{\mu\nu}-\frac{1}{2}\hat{\mathbb{Q}}\,g_{ \mu\nu} =\kappa\,\mathcal{T}_{\mu\nu}\] \[\text{Connection field equations:} \nabla_{\mu}\nabla_{\nu}\left(\sqrt{|g|}\hat{P}^{\mu\nu}{}_{ \alpha}\right) =0\,.\] (5.14) The hatted non-metricity conjugate \(\hat{P}^{\alpha}{}_{\mu\nu}\) and the symmetric tensor \(\hat{q}_{\mu\nu}\) are defined as \[\hat{P}^{\alpha}{}_{\mu\nu} \coloneqq\frac{1}{2}\frac{\partial\hat{\mathbb{Q}}}{\partial Q_{ \alpha}{}^{\mu\nu}}=c_{1}\,Q^{\alpha}{}_{\mu\nu}+c_{2}\,Q_{(\mu}{}^{\alpha}{} _{\nu)}+c_{3}\,g_{\mu\nu}Q^{\alpha}+c_{4}\,\delta^{\alpha}{}_{(\mu}\bar{Q}_{ \nu)}+\frac{1}{2}c_{5}\,\left(g_{\mu\nu}\bar{Q}^{\alpha}+\delta^{\alpha}{}_{( \mu}Q_{\nu)}\right)\] \[\hat{q}_{\mu\nu} \coloneqq\frac{\partial\hat{\mathbb{Q}}}{\partial g^{\mu\nu}}=P _{(\mu|\lambda\kappa}Q_{\nu)}{}^{\lambda\kappa}-2P^{\lambda\kappa}{}_{(\mu}Q _{\lambda\kappa|\nu)} \tag{5.15}\] and it is still true that \[\hat{\mathbb{Q}}=\hat{P}_{\alpha\mu\nu}Q^{\alpha\mu\nu}\,. \tag{5.16}\] Moreover, the Bianchi identities, which derive from the diffeomorphism invariance of the action (5.13), read \[\mathcal{D}_{\nu}\hat{\mathcal{M}}^{\nu}{}_{\mu}+\hat{\mathcal{C}}_{\mu} \equiv 0\,, \tag{5.17}\] where \(\hat{\mathcal{M}}_{\mu\nu}\), and \(\hat{\mathcal{C}}_{\alpha}\) represent the left hand side of the field equations (5.14), i.e., \[\hat{\mathcal{M}}_{\mu\nu} \coloneqq\frac{2}{\sqrt{|g|}}\nabla_{\alpha}\left[\sqrt{|g|}\hat{ P}^{\alpha}{}_{\mu\nu}\right]+\hat{q}_{\mu\nu}-\frac{1}{2}\hat{\mathbb{Q}}\,g_{\mu\nu}\] \[\hat{\mathcal{C}}_{\alpha} \coloneqq\nabla_{\mu}\nabla_{\nu}\left(\sqrt{|g|}\hat{P}^{\mu\nu }{}_{\alpha}\right)\,. 
\tag{5.18}\] Thus, it follows that when the metric field equations are satisfied, the connection field equations are identically satisfied as a consequence of the Bianchi identities: \[\hat{\mathcal{M}}_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}\,\,\,\text{satisfied}\quad\Longrightarrow\quad\mathcal{D}_{\nu}\hat{\mathcal{M}}^{\nu}{}_{\mu}=0\quad\Longrightarrow\quad\hat{\mathcal{C}}_{\mu}\equiv 0\,. \tag{5.19}\] What distinguishes STG from STEGR is the number of physical degrees of freedom. As we know, STEGR propagates the same two degrees of freedom as GR. When it comes to STG, the number of degrees of freedom depends on how one chooses the parameters \(c_{1}\), \(c_{2}\), \(c_{3}\), \(c_{4}\), and \(c_{5}\). The reason is the same as in the case of TG: It is possible to tune the parameters such that certain second order time-derivatives of the metric drop out from the field equations, thus turning some of the equations into constraints. The more independent constraints there are, the lower the number of degrees of freedom. Conversely, it is also possible that equations which appear as constraints in STEGR are turned into dynamical equations because the parameters are no longer finely tuned to lead to certain cancellations. This has the effect of increasing the number of degrees of freedom and it can also lead to pathologies. In [109], the first steps of a Hamiltonian analysis were carried out. After performing an ADM decomposition of the metric and after applying the coincident gauge, the momenta conjugate to lapse, shift, and intrinsic metric were studied. The full expressions, which can be found in [109], are quite long. However, if we only consider the kinetic part of the Lagrangian of STG, which reads \[\mathcal{L}_{\text{kinetic}}=-\sqrt{h}\left(\frac{2\tilde{c}}{N^{3}}\dot{N}^{2}+\frac{c_{35}}{N^{2}}\,\dot{N}\,h^{ab}\dot{h}_{ab}-\frac{\hat{c}}{2N^{3}}h_{ab}\dot{N}^{a}\dot{N}^{b}+\frac{1}{2N}\left\{c_{1}h^{ac}h^{db}\dot{h}_{ad}\dot{h}_{cb}+c_{3}h^{ac}h^{bd}\dot{h}_{ac}\dot{h}_{bd}\right\}\right)\] \[\text{with}\,\,\,\tilde{c}\coloneqq c_{1}+c_{2}+c_{3}+c_{4}+c_{5},\qquad\hat{c}\coloneqq 2c_{1}+c_{2}+c_{4},\qquad\text{and}\qquad c_{35}\coloneqq 2c_{3}+c_{5}\,, \tag{5.20}\] we can already gain important insights. In fact, the momenta conjugate to lapse, shift, and intrinsic metric have the form \[\tilde{\pi} =-\frac{\sqrt{h}}{N^{2}}\left(4\,\tilde{c}\,\dot{N}+c_{35}\,N\,h^{ab}\dot{h}_{ab}\right)+\text{terms without time derivatives}\] \[\tilde{\pi}_{a} =\frac{\sqrt{h}}{N^{3}}\,\hat{c}\,h_{ab}\dot{N}^{b}+\text{terms without time derivatives}\] \[\tilde{\pi}^{ab} =-\frac{\sqrt{h}}{N^{2}}\left(c_{1}h^{ac}h^{bd}\dot{h}_{cd}N+h^{ab}\left\{c_{3}h^{cd}\dot{h}_{cd}N+c_{35}\dot{N}\right\}\,\right)+\text{terms without time derivatives}\,. \tag{5.21}\] Evidently, the momentum conjugate to lapse is turned into a so-called primary constraint if the parameters are chosen such that \(\tilde{c}=0\) and \(c_{35}=0\). Similarly, the momentum conjugate to shift becomes a constraint when \(\hat{c}=0\). These choices correspond precisely to the so-called primary sectors I and II shown in Table 2. 
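To make the statement about the lapse momentum concrete, the following minimal sympy sketch evaluates the kinetic Lagrangian (5.20) on a diagonal spatial metric (a simplification of our own, chosen purely for brevity) and confirms that the velocity-dependent part of \(\tilde{\pi}\) vanishes identically once \(\tilde{c}=0\) and \(c_{35}=0\), i.e., in primary sector I:

```python
import sympy as sp

# Lapse N, its velocity, shift velocities, and a diagonal spatial metric h_ab.
N, Nd = sp.symbols('N Ndot', positive=True, real=True)
Nad = sp.Matrix(sp.symbols('Nd1 Nd2 Nd3'))
h   = sp.Matrix(sp.symbols('h1 h2 h3', positive=True))
hd  = sp.Matrix(sp.symbols('hd1 hd2 hd3'))
c1, c3, ct, ch, c35 = sp.symbols('c1 c3 ctilde chat c35')

sqrth = sp.sqrt(h[0]*h[1]*h[2])
trhd  = sum(hd[i]/h[i] for i in range(3))        # h^{ab} hdot_ab  (diagonal case)
hdhd  = sum((hd[i]/h[i])**2 for i in range(3))   # h^{ac} h^{db} hdot_ad hdot_cb

# Kinetic Lagrangian (5.20), restricted to a diagonal spatial metric.
L = -sqrth*(2*ct/N**3*Nd**2 + c35/N**2*Nd*trhd
            - ch/(2*N**3)*sum(h[i]*Nad[i]**2 for i in range(3))
            + (c1*hdhd + c3*trhd**2)/(2*N))

pi_N = sp.diff(L, Nd)                            # momentum conjugate to the lapse
print(sp.simplify(pi_N))
# Sector I (ctilde = 0 and c35 = 0): the velocity dependence disappears,
# so pi_N becomes a primary constraint.
print(sp.simplify(pi_N.subs({ct: 0, c35: 0})))   # -> 0
```

The analogous substitution \(\hat{c}=0\) removes the velocity dependence of the shift momenta, which is the defining condition of primary sector II.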
More constraints can be identified through a systematic analysis based on the kinetic matrix, which is composed of the following submatrices: \[\frac{\delta^{2}\mathcal{L}}{\delta\dot{N}\delta\dot{N}}=-4\frac{\sqrt{h}}{N^{3}}\tilde{c}\qquad\qquad\frac{\delta^{2}\mathcal{L}}{\delta\dot{N}^{a}\delta\dot{N}^{b}}=\frac{\sqrt{h}}{N^{3}}\hat{c}\,h_{ab}\] \[\frac{\delta^{2}\mathcal{L}}{\delta\dot{N}^{a}\delta\dot{N}}=0\qquad\qquad\frac{\delta^{2}\mathcal{L}}{\delta\dot{h}_{bc}\delta\dot{N}^{a}}=0\] \[\frac{\delta^{2}\mathcal{L}}{\delta\dot{h}_{ab}\delta\dot{N}}=-\frac{\sqrt{h}}{N^{2}}c_{35}h^{ab}\qquad\qquad\frac{\delta^{2}\mathcal{L}}{\delta\dot{h}_{cd}\delta\dot{h}_{ab}}=-\frac{\sqrt{h}}{2N}\left(c_{1}\,h^{ad}h^{bc}+c_{1}h^{ac}h^{bd}+2c_{3}h^{ab}h^{cd}\right)\,. \tag{5.22}\] It is found that the determinant of the kinetic matrix \(\mathcal{K}\) is given by \[\det\mathcal{K}=8\frac{h^{2}}{N^{18}}\,c_{1}^{5}\,\hat{c}^{3}\left(3c_{35}^{2}-4\left(c_{1}+3c_{3}\right)\tilde{c}\right). \tag{5.23}\] By demanding that the determinant vanishes, i.e., demanding that the matrix is degenerate, one finds additional primary sectors. In fact, one finds that there are four independent solutions to the above equations. These solutions are \[\text{\small Sector I:}\qquad\tilde{c}=0\text{ and }c_{35}=0\] \[\text{\small Sector II:}\qquad\hat{c}=0\] \[\text{\small Sector III:}\qquad c_{1}=0\] \[\text{\small Sector IV:}\qquad c_{3}=-\frac{c_{1}}{3}+\frac{c_{35}^{2}}{4\tilde{c}} \tag{5.24}\] To determine the number of constraints in each sector, we only need to compute \(10-\text{rank}(\mathcal{K})\) in each sector. For the first four, we find \(1\), \(3\), \(5\), and again \(1\) primary constraints, respectively. Even more sectors can be identified by combining the different parameter conditions in the different sectors, so as to create new and independent sectors with more constraints. This process is described in detail in [109] and ultimately leads to Table 2. Notice that in sector V, which harbours STEGR as a special case, the number of primary constraints matches that of GR. However, just as in TG, it is currently unknown in which sector secondary constraints occur and how many there are. Hence, the exact number of degrees of freedom is not known for most sectors. At most, we can currently say that sector \(0\) propagates ten degrees of freedom, but it is also a highly pathological theory. Sector X propagates no degrees of freedom, while sector XI has fewer degrees of freedom than GR. Both sectors are therefore uninteresting. Finally, sector V contains STEGR, which has two degrees of freedom, but it is unclear whether other theories with a different number of degrees of freedom can inhabit that sector. These questions have remained unanswered thus far because of the challenges posed by the Dirac-Bergmann algorithm, as mentioned in the previous subsection. 
These challenges seem to afflict all teleparallel theories of gravity, as has recently been argued in [72], and the development of new methods, or at least the application of other known methods to teleparallel theories, seems to be necessary. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline **Primary** & \multicolumn{5}{c|}{**Parameter combinations**} & **\# of primary** \\ **sector** & \(\mathbf{\tilde{c}}\) & \(\mathbf{\hat{c}}\) & \(\mathbf{c_{35}}\) & \(\mathbf{c_{1}}\) & \(\mathbf{\frac{c_{1}}{3}+c_{3}-\frac{c_{35}^{2}}{4\tilde{c}}}\) & **constraints** \\ \hline \hline **0** & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(0\) \\ \hline **I** & \(=0\) & \(\neq 0\) & \(=0\) & \(\neq 0\) & N/A & \(1\) \\ \hline **II** & \(\neq 0\) & \(=0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(3\) \\ \hline **III** & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(=0\) & \(\neq 0\) & \(5\) \\ \hline **IV** & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(=0\) & \(1\) \\ \hline **V** (STEGR) & \(=0\) & \(=0\) & \(=0\) & \(\neq 0\) & N/A & \(4\) \\ \hline **VI** & \(=0\) & \(\neq 0\) & \(=0\) & \(=0\) & N/A & \(6\) \\ \hline **VII** & \(\neq 0\) & \(=0\) & \(\neq 0\) & \(=0\) & \(\neq 0\) & \(8\) \\ \hline **VIII** & \(\neq 0\) & \(=0\) & \(\neq 0\) & \(\neq 0\) & \(=0\) & \(4\) \\ \hline **IX** & \(\neq 0\) & \(\neq 0\) & \(\neq 0\) & \(=0\) & \(=0\) & \(6\) \\ \hline **X** & \(=0\) & \(=0\) & \(=0\) & \(=0\) & N/A & \(10\) \\ \hline **XI** & \(\neq 0\) & \(=0\) & \(\neq 0\) & \(=0\) & \(=0\) & \(9\) \\ \hline \end{tabular} \end{table} Table 2: Each primary sector (first column) is defined by the vanishing of certain combinations of the parameters \(c_{1}\), \(c_{2}\), \(c_{3}\), \(c_{4}\), and \(c_{5}\) (columns two through six). For brevity, and following [109], we have defined \(\tilde{c}:=c_{1}+c_{2}+c_{3}+c_{4}+c_{5}\), \(\hat{c}:=2c_{1}+c_{2}+c_{4}\), and \(c_{35}:=2c_{3}+c_{5}\). The vanishing of these parameter combinations (or the vanishing of combinations thereof) corresponds to the appearance of one or more primary constraints (the number of independent primary constraints is shown in the last column). STEGR is contained in sector V. Finally, we remark that STEGR distinguishes itself from the other possible theories within the five-parameter family by having an enhanced set of symmetries. In [6], perturbations around Minkowski space were studied. The perturbative ansatz \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\), with \(|h_{\mu\nu}|\ll 1\), leads to the quadratic Lagrangian \[L=c_{1}\partial_{\alpha}h_{\mu\nu}\partial^{\alpha}h^{\mu\nu}+\left(c_{2}+c_{4}\right)\partial_{\alpha}h_{\mu\nu}\partial^{\mu}h^{\alpha\nu}+c_{3}\partial_{\alpha}h\partial^{\alpha}h+c_{5}\partial_{\mu}h^{\mu}{}_{\nu}\partial^{\nu}h\,, \tag{5.25}\] where \(h\coloneqq\eta^{\mu\nu}h_{\mu\nu}=h^{\mu}{}_{\mu}\) is the trace of the perturbations. This is nothing but the most general Lagrangian for a spin-2 field. As is well known from the Fierz-Pauli analysis of this Lagrangian, symmetries have to be imposed in order to remove ghostly degrees of freedom. Demanding that the theory is invariant under \[h_{\mu\nu}\qquad\mapsto\qquad h_{\mu\nu}+2\partial_{(\mu}\xi_{\nu)}\qquad\text{ for some vector $\xi^{\mu}$ which satisfies $\partial_{\mu}\xi^{\mu}=0$}\,, \tag{5.26}\] so-called **transversal diffeomorphisms**, leads to the condition \[2c_{1}+c_{2}+c_{4}=0\,, \tag{5.27}\] which is indeed satisfied by the STEGR parameters10. 
In order to recover the two propagating degrees of freedom of a massless spin-2 field, one can further impose **linearized diffeomorphisms**, Footnote 10: This is simply the condition \(\hat{c}=0\), which defines Sector II and which is also a part of Sector V. \[h_{\mu\nu}\qquad\mapsto\qquad h_{\mu\nu}+2\partial_{(\mu}\xi_{\nu)}\,, \tag{5.28}\] where the vector field \(\xi^{\mu}\) is now unrestricted. This leads to \[2c_{1}=-2c_{3}=c_{5}\,. \tag{5.29}\] Both conditions taken together then imply \[c_{3}=-c_{1},\qquad c_{4}=-2c_{1}-c_{2},\qquad c_{5}=2c_{1}\,, \tag{5.30}\] which is equivalent to \[\tilde{c}=0,\qquad c_{35}=0\,. \tag{5.31}\] These are precisely the defining equations of sector V in Table 2. STEGR, which inhabits sector V, is therefore distinguished through its symmetries and the healthy degrees of freedom it propagates. Instead of imposing linearized diffeomorphisms, one could have also imposed the **linearized Weyl symmetry** \[h_{\mu\nu}\qquad\mapsto\qquad h_{\mu\nu}+\phi\,\eta_{\mu\nu}\,, \tag{5.32}\] for some arbitrary scalar field \(\phi\), in addition to the transverse diffeomorphisms (5.26). Demanding this symmetry implies \[c_{3}=-\frac{3}{8}c_{1},\qquad c_{5}=2c_{1}\,. \tag{5.33}\] This describes a linearized version of unimodular gravity, which is essentially GR plus the constraint \(\sqrt{|g|}=1\). As a consequence, in unimodular gravity the cosmological constant emerges from an integration constant [111]. Notice that Sector V does not respect the linearized Weyl symmetry. This symmetry only seems to be respected by Sector IX, which has nothing to do with STEGR or GR. However, it should be pointed out that the classification was obtained _without_ restricting the metric through the condition \(\sqrt{|g|}=1\). ### Non-Linear Extensions: \(f(\mathcal{R})\), \(f(\mathbb{T})\), \(f(\mathbb{Q})\) and \(f(\mathbb{G})\) Theories As we discussed in section 4, one can set up a geometric trinity to describe gravity. Einstein's original formulation based on non-vanishing curvature is equivalent to TEGR, which is based on non-vanishing torsion, and both theories are in turn equivalent to STEGR, which is built on a non-vanishing non-metricity tensor. The General Teleparallel Equivalent of GR unifies the torsion and non-metricity description of gravity and is also equivalent to GR. These four formulations are equivalent in the sense that they possess the same field equations, propagate the same degrees of freedom, and therefore ultimately possess the same solution space. Each formulation of the trinity can be derived from an action principle. We recall that these actions are given by \[\mathcal{S}_{\text{EH}}[g] = \frac{1}{2\kappa}\int_{\mathcal{M}}\sqrt{|g|}\,\mathcal{R}\,\mathrm{d}^{4}x\] \[\mathcal{S}_{\text{TEGR}}[\Lambda] = -\frac{1}{2\kappa}\int_{\mathcal{M}}\sqrt{|g|}\,\mathbb{T}\,\mathrm{d}^{4}x\] \[\mathcal{S}_{\text{STEGR}}[g,\xi] = -\frac{1}{2\kappa}\int_{\mathcal{M}}\sqrt{|g|}\,\mathbb{Q}\,\mathrm{d}^{4}x\] \[\mathcal{S}_{\text{GTEGR}}[g,\Lambda] = -\frac{1}{2\kappa}\int_{\mathcal{M}}\sqrt{|g|}\,\mathbb{G}\,\mathrm{d}^{4}x\,. \tag{5.34}\] The actions are _equivalent_, in the sense spelled out above, but they are _not equal_. In fact, they depend on different fields and they all differ by boundary terms. This opens the door for yet another generalization of the geometrical trinity of gravity. 
Namely, we can replace the scalars \(\mathcal{R}\), \(\mathbb{T}\), \(\mathbb{Q}\), and \(\mathbb{G}\) by arbitrary functions and obtain the following action functionals: \[\mathcal{S}_{f(\mathcal{R})}[g] \coloneqq \frac{1}{2\kappa}\int_{\mathcal{M}}\sqrt{|g|}\,f(\mathcal{R})\,\mathrm{d}^{4}x\] \[\mathcal{S}_{f(\mathbb{T})}[\Lambda] \coloneqq -\frac{1}{2\kappa}\int_{\mathcal{M}}\sqrt{|g|}\,f(\mathbb{T})\,\mathrm{d}^{4}x\] \[\mathcal{S}_{f(\mathbb{Q})}[g,\xi] \coloneqq -\frac{1}{2\kappa}\int_{\mathcal{M}}\sqrt{|g|}\,f(\mathbb{Q})\,\mathrm{d}^{4}x\] \[\mathcal{S}_{f(\mathbb{G})}[g,\Lambda] \coloneqq -\frac{1}{2\kappa}\int_{\mathcal{M}}\sqrt{|g|}\,f(\mathbb{G})\,\mathrm{d}^{4}x \tag{5.35}\] The motivation for this non-linear extension is that the added freedom in choosing a function \(f\) may help in explaining the accelerated expansion of the universe, structure formation, and other phenomena which in the trinity of GR require the introduction of dark energy and dark matter. Indeed, given that the original functionals differed by boundary terms, one has to conclude that the resulting non-linear extensions are _no longer equivalent to each other_! In particular this means that each one of the above functionals gives rise to its own peculiar field equations with its own number of propagating degrees of freedom. Probably the most studied and best understood of these theories is \(f(\mathcal{R})\) gravity, since it was first proposed by Buchdahl in 1970 [112]. Given the extensive literature and the fact that our focus is on \(f(\mathbb{Q})\) gravity, we shall just discuss some basic aspects of \(f(\mathcal{R})\) gravity and refer the reader to the extensive review articles [113, 114] and references therein. #### 5.3.4 \(f(\mathcal{R})\) Gravity Following the same route that led to Einstein's field equations, it is straightforward to deduce the equations of \(f(\mathcal{R})\) gravity. They are \[f^{\prime}(\mathcal{R})\,\mathcal{R}_{\mu\nu}-\frac{1}{2}f(\mathcal{R})\,g_{\mu\nu}+\left(g_{\mu\nu}\square-\mathcal{D}_{\mu}\mathcal{D}_{\nu}\right)f^{\prime}(\mathcal{R})=\kappa\,\mathcal{T}_{\mu\nu}\,, \tag{5.36}\] where we have defined \(f^{\prime}(\mathcal{R}):=\frac{\mathrm{d}f(\mathcal{R})}{\mathrm{d}\mathcal{R}}\) and \(\square:=g^{\mu\nu}\mathcal{D}_{\mu}\mathcal{D}_{\nu}\). If we choose \(f(\mathcal{R})=\mathcal{R}\), the equations reduce to Einstein's field equations, as they should. In order to avoid this trivial case, we shall now assume \(f^{\prime\prime}(\mathcal{R})\neq 0\). Then one sees that the above field equations are actually fourth order non-linear equations for the metric, due to the second order differential operator \(g_{\mu\nu}\square-\mathcal{D}_{\mu}\mathcal{D}_{\nu}\) acting on \(f^{\prime}(\mathcal{R})\) (which itself already contains second order derivatives of the metric). What may seem alarming at first sight is actually not that troublesome. One can show [13, 14] that the theory propagates three healthy degrees of freedom: Two degrees of freedom corresponding to a massless graviton and one scalar degree of freedom. 
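A compact way to exhibit this scalar degree of freedom (a standard manipulation, spelled out here purely for illustration) is to take the trace of (5.36) with \(g^{\mu\nu}\): \[f^{\prime}(\mathcal{R})\,\mathcal{R}-2f(\mathcal{R})+3\,\square f^{\prime}(\mathcal{R})=\kappa\,\mathcal{T}\,,\qquad\mathcal{T}\coloneqq g^{\mu\nu}\mathcal{T}_{\mu\nu}\,.\] For \(f(\mathcal{R})=\mathcal{R}\) this collapses to the purely algebraic relation \(-\mathcal{R}=\kappa\,\mathcal{T}\) familiar from GR, whereas for \(f^{\prime\prime}(\mathcal{R})\neq 0\) it becomes a genuine dynamical equation for \(\mathcal{R}\), which is precisely the additional scalar mode mentioned above.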
#### 5.3.5 \(f(\mathbb{T})\) Gravity Starting from the \(f(\mathbb{T})\) action coupled to matter fields \(\Psi\), \[\mathcal{S}_{f(\mathbb{T})}[g,\Gamma]\coloneqq-\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,f(\mathbb{T})+\mathcal{S}_{\mathrm{matter}}[g,\Psi]\,, \tag{5.37}\] one finds a set of metric and connection field equations \[\left(\nabla_{\alpha}+T_{\alpha}\right)\left[f^{\prime}(\mathbb{T})\,S_{(\mu\nu)}{}^{\alpha}\right]+f^{\prime}(\mathbb{T})\,t_{\mu\nu}-\frac{1}{2}f(\mathbb{T})g_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}\] \[\left(\nabla_{\mu}+T_{\mu}\right)\left[f^{\prime}(\mathbb{T})\,S_{[\alpha}{}^{\mu}{}_{\beta]}\right]=0\,. \tag{5.38}\] It should be noted that in contrast to the \(f(\mathcal{R})\) field equations, the metric field equations of \(f(\mathbb{T})\) gravity are second order. Furthermore, in the case \(f(\mathbb{T})=\mathbb{T}\) the field equations reduce to the equations of TEGR, as had to be expected. In practice it is often helpful to re-write the metric field equations in the form \[f^{\prime}(\mathbb{T})\,G_{\mu\nu}-\frac{1}{2}\left(f(\mathbb{T})-f^{\prime}(\mathbb{T})\,\mathbb{T}\right)g_{\mu\nu}+f^{\prime\prime}(\mathbb{T})S_{(\mu\nu)}{}^{\alpha}\partial_{\alpha}\mathbb{T}=\kappa\,\mathcal{T}_{\mu\nu}\,. \tag{5.39}\] In this form it is evident that the case \(f^{\prime\prime}(\mathbb{T})=0\) with \(f^{\prime}(\mathbb{T})=1\), which is equivalent to \(f(\mathbb{T})=\mathbb{T}+\)const, simply reproduces Einstein's equations with a cosmological constant \(\Lambda=-\frac{\mathrm{const}}{2}\). This form of the equations also highlights that the dynamics will be modified whenever \(f^{\prime\prime}(\mathbb{T})\neq 0\). However, despite some efforts, it has so far not been possible to determine the precise number of degrees of freedom propagated by the theory. The number ranges between three [15] and five [16, 17]. #### 5.3.6 \(f(\mathbb{Q})\) Gravity The \(f(\mathbb{Q})\) action, which includes minimally coupled matter fields \(\Psi\), reads \[\mathcal{S}_{f(\mathbb{Q})}[g,\Gamma]\coloneqq-\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}^{4}x\,\sqrt{|g|}\,f(\mathbb{Q})+\mathcal{S}_{\mathrm{matter}}[g,\Psi] \tag{5.40}\] and it gives rise to the following metric and connection field equations: \[\frac{2}{\sqrt{|g|}}\nabla_{\alpha}\left[\sqrt{|g|}\,f^{\prime}(\mathbb{Q})\,P^{\alpha}{}_{\mu\nu}\right]+f^{\prime}(\mathbb{Q})\,q_{\mu\nu}-\frac{1}{2}f(\mathbb{Q})\,g_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}\] \[\nabla_{\mu}\nabla_{\nu}\left(\sqrt{|g|}\,f^{\prime}(\mathbb{Q})\,P^{\mu\nu}{}_{\alpha}\right)=0 \tag{5.41}\] These field equations are structurally very similar to the field equations of STEGR. However, it is possible to re-write the metric field equations as [82, 84] \[f^{\prime}(\mathbb{Q})\,G_{\mu\nu}-\frac{1}{2}\left(f(\mathbb{Q})-f^{\prime}(\mathbb{Q})\,\mathbb{Q}\right)g_{\mu\nu}+2f^{\prime\prime}(\mathbb{Q})P^{\alpha}{}_{\mu\nu}\partial_{\alpha}\mathbb{Q}=\kappa\,\mathcal{T}_{\mu\nu}\,. \tag{5.42}\] In this form, it is evident that \(f^{\prime\prime}(\mathbb{Q})=0\) with \(f^{\prime}(\mathbb{Q})=\text{const}.\) reproduces the Einstein field equations with a cosmological constant. It is also clear that the dynamics will be considerably modified by the last term on the left hand side. In fact, we will see in subsection 6.3 that this term has an effect on the counting of primary constraints and thus also impacts the number of physical degrees of freedom. 
It is however important to emphasize that the question how many degrees of freedom \(f(\mathbb{Q})\) propagates has not yet been answered satisfactorily. The current state will be discussed in more details in subsection 6.3. Luckily, not knowing the number of physical degrees of freedom does not constitute an obstacle when it comes to applying the theory to cosmology or black holes physics. In this context, or more generally whenever we want to study specific spacetimes, it can be useful to notice that \(f(\mathbb{Q})\) gravity can contain the GR solutions as special cases. In fact, if we impose the condition \[\mathbb{Q}=\mathbb{Q}_{0}=\text{const}. \tag{5.43}\] and if we assume that we can actually satisfy this condition, then it follows that the metric field equations take the form \[G_{\mu\nu}-\frac{1}{2}\frac{f(\mathbb{Q}_{0})-f^{\prime}(\mathbb{Q}_{0}) \mathbb{Q}_{0}}{f^{\prime}(\mathbb{Q}_{0})}\,g_{\mu\nu}=\frac{\kappa}{f^{ \prime}(\mathbb{Q}_{0})}\,\mathcal{T}_{\mu\nu}\,. \tag{5.44}\] Formally, this can be read as Einstein's field equations with an effective cosmological constant and a re-scaled energy-momentum tensor \[\Lambda_{\text{eff}} \coloneqq-\frac{1}{2}\frac{f(\mathbb{Q}_{0})-f^{\prime}(\mathbb{ Q}_{0})\mathbb{Q}_{0}}{f^{\prime}(\mathbb{Q}_{0})}\] \[\widehat{\mathcal{T}}_{\mu\nu} \coloneqq\frac{1}{f^{\prime}(\mathbb{Q}_{0})}\mathcal{T}_{\mu \nu}\,. \tag{5.45}\] Thus, it is possible to recover certain GR solutions in \(f(\mathbb{Q})\) gravity, even when \(f^{\prime\prime}(\mathbb{Q})\neq 0\), i.e., even when we are _not_ in the GR sector of the theory. For applications and formal considerations it can also be useful to know the Bianchi identities of \(f(\mathbb{Q})\) gravity. Given that the theory is generally covariant, it is possible to find such Bianchi identities by following the same reasoning as in GR (or TEGR and STEGR). One finds the identity \[\mathcal{D}_{\mu}\mathcal{M}^{\mu}{}_{\nu}+\mathcal{C}_{\nu}\equiv 0\,, \tag{5.46}\] where we have defined \[\mathcal{M}_{\mu\nu} \coloneqq\frac{2}{\sqrt{|g|}}\nabla_{\alpha}\left[\sqrt{|g|}\,f^{ \prime}(\mathbb{Q})\,P^{\alpha}{}_{\mu\nu}\right]+f^{\prime}(\mathbb{Q})\,q_{ \mu\nu}-\frac{1}{2}f(\mathbb{Q})\,g_{\mu\nu}\] \[\mathcal{C}_{\alpha} \coloneqq\nabla_{\mu}\nabla_{\nu}\left(\sqrt{|g|}\,f^{\prime}( \mathbb{Q})\,P^{\mu\nu}{}_{\alpha}\right)\,. \tag{5.47}\] We emphasize that in contrast to STEGR, \(\mathcal{M}_{\mu\nu}\) does _not_ satisfy the identity \(\mathcal{D}_{\mu}\mathcal{M}^{\mu}{}_{\nu}=0\) and thus the connection field equations are _not_ just trivial identities. Quite on the contrary, the connection field equations are now dynamical equations for the connection, which can have physical degrees of freedom. What one can conclude, however, is that when the metric field equations are satisfied, i.e., when \(\mathcal{M}_{\mu\nu}=\kappa\,\mathcal{T}_{\mu\nu}\) holds, then the connection field equations are also satisfied, due to the Bianchi identities. In fact, we easily find \[\mathcal{D}_{\mu}\mathcal{M}^{\mu}{}_{\nu}+\mathcal{C}_{\nu}=\kappa\, \underbrace{\mathcal{D}_{\mu}\mathcal{T}^{\mu}{}_{\nu}}_{=0}+\mathcal{C}_{\nu}= \mathcal{C}_{\nu}\equiv 0\,. \tag{5.48}\] This fact can, for instance, be used to simplify the Hamiltonian analysis of the theory [72]. ### \(\boldsymbol{f}(\mathbb{G})\) Gravity As discussed in subsection 4.5, the General Teleparallel Equivalent of GR encompasses TEGR and STEGR at the same time. That is, TEGR and STEGR emerge from this more general theory as partially gauge-fixed theories. 
It is therefore no surprise that one can also consider the non-linear extension \(\mathbb{G}\mapsto f(\mathbb{G})\) and that this modification has some relations to \(f(\mathbb{T})\) and \(f(\mathbb{Q})\) gravity. Following [85, 103], it is convenient to first introduce the auxiliary tensors \[M^{\alpha}{}_{\mu\nu} \coloneqq\Gamma^{\alpha}{}_{\mu\nu}-\left\{\begin{aligned} \alpha \\ \mu\nu\end{aligned}\right\}=K^{\alpha}{}_{\mu\nu}+L^{\alpha}{}_{\mu\nu}\] \[Z_{\alpha}{}^{\mu\nu} =-M_{\alpha}{}^{\mu\nu}-M^{\nu}{}_{\alpha}{}^{\mu}+M^{\rho}{}_{ \alpha\rho}g^{\mu\nu}+M^{\mu\rho}{}_{\rho}\delta^{\nu}{}_{\alpha}\,. \tag{5.49}\] With their help, one can express the field equations of \(f(\mathbb{G})\) in the relatively compact form \[f^{\prime}(\mathbb{G})\,G_{\mu\nu}-\frac{1}{2}\left(f(\mathbb{G} )-f^{\prime}(\mathbb{G})\,\mathbb{G}\right)g_{\mu\nu}+\mathcal{D}_{(\mu}f^{ \prime}(\mathbb{G})\,M^{\sigma}{}_{\nu)\sigma}+f^{\prime\prime}(\mathbb{G}) \left(\,M^{[\rho\sigma]}{}_{\sigma}\,g_{\mu\nu}-\,M^{\rho}{}_{(\mu\nu)} \right)\partial_{\rho}\mathbb{G}=\kappa\,\mathcal{T}_{\mu\nu}\,,\] \[\nabla_{\rho}\left(f^{\prime}(\mathbb{G})\,Z_{\mu}{}^{\nu\rho} \right)-f^{\prime}(\mathbb{G})\,M^{\lambda}{}_{\rho\lambda}Z_{\mu}{}^{\nu \rho}=0\,. \tag{5.50}\] It can be verified that by imposing either \(Q_{\alpha\mu\nu}=0\) or \(T^{\alpha}{}_{\mu\nu}=0\), which we should read as partial gauge-fixing conditions for the connection, one recovers the \(f(\mathbb{T})\) and \(f(\mathbb{Q})\) field equations, respectively. Moreover, just as before, the metric field equations reveal that choosing \(f^{\prime\prime}(\mathbb{G})=0\) with \(f^{\prime}(\mathbb{G})\neq 0\) simply yields Einstein's field equations. Finally, if we impose the condition \(\mathbb{G}=\mathbb{G}_{0}=\text{const}\), we find that \(f(\mathbb{G})\) can contain some of the GR solutions, since then the field equations reduce to \[G_{\mu\nu}-\frac{1}{2}\frac{f(\mathbb{G}_{0})-f^{\prime}(\mathbb{G}_{0}) \mathbb{G}_{0}}{f^{\prime}(\mathbb{G}_{0})}g_{\mu\nu}=\frac{\kappa}{f^{\prime }(\mathbb{G}_{0})}\mathcal{T}_{\mu\nu}\,. \tag{5.51}\] That is, we obtain Einstein's field equations with an effective cosmological constant and a rescaled energy-momentum tensor. All of this is unsurprising, since all these properties hold in \(f(\mathbb{T})\) and \(f(\mathbb{Q})\) gravity. However, since \(f(\mathbb{G})\) does not make use of a gauge-fixing condition such as \(Q_{\alpha\mu\nu}=0\) or \(T^{\alpha}{}_{\mu\nu}=0\), it is possible that it leaves more freedom to find interesting beyond-GR solutions. A first attempt at finding cosmological solutions has been carried out in [85]. The fact that removing gauge-fixing conditions can have advantages has also been shown in [13], where the so-called canonical frame has been scrutinized in the context of a general teleparallel cosmology. ## 6 \(f(\mathbb{Q})\) Gravity The non-metricity formulation of gravity, and in particular the non-linear extension \(f(\mathbb{Q})\), have witnessed a flurry of research activities over the past few years. Most of these activities concern applications to cosmology and black hole physics. This is natural, considering that one of the motivations for studying non-linear extensions is the possibility to explain phenomena which in standard GR require the introduction of dark energy, the inflaton field, and dark matter. In this section we give an overview over the most important results which have been obtained in applications of \(f(\mathbb{Q})\) gravity to cosmology and black holes. 
We also briefly touch upon the question of how many degrees of freedom the theory propagates. ### Cosmology in \(f(\mathbb{Q})\) Given that the coincident gauge can always be used in symmetric teleparallel theories of gravity, the simplest thing to do when working on applications of the theory is to use this particular gauge plus a fixed background metric. This was precisely the ansatz in [99], where \(f(\mathbb{Q})\) cosmology was studied for the first time. Specifically, the authors used the ansatz \[\Gamma^{\alpha}{}_{\mu\nu}=0\qquad\qquad\qquad\text{and}\qquad\qquad\qquad g_{ \mu\nu}=\begin{pmatrix}-N(t)^{2}&0&0&0\\ 0&a(t)^{2}&0&0\\ 0&0&a(t)^{2}&0\\ 0&0&0&a(t)^{2}\end{pmatrix}\,, \tag{6.1}\] where \(N(t)\) and \(a(t)\) are the usual lapse function and scale factor of the FLRW spacetime. As it turns out, the non-metricity scalar for this ansatz is simply given by \[\mathbb{Q}=6\frac{H^{2}}{N^{2}}\,, \tag{6.2}\] where \(H\coloneqq\frac{\dot{a}}{a}\) is the usual Hubble function. It is evident that the symmetry-reduced action \[\mathcal{S}[N,a]\coloneqq-\frac{1}{2\kappa}\int_{\mathcal{M}}\mathrm{d}t\, \mathrm{d}^{3}\vec{x}\,a(t)^{3}\,N(t)\,f(\mathbb{Q})\,, \tag{6.3}\] which is the \(f(\mathbb{Q})\) action evaluated on the FLRW metric and in coincident gauge, has a residual time-reparametrization invariance [98, 99]. By exploiting this reparametrization freedom, we can fix the lapse function to unity, \(N(t)=1\). The idea now is to study the resulting cosmological equations \[6f^{\prime}\,H^{2}-\frac{1}{2}f =\rho\] \[\left(12H^{2}\,f^{\prime\prime}+f^{\prime}\right)\dot{H} =-\frac{1}{2}(\rho+p)\,, \tag{6.4}\] where \(\rho\) and \(p\) denote the density and pressure, respectively, and where we defined \(f^{\prime}\coloneqq\frac{\mathrm{d}f}{\mathrm{d}\mathbb{Q}}\) and \(f^{\prime\prime}\coloneqq\frac{\mathrm{d}^{2}f}{\mathrm{d}\mathbb{Q}^{2}}\). As always, standard matter fields also satisfy the continuity equation \[\dot{\rho}=-3H\left(\rho+p\right)\,. \tag{6.5}\] A particularly interesting class of theories emerges if we demand that \(f\) satisfies the equation \[6f^{\prime}\,H^{2}-\frac{1}{2}f=\frac{1}{2\kappa}\mathbb{Q}\,, \tag{6.6}\] with \(\kappa=8\pi G\), since this gives the same background evolution as GR, but the evolution of perturbations is subjected to modifications. Using equation (6.2), we can rewrite the condition (6.6) equivalently as \[\mathbb{Q}\,f^{\prime}(\mathbb{Q})-\frac{1}{2}f(\mathbb{Q})=\frac{1}{2\kappa} \mathbb{Q}\,, \tag{6.7}\] which is a simple first order differential equation for \(f\), which is solved by \[f(\mathbb{Q})=\frac{1}{\kappa}\left(\mathbb{Q}+M\,\sqrt{\mathbb{Q}}\right)\,. \tag{6.8}\] Here, \(M\) is an integration constant and clearly the special case \(M=0\) corresponds to STEGR, while \(M\neq 0\) leads to a \(1\)-parameter family of modified theories. As mentioned before, the background evolution for this family of theories is the same as in GR. In order to discriminate between different values of \(M\), it is necessary to study perturbations, which exhibit _different_ behaviour than in GR. Another interesting ansatz for studying \(f(\mathbb{Q})\) cosmology is a power-law modification of STEGR: \[f(\mathbb{Q})=\frac{1}{\kappa}\left[\mathbb{Q}-6\lambda\,M^{2}\left(\frac{ \mathbb{Q}}{6M^{2}}\right)^{\alpha}\right]\,, \tag{6.9}\] where \(\lambda\) and \(\alpha\) are dimensionless parameters. 
The modified Friedmann equation for this ansatz reads \[H^{2}\left[1+(1-2\alpha)\lambda\left(\frac{H^{2}}{M^{2}}\right)^{\alpha-1} \right]=\frac{\kappa}{3}\rho \tag{6.10}\] The previous \(f\) is contained as a special case for the choice \(\alpha=\frac{1}{2}\), while STEGR emerges from \(\alpha=1\). By inspecting the form of the modified Friedmann equation one can infer that for \(\alpha<1\) the corrections to the GR evolution become important at low curvature, while for \(\alpha>1\) corrections become relevant in the high curvature regime. In other words, theories with \(\alpha>1\) play a role in the early Universe and theories with \(\alpha<1\) provide us with corrections to late-time cosmology. This opens the possibility for modified inflationary scenarios or a description of the late-time Universe without dark energy. In fact, various \(f(\mathbb{Q})\) cosmology models have been studied and applied to questions pertaining to the late-time Universe [16, 17, 18, 19], large scale structures [21], relativistic versions of MOND [22, 23], bouncing cosmologies [24, 25, 26], and quantum cosmology [27, 28]. A lot of effort has also gone into constraining or testing \(f(\mathbb{Q})\) models [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. The majority of the literature on \(f(\mathbb{Q})\) cosmology makes use of the coincident gauge. However, at this point we would like to recall the discussion on the \(f(\mathbb{Q})\) field equations from subsection 5.3, which showed that the connection field equations are no longer trivial identities (this was the case in STEGR). From this it can be expected that \(f(\mathbb{Q})\) propagates more than the two degrees of freedom of GR. When working in the coincident gauge and using the FLRW metric as an ansatz, interesting cosmological models can emerge, but we become largely oblivious to the additional degrees of freedom because of these overly restrictive choices we made. There are two ways out of this. The first is perturbation theory around FLRW, while still using the coincident gauge. This avenue was explored in [99] and it led to the insight that \(f(\mathbb{Q})\) gravity propagates _at least_ two additional degrees of freedom. The second option is to abandon the coincident gauge and instead work with a metric and connection which are both compatible with the cosmological principles of homogeneity and isotropy. The advantage of this method is that the connection is not completely trivial and it can enrich the phenomenology to be studied. A systematic study of this approach was undertaken in [83, 84] and we shall briefly review the main steps and results. #### 6.1.1 Symmetries and symmetry-reduction of the metric Following [81, 82, 83, 84, 85, 86], we define a (continuous) symmetry of a metric-affine geometry as follows: Let \(\phi_{s}:\mathbb{R}\times\mathcal{M}\rightarrow\mathcal{M}\) be a \(1\)-parameter family of diffeomorphisms with \(\phi_{s=0}=\text{id}\), which is smooth in \(s\) and which has a generating vector field \(v\coloneqq\frac{\mathrm{d}\phi_{s}}{\mathrm{d}s}\Big{|}_{s=0}\). We say that \(\phi_{s}\) is a continuous symmetry of the metric-affine geometry if and only if \[\begin{cases}\phi_{s}^{*}g_{\mu\nu}&\stackrel{{!}}{{=}}&g_{\mu\nu} \\ \phi_{s}^{*}\Gamma^{\alpha}{}_{\mu\nu}&\stackrel{{!}}{{=}}&\Gamma^ {\alpha}{}_{\mu\nu}\end{cases}\,. \tag{6.1}\] These are the **symmetry conditions**. 
In case there are also tensorial matter fields \(\Psi\) present, we have to impose the additional condition \[\phi_{s}^{*}\Psi \stackrel{{!}}{{=}} \Psi \tag{6.2}\] because otherwise the field equations would be inconsistent. Heuristically, this can also be understood as follows: The right hand side of the \(f(\mathbb{Q})\) field equations contains the energy-momentum tensor of the matter fields. It sources the gravitational field described by \((g_{\mu\nu},\Gamma^{\alpha}{}_{\mu\nu})\). If the matter sources do _not_ respect certain symmetries, it is hard to see how they could give rise to a gravitational field which _does_ respect these symmetries. Given that the family of diffeomorphisms \(\phi_{s}\) is smooth in \(s\), we can re-write the symmetry conditions equivalently as \[\begin{cases}\mathcal{L}_{v}g_{\mu\nu}&\stackrel{{!}}{{=}}&0\\ \mathcal{L}_{v}\Gamma^{\alpha}{}_{\mu\nu}&\stackrel{{!}}{{=}}&0\\ \mathcal{L}_{v}\Psi&\stackrel{{!}}{{=}}&0\end{cases}\,, \tag{6.3}\] where \(\mathcal{L}_{v}\) denotes the Lie derivative along the vector field \(v\) which generates the symmetry \(\phi_{s}\). For a spacetime which is homogeneous and isotropic, the symmetry generators (written in spherical coordinates) are \[\mathcal{R}_{x} \coloneqq\sin\phi\,\partial_{\theta}+\frac{\cos\phi}{\tan\theta}\,\partial_{\phi} \qquad\qquad \mathcal{T}_{x} \coloneqq\chi\,\sin\theta\,\cos\phi\,\partial_{r}+\frac{\chi}{r}\,\cos\theta\,\cos\phi\,\partial_{\theta}-\frac{\chi}{r}\,\frac{\sin\phi}{\sin\theta}\,\partial_{\phi}\] \[\mathcal{R}_{y} \coloneqq-\cos\phi\,\partial_{\theta}+\frac{\sin\phi}{\tan\theta}\,\partial_{\phi} \qquad\qquad \mathcal{T}_{y} \coloneqq\chi\,\sin\theta\,\sin\phi\,\partial_{r}+\frac{\chi}{r}\,\cos\theta\,\sin\phi\,\partial_{\theta}+\frac{\chi}{r}\,\frac{\cos\phi}{\sin\theta}\,\partial_{\phi}\] \[\mathcal{R}_{z} \coloneqq-\partial_{\phi} \qquad\qquad \mathcal{T}_{z} \coloneqq\chi\,\cos\theta\,\partial_{r}-\frac{\chi}{r}\,\sin\theta\,\partial_{\theta}\,, \tag{6.4}\] where \(\mathcal{R}_{i}\) are the **generators of spatial rotations**, \(\mathcal{T}_{i}\) are the **generators of spatial translations**, and where we have introduced \[\chi \coloneqq\sqrt{1-k\,r^{2}}\,. \tag{6.5}\] As explained in [84, 81], it actually suffices to only use \(\mathcal{R}_{x}\), \(\mathcal{R}_{y}\), \(\mathcal{R}_{z}\), and \(\mathcal{T}_{x}\), since the remaining two generators can be obtained by taking Lie brackets of these four. Moreover, imposing the conditions \[\mathcal{L}_{\mathcal{R}_{i}}g_{\mu\nu} \stackrel{{!}}{{=}}0 \text{and} \mathcal{L}_{\mathcal{T}_{i}}g_{\mu\nu} \stackrel{{!}}{{=}}0 \tag{6.6}\] leads to the well-known result \[g_{\mu\nu} =\begin{pmatrix}g_{tt}(t)&0&0&0\\ 0&\frac{g_{rr}(t)}{\chi^{2}}&0&0\\ 0&0&g_{rr}(t)\,r^{2}&0\\ 0&0&0&g_{rr}(t)\,r^{2}\,\sin^{2}\theta\end{pmatrix}\,. \tag{6.7}\] Thus, the initially ten independent components of the metric are reduced to merely two independent components, namely \(g_{tt}\) and \(g_{rr}\), which can only depend on time. Also, the metric has a simple diagonal form and the parameter \(k\in\mathbb{R}\) famously determines the spatial curvature: If \(k=0\), then the spatial sections are all flat. For \(k>0\) one obtains spherical sections, while \(k<0\) describes hyperbolic spatial sections. 
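As a quick cross-check (a minimal sympy sketch of our own, independent of the derivation in the references above), one can verify that the symmetry-reduced metric given above indeed has vanishing Lie derivative along the rotation and translation generators; we show the check for \(\mathcal{R}_{x}\) and \(\mathcal{T}_{x}\):

```python
import sympy as sp

t, r, th, ph, k = sp.symbols('t r theta phi k', real=True)
coords = [t, r, th, ph]
gtt = sp.Function('g_tt')(t)
grr = sp.Function('g_rr')(t)
chi = sp.sqrt(1 - k*r**2)

# Symmetry-reduced metric: diag(g_tt(t), g_rr(t)/chi^2, g_rr r^2, g_rr r^2 sin^2(theta)).
g = sp.diag(gtt, grr/chi**2, grr*r**2, grr*r**2*sp.sin(th)**2)

def lie_derivative_metric(v):
    """(L_v g)_{mu nu} = v^l d_l g_{mu nu} + g_{l nu} d_mu v^l + g_{mu l} d_nu v^l."""
    L = sp.zeros(4, 4)
    for mu in range(4):
        for nu in range(4):
            expr = sum(v[l]*sp.diff(g[mu, nu], coords[l]) for l in range(4))
            expr += sum(g[l, nu]*sp.diff(v[l], coords[mu]) for l in range(4))
            expr += sum(g[mu, l]*sp.diff(v[l], coords[nu]) for l in range(4))
            L[mu, nu] = sp.simplify(expr)
    return L

# Generators as component lists (v^t, v^r, v^theta, v^phi).
R_x = [0, 0, sp.sin(ph), sp.cos(ph)/sp.tan(th)]
T_x = [0, chi*sp.sin(th)*sp.cos(ph), chi*sp.cos(th)*sp.cos(ph)/r,
       -chi*sp.sin(ph)/(r*sp.sin(th))]

print(lie_derivative_metric(R_x).is_zero_matrix)   # True
print(lie_derivative_metric(T_x).is_zero_matrix)   # True
```

The same routine, applied to a generic symmetric connection via the corresponding (non-tensorial) Lie-derivative formula, is what produces the symmetry-reduced connection discussed next.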
#### 6.2.2 Symmetry-reduction of the connection According to our definition of symmetries for metric-affine geometries, we have to impose the conditions \[\mathcal{L}_{\mathcal{R}_{i}}\Gamma^{\alpha}{}_{\mu\nu}\stackrel{{!}}{{=}}0\] and \[\mathcal{L}_{\mathcal{T}_{i}}\Gamma^{\alpha}{}_{\mu\nu}\stackrel{{!}}{{=}}0 \tag{6.18}\] on the connection. The resulting equations are numerous and long, but straightforward to solve. One finds [83, 84] \[\Gamma^{t}{}_{\mu\nu} =\begin{pmatrix}C_{1}&0&0&0\\ 0&\frac{C_{2}}{\chi^{2}}&0&0\\ 0&0&C_{2}\,r^{2}&0\\ 0&0&0&C_{2}\,r^{2}\,\sin^{2}\theta\end{pmatrix} \Gamma^{r}{}_{\mu\nu} =\begin{pmatrix}0&C_{3}&0&0\\ C_{3}&\frac{k\,r}{\chi^{2}}&0&0\\ 0&0&-r\,\chi^{2}&-C_{5}\,r^{2}\,\chi^{2}\,\sin\theta\\ 0&0&-C_{5}\,r^{2}\,\chi^{2}\,\sin\theta&-r\,\chi^{2}\,\sin^{2}\theta\end{pmatrix}\] \[\Gamma^{\theta}{}_{\mu\nu} =\begin{pmatrix}0&0&C_{3}&0\\ 0&0&\frac{1}{r}&\frac{C_{5}\,\sin\theta}{\chi}\\ C_{4}&\frac{1}{r}&0&0\\ 0&-\frac{C_{5}\,\csc\theta}{\chi}&0&\cot\theta\\ C_{4}&\frac{1}{r}&\cot\theta&0\end{pmatrix}\,, \tag{6.19}\] where \(C_{1}\), \(C_{2}\), \(C_{3}\), \(C_{4}\), and \(C_{5}\) are arbitrary functions of time. It should be noted that the initially \(4\times 4\times 4=64\) independent components of the connection have been reduced to these five functions and a few trigonometric functions. However, it should also be noted that the connection is _not_ symmetric and thus not torsionless. In fact, we have not yet implemented the postulates of vanishing torsion and vanishing curvature. #### 6.2.3 Implementing the postulates of vanishing torsion and curvature The vanishing of torsion is straightforward to implement. We simply have to demand that the symmetry-reduced connection (6.19) is symmetric, which leads to the two conditions \[C_{3}-C_{4} =0 \text{and} C_{5} =0\,. \tag{6.20}\] This leaves us with \(C_{1}\), \(C_{2}\), and \(C_{3}\) as free functions. Given that so many connection components are zero and that the free functions only depend on time, it is not surprising that the condition of vanishing curvature leaves us algebraic equations and first order differential equations. Specifically, \(R^{\alpha}{}_{\mu\nu\rho}\stackrel{{!}}{{=}}0\) is equivalent to the set of equations \[C_{1}\,C_{3}-C_{3}^{2}-\dot{C}_{3} =0\] \[C_{1}\,C_{2}-C_{2}\,C_{3}+\dot{C}_{2} =0\] \[k+C_{2}\,C_{3} =0\,. \tag{6.21}\] Notice that the spatially flat case is special, since then we have \(C_{2}\,C_{3}=0\), which has three possible solutions: \[\text{Case I:} C_{2}=0,\ C_{3}\neq 0\,.\] \[\text{Case II:} C_{2}\neq 0,\ C_{3}=0\,.\] \[\text{Case III:} C_{2}=0,\ C_{3}=0\,.\] If \(k\neq 0\), the situation is considerably simpler. Since neither \(C_{2}\) nor \(C_{3}\) can be zero, we obtain \[C_{3}=-\frac{k}{C_{2}}\,. \tag{6.22}\] Using this result, the two differential equations (6.21) reduce to a single equation: \[k+C_{1}\,C_{2}+\dot{C}_{2}=0\,. \tag{6.23}\] Given that \(C_{2}\neq 0\), we can solve this last equation for \(C_{1}\), obtaining \[C_{1}=-\frac{k+\dot{C}_{2}}{C_{2}}\,. 
\tag{6.24}\] We finally arrive at the conclusion that a connection which respects homogeneity and isotropy, and which is also torsionless and flat under the assumption that \(k\neq 0\) has the form \[\Gamma^{t}{}_{\mu\nu} =\begin{pmatrix}-\frac{k+\dot{C}_{2}}{C_{2}}&0&0&0\\ 0&\frac{C_{2}}{\chi^{2}}&0&0\\ 0&0&r^{2}\,C_{2}&0\\ 0&0&0&r^{2}\,C_{2}\,\sin^{2}\theta\end{pmatrix} \Gamma^{r}{}_{\mu\nu} =\begin{pmatrix}0&-\frac{k}{C_{2}}&0&0\\ -\frac{k}{C_{2}}&\frac{k}{\chi^{2}}&0&0\\ 0&0&-r\,\chi^{2}&0\\ 0&0&0&-r\,\chi^{2}\,\sin^{2}\theta\end{pmatrix}\] \[\Gamma^{\theta}{}_{\mu\nu} =\begin{pmatrix}0&0&-\frac{k}{C_{2}}&0\\ 0&0&\frac{1}{r}&0\\ -\frac{k}{C_{2}}&\frac{1}{r}&0&0\\ 0&0&0&-\sin\theta\,\cos\theta\end{pmatrix} \Gamma^{\phi}{}_{\mu\nu} =\begin{pmatrix}0&0&0&-\frac{k}{c}\\ 0&0&0&\frac{1}{r}\\ 0&0&0&\cot\theta\\ -\frac{k}{C_{2}}&\frac{1}{r}&\cot\theta&0\end{pmatrix}\,. \tag{6.25}\] We dub this connection \(\Gamma^{(k)}\). Now we consider to spatially flat sections, \(k=0\), case by case. For Case I, defined by \(C_{2}=0\) under the assumption that \(C_{3}\neq 0\), we obtain the connection \(\Gamma^{(\text{I})}\), which is of the form \[\Gamma^{t}{}_{\mu\nu} =\begin{pmatrix}C_{3}+\frac{\dot{C}_{3}}{C_{3}}&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix} \Gamma^{r}{}_{\mu\nu} =\begin{pmatrix}0&C_{3}&0&0\\ C_{3}&0&0&0\\ 0&0&-r&0\\ 0&0&0&-r\,\sin^{2}\theta\end{pmatrix}\] \[\Gamma^{\theta}{}_{\mu\nu} =\begin{pmatrix}0&0&C_{3}&0\\ 0&0&\frac{1}{r}&0\\ C_{3}&\frac{1}{r}&0&0\\ 0&0&0&-\sin\theta\,\cos\theta\end{pmatrix} \Gamma^{\phi}{}_{\mu\nu} =\begin{pmatrix}0&0&0&C_{3}\\ 0&0&0&\frac{1}{r}\\ 0&0&0&\cot\theta\\ C_{3}&\frac{1}{r}&\cot\theta&0\end{pmatrix}\,, \tag{6.26}\] This connection depends on the free function \(C_{3}(t)\). In the second case, which is based on the assumption \(C_{2}\neq 0\), we obtain the connection \(\Gamma^{(\text{II})}\), parametrized as \[\Gamma^{t}{}_{\mu\nu} =\begin{pmatrix}-\frac{\dot{C}_{2}}{C_{2}}&0&0&0\\ 0&C_{2}&0&0\\ 0&0&r^{2}\,C_{2}&0\\ 0&0&0&r^{2}\,C_{2}\,\sin^{2}\theta\end{pmatrix} \Gamma^{r}{}_{\mu\nu} =\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&-r&0\\ 0&0&0&-r\,\sin^{2}\theta\end{pmatrix}\] \[\Gamma^{\phi}{}_{\mu\nu} =\begin{pmatrix}0&0&0&0\\ 0&0&0&\frac{1}{r}\\ 0&0&0&\cot\theta\\ 0&\frac{1}{r}&\cot\theta&0\end{pmatrix}\,. \tag{6.27}\] Finally, the third case, which is clearly the simplest, gives us the connection \(\Gamma^{\rm(III)}\), which can explicitly be written as \[\Gamma^{t}{}_{\mu\nu}=\begin{pmatrix}-C_{1}&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix} \Gamma^{r}{}_{\mu\nu}=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&-r&0\\ 0&0&0&-r\,\sin^{2}\theta\end{pmatrix}\] \[\Gamma^{\theta}{}_{\mu\nu}=\begin{pmatrix}0&0&0&0\\ 0&0&\frac{1}{r}&0\\ 0&\frac{1}{r}&0&0\\ 0&0&0&-\sin\theta\,\cos\theta\end{pmatrix} \Gamma^{\phi}{}_{\mu\nu}=\begin{pmatrix}0&0&0&0\\ 0&0&0&\frac{1}{r}\\ 0&0&0&\cot\theta\\ 0&\frac{1}{r}&\cot\theta&0\end{pmatrix}\,. \tag{6.28}\] In conclusion, we find that a connection which is homogeneous, isotropic, torsionless, and flat can be parametrized in four distinct ways. The connections \(\Gamma^{(k)}\), \(\Gamma^{(\rm I)}\), \(\Gamma^{\rm II}\), and \(\Gamma^{\rm(III)}\) could be the source of interesting and rich cosmological models. Indeed, for the connection \(\Gamma^{\rm(II)}\) with the choice \(f(\mathbb{Q})=\mathbb{Q}^{\kappa}\) (assuming \(\kappa\geq 2\)), exact vacuum solutions were obtained [84] which can reproduce the scale factor of a fluid with equation of state \(p=w\,\rho\), for some constant \(w\). 
The same exact vacuum solution [84] can also mimic de Sitter space. This could be of interest for investigations concerning the early Universe, since this solution can naturally drive inflation. The effects of using different connections in \(f(\mathbb{Q})\) cosmology have been studied in [84, 85] and in a series of follow-up works.

### Stationary and spherically symmetric solutions in \(f(\mathbb{Q})\) gravity

To find stationary and spherically symmetric solutions, one proceeds in three steps:
1. Symmetries: Impose stationarity and spherical symmetry on the metric and on the connection;
2. Geometric postulates: Use the metric and connection found above and implement the postulates of vanishing curvature and vanishing torsion, \(R^{\alpha}{}_{\mu\nu\rho}=0\) and \(T^{\alpha}{}_{\mu\nu}=0\);
3.
Field equations: Take the metric and connection which satisfy all symmetries and geometric postulates and plug them into the \(f(\mathbb{Q})\) field equations. We provide a brief sketch of the individual steps. The ultimate goal is to find the _simplest_ representation of a stationary, spherically symmetric metric-affine geometry \((\mathcal{M},g,\Gamma)\), before studying the field equations of \(f(\mathbb{Q})\) gravity. For details we refer the reader to [82]. Symmetries In the present context we are interested in finding solutions which are stationary and spherically symmetric. Given the notion of spacetime symmetry discussed in the previous subsection, this means that we have to impose \[\begin{cases}\mathcal{L}_{v}g_{\mu\nu}&\stackrel{{!}}{{=}}&0\\ \mathcal{L}_{v}\Gamma^{\alpha}{}_{\mu\nu}&\stackrel{{!}}{{=}}&0\\ \mathcal{L}_{v}\Psi&\stackrel{{!}}{{=}}&0\end{cases}\,, \tag{6.30}\] for the vector fields which generate temporal translations and spatial rotations around the origin: \[\mathcal{T} \coloneqq\partial_{t}\] (generator of time-translations) \[\mathcal{R}_{x} \coloneqq\sin\phi\,\partial_{\theta}+\frac{\cos\phi}{\tan\theta} \partial_{\phi}\] \[\mathcal{R}_{y} \coloneqq-\cos\phi\partial_{\theta}+\frac{\sin\phi}{\tan\theta} \partial_{\phi}\] (generators of rotations) \[\mathcal{R}_{z} \coloneqq-\partial_{\phi} \tag{6.31}\] Obtaining time-translation invariance and invariance with respect to rotations in the \(\phi\)-direction is easy: All metric and connection components have to be independent of the coordinates \(t\) and \(\phi\). Invariance with respect to the \(\mathcal{R}_{x}\) and \(\mathcal{R}_{y}\) generators requires a little more work. However, the result for the metric is well-known (see for instance [74, 77]): A metric which is time-translation and rotationally invariant necessarily has the form \[g_{\mu\nu}=\begin{pmatrix}g_{tt}&g_{tr}&0&0\\ g_{tr}&g_{rr}&0&0\\ 0&0&g_{\theta\theta}&0\\ 0&0&0&g_{\theta\theta}\sin^{2}\theta\end{pmatrix} \tag{6.32}\] with respect to the coordinates \((t,r,\theta,\phi)\). In particular, we point out that \(g_{tt},g_{tr},g_{rr}\), and \(g_{\theta\theta}\) are only functions of \(r\). In the case of the connection it is easier to first impose the postulate of vanishing torsion and then to work out the remaining two symmetry conditions. Since torsion is the anti-symmetric part of the connection, a torsionless connection is simply one that is symmetric in its lower two indices. Imposing this condition also has the effect of reducing the number of independent components of the connection from \(4\times 4\times 4=64\) to \(4\times\frac{4\times(4+1)}{2}=40\). Furthermore, it is more convenient to consider the following linear combinations when imposing the re maining symmetry conditions: \[\cos\phi\,\mathcal{L}_{\mathcal{R}_{x}}\Gamma^{\alpha}{}_{\mu\nu}+ \sin\phi\,\mathcal{L}_{\mathcal{R}_{y}}\Gamma^{\alpha}{}_{\mu\nu}\stackrel{{!}}{{=}}0\] \[\sin\phi\,\mathcal{L}_{\mathcal{R}_{x}}\Gamma^{\alpha}{}_{\mu\nu}- \cos\phi\,\mathcal{L}_{\mathcal{R}_{y}}\Gamma^{\alpha}{}_{\mu\nu}\stackrel{{!}}{{=}}0\,. \tag{6.33}\] We emphasize that imposing these conditions is strictly equivalent to imposing \(\mathcal{L}_{\mathcal{R}_{x}}\Gamma^{\alpha}{}_{\mu\nu}\stackrel{{!}}{{=}}0\) and \(\mathcal{L}_{\mathcal{R}_{y}}\Gamma^{\alpha}{}_{\mu\nu}\stackrel{{!}}{{=}}0\). 
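As a small illustration of these symmetry conditions (an added sketch, not part of the original discussion), one can let sympy confirm that the ansatz (6.32) indeed satisfies \(\mathcal{L}_{\mathcal{R}_{i}}g_{\mu\nu}=0\) for all three rotation generators of (6.31):

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
x = [t, r, th, ph]
gtt, gtr, grr, gthth = (sp.Function(name)(r) for name in ('g_tt', 'g_tr', 'g_rr', 'g_thth'))
g = sp.Matrix([[gtt, gtr, 0,     0],
               [gtr, grr, 0,     0],
               [0,   0,   gthth, 0],
               [0,   0,   0,     gthth*sp.sin(th)**2]])   # ansatz (6.32)

Z = sp.S.Zero
Rx = [Z, Z,  sp.sin(ph),  sp.cos(ph)/sp.tan(th)]          # rotation generators (6.31)
Ry = [Z, Z, -sp.cos(ph),  sp.sin(ph)/sp.tan(th)]
Rz = [Z, Z,  Z,           -sp.S.One]

def lie_metric(v):
    # (L_v g)_{mn} = v^l d_l g_{mn} + g_{ln} d_m v^l + g_{ml} d_n v^l
    L = sp.zeros(4, 4)
    for m in range(4):
        for n in range(4):
            L[m, n] = sum(v[l]*sp.diff(g[m, n], x[l])
                          + g[l, n]*sp.diff(v[l], x[m])
                          + g[m, l]*sp.diff(v[l], x[n]) for l in range(4))
    return sp.simplify(L)

print(all(lie_metric(v) == sp.zeros(4, 4) for v in (Rx, Ry, Rz)))   # expected: True
```

The analogous computation for the connection components is what produces the algebraic relations and partial differential equations discussed next.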
By imposing the first linear combination we learn that * Twenty of the \(40\) components of \(\Gamma^{\alpha}{}_{\mu\nu}\) are exactly zero: * Two components are given in terms of trigonometric functions; * Six components are determined through algebraic relations to other components of the connection. Hence, out of the initially \(64\) independent components of the connection, three of the symmetry conditions and the postulate of vanishing torsion bring this number down to only \(40-20-2-6=12\) independent components which are functions of \(r\) and \(\theta\). Finally, the second linear combination implements the last symmetry condition. It leads to a set of twelve first order partial differential equations for precisely the twelve independent connection components we are left with after imposing the first three symmetry conditions. These equations can be solved, but because these are partial differential equations with respect to \(\theta\), the solutions all depend on \(r\). Hence, we find that the symmetry conditions together with the postulate of vanishing torsion leave us with twelve independent connection components, all of which are purely functions of \(r\) and nothing else. Geometric postulatesSince we have already implemented the postulate of vanishing torsion, we are left with imposing the postulate of vanishing curvature. As can be expected from the form of the curvature tensor and the fact that \(20\) connection components vanish, this will lead to a set of algebraic equations and a set of first order partial differential equations. The detailed process of how to consistently solve all algebraic and differential equations is explained in [82], where it is found that this ultimately leads to two different sets of solutions. The first solution set is defined as follows: All connection components can be expressed in terms of the three arbitrary functions \(\Gamma^{t}{}_{rr}(r)\), \(\Gamma^{r}{}_{rr}(r)\), \(\Gamma^{\phi}{}_{r\phi}(r)\), the real constant \(c\neq 0\), and trigonometric functions. Concretely, the connection takes the form \[\Gamma^{t}{}_{\mu\nu} =\begin{pmatrix}c&\Gamma^{\phi}{}_{r\phi}&0&0\\ \Gamma^{\phi}{}_{r\phi}&\Gamma^{t}{}_{rr}&0&0\\ 0&0&-\frac{1}{c}&0\\ 0&0&0&-\frac{\sin^{2}\theta}{c}\end{pmatrix} \Gamma^{r}{}_{\mu\nu} =\begin{pmatrix}0&0&0&0\\ 0&\Gamma^{r}{}_{rr}&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}\] \[\Gamma^{\theta}{}_{\mu\nu} =\begin{pmatrix}0&0&c&0\\ 0&0&\Gamma^{\phi}{}_{r\phi}&0\\ 0&0&0&\cot\theta\\ c&\Gamma^{\phi}{}_{r\phi}&\cot\theta&0\end{pmatrix} \tag{6.34}\] Furthermore, the derivative of \(\Gamma^{\phi}{}_{r\phi}\) can be written as \[\frac{\mathrm{d}}{\mathrm{d}r}\Gamma^{\phi}{}_{r\phi}=c\,\Gamma^{t}{}_{rr}- \Gamma^{\phi}{}_{r\phi}\left(\Gamma^{\phi}{}_{r\phi}+\Gamma^{r}{}_{rr}\right)\,. \tag{6.35}\] These are all the defining properties of solution set I. For solution set 2 one finds instead that all connection components can be expressed in terms of the four arbitrary functions \(\Gamma^{t}{}_{rr}(r)\), \(\Gamma^{t}{}_{\theta\theta}(r)\), \(\Gamma^{r}{}_{rr}(r)\), \(\Gamma^{r}{}_{\theta\theta}(r)\neq 0\) the two real constants \(c\) and \(k\), and trigonometric functions. 
The connection is explicitly given by \[\Gamma^{t}{}_{\mu\nu} =\begin{pmatrix}k-c-c\tilde{c}\,\Gamma^{t}{}_{\theta\theta}&\frac{ \tilde{\epsilon}\hat{\Gamma}^{t}{}_{\theta\theta}\Gamma^{t}{}_{\theta\theta}}{ \Gamma^{t}{}_{\theta\theta}}&0&0\\ \frac{\tilde{\epsilon}\hat{\Gamma}^{t}{}_{\theta\theta}\Gamma^{t}{}_{\theta \theta}}{\Gamma^{t}{}_{\theta\theta}}&\Gamma^{t}{}_{rr}&0&0\\ 0&0&\Gamma^{t}{}_{\theta\theta}&0\\ 0&0&0&\Gamma^{t}{}_{\theta\theta}\,\sin^{2}\theta\end{pmatrix}\quad\Gamma^{r }{}_{\mu\nu}=\begin{pmatrix}-c\tilde{c}\,\Gamma^{r}{}_{\theta\theta}&c+c\, \tilde{c}\,\Gamma^{t}{}_{\theta\theta}&0&0\\ c+c\,\tilde{c}\,\Gamma^{t}{}_{\theta\theta}&\Gamma^{r}{}_{rr}&0&0\\ 0&0&\Gamma^{r}{}_{\theta\theta}&0\\ 0&0&0&\Gamma^{r}{}_{\theta\theta}\,\sin^{2}\theta\end{pmatrix}\] \[\Gamma^{\theta}{}_{\mu\nu} =\begin{pmatrix}0&0&c&0\\ 0&0&-\frac{\hat{\Gamma}^{t}{}_{\theta\theta}}{\Gamma^{r}{}_{\theta\theta}}&0 \\ c&-\frac{\hat{\Gamma}^{t}{}_{\theta\theta}}{\Gamma^{r}{}_{\theta\theta}}&0&0\\ 0&0&0&\cot\theta\\ c&-\frac{\hat{\Gamma}^{t}{}_{\theta\theta}}{\Gamma^{r}{}_{\theta\theta}}& \cot\theta&0\end{pmatrix}\,, \tag{6.36}\] where we have defined \(\tilde{c}\coloneqq 2c-k\) and \(\hat{\Gamma}^{t}{}_{\theta\theta}\coloneqq 1+c\,\Gamma^{t}{}_{\theta\theta}\) in order to compactify the notation. Moreover, the derivatives of \(\Gamma^{t}{}_{\theta\theta}\) and \(\Gamma^{r}{}_{\theta\theta}\) can be expressed in terms of the other free functions. Concretely, one finds \[\frac{\mathrm{d}}{\mathrm{d}r}\Gamma^{t}{}_{\theta\theta} =-\frac{\left\{\left[c\,(2c-k)\,\Gamma^{t}{}_{\theta\theta}+3c-k \right]\,\Gamma^{t}{}_{\theta\theta}+1\right\}\Gamma^{t}{}_{\theta\theta}}{ \Gamma^{r}{}_{\theta\theta}}-\Gamma^{r}{}_{\theta\theta}\Gamma^{t}{}_{rr}\] \[\frac{\mathrm{d}}{\mathrm{d}r}\Gamma^{r}{}_{\theta\theta} =-c\left((2c-k)\Gamma^{t}{}_{\theta\theta}+2\right)\Gamma^{t}{}_{ \theta\theta}-\Gamma^{r}{}_{\theta\theta}\Gamma^{r}{}_{rr}-1\,. \tag{6.37}\] Observe that in both solution sets the derivatives of \(\Gamma^{t}{}_{rr}\) and \(\Gamma^{r}{}_{rr}\) cannot be expressed in terms of other connection components. Thus, in both cases only these two components should be regarded as the unknowns to be solved for in the connection field equations. It was also shown in [82] that the two solution sets are related to each other by a double scaling limit. However, it should be emphasized that outside of this particular limit, the two solution sets are genuinely different and they describe different physics. We elaborate more on this point further below. Simplest possible form of a stationary, spherically symmetric geometry \((\boldsymbol{\mathcal{M}},\boldsymbol{g},\boldsymbol{\Gamma})\) Recall that our task is not only to find expressions for the metric and the connection which satisfy the various symmetries and the geometric postulates. We also wish to find the _simplest possible form_, as that will hopefully help in analyzing and solving the field equations. To simplify the form of the metric, we make use of the diffeomorphism invariance of the theory. This is possible, since we did not yet fix any particular gauge. As is well-known, it is possible to find a diffeomorphism which brings the symmetry-reduced metric (6.32) into the simple diagonal form \[g_{\mu\nu}=\begin{pmatrix}g_{tt}(r)&0&0&0\\ 0&g_{rr}(r)&0&0\\ 0&0&r^{2}&0\\ 0&0&0&r^{2}\sin^{2}\theta\end{pmatrix}\,. 
\tag{6.38}\] This is of course nothing but the standard form of a metric which is stationary and spherically symmetric, which can be found in textbooks on GR [73, 74, 77]. However, in the context of metric-affine geometries, the diffeomorphism which achieves this transformation of course also has to be applied to the connection. What is remarkable is that even though this diffeomorphism in general changes the connection, it maps solution set 1 onto itself and it also maps solution set 2 onto itself! This means that when we study the field equations of \(f(\mathbb{Q})\) gravity, we can use the metric in its simple symmetry-reduced form (6.38) together with a connection which belongs either to solution set 1 or to solution set 2. This is the simplest possible form of a stationary and spherically symmetric metric-affine geometry!

**A cautionary remark on the coincident gauge**

It is worth pausing at this point and discussing why the first approach, namely the approach based on a metric of the form (6.38) and the coincident gauge, \(\Gamma^{\alpha}{}_{\mu\nu}=0\), fails. This comes simply from the fact that _if_ the metric has the form (6.38), _then_ the connection cannot be identically zero if it also has to satisfy the symmetry conditions. This follows immediately from the two solution sets. Recall that these two solution sets tell us the possible forms a symmetry-reduced connection can have. Both sets exclude the possibility \(\Gamma^{\alpha}{}_{\mu\nu}=0\), because in both sets there are components which are purely expressed in terms of trigonometric functions, and in both sets there are certain components which are not allowed to vanish. Does this mean we cannot use the coincident gauge? No, the coincident gauge can always be used. But one has to be careful in _how_ one uses it. Our systematic implementation of symmetries and geometric postulates has shown what form the metric and the connection are allowed to have _in the coordinate system_ \(t,r,\theta,\phi\). What the coincident gauge tells us is that there exists a _different coordinate system_ where \(\Gamma^{\alpha}{}_{\mu\nu}=0\), but where the metric will no longer have its simple diagonal form! A diffeomorphism which trivializes the connection will necessarily complicate the metric. In a sense, all the information which resided in the symmetry-reduced connection is "moved" onto the metric by the diffeomorphism. Hence, nothing is gained by using the coincident gauge, which is why we prefer to stick to the two solution sets described above. In the context of stationary and spherically symmetric spacetimes, the transformations which produce the coincident gauge for both solution sets have been worked out [82].

#### Symmetry-reduced form of the field equations

The symmetry-reduced form of the field equations is obtained by plugging the metric ansatz (6.38) and either the connection from solution set 1 or the connection from solution set 2 into the \(f(\mathbb{Q})\) field equations (5.42). In both cases we find that the structure of the field equations is \[\text{Structure of metric field equations:}\qquad\qquad\begin{pmatrix}\mathcal{M}_{tt}&\mathcal{M}_{tr}&0&0\\ \mathcal{M}_{tr}&\mathcal{M}_{rr}&0&0\\ 0&0&\mathcal{M}_{\theta\theta}&0\\ 0&0&0&\mathcal{M}_{\theta\theta}\sin^{2}\theta\end{pmatrix}\] \[\text{Structure of connection field equations:}\qquad\qquad\begin{pmatrix}\mathcal{C}_{t}\\ \mathcal{C}_{r}\\ 0\\ 0\end{pmatrix} \tag{6.39}\] Of course, the components of these tensors are different for the two different solution sets of the connection.
However, in both cases it turns out to be highly advantageous to first study the off-diagonal component of the metric field equations, i.e., \(\mathcal{M}_{tr}=0\). This leads to two very similar and yet still different equations: * For solution set 1: \(\mathcal{M}_{tr}=0\quad\longrightarrow\quad c\,\partial_{r}\mathbb{Q}\,f^{ \prime\prime}(\mathbb{Q})=0\). * For solution set 2: \(\mathcal{M}_{tr}=0\quad\longrightarrow\quad\left(k-2c(2c-k)\Gamma^{t}{}_{ \theta\theta}\right)\partial_{r}\mathbb{Q}\,f^{\prime\prime}(\mathbb{Q})=0\). We observe that both equations admit \(\partial_{r}\mathbb{Q}\) and \(f^{\prime\prime}(\mathbb{Q})=0\) as solutions. The first option amounts to saying that the non-metricity scalar is constant. In fact, the metric and the connection for both solution sets only depend on \(r\) and \(\theta\), but, as was shown in [82], the non-metricity scalar does _not_ inherit the \(\theta\)-dependence. Thus, \(\partial_{r}\mathbb{Q}=0\) is really saying that the non-metricity scalar is a constant. It is then easy to see that this does _not_ yield any solutions which go beyond GR. In fact, the \(f(\mathbb{Q})\) field equations for \(\mathbb{Q}=\)const. simply become \[f^{\prime}(\mathbb{Q}_{0})G_{\mu\nu}+\frac{1}{2}\left(f(\mathbb{Q}_{0})-f^{ \prime}(\mathbb{Q}_{0})\,\mathbb{Q}_{0}\right)g_{\mu\nu}=\kappa\,\mathcal{T}_ {\mu\nu}\,, \tag{6.40}\] where \(\mathbb{Q}_{0}\) is a constant number. These equations can be re-written in the more suggestive form \[G_{\mu\nu}+\Lambda_{\text{eff}}g_{\mu\nu}=\kappa\,\bar{\mathcal{T}}_{\mu\nu}\,, \tag{6.41}\] where we have introduced \[\Lambda_{\text{eff}}\coloneqq\frac{1}{2}\frac{f(\mathbb{Q}_{0})-f^{\prime}( \mathbb{Q}_{0})\mathbb{Q}_{0}}{f^{\prime}(\mathbb{Q}_{0})}\qquad\qquad\qquad \qquad\qquad\bar{\mathcal{T}}_{\mu\nu}\coloneqq\frac{1}{f^{\prime}(\mathbb{Q} _{0})}\mathcal{T}_{\mu\nu}\,. \tag{6.42}\] Thus, we obtain the Einstein field equations with an effective cosmological constant and a re-scaled energy-momentum tensor! Notice that the re-scaling and the effective cosmological constant are well-defined since we always assume \(f^{\prime}\neq 0\). Otherwise, one would end up with a trivial, non-dynamical theory. Thus, we conclude that solving the off-diagonal metric field equation with \(\mathbb{Q}=\)const. does not yield beyond GR solutions. The second option is to solve \(\mathcal{M}_{tr}=0\) by \(f^{\prime\prime}(\mathbb{Q})=0\). However, we already know that this means that \(f(\mathbb{Q})=a\,\mathbb{Q}+b\), where \(a\) and \(b\) are two real constants. In other words, this option just produces STEGR plus a cosmological constant. Give that STEGR is equivalent to GR, with this option we just recover GR solutions and nothing else. Hence, also in this case we learn that we can only obtain GR solutions for both solution sets of the connection. This leads us to the third option, which is to impose the constraint equations \[\mathcal{M}_{tr} =0 \longrightarrow c=0 \text{(for solution set 1)}\] \[\mathcal{M}_{tr} =0 \longrightarrow\left(k-2c(2c-k)\Gamma^{t}_{\theta\theta}\right)=0 \text{(for solution set 2)}\,. \tag{6.43}\] A quick glace at the defining properties of solution set 1 reveals that \(c=0\) is not possible. In fact, solution set 1 is only valid if \(c\neq 0\). Hence, we reach the important conclusion that **solution set 1 only contains the GR solutions**! If we wish to find beyond GR solution, our only hope is solution set 2. Indeed, the constraint equation (6.43) for solution set 2 does have interesting solutions. 
As it turns out [82], there are two branches. \[\text{Branch 1:}\qquad\qquad\qquad\Gamma^{t}_{\theta\theta} =\frac{k}{2c(2c-k)}\qquad\text{for $c\neq 0$ and $k\neq 2c$}\] \[\Gamma^{t}_{rr} =\frac{k(8c^{2}+2ck-k^{2})}{8c^{2}(2c-k)^{2}\left(\Gamma^{r}_{\theta\theta}\right)^{2}}\] \[\text{Branch 2:}\qquad\qquad\qquad\Gamma^{t}_{rr} =-\frac{\Gamma^{t}_{\theta\theta}}{\left(\Gamma^{r}_{\theta\theta}\right)^{2}}\] \[c =k=0\,. \tag{6.44}\] Both branches are viable in the sense that they lead to self-consistent field equations, as has been shown in [82]. Moreover, it has also been shown that both branches lead to beyond-GR solutions. Some solutions have been derived explicitly.

#### 6.1 Overview of different developments and outlook

Let us summarize the situation thus far: We began with a systematic implementation of stationarity and spherical symmetry. This drastically restricted the form of the metric and of the connection. Then, we proceeded with imposing the geometric postulates. In particular, the postulate of vanishing curvature led to further restrictions on the connection and we found that there are two possible parametrizations for a symmetry-reduced connection which also satisfies the geometric postulates. We dubbed these parametrizations solution set 1 and solution set 2. Remarkably, it is possible to diagonalize the metric and bring it into the standard form of a stationary and spherically symmetric metric _without_ spoiling the solution sets. That is, the diffeomorphism which brings the metric into its simplest form maps solution set 1 onto itself and solution set 2 onto itself. Thus, the metric (6.38) together with solution sets 1 and 2 for the connection provide us with the simplest representation of a stationary and spherically symmetric metric-affine geometry. The solution sets also allow us to understand why the coincident gauge leads to inconsistent field equations, if we simultaneously insist that the metric ansatz has the form (6.38). By studying the symmetry-reduced metric field equations, we finally learned that solution set 1 only contains the standard GR solutions. If one wishes to find beyond-GR solutions, one has to work with solution set 2. Within this solution set, one finds that the field equations allow for two branches. That is, the off-diagonal equation imposes a constraint on the connection which admits two genuinely different solutions. Both solutions are fully consistent and can be used to further study the field equations.

This leads us to the question of what can be achieved with these different branches and modified gravity equations. In [82], different methods were used to find beyond-GR black hole solutions. Some exact, but rather unphysical solutions were found. Perturbative techniques led to approximate solutions of the field equations which are asymptotically flat, but which lead to multiple horizons and black hole masses which depend on the connection. Regular black holes, black bounces, and quasinormal modes within the context of \(f(\mathbb{Q})\) gravity were studied in [123; 122]. Besides black holes, the stationary and spherically symmetric spacetimes considered here have inspired a flurry of investigations into wormholes in \(f(\mathbb{Q})\) gravity [47; 48; 49; 50; 51; 57; 58; 59; 68] as well as modified stellar solutions [58; 59; 60; 61; 62; 63; 64; 65; 66; 67]. Some thought has also been given to the question of how observational data could be used to constrain \(f(\mathbb{Q})\) gravity [69]. The beyond-GR black hole and stellar solutions could play an important role in this regard.
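To make the cautionary remark about the coincident gauge more tangible, the following sketch (an illustration added here, not taken from [82]) shows how a connection that vanishes identically in one chart \(\xi^{\lambda}\) acquires the familiar non-zero components once it is expressed in spherical coordinates, via the standard transformation law \(\Gamma^{\alpha}{}_{\mu\nu}=\frac{\partial x^{\alpha}}{\partial\xi^{\lambda}}\,\partial_{\mu}\partial_{\nu}\xi^{\lambda}\):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
x = [r, th, ph]
# "coincident-gauge" coordinates xi^l (Cartesian), expressed in the spherical chart
xi = [r*sp.sin(th)*sp.cos(ph), r*sp.sin(th)*sp.sin(ph), r*sp.cos(th)]

J = sp.Matrix(3, 3, lambda l, m: sp.diff(xi[l], x[m]))   # d xi^l / d x^m
Jinv = sp.simplify(J.inv())                              # d x^a / d xi^l

# Gamma^a_{mn} = (dx^a/dxi^l) d_m d_n xi^l : zero in the xi-chart, non-zero in (r, theta, phi)
for a in range(3):
    for m in range(3):
        for n in range(3):
            comp = sp.simplify(sum(Jinv[a, l]*sp.diff(xi[l], x[m], x[n]) for l in range(3)))
            if comp != 0:
                print("Gamma^%s_(%s %s) =" % (x[a], x[m], x[n]), comp)
# the surviving entries are the familiar flat-space coefficients, e.g.
# Gamma^r_(theta theta) = -r, Gamma^theta_(r theta) = 1/r, Gamma^phi_(theta phi) = cot(theta)
```

This is the sense in which nothing is gained by the coincident gauge: the diffeomorphism that removes the connection components simply moves the same information into the metric.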
### Hamiltonian Analysis and Degrees of Freedom of \(f(\mathbb{Q})\) Gravity

The question of how many degrees of freedom are propagated in \(f(\mathbb{Q})\) gravity is currently under debate. Findings from cosmological perturbation theory performed in [99] revealed that \(f(\mathbb{Q})\) gravity possesses _at least_ two additional degrees of freedom compared to GR. This insight, together with the expectation that the primary constraints of \(f(\mathbb{Q})\) gravity are all second class due to its general covariance, led to the educated guess that the theory propagates six degrees of freedom [70]. A more systematic approach based on the Hamiltonian analysis, performed in coincident gauge, was attempted in [71]. The authors concluded that there are eight degrees of freedom. However, this conclusion was challenged by [72], who put an upper bound of seven degrees of freedom using a kinetic matrix approach. In the same paper, mistakes in the analysis of [71] were brought to light and general issues with the Hamiltonian analysis were discussed. In particular, it was pointed out that the standard approach due to Dirac [124; 125] and Bergmann [126] encounters severe obstacles and new methods, such as the kinetic matrix approach, have to be employed. Finally, yet another Hamiltonian analysis was attempted by [127], who concluded that there are six degrees of freedom. This is in agreement with the upper bound of [72] and the authors claim to have overcome the obstacles of the Dirac-Bergmann algorithm which were pointed out in [72]. However, as we will discuss further below, the resolution is not beyond doubt. At the moment, only three things seem clear: (a) The theory propagates at least four degrees of freedom, (b) there are at most seven degrees of freedom, and (c) there is confusion about what the precise number might be. To better understand this unsatisfying state of affairs we shall briefly review the main results on which everyone agrees. Then we discuss the points where mistakes were made or where opinions drift apart.

#### 6.4.1 ADM formulation and primary constraints

In order to perform the Hamiltonian analysis, it is advantageous to employ the ADM formalism. Under the (weak) assumption that \(\mathcal{M}\) has the topology \(\mathcal{M}\simeq\mathbb{R}\times\Sigma\), where \(\Sigma\) is a three-dimensional spacelike hypersurface, we can split the coordinates \(\{x^{\mu}\}\) into one temporal and three spatial coordinates, \(\{t,x^{a}\}\). The spatial index takes values in \(\{1,2,3\}\). Moreover, the metric can be written as \[g_{\mu\nu}=\begin{pmatrix}-N^{2}+h_{ab}N^{a}N^{b}&h_{ab}N^{b}\\ h_{ab}N^{b}&h_{ab}\end{pmatrix}\,, \tag{6.45}\] where \(N>0\) is the lapse function, \(N^{a}\) is called the shift vector field, and \(h_{ab}\) is the three-dimensional metric intrinsic to \(\Sigma\). Spatial indices are raised and lowered with \(h_{ab}\). Also, we refer to \(\{N,N^{a},h_{ab}\}\) collectively as ADM variables. From now on, we work exclusively in coincident gauge. Hence, \(\Gamma^{\alpha}{}_{\mu\nu}=0\) globally and consequently covariant derivatives are turned into partial derivatives, \(\nabla_{\mu}=\partial_{\mu}\). The first step in the Hamiltonian analysis then consists in determining the momentum densities \(\tilde{\pi}_{0}\), \(\tilde{\pi}_{a}\), and \(\tilde{\pi}^{ab}\) conjugate to lapse, shift, and intrinsic metric, respectively. The second step is to determine which of the momentum densities can be solved for the velocities \(\dot{N}\), \(\dot{N}^{a}\), and \(\dot{h}_{ab}\).
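Before carrying out these two steps, here is a quick symbolic sanity check of the ADM parametrization (6.45) (an added illustration, not part of the original analysis): it confirms that \(\det g=-N^{2}\det h\), so that the measure appearing in the action below is \(N\sqrt{|h|}\).

```python
import sympy as sp

N = sp.Symbol('N')
Nup = sp.Matrix(sp.symbols('N1 N2 N3'))                      # shift N^a
h11, h12, h13, h22, h23, h33 = sp.symbols('h11 h12 h13 h22 h23 h33')
h = sp.Matrix([[h11, h12, h13],
               [h12, h22, h23],
               [h13, h23, h33]])                             # spatial metric h_ab
Nlow = h*Nup                                                 # N_a = h_ab N^b

g = sp.zeros(4, 4)                                           # ADM metric (6.45)
g[0, 0] = -N**2 + (Nup.T*h*Nup)[0, 0]
for a in range(3):
    g[0, a+1] = g[a+1, 0] = Nlow[a]
    for b in range(3):
        g[a+1, b+1] = h[a, b]

print(sp.expand(g.det() + N**2*h.det()) == 0)                # det g = -N^2 det h, so True
```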
Momenta which are independent of any velocities, i.e., which are of the form \(\tilde{\pi}=\tilde{f}(N,N^{a},h_{ab})\), give rise to **primary constraints**\(\tilde{\mathbf{C}}\) of the form \(\tilde{C}\coloneqq\tilde{\pi}-\tilde{f}\). They put constraints on the physical field configurations and have thus the effect of lowering the number of degrees of freedom. In \(f(\mathbb{Q})\), however, one encounters an obstacle in determining primary constraints if the action functional (5.40) is used: Since the momenta are defined by taking variations of \(\mathcal{S}_{f(\mathbb{Q})}\) with respect to \(\dot{N}\), \(\dot{N}^{a}\) and \(\dot{h}_{ab}\), one finds that they are all proportional to \(f^{\prime}(\mathbb{Q})\). Thus, it is impossible to solve for the velocities without specifying a concrete function \(f\). This obstacle is overcome by introducing an auxiliary scalar field \(\phi\) and instead considering the equivalent action functional \[\mathcal{S}[N,N^{a},h_{ab},\phi]\coloneqq\int_{\mathcal{M}}\mathrm{d}^{4}x\, \sqrt{|h|}\,N\,\left[f(\phi)-f^{\prime}(\phi)\,(\phi-\mathbb{Q})\right]\,. \tag{6.46}\] The field equations derived from this functional are \[\frac{2}{\sqrt{|g|}}\partial_{\alpha}\left[\sqrt{|g|}P^{\alpha}{} _{\mu\nu}f^{\prime}(\phi)\right]+f^{\prime}(\phi)\,q_{\mu\nu}-\frac{1}{2}\left[ f(\phi)-f^{\prime}(\phi)\,(\phi-\mathbb{Q})\right] =0\] \[f^{\prime\prime}(\phi)\,(\phi-\mathbb{Q}) =0\,. \tag{6.47}\] The first equation, which is obtained from varying the action with respect to the metric variables, has almost the form (5.40), while the second equation is purely algebraic and admits two solutions: \[f^{\prime\prime}(\phi)=0\qquad\qquad\qquad\qquad\qquad\qquad \qquad\phi-\mathbb{Q}=0\,. \tag{6.48}\] In the first case, we can conclude that \(f(\phi)=a\,\phi+b\), for some real constants \(a\) and \(b\). We can always rescale the action such that \(a=1\) and then we find that the first equation reduces precisely to the metric field equation of STEGR plus a cosmological constant \(\Lambda\propto b\). The second case is even simpler, since it straightforwardly reproduces the metric field equations of \(f(\mathbb{Q})\) gravity. Thus, we conclude that the field equations are equivalent to the field equations of \(f(\mathbb{Q})\) for any \(f\), after we have solved the equations for \(\phi\). The action (6.46) can thus be regarded as equivalent to the action (5.40). The benefit of working with this action is that \(\mathbb{Q}\) is "pulled out" of \(f\), which allows us to study the momenta more easily. The momentum densities computed from the action (6.46) are given by [71, 72, 127] \[\tilde{\pi}_{0} \coloneqq\frac{\delta\mathcal{S}}{\delta\dot{N}}=0, \tilde{\pi}^{ab} \coloneqq\frac{\delta\mathcal{S}}{\delta\dot{h}_{ab}}=\sqrt{h}\,f^ {\prime}\left(K_{ab}-K\,h_{ab}\right)\] \[\tilde{\pi}_{a} \coloneqq\frac{\delta\mathcal{S}}{\delta\dot{N}^{a}}=-\frac{ \sqrt{h}}{N}f^{\prime\prime}\partial_{a}\phi \tilde{\pi}_{\phi} \coloneqq\frac{\delta\mathcal{S}}{\delta\dot{\phi}}=\frac{\sqrt{h} }{N}f^{\prime\prime}\partial_{a}N^{a}\,, \tag{6.49}\] where \(K_{ab}\) and \(K\) are the extrinsic curvature and its trace, with the former defined as \[K_{ab}\coloneqq\frac{1}{2N}\left(\mathcal{D}_{(a}N_{b)}-\dot{h}_{ab}\right)\,. \tag{6.50}\] It is important to note that these momenta have been obtained _after_ having performed a series of partial integrations in order to bring the action (6.46) into a nicer form, which gives rise to simpler momenta. 
Performing integrations by parts and dropping boundary terms is allowed, since this does not alter the field equations and, consequently, does not alter the number of degrees of freedom. Notice that in the special case \(f^{\prime\prime}=0\), which corresponds to STEGR, these momenta reduce precisely to the momenta found in the Hamiltonian analysis of STEGR in [70] in the coincident gauge. From now on, we shall always assume \(f^{\prime\prime}\neq 0\), since we are only interested in the degrees of freedom of the modified theory. From the form of the momenta we can immediately infer that there are five primary constraints. These are \[\tilde{C}\coloneqq\tilde{\pi}_{0}\approx 0, \tilde{C}_{a}\coloneqq\tilde{\pi}_{a}+\frac{\sqrt{h}}{N}f^{\prime \prime}\partial_{a}\phi\approx 0, \tilde{C}_{\phi}\coloneqq\tilde{\pi}_{\phi}-\frac{\sqrt{h}}{N}f^{ \prime\prime}\partial_{a}N^{a}\approx 0\,, \tag{6.51}\] where \(\approx\) stands for "weak equality" in the sense of Dirac and Bergmann [124, 125, 126] (see also [128, 129, 130]). Up to this point, there is complete agreement between [71, 72, 127]. #### 6.2.2 Primary Hamiltonian and consistency conditions The authors of [71, 72, 127] also agree on the form of the primary Hamiltonian, which is \[H_{\rm P}(\Sigma_{t})=H_{0}(\Sigma_{t})+\int_{\Sigma_{t}}\mathrm{d}^{3}x\, \left(\lambda^{0}\tilde{C}_{0}+\lambda^{a}\tilde{C}_{a}+\lambda^{\phi}\tilde{C }_{\phi}\right)\,, \tag{6.52}\] where \(\lambda^{0}\), \(\lambda^{a}\), and \(\lambda^{\phi}\) are Lagrange multipliers which enforce the primary constraints and where \(H_{0}(\Sigma_{t})\) is defined as \[H_{0}(\Sigma_{t})\coloneqq\int_{\Sigma_{t}}\mathrm{d}^{3}x\, \left(\dot{N}\tilde{\pi}_{0}+\dot{N}^{a}\tilde{\pi}_{a}+\dot{h}_{ab}\tilde{\pi }^{ab}-\mathcal{L}\right)\,. \tag{6.53}\] Here, \(\Sigma_{t}\) refers to a Cauchy surface, which is simply a leaf in the foliation of \(\mathcal{M}\), i.e., a section of \(\mathbb{R}\times\Sigma\). In yet other words, \(\Sigma_{t}\) corresponds to a \(t=\)const. spacelike hypersurface. The Dirac-Bergmann algorithm demands that the primary constraints be preserved under the time evolution generated by the primary Hamiltonian. This means that the following Poisson brackets have to vanish when the constraints are satisfied: \[\{H_{\rm P},C_{I}\}=\{H_{0},C_{I}\}+\int_{\Sigma_{t}}\mathrm{d}^{3}x\,\{C_{J}, C_{I}\}\lambda^{J}\stackrel{{!}}{{\approx}}0\,, \tag{6.54}\] where the Poisson brackets are defined as \[\{F(\Psi^{a},\tilde{\pi}_{a}),G(\Psi^{A},\tilde{\Pi}_{A})\}\coloneqq\int_{ \Sigma_{t}}\mathrm{d}^{3}x\,\left(\frac{\delta F}{\delta\Psi^{A}}\frac{ \delta G}{\delta\dot{\Pi}_{A}}-\frac{\delta F}{\delta\dot{\Pi}_{A}}\frac{ \delta G}{\delta\Psi^{A}}\right)\,, \tag{6.55}\] for some fields \(\Psi^{A}\) and their conjugate momentum densities \(\tilde{\Pi}_{A}\). Equation (6.54), also called **consistency condition**, can give rise to **secondary constraints**. That is, it can put additional constraints on the physical field configurations and thus reduce the number of degrees of freedom even further. It is also possible that it determines the Lagrange multipliers. This is precisely the point where differences in the works of [71, 72, 127] start to emerge. In [7] it was argued that (6.54) leads to one secondary constraint and a system of linear equations for the Lagrange multipliers. It was further argued that these equations possess unique solutions, hence preventing the appearance of further constraints. 
It thus follows that there are \(22-6=16\) phase space degrees of freedom or, equivalently, eight configuration space degrees of freedom. This conclusion was challenged by [72, 127]. It was first realised in [72] that the analysis of [71] contains an error. Namely, the equations for the Lagrange multipliers are first order partial differential equations (PDEs), rather than linear algebraic equations. This fact was overlooked in [7] and it drastically changes the situation. First of all, the original Dirac-Bergmann algorithm for counting degrees of freedom does _not_ foresee the possibility that the Lagrange multipliers are constrained by PDEs. It is silently assumed that the equations are always linear algebraic equations. That PDEs can arise has been observed also by other authors (see in particular [I28]) and it is understood that this problem is due to the presence of spatial derivatives of field variables in the primary constraints. The partial integrations necessary for computing the Poisson brackets in the consistency conditions (6.54) can move partial derivatives from the field variables onto the Lagrange multipliers. Unfortunately, the issue has received relatively little attention and no general procedure is known for how to deal with this scenario. In certain simple cases it is possible to solve the PDEs and to reach sensible conclusions from a modified version of the Dirac-Bergmann algorithm. But the general case is far from under control. Moreover, it was shown in [72] that the PDEs for the Lagrange multipliers are not all independent, thus leading potentially to further complications. Several other issues were pointed out in the same work, which is why a different route was ultimately selected to give at least an upper bound on the degrees of freedom. Before discussing these issues and the upper bound in more detail, we turn our attention to [127]. The authors of [127] propose a method to avoid having to deal with PDEs for the Lagrange multipliers. We quote directly from their text: _"For some field \(A(x)\) on a \((n+1)\)-dimensional spacetime, the term \(\sqrt{h}A(x)\partial_{I}^{(x)}\delta^{(n)}(\vec{x}-\vec{y})\), where \(I\) runs from \(1\) to the dimension of the hypersurface \(n\), in PB-algebras can be neglected by setting properly spatial boundary conditions of \(A(x)\) in the variational principle, where \(h\) is the determinant of the metric of the \(n\)-dimensional hypersurface."_ The hypersurface the authors refer to is \(\Sigma_{t}\) and the term \(\sqrt{h}A(x)\partial^{(x)_{I}}\delta^{(n)}(\vec{x}-\vec{y})\) has the generic form of the terms which lead to the aforementioned issue. That is, terms of this form lead to PDEs for the Lagrange multipliers. By dropping all terms of this form from the constraint algebra, the authors find indeed a linear system of equations for the Lagrange multipliers. Their analysis leads them to uncover three secondary and two tertiary constraints. They also conclude that all constraints are second class, eventually leading to \(\frac{1}{2}(22-5-3-2)=6\) degrees of freedom for \(f(\mathbb{Q})\). However, as we mentioned above, this procedure is not beyond doubt. Shortly before the quoted passage, the authors of [127] assert that _"[...] 
when taking into account that the spatially boundary terms can always be neglected by imposing appropriate spatial boundary conditions in the variational principle and it never affects the dynamics (time evolution)."_ It is correct that, given an action functional, one is allowed to drop or neglect boundary terms because such terms do not change the field equations. In this sense, boundary terms do indeed not affect the dynamics. However, it is _not_ true that spatial boundary conditions in the variational principle do not affect the dynamics. In fact, boundary conditions constrain the solution space of a theory! This can readily be seen from the following example: Take one of the actions of the Trinity and derive the field equations without any further assumptions. One obtains Einstein's field equations which, in particular and among many others, admit the Schwarzschild and FLRW spacetimes as solutions. Now, take the same action but demand that the fields are asymptotically flat. This is a boundary condition and it has the effect of eliminating certain solutions. The equations one obtains are still Einstein's field equations, but the FLRW spacetime is no longer in the solution space because it does not satisfy the boundary condition (i.e., it is not asymptotically flat). Thus, the solution space has been changed by the imposition of boundary conditions. Moreover, the term \(\sqrt{h}A(x)\partial_{I}^{(x)}\delta^{(n)}(\vec{x}-\vec{y})\) is being dropped from the _Poisson bracket algebra_, rather than from the action. It is not clear that such a modification does not affect the dynamics. In particular, since the integrals in questions are integrals over Cauchy surfaces \(\Sigma_{t}\), rather than actual boundary integrals. There is nothing which prevents a Cauchy surface to cross through the bulk of a spacetime through regions of intense field strength. In other words, Cauchy surfaces have nothing to do with the boundary surfaces of spacetimes, where fields are generically assumed to be weak and thus negligible. In conclusion, the approach of [127] does indeed allow one to carry out the Dirac-Bergmann analysis of \(f(\mathbb{Q})\) gravity to completion and count degrees of freedom. However, the method used to achieve this feat is not beyond all doubts. Issues of the Dirac-Bergmann algorithm We have mentioned issues with the Dirac-Bergmann algorithm already several times. Specifically, what was point out in [72] is that the standard algorithm does not foresee consistency conditions involving PDEs for the Lagrange multipliers. Rather, it only foresees systems of linear equations of the form \[M\,\vec{\lambda}+\vec{v}\overset{!}{\approx}0\,, \tag{6.56}\] where \(\vec{\lambda}\) contains all \(r\) Lagrange multipliers coming from \(r\) primary constraints, \(\vec{v}\) is a vector built from the fields, their conjugate momenta, and their derivatives, and \(M\) is a \(r\times r\) matrix. The symbol \(\overset{!}{\approx}\) means that this equations has to be imposed and that it only has to hold if the primary constraints hold. Three scenarios can now emerge12: Footnote 12: For more details on the Hamiltonian analysis of constrained systems and the Dirac-Bergmann algorithm see, for instance, [128, 129, 130]. See also the more recent [72, 108]. 1. If \(\det M\not\approx 0\), the matrix \(M\) is invertible and we can solve for all Lagrange multipliers, \(\vec{\lambda}=-M^{-1}\vec{v}\). 2. If \(\det M\approx 0\), it is not possible to solve for all Lagrange multipliers. 
If \(\text{rank}(M)=m<r\), there are \(r-m\) vectors \(\vec{u}_{D}\), with \(D\in\{1,\ldots,r-m\}\) which are null vectors of \(M\). That is, these vectors satisfy \(M\vec{u}_{D}=0\). One can show that one can consistently solve for some of the Lagrange multipliers if and only if \(\vec{u}_{D}^{\dagger}\vec{v}\approx 0\). If this last equation does not hold, one has to impose it. This leads to additional, so-called secondary constraints. 3. If \(\det M\approx 0\) and \(\vec{u}_{D}^{\dagger}\vec{v}\approx 0\), it is possible that the consistency condition is trivially satisfied or that it leads to secondary constraints. It should be noted that in the cases 2. and 3., some of the Lagrange multipliers inevitably remain _undetermined_. Since these multipliers appear in the primary Hamiltonian, which generates the dynamics, it means that there is some indeterminacy in the time evolution of the system. This indeterminacy is well-understood to be related to gauge symmetries. Thus, because of this connection to gauge symmetry, it is not alarming when the primary Hamiltonian depends on some arbitrary fields. This brings us now to the case of PDEs for Lagrange multipliers. These PDEs can arise when the constraints contain spatial derivatives of field variables. Because one has to perform an integration by parts in order to compute the second Poisson bracket in (6.54), one ends up with terms of the form \(\partial\lambda\). We emphasize that the presence of partial derivatives in the constraints is only a necessary but not a sufficient condition. After all, also the constraints of electromagnetism and GR possess spatial derivatives, but they do not cause any problems. This has also been discussed in [72]. However, if it happens that the partial derivative has been moved onto the Lagrange multiplier, the system of PDEs has generically the form \[\sum_{i=1}^{d}M^{(i)}\partial_{i}\vec{\lambda}+N\vec{\lambda}+\vec{v}\stackrel{{!}}{{\approx}}0\,. \tag{6.57}\] We have assumed that there are \(d\) spatial dimensions and consequently there are \(d\) matrices \(M^{(i)}\) of dimensions \(r\times r\) which multiply the \(d\) different first order spatial derivatives \(\partial_{i}\vec{\lambda}\). We have also introduced a \(r\times r\) matrix \(N\) and a \(r\)-dimensional vector \(\vec{v}\). The \(r\) Lagrange multipliers \(\vec{\lambda}\) all depend on the \(d\) spatial coordinates and time. As is well-known, in order to obtain a unique solution to a PDE one has to impose boundary conditions or initial value conditions. But this raises the question: How do these initial value or boundary conditions affect the primary Hamiltonian? To be more explicit: We are completely free in choosing these conditions. But no matter what we choose, this choice will affect the primary Hamiltonian and it will depend on the field values we arbitrarily chose for \(\vec{\lambda}\). In turn, these field values will show up in the time evolution of the system. Is there a relation to gauge transformation, as there is one in the standard case discussed above? If there is, it is not completely clear how it will manifest. Observe that there is a difference between \(\vec{\lambda}\) not being completely determined by the linear equations and \(\vec{\lambda}\) depending on arbitrary choices for its initial values or boundary values: In the first case, we are forced to introduce arbitrary fields which depend on space and time. 
In the second case, we arbitrarily fix for instance the \(x^{1}\) axis as "initial surface" and specify initial values on that surface. This amounts to specifying functions of time and \(d-1\) spatial coordinates, since one coordinate is fixed. Albeit, the fixation of \(x^{1}\) was arbitrary. Nevertheless, we are confronted with open questions and the answers are not clear. This means that we have no _reliable_ way of dealing with these PDEs such that we can count the degrees of freedom in a way we can trust. There is a further problem, also brought to light through the analysis of [72]. Namely, the PDEs for the Lagrange multipliers that emerge in \(f(\mathbb{Q})\) do _not_ give rise to a well-posed initial value formulation. This means that the PDEs are under-determined. Or, in yet other words, even if we prescribe initial values for \(\vec{\lambda}\), it is not possible to find a unique solution. We can only determine some of the components of \(\vec{\lambda}\) and they will depend on the un-determined components. It is known that this happens in gauge theories and that this issue is related to the freedom of performing gauge transformations. For instance, in electromagnetism formulated in terms of a vector potential \(A^{\mu}\), the field equations are under-determined. This is tantamount to saying that the initial value problem is not well-posed. The resolution is to realize that the field equations determine all components of \(A^{\mu}\), except one. Thus, by imposing a gauge fixing, this issue is resolved and one obtains a unique solution. Furthermore, as is well-known, this arbitrary gauge fixing does not affect physical observables. However, what does this under-determination of the PDEs mean in the context of Lagrange multipliers? Is there a connection to gauge symmetries of the theory? All these questions deserve more attention and a detailed analysis, so that we can trust the results obtained from a modified version of the Dirac-Bergmann algorithm. #### 6.4.4 **Upper and lower bound on the degrees of freedom** Given the obstacles mentioned above, which emerge from applying the Dirac-Bergmann algorithm to \(f(\mathbb{Q})\) gravity, the authors of [72] opted for a different approach. Using the so-called **kinetic matrix**, it was shown that \(f(\mathbb{Q})\) propagates _at most_ seven degrees of freedom. Together with the four degrees of freedom found through cosmological perturbation theory in [99], we have a clear lower and upper bound. The kinetic matrix approach sidesteps the issues discussed so far since it is independent of the Hamiltonian analysis and it is directly concerned with the field equations. The basic idea can easily be explained with a simple example: Consider a field theory in dimensions with second order field equations. Let's say that the field in question has two components,, which are functions of the coordinates. The coordinate plays the role of a time coordinate, while is the spatial coordinate. Then, the second order field equations can be written as (6.58) We have introduced three different matrices which multiply the three different second order derivatives,,, and. The notation will become clear later on. Now, what does it mean to solve this system of PDEs? First of all, since the system is second order, we have to prescribe two initial value conditions, if we hope to find a unique solution. These conditions are (6.59) In other words, we prescribe on a hypersurface and we prescribe what its time derivative is on that surface. 
Observe that if we evaluate the above equation on that particular surface, we know every term, except the first one. In fact, we find (6.60) Notice that since and are _known_ functions of, we also know what their derivatives with respect to are. What we do not know, is what equals to on the surface. That is where the field equations come into play. We can find out what is, if we can solve the above equations for the second order time derivatives. That is, if we can write (6.61) where is the inverse of. Hence, if we can invert, we can formally integrate the PDE and find out what is away from the surface (see also [72] for a more technical and detailed explanation of this point). What happens if we can _not_ invert the matrix? To answer the question, consider the following case: (6.62) Clearly, has only rank one and is therefore not invertible. Observe that this has two implications: 1. If we explicitly write out the vector-matrix product, we see that the second equation has no second order time derivatives. It is thus just a constraint equation, rather than a dynamical equation. 2. The first equation can still be solved for, say,, but then appears on the right hand side. Since there is no equation which determines, we have to prescribe it by hand. Otherwise we cannot integrate the equation for. This is what generically happens in gauge theories. We learn an important lesson from this simple example: Whether a given second order PDE can be solved or not is determined by the matrix which multiplies the second order time derivatives. We can generalize this insight in the following way. Let spacetime be \(d+1\) dimensional and let \(\Psi\) be a vector which contains the \(n\) components of a tensor field (that could be a vector, or a metric, or any other tensor). Then we can write the second order PDE for the field in question as \[\mathcal{K}\,\partial_{0}^{2}\Psi+\sum_{i=1}^{d}\mathcal{M}^{(i)}\partial_{0} \partial_{i}\Psi+\sum_{i=1,i\leq j}^{d}\sum_{j=1}^{d}\mathcal{P}^{(ij)} \partial_{i}\partial_{j}\Psi+\text{ lower order terms }=0\,, \tag{6.63}\] where we have introduced the \(n\times n\)**kinetic matrix**\(\mathcal{K}\), \(d\) so-called **mixing matrices**\(\mathcal{M}^{(i)}\), each of dimension \(n\times n\), and \(\frac{d(d+1)}{2}\)**potential matrices**\(\mathcal{P}^{(ij)}\), also each of dimension \(n\times n\). If the kinetic matrix is invertible, we obtain a unique solution for the PDE. However, if \(\mathcal{K}\) is not invertible, we find constraint equations. From our simple example it is clear that the number of constraint equation is the same as the number of rows of \(\mathcal{K}\) which are zero, in some sense. Of course, a matrix \(\mathcal{K}\) which is degenerate does not always have rows filled with zero. Rather, it has rows which are linear combinations of other rows. Thus, the mathematically precise statement is this2: Footnote 2: Further mathematical details can be found in [72, 13] and in the Appendix of [32], which also provides ample illustrations and examples. \[\text{if rank}(\mathcal{K})=r\leq n\quad\Longrightarrow\quad\text{There are $n-r$ constraint equations}. \tag{6.64}\] It thus follows that by determining the rank of the kinetic matrix, we can infer how many constraints there are _at least_. There can be more constraints than just the \(n-r\) which follow from the rank of \(\mathcal{K}\) since integrability conditions can occur. 
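A minimal numerical illustration of the rank criterion (6.64) (a toy example added here, not taken from [72]): a left null vector of the kinetic matrix turns the corresponding combination of field equations into a constraint, since it removes all second time derivatives.

```python
import sympy as sp

K = sp.Matrix([[1, 0, 0],
               [0, 1, 1],
               [0, 1, 1]])              # toy kinetic matrix for n = 3 field components

n = K.shape[0]
r = K.rank()
print("rank =", r, "-> at least", n - r, "constraint equation(s)")
for u in K.T.nullspace():               # left null vectors: u^T K = 0
    print("constraint direction u^T =", list(u.T))
# expected: rank = 2, one constraint, with u proportional to (0, 1, -1)
```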
Given that each constraint reduces the number of degrees of freedom by one, we finally reach the conclusion \[\text{If rank}(\mathcal{K})=r\leq n\quad\Longrightarrow\quad\text{There are at most $r$ degrees of freedom}. \tag{6.65}\] In [72], this insight was used to show that the number of degrees of freedom in \(f(\mathbb{Q})\) is at most seven. The argument is as follows: The basic variables to consider are the ten metric components \(g_{\mu\nu}\) and the four functions \(\xi^{\alpha}\), which parametrize the flat and torsionless connection. Furthermore, if the metric field equations are satisfied, then the Bianchi identities imply that the connection field equations are automatically satisfied as well. Thus, we can completely focus on the metric field equations. If we work in coincident gauge, we remove the four functions \(\xi^{\alpha}\) from our considerations without accidentally killing degrees of freedom. In fact, all degrees of freedom are now encoded in the ten metric components, whose dynamics is described by the metric field equations. Thus, from the originally \(10+4\) potential degrees of freedom, we are left with only ten. It was then shown that the rank of the kinetic matrix of the metric field equations is seven, provided that \(f^{\prime\prime}\neq 0\). If \(f^{\prime\prime}=0\), the rank is six, which had to be expected since the Einstein equations contain \(10-6=4\) constraints. This is a nice consistency check. As a final remark, we point out that the kinetic matrix approach can in principle also be used to figure out the precise number of degrees of freedom. This involves also considerations regarding the mixing matrices and the potential matrices with highly involved computations. For more details on this outlook we refer the reader to [72]. ## 7 Summary Gravitational phenomena arise from curved spacetime, a concept made possible by the equivalence principle. This implies that gravity is independent of matter type. Within the framework of geometry, curvature is just one aspect of a manifold's affine properties. In addition to curvature, there are two other fundamental objects associated with the connection of a metric space: torsion and non-metricity. In standard General Relativity following Einstein, both non-metricity and torsion are absent. Embracing the geometric nature of gravity as advocated by the equivalence principle prompts us to explore different ways to represent gravity. In one equivalent description of General Relativity, we envision a flat spacetime with a metric but an asymmetric connection, where gravity is solely attributed to torsion. Alternatively, we can construct a third equivalent representation of GR on a flat spacetime without torsion, attributing gravity to non-metricity. Thus, the same fundamental physical theory, GR, can be articulated through the Einstein-Hilbert action, the Teleparallel Equivalent of GR action, or the Symmetric Teleparallel Equivalent of GR action [5, 6]. The fundamental foundation of these geometric interpretations paves the way for innovative approaches to modified gravity. These equivalent descriptions of General Relativity involving curvature, torsion, and non-metricity provide diverse starting points for modified gravity theories when scalar quantities are transformed into arbitrary functions. It's worth noting that quadratic non-metricity and torsion Lagrangian with detuned arbitrary 5 and 3 parameters, respectively, can also be considered, albeit with anticipated complexities. 
In this review, our primary focus lay on \(f(\mathbb{Q})\) theories [98]. We began by establishing the foundational elements of geometry. Starting with the basic manifold, we incorporated coordinates, points, and curves. Tensor fields, including scalars and vectors, were introduced on this manifold. To facilitate the comparison of vector fields at different points, we introduced the affine connection, delving into its general properties and the associated tensor quantities: curvature and torsion tensors. To incorporate the concept of distance, we introduced the metric, which in turn allowed us to define the non-metricity tensor. With these components in place, we were well-prepared to delve into the core principles of General Relativity. We've clearly demonstrated that the theory of General Relativity can be formulated in three distinct ways: as a curvature theory, a torsion theory, or a non-metricity theory. We've examined the key distinctions, addressed subtle nuances, and explored the consistent coupling of matter fields within these frameworks. In doing so, we've identified cases where the minimal coupling principle proves inadequate. Next, we examined strategies for departing from the principles of General Relativity in a consistent manner. We explored two complementary approaches: one involving generic quadratic Lagrangians with arbitrary parameters, and the other by transforming GR scalars into nonlinear functions. These approaches led us to derive various theories of modified gravity. Given our primary focus on \(f(Q)\) theories, we provided an overview of the fundamental characteristics of various modifications before returning our attention to \(f(\mathbb{Q})\) theories. Specifically, we introduced the defining Lagrangian, derived the corresponding field equations, and delved into discussions regarding its symmetries and Bianchi identities. Having gained a solid grasp of the overarching principles of the covariant theory, our focus shifted towards practical applications in cosmology and astrophysics. We specifically examined both cosmological and spherically symmetric backgrounds, utilizing symmetry reduction principles to establish the necessary conditions for the metric and the connection that align with the background symmetries. This systematic approach enabled us to explore the potential derivation of novel cosmological and black hole solutions within the framework of \(f(\mathbb{Q})\) theories. Our motto is Gravity: Gravity with \(\mathbb{Q}\). We firmly believe that the intricate structure inherent in the geometric framework of gravity can unlock fresh and captivating perspectives, leading us into uncharted realms and confronting the challenges of conventional formulations. Let us wholeheartedly embrace this captivating new geometry. ## Acknowledgements LH is supported by funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement No 801781. LH further acknowledges support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster).
2309.15780
AaP-ReID: Improved Attention-Aware Person Re-identification
Person re-identification (ReID) is a well-known problem in the field of computer vision. The primary objective is to identify a specific individual within a gallery of images. However, this task is challenging due to various factors, such as pose variations, illumination changes, obstructions, and the presence of confusing backgrounds. Existing ReID methods often fail to capture discriminative features (e.g., head, shoes, backpacks) and instead capture irrelevant features when the target is occluded. Motivated by the success of part-based and attention-based ReID methods, we improve AlignedReID++ and present AaP-ReID, a more effective method for person ReID that incorporates channel-wise attention into a ResNet-based architecture. Our method incorporates the Channel-Wise Attention Bottleneck (CWA-bottleneck) block and can learn discriminating features by dynamically adjusting the importance of each channel in the feature maps. We evaluated AaP-ReID on three benchmark datasets: Market-1501, DukeMTMC-reID, and CUHK03. When compared with state-of-the-art person ReID methods, we achieve competitive results with rank-1 accuracies of 95.6% on Market-1501, 90.6% on DukeMTMC-reID, and 82.4% on CUHK03.
Vipin Gautam, Shitala Prasad, Sharad Sinha
2023-09-27T16:54:38Z
http://arxiv.org/abs/2309.15780v1
# AaP-ReID: Improved Attention-Aware Person Re-identification ###### Abstract Person re-identification (ReID) is a well-known problem in the field of computer vision. The primary objective is to identify a specific individual within a gallery of images. However, this task is challenging due to various factors, such as pose variations, illumination changes, obstructions, and the presence of confusing backgrounds. Existing ReID methods often fail to capture discriminative features (e.g., head, shoes, backpacks) and instead capture irrelevant features when the target is occluded. Motivated by the success of part-based and attention-based ReID methods, we improve AlignedReID++ and present AaP-ReID, a more effective method for person ReID that incorporates channel-wise attention into a ResNet-based architecture. Our method incorporates the Channel-Wise Attention Bottleneck (CWA-bottleneck) block and can learn discriminating features by dynamically adjusting the importance of each channel in the feature maps. We evaluated AaP-ReID on three benchmark datasets: Market-1501, DukeMTMC-reID, and CUHK03. When compared with state-of-the-art person ReID methods, we achieve competitive results with rank-1 accuracies of 95.6% on Market-1501, 90.6% on DukeMTMC-reID, and 82.4% on CUHK03. ## 1 Introduction Person ReID stands as a computer vision task that entails the identification and correlation of individuals across disparate surveillance cameras. The core objective of person ReID is to ascertain whether an individual captured in one camera's viewpoint (referred to as a query image) corresponds to the same person observed in another camera's viewpoint (referred to as a gallery image). This task has gained substantial attention due to its vital role in diverse intelligent applications and video surveillance systems [3]. Within this landscape, Deep Learning (DL) based approaches have achieved remarkable advancements and exhibit superiority over their counterparts [35]. There are several applications and use cases where DL is used, including hard and soft biometrics [19, 18, 17]. A majority of efforts have focused on global features, which are acquired through classical convolutional neural networks (CNNs) trained using classification loss and deep metric loss [9]. However, these methodologies often encounter challenges in scenarios marked by occlusions, obstacles, viewpoint variations, and changes in angle, as depicted in Fig. 1, rendering the problem even more intricate. To address these hurdles and enhance the learning process of CNNs, a variety of strategies have been proposed, encompassing part-based learning and attention-based methods. Part-based learning involves segmenting person images into distinct parts, enabling the model to concentrate on specific clothing elements or body segments pivotal for identification. This approach effectively addresses challenges arising from pose variations and occlusions. Conversely, attention mechanisms empower the model to dynamically allocate focus to pertinent regions within an image, thereby capturing intricate details and contextual cues that substantially contribute to distinguishing individuals with similar appearances. Figure 1: Challenges in person ReID: This figure presents examples of images collected from three different widely used datasets for person ReID. Each row corresponds to a dataset: (a) CUHK03, (b) Market1501, and (c) DukeMTMCReID, showcasing various examples of challenging training instances for person ReID. 
In this research, we introduce AaP-ReID, an extension of AlignedReID++[12]. Our framework introduces a CWA-bottleneck block aimed at extracting distinctive features like head, shoes, and backpacks from pedestrian images. This attention mechanism assigns varying weights to local features based on their relevance to the person ReID task. Consequently, it prioritizes the most discriminative features while disregarding less significant ones, leading to heightened discriminative power within the model. Through our experimental results, we demonstrate that part-based models can significantly benefit from integration with attention mechanisms. In summary, we make the following contributions. * We introduce the CWA-bottleneck block for ResNet [5] to embed channel-wise attention, showcasing its efficacy through comprehensive experimentation. * By substituting the bottleneck blocks of ResNet with the CWA-bottleneck block within the last two layers, we further enhance the discerning nature of the extracted features. * We also analyze the importance of stride of the final downsample layer to effectively retaining more spatial information within the model's extracted features. * The inclusion of batch normalization and dropout (BaND) on global feature maps serves to regularize the model and counteract overfitting tendencies. The subsequent sections of this paper are structured as follows: Section 2 provides an overview of related research in the field of person ReID. Section 3 introduces the AaP-ReID architecture. Section 4 outlines the experimental configuration and the datasets employed in this study. Section 5 showcases the outcomes attained across three publicly accessible datasets. Lastly, we conclude in Section 6. ## 2 Related Work Within the area of person ReID, two primary approaches emerge: representation learning and metric learning. Representation learning strategies center on acquiring a pedestrian representation that remains unaffected by factors like pose variations, lighting situations, and occlusions. Commonly, this entails utilizing CNNs to extract features from pedestrian images. On the other hand, metric learning approaches concentrate on acquiring a metric that gauges the similarity between two images of pedestrians using a loss function. ### Representation Learning Representation learning [13] can be partitioned into two distinct categories: Global and Local representations. Global representations encapsulate the overall persona, capturing elements such as clothing style, body shape, and overarching attributes. These representations provide a holistic view of the individual and are useful for initial matching. In the context of the person, the ReID network usually learns the global representation by training on softmax loss, often regarded as ID loss. Furthermore, attention mechanisms have gained considerable traction, allowing networks to focus on informative regions within the global representation for improved discriminative power. Several attention methods focused on channel and spatial dimensions [6, 15, 28, 27] have been proposed to enhance the global representation potency of networks. Ye et al. [4] introduced attention-aware generalized mean pooling as a strategy for refining image retrieval. Chen et al. [1] introduced an occlusion-aware attention network with multiple branches. Zhang et al. [31] put forth a dynamic part-attention (DPA) approach for person ReID, employing a dynamic attention mechanism to steer the network towards the most distinctive body parts. 
On a divergent note, local features home in on specific regions or segments of an individual's body, such as the head, torso, or limbs. These features excel at capturing unique patterns like clothing accessories, tattoos, or distinctive poses. They are particularly effective in scenarios where global appearances are similar among different individuals or where the scale varies among instances of the same identity. To learn these refined features, researchers explored local features and proposed various methods based on part-based learning and multi-branch methods. Zhang et al. [33] introduced a multi-branch method to address pose misalignment concerns. Multiple approaches [22, 2, 23] have emerged to acquire refined local representations, often entailing the division of the entire pedestrian image into multiple horizontal segments with corresponding feature embeddings. Yet, these approaches contend with misalignment issues. Luo, Hao, and Jiang [12] introduced the DMLI method, striving to align local parts using the shortest path distance. ### Deep Metric Learning Deep metric learning (DML) [2], conversely, centers on acquiring a distance metric between pairs of data points. This typically involves transforming images into feature vectors and determining their similarity by assessing the distances separating them. A pair of images depicting the same individual constitutes a positive pair, whereas images portraying distinct individuals form a negative pair. DML's principal objective lies in minimizing the distance between positive samples within the learned metric space while simultaneously maximizing the distance between negatives. Wang et al. [30] introduced a logistic discriminant metric learning method for person ReID, leveraging both original data and auxiliary data during training. Li et al. [32] devised a triplet focal loss tailored for person ReID, capable of elevating the significance of challenging triplets while de-emphasizing simpler ones. Zheng et al. [21] put forth a global hard sample mining technique for person ReID, utilizing a ranking list network in tandem with a multiplet loss. ## 3 Attention-Aware Person ReID Currently, the majority of person ReID algorithms rely on either part-based methods for image matching or attention mechanisms. Nonetheless, these techniques often falter in capturing pivotal features like the head, shoes, and backpacks, sometimes inadvertently emphasizing irrelevant attributes such as the torso--particularly evident when the subject is partially obscured. To tackle this limitation, we expanded upon a part-based approach and judiciously incorporated our novel CWA-bottleneck block. This section commences with an exposition of the overarching architecture of AaP-ReID, followed by an in-depth exploration of our proposed CWA-bottleneck block. Subsequently, we delve into the diverse training strategies implemented to elevate the performance of AaP-ReID. ### Architecture of AaP-ReID ResNet has consistently been favored for numerous person ReID algorithms, credited to its adeptness in extracting intricate features from images. We adopted ResNet as the foundational network for AaP-ReID, a choice facilitating comparisons with other state-of-the-art (SOTA) algorithms. The comprehensive network architecture is depicted in Fig. 2. The backbone feature extractor generates a feature map \(f\) with dimensions \(N\times C\times H\times W\), where \(N\) represents batch size, \(C\) signifies channel count, and \(H\) and \(W\) stand for spatial dimensions. 
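To make the backbone concrete, the following is a minimal PyTorch sketch of such a ResNet-based feature extractor; the helper name, the choice of torchvision's ResNet-50, and the last-stride modification (discussed in Section 3.3) are illustrative assumptions rather than the authors' released code.

```python
import torch
import torchvision

def build_backbone(last_stride: int = 1) -> torch.nn.Module:
    """Illustrative ResNet-50 feature extractor: the classifier and pooling
    layers are dropped so a batch of images yields an N x C x H x W map."""
    resnet = torchvision.models.resnet50(weights=None)
    # Last-stride trick (Section 3.3): stride 1 in the final stage enlarges
    # the output feature map from 8x4 to 16x8 for 256x128 inputs.
    resnet.layer4[0].conv2.stride = (last_stride, last_stride)
    resnet.layer4[0].downsample[0].stride = (last_stride, last_stride)
    return torch.nn.Sequential(*list(resnet.children())[:-2])

backbone = build_backbone()
images = torch.randn(32, 3, 256, 128)   # N x 3 x H x W input batch
f = backbone(images)                    # feature map f: 32 x 2048 x 16 x 8
```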
The ultimate convolutional layer's feature maps undergo processing via two distinct branches: the global and local branches. In the global branch, Global Average Pooling (GAP) is applied, culminating in a global feature vector of dimensions \(N\times C\times 1\), followed by the computation of the \(L_{2}\) norm. Ultimately, this global branch is subject to training through a fusion of ID and triplet losses. Since GAP fails to capture spatial resemblances between images of individuals, the local branch assumes a pivotal role. The local branch endeavors to establish person-to-person similarities by aligning horizontal features through the employment of the DMLI [12] method. This local branch's horizontal pooling yields a feature vector with dimensions \(N\times C\times H\times 1\). To quantify the local dissimilarity between two person images, a distance matrix \(D\) is generated, subsequently employed to compute the total shortest path. This process culminates in the computation of a local triplet loss, leveraging hard samples identified via global distances within the global branch. Figure 2: Overall design of AaP-ReID. The blocks highlighted in green represent contributions to the baseline model, with the CWA-bottleneck block positioned at the top. ### CWA-bottleneck Block The CWA-bottleneck block is a simple but effective attention-based bottleneck that consists of three convolution blocks followed by a channel-wise attention (CWA) block, as shown in Fig. 2. The CWA block dynamically weighs the features according to their importance, allowing the CWA-bottleneck block to focus on the most important discriminative features and ignore the repeated and less important ones. The CWA involves generating attention maps \(f^{\prime}\) for a given feature map \(f\) with the same shape \(C\times H\times W\), where \(C\) denotes the number of filters and \(H\) and \(W\) represent the spatial dimensions. The feature map is first subjected to global spatial pooling to create a channel descriptor, which involves calculating the average value of each individual channel within the tensor, see the equation: \[\mathcal{G}(f)=\frac{1}{H\times W}\sum_{i=1}^{H}\sum_{j=1}^{W}f_{i,j} \tag{1}\] where \(\mathcal{G}(\cdot)\) is the GAP operation used in computing \(f^{\prime}\) and \(f_{i,j}\) is the value at the \(i\)th row and \(j\)th column of the feature map \(f\). Next, the channel descriptor of shape \(C\times 1\times 1\) is passed through a Multi-Layer Perceptron (MLP) with the reduction parameter \(r\), which helps to reduce the complexity of the model and produces a descriptor of shape \(\frac{C}{r}\times 1\times 1\). The output is then processed through Batch Normalization (BN) [7] and a Rectified Linear Unit (ReLU) [14], see Eq. 2. \[\mathcal{R}(f)=\mathrm{ReLU}(\mathrm{BN}(\mathrm{MLP}(\mathcal{G}(f),r))) \tag{2}\] Then the attention weights are computed by passing the activated output through a second MLP using the same reduction parameter \(r\) to adjust the dimensionality, as shown in Fig. 3. 
The resulting tensor, with shape \(C\times 1\times 1\), undergoes Sigmoid activation \(\sigma\) and is then element-wise multiplied by \(f\) to yield the attention feature maps as follows: \[f^{\prime}=f\odot\sigma(\text{MLP}(\mathcal{R}(f),r)) \tag{3}\] During the backpropagation stage of training, we compute the adjusted gradient for each channel \(i\) as \(g^{\prime}_{i}=w_{i}\cdot g_{i}\), where \(G=\{g_{1},g_{2},...,g_{c}\}\) represents the gradient constituents corresponding to the channels while \(w_{i}\) stands for the attention score assigned to the \(i\)th channel. This implies that the gradient contribution from channel \(i\) is proportionally influenced by its attention score. By acquiring knowledge of these attention scores, the network highlights the significant channels while tuning down the significance of the less crucial ones. It's plausible to think of the attention scores as coefficients that are applied to the gradients of each channel. As a result, the network gains the ability to magnify or diminish the impact of specific channels, based on their respective significance. ### Impact of Strides The convolutional operation's stride pertains to how many pixels the convolutional kernel shifts over the input image. When the stride is increased, the resultant feature map becomes smaller, causing information to be sacrificed. Conversely, a reduced stride generates a larger feature map, providing a greater amount of information for learning. To address this concern, we made an adjustment to the final downsampling step of the foundational network by altering the stride from 2 to 1. This alteration resulted in feature maps with dimensions of \(16\times 8\) instead of \(8\times 4\), thereby enhancing the scope of the area each unit covers and alleviating the loss of spatial information. As a result, the model has gained the capacity to uphold intricate details, thereby enhancing its overall ability to distinguish. This alteration has enabled the model to maintain nuanced details that were previously not retained. ### Integration of BN and Dropout (BaND) BN and dropout are two well-known methods employed to enhance the performance of DL models. BN takes care of the normalization of activations in each layer, thereby promoting stable training processes and safeguarding against overfitting. Conversely, dropout involves the random omission of units in each layer, a strategy aimed at reducing the model's dependency on specific sets of features. In our approach, we strategically implemented BN followed by _ReLU_ activation right before the global branch, as shown in Fig. 2. The objective here was twofold: first, to acquire normalized feature maps denoted as \(f_{n}\), and second, to eliminate non-linearities. This normalization step serves the purpose of steadying the features, consequently augmenting the model's capacity to generalize effectively. Ultimately, the GAP layer yields feature maps with dimensions \(N\times C\times 1\). For an added layer of resilience in the network, we introduced dropout subsequent to the GAP layer which acts as a regularization technique, countering overfitting by randomly nullifying a portion of activations during both forward and backward passes. Mathematically, this can be represented as follows: \[\text{Dropout}(f_{\text{GAP}})=f_{\text{GAP}}\odot Bernoulli(p) \tag{4}\] Figure 3: Structure of Channel Wise Attention block. In this context, \(f_{\text{GAP}}\) represents the outcome feature maps generated by the Global Average Pooling (GAP) layer. 
The symbol \(\odot\) stands for element-wise multiplication, and \(Bernoulli(p)\) refers to a binary dropout mask in which values are drawn from a Bernoulli distribution. Here, the dropout rate \(p\) is set at \(0.5\). By implementing dropout following GAP, we encourage the network to depend less on specific activations, thereby boosting its generalization capacity. This leads to a model that is more resilient and efficient. The deliberate combination of BN, _ReLU_ activation, and subsequent dropout in the final phases of the architecture collectively contributes to the overall improvement of the model's performance. ## 4 Experimental Settings In this section, we commence by providing an overview of the datasets utilized in our experimental phase. Following that, we delve into the evaluation metrics chosen to assess the efficacy of AaP-ReID. Lastly, we provide detailed insights into the implementation particulars of our approach, covering the nuances of the training process. ### Datasets and Evaluation Metrics We assessed the effectiveness of the suggested approach on three commonly employed datasets: Market-1501 [34], DukeMTMC-reID [20], and CUHK03 [10]. **Market1501** dataset encompasses 32,217 images capturing 1501 identified individuals, observed from six distinct camera viewpoints. It comprises a training subset with 751 unique identities and a testing subset with 750 distinct identities. **DukeMTMCReID** comprises 36,411 images of 1812 individuals captured by eight cameras. The training subset of DukeMTMCReID includes 702 identities, while the testing subset encompasses 1110 identities. **CUHK03** comprises 8765 images of 1467 labeled individuals. To ensure a fair comparison with the existing method, we adopted the same dataset division, where the training subset consists of 767 distinct IDs and the testing subset contains 700 distinct IDs. To evaluate the performance of the proposed model, we employed two metrics: Cumulative Matching Characteristics (CMC) and Mean Average Precision (mAP). CMC gauges the likelihood of correctly identifying a pedestrian within the top k outcomes, while mAP quantifies the average precision across all results. Moreover, we applied the Re-Ranking technique [36] (RK). RK is a post-processing approach that enhances person ReID accuracy by refining the outcomes of the initial matching algorithm. To ensure fairness, we present results both with and without applying the RK technique. ### Implementation Details All experimentation was carried out employing the PyTorch framework [16]. AaP-ReID underwent optimization utilizing the ADAM optimizer, and the weight decay was configured to 0.0005. During the training phase, a batch size of 32 was employed, and the learning rate was set at 0.0002 with a step size of 150. The training process was executed over a total of 450 epochs. Images were resized to 256x128 for both training and testing. Data augmentation strategies, including random erasing and random horizontal flipping, were applied during training, along with normalization using mean values of (0.485, 0.456, and 0.406) and standard deviation values of (0.229, 0.224, and 0.225). Testing, on the other hand, only involved normalization. ## 5 Results and Discussion In this section, we discuss the results of experiments conducted on the previously mentioned datasets. The examination provides a comprehensive exploration of both quantitative and qualitative findings. Initially, we delve into the outcomes derived from the ablation study. Subsequently, we compare AaP-ReID against currently prevailing state-of-the-art algorithms. Lastly, we present a qualitative analysis wherein we examine saliency maps and loss plots. 
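Before turning to the results, the channel-wise attention of Eqs. (1)-(3) and the BaND step of Section 3.4 can be summarized in a short PyTorch sketch; the class name, the use of \(1\times 1\) convolutions for the two MLPs, and the reduction value \(r=16\) are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelWiseAttention(nn.Module):
    """Sketch of the CWA block: GAP (Eq. 1) -> MLP with reduction r -> BN ->
    ReLU (Eq. 2) -> second MLP -> sigmoid -> channel-wise rescaling (Eq. 3)."""
    def __init__(self, channels: int, r: int = 16):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)                        # Eq. (1)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // r, kernel_size=1),    # first MLP
            nn.BatchNorm2d(channels // r),
            nn.ReLU(inplace=True),                                # Eq. (2)
            nn.Conv2d(channels // r, channels, kernel_size=1),    # second MLP
            nn.Sigmoid(),
        )

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        w = self.mlp(self.gap(f))      # attention weights, shape N x C x 1 x 1
        return f * w                   # Eq. (3): element-wise channel rescaling

# In AaP-ReID the CWA block sits inside each residual bottleneck of the last
# two ResNet stages; here it is applied once to the backbone output purely
# for illustration, followed by the BaND step (BN + ReLU, then dropout, Eq. 4).
band = nn.Sequential(nn.BatchNorm2d(2048), nn.ReLU(inplace=True))
gap, dropout = nn.AdaptiveAvgPool2d(1), nn.Dropout(p=0.5)

f = torch.randn(32, 2048, 16, 8)              # backbone feature map
f_att = ChannelWiseAttention(2048)(f)         # CWA-weighted features
f_gap = dropout(gap(band(f_att)).flatten(1))  # global feature vector, 32 x 2048
```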
### Ablation Study In order to assess the efficacy of the introduced alterations to the baseline network, we conducted a comparative analysis of the performance enhancements resulting from the application of each modification. This encompassed the incorporation of the CWA-bottleneck block across all layers and specifically on the last two layers. Then, we conducted a comparative evaluation of the CWA-bottleneck block in contrast to various versions of attention mechanisms, such as the Bottleneck Attention Module (BAM) [15], Efficient Channel Attention (ECA) [27], Spatial Attention (SP) and Channel Wise Attention with Addition (CWA(+)). **Impact of Training Strategies and CWA-bottleneck Block** Tables 1 and 2 offer a comprehensive examination of how the CWA-bottleneck block and various adjustments introduced to the ResNet50 backbone at different layers influence performance. The first row illustrates the performance of the baseline model, AlignedReID++, without any alterations. The rows denoted as "Stride impact" and 'BaND' pertain to alterations in the stride of the final downsample layer of ResNet50 from 2 to 1 and the incorporation of a BN layer with dropout, respectively. 'CWA-all' signifies the configuration in which the CWA-bottleneck block replaces all the bottleneck blocks within ResNet50. Lastly, AaP-ReID demonstrates the outcomes when the CWA-bottleneck block substitutes the bottleneck blocks in the last two layers of ResNet50 (layers 3 and 4). We progressively applied each technique to the baseline and assessed the resulting performance enhancement. Notably, substantial performance gains were observed after implementing 'BaND' and adjusting the 'stride'. For instance, there was an increase of 2.6% in mAP on Market1501, 2.7% on DUKE-MTMC, and 3.2% on CUHK03. Corresponding rank-1 accuracy scores saw improvements of 0.7%, 1.9%, and 1.6%, with rank-5 accuracy scores rising by 0.6%, 0.9%, and 2.7% for the respective datasets. Similarly, employing CWA-all improved results, elevating mAP scores by 6.1%, 5.9%, and 12.2% on Market1501, DUKE-MTMC, and CUHK03, respectively. Hence, the integration of attention into the backbone architecture enhanced performance. Through exhaustive experiments, we scrutinized the impact of the CWA-bottleneck block on specific layers. It was discerned that applying attention to the last two layers not only reduced the number of trainable parameters (as indicated in Table 2) but also significantly improved the performance. Specifically, we achieved an increase of 7.2%, 6.5%, and 12.8% in mAP scores, along with 2.8%, 5.5%, and 13.4% boosts in rank-1 accuracy scores on Market1501, DUKE-MTMC, and CUHK03, respectively. In summary, by replacing the existing bottleneck blocks in the last two layers with the CWA-bottleneck block, our approach strikes a balance between performance and model complexity, positioning it as an optimal choice for person ReID. The proposed method not only attains superior accuracy results but also demonstrates a reduction in model parameters, rendering it a more efficient and effective solution for person ReID tasks. **Attention Analysis** Table 3 provides a comprehensive comparison of the impact of diverse attention techniques on different ResNet backbones, using the Market1501 dataset. 
The considered attention methods include BAM, SP, ECA, CWA(+), and CWA(x). Notably, CWA(+) and CWA(x) represent distinct versions of the CWA-bottleneck block, wherein we explored varying feature fusion strategies--element-wise addition and multiplication, respectively. The BAM block integrates spatial and channel-wise attention by simultaneously computing spatial and channel attention in parallel branches, followed by an element-wise multiplication operation between the input feature maps and the consolidated attention maps from both branches. Thus, to dissect the effects of spatial and channel-wise attention, we conducted separate experiments for each. The outcomes consistently indicate that the proposed approach, CWA(x) outperforms the other attention methods across a range of ResNet variants including ResNet18, ResNet34, ResNet50, and ResNet101. As an illustration, when considering ResNet50 with CWA(x), it attains rank-1 and rank-5 accuracies of 94.6% and 98.3%, respectively, alongside an mAP score of 86.3%. In contrast, BAM, despite having more parameters (30.20M), achieves a lower rank-1 accuracy of 94.3%, rank-5 accuracy of 98%, and slightly lower mAP score of 86.2%. Furthermore, the superiority of CWA(x) extends to ResNet18, ResNet34, and ResNet101 over BAM and SP. While these other variants utilize more parameters, CWA(x) manages to outperform \begin{table} \begin{tabular}{l|c c c|c c c|c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{2}{c}{**Market1501**} & \multicolumn{2}{c}{**DUKE-MTMC**} & \multicolumn{2}{c}{**CUHK03**} \\ \hline Method & maP & rank-1 & rank-5 & maP & rank-1 & rank-5 & maP & rank-1 & rank-5 \\ \hline \hline Baseline & 79.1 & 91.8 & 96.9 & 69.7 & 82.1 & 91.8 & 59.6 & 61.5 & 79.4 \\ Stride impact & 81.3(\(\uparrow\)2.2) & 92.2(\(\uparrow\)0.4) & 97.1(\(\uparrow\)0.2) & 71.8(\(\uparrow\)2.1) & 84.8(\(\uparrow\)2.7) & 92.5(\(\uparrow\)0.7) & 61.7(\(\uparrow\)2.1) & 63.1(\(\uparrow\)1.6) & 81.1(\(\uparrow\)1.7) \\ BaND & 81.7(\(\uparrow\)2.6) & 92.5(\(\uparrow\)0.7) & 97.5(\(\uparrow\)0.6) & 72.4(\(\uparrow\)2.7) & 84.0(\(\uparrow\)1.9) & 92.7(\(\uparrow\)0.9) & 62.8(\(\uparrow\)3.2) & 63.1(\(\uparrow\)1.6) & 82.1(\(\uparrow\)2.7) \\ CWA-all & 85.2(\(\uparrow\)6.1) & 93.8(\(\uparrow\)2.0) & 98.0(\(\uparrow\)1.1) & 75.6(\(\uparrow\)5.9) & 86.6(\(\uparrow\)4.5) & 93.9(\(\uparrow\)2.1) & 71.8(\(\uparrow\)12.2) & 73.6(\(\uparrow\)12.1) & 88.1(\(\uparrow\)8.7) \\ \hline _AaP-ReID_ & 86.3(\(\uparrow\)**7.2**) & 94.6(\(\uparrow\)**2.8**) & 98.3(\(\uparrow\)**1.4**) & 76.2(\(\uparrow\)**6.5**) & 87.6(\(\uparrow\)**5.5**) & 94.4(\(\uparrow\)**2.6**) & 72.4(\(\uparrow\)**12.8**) & 74.9(\(\uparrow\)**13.4**) & 89.1(\(\uparrow\)**9.7**) \\ \hline \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline Baseline\({}^{*}\) & 89.4 & 92.8 & 96.0 & 83.5 & 86.3 & 92.4 & 73.5 & 70.7 & 80.0 \\ Stride impact\({}^{*}\) & 91.0(\(\uparrow\)1.6) & 93.4(\(\uparrow\)0.6) & 96.3(\(\uparrow\)0.3) & 86.2(\(\uparrow\)2.7) & 88.6(\(\uparrow\)2.3) & 93.1(\(\uparrow\)0.7) & 76.0(\(\uparrow\)2.5) & 72.8(\(\uparrow\)2.1) & 82.0(\(\uparrow\)2.0) \\ BaND\({}^{*}\) & 91.8(\(\uparrow\)2.4) & 94.3(\(\uparrow\)1.5) & 97.0(\(\uparrow\)1.0) & 86.4(\(\uparrow\)2.9) & 89.0(\(\uparrow\)2.7) & 93.4(\(\uparrow\)0.6) & 77.2(\(\uparrow\)3.7) & 74.7(\(\uparrow\)4.0) & 82.4(\(\uparrow\)2.4) \\ CWA-all\({}^{*}\) & 
93.3(\(\uparrow\)3.9) & 94.9(\(\uparrow\)2.1) & 97.5(\(\uparrow\)1.5) & 88.3(\(\uparrow\)4.8) & 89.9(\(\uparrow\)3.6) & 94.2(\(\uparrow\)1.8) & 84.2(\(\uparrow\)10.7) & 82.1(\(\uparrow\)11.4) & 89.4(\(\uparrow\)9.4) \\ \hline _AaP-ReID_\({}^{*}\) & 93.9(\(\uparrow\)**4.5**) & 95.6(\(\uparrow\)**2.8**) & 97.7(\(\uparrow\)**1.7**) & 88.6(\(\uparrow\)**5.1**) & 90.6(\(\uparrow\)**4.3**) & 94.8(\(\uparrow\)**2.4**) & 84.7(\(\uparrow\)**11.2**) & 82.4(\(\uparrow\)**11.7**) & 89.5(\(\uparrow\)**9.5**) \\ \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular} * **Bold**: superior, The symbols \(\uparrow\)’ and \({}^{*}\) indicate enhanced performance relative to the Baseline and Re-ranking (RK) algorithm, respectively. \end{table} Table 1: Impact of the training strategies and CWA-bottleneck block on ResNet50 backbone at different layers across different person ReID datasets \begin{table} \begin{tabular}{l c c} \hline \hline & **Market1501** & **DUKE-MTMC** & **CUHK03** \\ \hline \hline Baseline & 25.31 & 25.21 & 25.34 \\ \hline CWA-all & 27.85(\(\uparrow\)2.53) & 27.75(\(\uparrow\)2.53) & 27.88(\(\uparrow\)2.58) \\ _AaP-ReID_ & 27.69(\(\uparrow\)**2.37**) & 27.59(\(\uparrow\)**2.37**) & 27.72(\(\uparrow\)**2.42**) \\ \hline \hline \end{tabular} * **Bold**: best. The Notation (m) indicates a million for parameters, hence least is desired. \end{table} Table 2: Comparison of (m) Parameter on ResNet50 Backbone at Different Layers Across Various Person ReID Datasets. them while employing fewer parameters. ### Comparison with State-of-the-Art Table 4 presents a summarized account of our thorough competitive analysis. We extensively compared our work with state-of-the-art person ReID methodologies, encompassing SNR [8], CBN [37], CAP [26], CtF [24], APR [11], HOReID [25], OAMN [1], SGAM [29], and AlignedReID++ [12]. Given that our work is built upon AlignedReID++, we employed it as the baseline. Furthermore, all the methods under consideration utilize ResNet as a feature extractor. On Market-1501, we attained a mAP score of 86.3% and a notable rank-1 accuracy of 94.6%, the highest among all the methods. Within the context of DukeMTMC-reID, we outperformed all methods, achieving a mAP score of 76.2% and a rank-1 accuracy of 87.6%, which only slightly trails behind CAP. For CUHK03, we adopted detected bounding boxes as part of the testing protocol and achieved a commendable mAP score of 67.2% alongside a robust rank-1 accuracy of 81.4%. When compared to AlignedReID++ in conjunction with RK, AaP-ReID demonstrates substantial improvement, yielding a mAP of 93.9% and a rank-1 accuracy of 95.6%. This represents an enhancement of 4.5% and 2.8%, respectively. Similar performance gains are evident when contrasting our work with plain AlignedReID++, where we achieve a mAP of 84.7% and a rank-1 accuracy of 82.4%. ### Qualitative Results **Heatmaps visualization** Saliency maps serve as visual representations that highlight the significant areas within an image that influence the decision-making process of DL model. 
For the purpose of qualitative analysis, we generated saliency maps for both the AaP-ReID and baseline models. Fig. 5 offers a comparison of the heatmaps produced by these two models. The top row showcases the original pedestrian images, while the subsequent two rows exhibit the heatmaps generated by the AaP-ReID and baseline models, respectively. Through a range of examples, we effectively illustrate the prowess of our approach in scenarios involving obstructed pedestrians, obstacles, or partially visible subjects. In row (b) of Fig. 5, AaP-ReID adeptly focuses on discriminative pedestrian attributes, capturing overarching global traits like 'face,''shoes,' and 'backpacks,' while attenuating less universal characteristics such as 'box' and 'bicycle.' Conversely, the existing method struggles to encapsulate these distinguishing attributes. For instance, when faced with occlusions, it remains fixated on the occluded areas instead of emphasizing features like the face or legs, which serve as distinguishing traits. This marked differentiation underscores the effectiveness of our proposed method in recognizing more generalized and distinctive global features. **Loss Analysis** To showcase the enhanced convergence achieved through the proposed CWA-bottleneck block, we present loss curves for ResNet18, ResNet34, ResNet50, and ResNet101 in Fig. 4. Each curve visualizes the progression of loss for ECA, CWA(x), SP, CWA(+), and BAM, specifically applied to layers 3 and 4 in our approach. The depicted plots reveal that for smaller networks like ResNet18 and ResNet34, SP exhibited inadequate convergence, failing to effectively reach the global minimum \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline **Method** & \multicolumn{2}{c}{**Market1501**} & \multicolumn{2}{c}{**DukeMTMC**} & \multicolumn{2}{c}{**CUHK03**} \\ \cline{2-7} & mAP & rank-1 & mAP & rank-1 & mAP & rank-1 & mAP & rank-1 \\ \hline \hline SNR[8] & 84.7 & 94.4 & - & - & - & - \\ CBN[37] & 77.3 & 91.3 & 67.3 & 82.5 & - & - \\ CAP[26] & 85.1 & 93.3 & 76.0 & **87.7** & - & - \\ CtF[24] & 84.9 & 93.7 & 74.8 & 87.6 & - & - \\ APR[11] & 66.8 & 87.04 & 55.6 & 73.9 & - & - \\ HOReID[25] & 84.9 & 94.2 & 75.6 & 86.9 & - & - \\ OAMN[1] & 79.8 & 93.2 & 72.6 & 86.3 & - & - \\ SGAM[29] & 77.6 & 91.4 & 67.3 & 83.5 & - & - \\ A-ReID++[12] & 79.1 & 91.8 & 69.7 & 82.1 & 59.6 & 61.5 \\ AaP-ReID & **86.3** & **94.6** & **76.2** & 87.6 & **72.4** & **74.9** \\ \hline A-ReID++\({}^{*}\) & 89.4 & 92.8 & 83.5 & 86.3 & 73.5 & 70.7 \\ AaP-ReID\({}^{*}\) & **93.9** & **95.6** & **88.6** & **90.6** & **84.7** & **82.4** \\ \hline \hline \end{tabular} * **Bold**: best, A-ReID++ is shortorm for AlignedReID++, The superscript\({}^{*}\) represents RK is used. 
\end{table} Table 4: SOTA Comparison on different person ReID Datasets \begin{table} \begin{tabular}{l|l|c c c c c} \hline \hline **Backbone** & **Metrics** & **BAM** & **SP** & **ECA** & **CWA(+)** & **CWA(x)** \\ \hline ResNet18 & mAP & 74.5 & 70.7 & 73.60 & 75.70 & **75.80** \\ & rank-1 & 88.2 & 86.3 & 87.1 & 88.50 & **89.40** \\ & rank-5 & 95.5 & 96.6 & 94.9 & **97.50** & 96.20 \\ & params & 11.80 & 11.72 & 11.62 & 11.71 & 11.71 \\ \hline ResNet34 & mAP & 77.9 & 74.8 & 78.5 & 79.9 & **80.3** \\ & rank-1 & 89.5 & 88.3 & 90 & 90.6 & **91.5** \\ & rank-5 & 95.7 & 95.2 & 96.3 & 96.7 & **96.9** \\ & params & 22.04 & 21.89 & 21.73 & 21.88 & 21.88 \\ \hline ResNet50 & mAP & 86.2 & 84.4 & 85.3 & 85.6 & **86.3** \\ & rank-1 & 94.3 & 93.3 & 93.4 & 93.8 & **94.6** \\ & rank-5 & 98 & 97.7 & 97.7 & 98.2 & **98.3** \\ & params & 30.20 & 27.83 & 25.31 & 27.69 & 27.69 \\ \hline ResNet101 & mAP & 85.7 & 83.6 & 85.9 & 85.7 & **86.6** \\ & rank-1 & 93.4 & 92.5 & 93.9 & 93.8 & **94.2** \\ & rank-5 & 97.6 & 97.2 & **98.6** & 98.0 & 98.2 \\ & params & 53.82 & 49.20 & 44.30 & 48.93 & 48.93 \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of attention methods impact on various ResNet backbones with the proposed Approach CWA(x) on Market1501 dataset across all methods. Conversely, most methodologies displayed convergence within the epoch range of 340 to 380. Notably, our approach, CWA(x), consistently demonstrated the lowest loss across all backbone variations when compared to other techniques. The efficacy of our method is underscored by its substantial convergence improvement. Upon a closer examination, the loss curves exhibit a gradual downward trend with minor fluctuations. This consistent pattern suggests that the model is progressively absorbing and adapting to the data, fostering sustained learning performance. Notably, our model maintains its dynamic performance, exhibiting similar proficiency when compared to other state-of-the-art backbone networks. ## 6 Conclusion In this paper, we proposed a novel person ReID method that incorporates channel-wise attention into a ResNet-based architecture. The CWA-bottleneck block is able to learn discriminative features by dynamically adjusting the importance of each channel in the feature maps. We evaluated our method on three benchmark datasets: Market-1501, DukeMTMC-reID, and CUHK03. We achieved competitive results on all three datasets, with a rank-1 accuracy of 95.6% on Market-1501, 90.6% on DukeMTMC-reID, and 82.4% on CUHK03. These results demonstrate the effectiveness of our method on a variety of challenging datasets. Additionally, we conducted a systematic exploration of the CWA-bottleneck block, incorporating it into different ResNet backbones and benchmarking it against prominent attention techniques. Our results showed that the CWA-bottleneck block consistently outperforms other methods, demonstrating its superior efficacy.
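As a supplementary illustration of the evaluation protocol of Section 4.1, the sketch below shows one simplified way to compute the CMC rank-k accuracies and mAP from a query-gallery distance matrix; the camera-based filtering of the standard protocol and the re-ranking step are omitted, and all names are illustrative rather than taken from the authors' code.

```python
import numpy as np

def cmc_and_map(dist, q_pids, g_pids, max_rank=5):
    """Simplified CMC curve and mAP from a (num_query x num_gallery) distance matrix."""
    num_q = dist.shape[0]
    cmc = np.zeros(max_rank)
    aps, valid = [], 0
    for i in range(num_q):
        order = np.argsort(dist[i])                   # gallery sorted by distance
        matches = (g_pids[order] == q_pids[i]).astype(np.float64)
        if matches.sum() == 0:                        # identity absent from gallery
            continue
        valid += 1
        first_hit = int(np.argmax(matches))           # rank of first correct match
        if first_hit < max_rank:
            cmc[first_hit:] += 1
        hits = np.where(matches == 1)[0]              # positions of correct matches
        precision = (np.arange(len(hits)) + 1) / (hits + 1)
        aps.append(precision.mean())                  # average precision per query
    return cmc / valid, float(np.mean(aps))           # (CMC curve, mAP)
```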
2309.04855
Anisotropic neutron star crust, solar system mountains, and gravitational waves
"Mountains" or non-axisymmetric deformations of rotating neutron stars (NS) efficiently radiate gravitational waves (GW). We consider analogies between NS mountains and surface features of solar system bodies. Both NS and moons such as Europa or Enceladus have thin crusts over deep oceans while Mercury has a thin crust over a large metallic core. Thin sheets may wrinkle in universal ways. Europa has linear features, Enceladus has "Tiger" stripes, and Mercury has lobate scarps. NS may have analogous features. The innermost inner core of the Earth is anisotropic with a shear modulus that depends on direction. If NS crust material is also anisotropic this will produce an ellipticity, when the crust is stressed, that grows with spin frequency. This yields a braking index (log derivative of spin down rate assuming only GW spin down) very different from $n=5$ and could explain the maximum spin observed for neutron stars and a possible minimum ellipticity of millisecond pulsars.
J. A. Morales, C. J. Horowitz
2023-09-09T18:01:34Z
http://arxiv.org/abs/2309.04855v2
# Anisotropic neutron star crust, solar system mountains, and gravitational waves ###### Abstract "Mountains" or non-axisymmetric deformations of rotating neutron stars (NS) efficiently radiate gravitational waves (GW). We consider analogies between NS mountains and surface features of solar system bodies. Both NS and moons such as Europa or Enceladus have thin crusts over deep oceans while Mercury has a thin crust over a large metallic core. Thin sheets may wrinkle in universal ways. Europa has linear features, Enceladus has "Tiger" stripes, and Mercury has lobate scarps. NS may have analogous features. The innermost inner core of the Earth is anisotropic with a shear modulus that depends on direction. If NS crust material is also anisotropic this will produce an ellipticity, when the crust is stressed, that grows with spin frequency. This yields a breaking index (log derivative of spin down rate) very different from \(n=5\) and could explain the maximum spin observed for neutron stars and a possible minimum ellipticity of millisecond pulsars. The opening of the gravitational wave (GW) sky is an historic time. We have observed GW from black hole [1] and neutron star [2] mergers. Galileo opened the electromagnetic sky and its extraordinary riches. The GW sky, no doubt, contains additional very exciting signals. Galileo observed mountains on the Moon. Ongoing searches for continuous GW from "mountains" (large scale deformations) on rotating neutron stars have not yet detected signals. Targeted searches have focused on known pulsars, with known spin frequency and spin down parameters [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. On the other hand, directed searches have focused on locations in the sky that are known or suspected to harbor a neutron star, without prior knowledge of neither the frequency nor any spin-down parameter [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. Finally, all-sky searches have focused on searching for unknown sources at unknown locations [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. These various types of searches are improving. Next generation GW observatories Cosmic Explorer and Einstein Telescope should extend these searches to hundreds to thousands of times more neutron stars [50; 51]. Neutron stars, like many solar system bodies, have solid crusts. "Mountains", or non-axisymmetric deformations of the crust, radiate gravitational waves as the star rotates [52]. The amplitude \(h_{0}\) of gravitational wave radiation from a star a distance \(d\) away, rotating with angular frequency \(\omega\) and moment of inertia \(I\) is [53], \[h_{0}=16\pi G\frac{I}{d}\omega^{2}\epsilon\,. \tag{1}\] Here the important unknown is the shape of the star or ellipticity \(\epsilon\) defined as the fractional difference in the star's principle moments of inertia, \[\epsilon=(I_{1}-I_{2})/I_{3}\,. \tag{2}\] Large scale deformations in the crust, or mountains, can give rise to a non-zero \(\epsilon\). An important first step is to calculate the maximum ellipticity that the crust can support. This involves simulating the breaking strain of neutron star crust material and then determining the maximum ellipticity the crust material can support against the star's gravity. 
Molecular dynamics simulations, including the effects of impurities, defects, and grain boundaries, find that the breaking strain of the crust is large, of order 0.1, because the crust is under great pressure that prevents the formation of voids or fractures [54; 55; 56]. Given this breaking strain, Ushomirsky et al. [52] have calculated the maximum ellipticity in an intuitive formalism. They assume the crust can be strained near its breaking strain everywhere and write the maximum deformation the crust can support as a simple integral of the crust breaking stress divided by the local gravitational acceleration. This yields a maximum ellipticitiy \(\epsilon_{max}\) of a few \(\times 10^{-6}\). Note that Gittins et al., using a simplified external force to deform the star, claim a smaller \(\epsilon_{max}\)[57]. However, using Gittins et al. formalism with an improved external force, we find a larger \(\epsilon_{max}\) consistent with Ushimsky et al. [58]. We therefore assume \(\epsilon_{max}\approx\mathrm{few}\times 10^{-6}\). This value for \(\epsilon_{max}\) can be compared to the smallest observed upper limit, for a rapidly spinning nearby pulsar, \(\epsilon_{min}\approx\mathrm{few}\times 10^{-9}\)[59]. This gives a dynamic range of \(\epsilon_{max}/\epsilon_{min}\approx 1000\). We can detect mountains 1000 times smaller than the maximum crust mountain. Unfortunately, we do not know the actual size of neutron star mountains and \(\epsilon\) for particular stars. Electromagnetic observations of surface features are very limited. For example rotational phase resolved X-ray spectroscopy has mapped the shape of hot spots on some pulsars [60]. However, this thermal information does not directly provide elevations or mass distributions. Furthermore, mountain building mechanisms may be complex and depend on poorly known material properties. For example, viscoelastic creep, how a stressed elastic medium relaxes with time, may be important for the lifetime of neutron star mountains [55]. For a given postulated mechanism, for example temperature dependent electron capture on accreting stars [52], one can estimate the resulting ellipticity. However, neutron star crusts may be very rich physical systems that can involve many possible deformation mechanisms. We consider analogies between neutron star mountains and surface features of solar system bodies for two reasons. First, solar system observations may suggest particular mountain building mechanisms that could produce interesting \(\epsilon\) values for neutron stars and lead to detectable gravitational wave radiation. Second, the great diversity of solar system bodies suggests that neutron star crusts may also be diverse. Although the analogy between neutron stars and solar system bodies is incomplete, we have unique observations of solar system surface features. This provides "ground truth" for complex mountain building physics. We consider a large range of solar system planets and moons starting from very general considerations and proceed to more specialized observations and mountain building mechanisms. Perhaps the most basic observation is that solar system moons are extraordinarily diverse. This was revealed with the first images of the Galilean moons of Jupiter from the Pioneer and Voyager spacecraft. "The satellite surfaces display dramatic differences including extensive active volcanism on Io, complex tectonism on Ganymede and possibly Europa, and flattened remnants of enormous impact features on Callisto" [61]. 
Not only are these four moons very different from each other, none is similar to our Moon. If neutron stars are also diverse, this could be promising for gravitational wave searches. Indeed, we know of several different classes of neutron stars such as pulsars, millisecond pulsars, or magnetars. Even if some stars are symmetric with small ellipticities, others may be very different and have larger ellipticities. Many solar system bodies have large scale asymmetries. For example the near and far sides of the Moon are very different. Mars is strikingly asymmetric. Not only is the low smooth northern hemisphere quite different from the high cratered southern hemisphere but great volcanoes and high plateaus make the East very different from the West [62]. Iapetus, a moon of Saturn, is extremely asymmetric with a very dark leading hemisphere and an extraordinarily bright trailing hemisphere [63]. These several bodies, with large observed asymmetries, provide at least moral support for there also being asymmetric neutron stars. Mims, another moon of Saturn, has a very large crater Herschel [64]. This single feature, by itself, creates a significant ellipticity. Single catastrophic events on neutron stars could likewise produce large asymmetries. For example, an event that melts a significant fraction of the crust could leave a large "scar" when the crust re-freezes and produce a non-zero ellipticity. Leptodermous kosmos is a possible Greek translation of thin-skinned worlds. Neutron stars have a thin crust, approximately 1 km thick, over a deep liquid core. There are a number of thin-skinned moons in the solar system. Both Europa and yet another moon of Saturn Enceladus have thin ice crusts over deep oceans. These moons have characteristic linear surface features. Indeed the lines on Enceladus look like "Tiger stripes". Neutron stars, with their thin crusts, may have analogous linear surface features. Accretion can spin up the equatorial bulge of a neutron star and put the crust under tension while EM or GW radiation can spin down the bulge and put the crust under compression. Thin sheets may wrinkle in universal ways. Examples of wrinkling under tension include hanging drapes [65], stretched thin sheets [66], or a water drop on a thin sheet [67]. Compressional examples include wrinkling from thermal contraction mismatch [68] or growth induced wrinkling in leafs and flowers [69]. The planet Mercury has a thin silicate crust over a large metallic core. Lobate scarps on Mercury are bow shaped ridges that can be hundreds of kilometers long. These are the most prominent tectonic features on the planet with a few hundred to several thousand meters of vertical relief [70; 71; 72]. The formation of these features is thought to involve the thermal contraction of the core leading to compressional wrinkling of the thin crust [73]. Neutron stars that are significantly spun down may have lobate scarp like wrinkles and these could contribute to a nonzero ellipticity. Recent observations of seismic waves reverberating through the Earth's center find an anisotropic innermost inner core [74]. Here the sound velocity of innermost inner core material is observed to depend on direction. Don Anderson wrote "Crystals are anisotropic and tend to be oriented by sedimentation, freezing, recrystallization, deformation, and flow. Therefore we expect the solid portions of the earth to be anisotropic to the propagation of seismic waves and material properties [75]." 
We postulate that neutron star crust may also be anisotropic. This provides a new way to break axial symmetry and generate a non-zero \(\epsilon\). Single crystals are anisotropic. Neutron star crust is believed to form a body centered cubic (bcc) lattice. A bcc lattice has a small shear modulus for compressing one axis of the lattice (and expanding the other two axis by half as much so as to conserve the volume). In addition there is an \(\approx 8\) times larger shear modulus for distorting the square lattice into a rhombus [76]. Thus a single bcc crystal has a large anisotropy and the velocity of sound depends strongly on direction [77]. In addition, complex nuclear pasta phases are expected over an approximately 100 m region between the crust and the core [78; 79]. This region can be important because it is the densest part of the crust and may contain half of the crust's mass. Some pasta shapes, such as Lasagna, are strongly anisotropic [80]. However, one typically assumes macroscopic regions of a neutron star involve large numbers of micro-crystals (or domains) and each domain has a random orientation. As a result almost all calculations assume an angle-averaged shear modulus [81], see also [80], where the velocity of sound is independent of direction. We now consider that the micro-crystals may be partially aligned due to re-crystallization or some other mechanism. For example, material on an accreting neutron star is constantly being both crystallized as new material is added and melted as material is buried to higher densities and dissolves into the core. This may create one or more regions, that are not negligibly small compared to the size of the star, where crystals are at least partially aligned. Each region is assumed to have a single orientation determined for example by the random orientation of a first seed crystal. Alternatively, pasta may form (partially) aligned with respect to the magnetic field with spaghetti forming along B or B in the plane of lasagna sheets [80]. The shear modulus will be anisotropic by an amount that depends on the amount of alignment of the micro-crystals. We obtain a first estimate of the ellipticity produced by an anisotropic crust with a simple two dimensional calculation. This order of magnitude result will suffice, given the large uncertainty in the anisotropy of the crust. We replace a hollow sphere by a hollow cylinder that is assumed thin in the z direction (out of the plane in Fig. 1). We consider a thin disc and treat the anisotropy as a first order perturbation to the corresponding axially symmetric constitutive relationships [82]. A thick disk is expected to yield qualitatively similar results, however the calculation is somewhat more involved. Figure 1 shows the equatorial plane of a rotating neutron star. Elastic perturbations of the stress tensor \(\sigma_{ij}\) are related to the strain tensor \(\epsilon_{ij}\) by the elasticity of the medium, \[\sigma_{xx}=\frac{E}{1-\nu^{2}}(1+\langle\phi\rangle)\epsilon_{xx}+\frac{\nu E }{1-\nu^{2}}\epsilon_{yy}, \tag{3}\] \[\sigma_{yy}=\frac{E}{1-\nu^{2}}\epsilon_{yy}+\frac{\nu E}{1-\nu^{2}}\epsilon_ {xx}, \tag{4}\] and \(\sigma_{xy}=E\,\epsilon_{xy}/(1+\nu)\), with \(E\) the Young's modulus and \(\nu\) the Poisson ratio. The degree of alignment of micro-crystals in the crust is described by the small parameter \(\langle\phi\rangle\). If \(\langle\phi\rangle=0\) the medium is isotropic. 
As an example, we consider a partially aligned medium with the symmetries of the Lasagna phase of nuclear pasta [80]. The \(X\) axis in Fig. 1 is normal to the Lasagna planes. We assume this direction arose from spontaneous symmetry breaking, for example the random orientation of a seed micro-crystal. We start with a symmetric medium \(\langle\phi\rangle=0\) and then calculate first corrections from \(\langle\phi\rangle\neq 0\). We note the shear modulus is \(\mu=E/[2(1+\nu)]\) and bulk modulus is \(K=E/[3(1-2\nu)]\). We define \(\langle\phi\rangle\) to describe the degree of alignment with respect to the shear (or Young's) modulus. The larger bulk modulus (since \(\nu\) is near 0.5) is dominated by the isotropic electron pressure. This symmetric pressure will tend to reduce (but not eliminate) the ellipticity of the star, see discussion of Eq. 12 below. We assume the crust froze while the star was rotating with initial angular frequency \(\omega_{0}\). If the star is then spun up or spun down to a new angular frequency \(\omega\), stresses will be induced according to the equation of motion, \[\frac{\partial\sigma_{rr}}{\partial r}+\frac{1}{r}(\sigma_{rr}-\sigma_{\theta \theta})=-\rho r(\omega^{2}-\omega_{0}^{2}), \tag{5}\] with \(\rho\) the average crust density. The radial stress is [82], \[\sigma_{rr}(r)=\frac{3+\nu}{8}\rho(\omega^{2}-\omega_{0}^{2})[R^{2}+R_{0}^{2} -r^{2}-\frac{R^{2}R_{0}^{2}}{r^{2}}]\,, \tag{6}\] and satisfies boundary conditions \(\sigma_{rr}(R_{0})=\sigma_{rr}(R)=0\) at the inner \(R_{0}\) and outer \(R\) radii of the crust. The angular stress \(\sigma_{\theta\theta}\) is [82], \[\sigma_{\theta\theta}(r)=\frac{3+\nu}{8}\rho(\omega^{2}-\omega_{0}^{2})[R^{2}+ R_{0}^{2}-\frac{1+3\nu}{3+\nu}r^{2}+\frac{R^{2}R_{0}^{2}}{r^{2}}]\,, \tag{7}\] and does not vanish at \(r=R_{0}\) or \(R\). We now consider \(\langle\phi\rangle\neq 0\). We rewrite Eqs. 3,4 in polar coordinates, invert to obtain strain \(\epsilon_{ij}\) as a function of stress, and expand to lowest order in \(\langle\phi\rangle\). We provide a first estimate of the change in strain \(\delta\epsilon_{ij}\) with \(\langle\phi\rangle\) by using the unperturbed stresses from Eqs 6 and 7 to get, \[\begin{bmatrix}\delta\epsilon_{rr}\\ \delta\epsilon_{\theta\theta}\end{bmatrix}\approx-\frac{\langle\phi\rangle}{E }\begin{bmatrix}\frac{(C^{2}-S^{2}\nu)^{2}}{1-\nu^{2}}&\frac{C^{2}S^{2}(1+\nu) }{1-\nu^{2}}\\ \frac{C^{2}S^{2}(1+\nu)}{1-\nu}&\frac{(C^{2}-S^{2})^{2}}{1-\nu^{2}}\end{bmatrix} \begin{bmatrix}\sigma_{rr}\\ \delta\theta\end{bmatrix} \tag{8}\] Figure 1: Cut through the equatorial plane of a rotating star. The crust extends from \(R_{0}\) to \(R\) and is slightly anisotropic in the \(X\) direction. with \(S=\sin\theta\) and \(C=\cos\theta\). We assume a thin crust \(R-R_{0}\ll R\) where \(\sigma_{rr}\ll\sigma_{\theta\theta}\) and \(\sigma_{\theta\theta}\) is approximately independent of \(r\). This gives \(\delta\epsilon_{rr}\) and \(\delta\epsilon_{\theta\theta}\) that are also \(\approx\) independent of \(r\). The radial displacement is written \(u_{r}=u_{r}^{0}+\delta u_{r}\) where \(u_{r}^{0}\) is the displacement in the isotropic case (\(\phi\)) = 0. Likewise \(u_{\theta}=u_{\theta}^{0}+\delta u_{\theta}\). The perturbation \(\delta u_{r}\approx(r-R_{0})\delta\epsilon_{rr}(\theta)\) follows from integrating \(\partial\delta u_{r}/\partial r=\delta\epsilon_{rr}\). 
The angular displacement follows by integrating \(\delta\epsilon_{\theta\theta}=(\partial\delta u_{\theta}/\partial\theta+ \delta u_{r})/r\) to get \(\delta u_{\theta}\approx\int d\theta^{\prime}[r\delta\epsilon_{\theta\theta }-(r-R_{0})\delta\epsilon_{rr}]+C_{\theta}\) or, \[\delta u_{\theta}(\theta)\approx R\int_{0}^{\theta}d\theta^{\prime}\delta \epsilon_{\theta\theta}+C_{\theta}\,, \tag{9}\] assuming \(r\gg r-R_{0}\). Here the integration constant \(C_{\theta}\) is independent of \(\theta\) and does not contribute to moments of inertia. We now calculate the difference in moments of inertia in Eq. 2. For simplicity we work in two dimensions and actually calculate moments of inertia of a deformed hoop. This provides an order of magnitude estimate for the ellipticity. A mass \(\rho({\bf r})\) that was initially at \({\bf r}\) is now at \({\bf r}^{\prime}=(r+u_{r}^{0}+\delta u_{r})\hat{r}+(u_{\theta}^{0}+\delta u_{ \theta})\hat{\theta}+u_{z}\hat{z}\). The difference in moments of inertia is, \[I_{x}-I_{y}=\int d^{3}r\rho(r)[({\bf r}^{\prime}\cdot\hat{y})^{2}-({\bf r}^{ \prime}\cdot\hat{x})^{2}]\,. \tag{10}\] Working to first order in the small displacement \(\delta u_{\theta}(\theta)\), this becomes \(I_{x}-I_{y}\approx 4\int d^{3}r\rho(r)r\delta u_{\theta}(\theta)\sin\theta\cos\theta\). We write this as \(I_{x}-I_{y}\approx m_{cr}R^{2}A\) where \(m_{cr}=\int d^{3}r\rho(r)\) is the mass of the crust and the important angular integral is \[A=\frac{4}{2\pi}\int_{0}^{2\pi}d\theta\sin\theta\cos\theta\int_{0}^{\theta}d \theta^{\prime}\delta\epsilon_{\theta\theta}(\theta^{\prime})\,. \tag{11}\] Finally, dividing by the moment of inertia \(I\approx\frac{2}{5}MR^{2}\), of a star of mass \(M\), gives the ellipticity \(\epsilon\approx\frac{5}{2}\frac{m_{cr}}{M}A\). Using \(\sigma_{\theta\theta}\approx\rho R^{2}(\omega^{2}-\omega_{0}^{2})\) from Eq. 7, \(\delta\epsilon_{\theta\theta}\) from Eq. 8, and evaluating the integral gives \[A=\frac{5-2\nu+\nu^{2}}{8E(1-\nu^{2})}\rho R^{2}\langle\phi\rangle(\omega^{2} -\omega_{0}^{2})\,. \tag{12}\] Before deriving our final result, we first discuss how a thick disk, where the strain is in the xy plane, might differ from the thin disk (with the stress in the xy plane) that we have calculated. We expect a thick disk to have a similar result to Eq. 12 except that the Young's modulus \(E\) would be replaced by the (larger) bulk modulus \(K\). As a conservative approximation we replace either \(E\) or \(K\) with \(\rho R^{2}\omega_{K}^{2}\) where \(\omega_{K}\) is the Keplarian angular frequency for breakup. This gives our main result for the ellipticity of a star with anisotropic material in its crust, \[\epsilon\approx\frac{m_{cr}}{M}\langle\phi\rangle\frac{\Omega^{2}-\Omega_{0} ^{2}}{\Omega_{K}^{2}}\,. \tag{13}\] Note that we have rewritten the angular frequencies \(\omega\), \(\omega_{0}\), and \(\omega_{K}\) in terms of rotational frequencies \(\Omega=\omega/(2\pi)\) etc. Here \(m_{cr}/M\approx 10^{-2}\) and \(\Omega_{K}\approx 1400\) Hz depending on the equation of state. We see that \(\epsilon\) is a strong function of the rotational frequency \(\Omega\) and the initial frequency \(\Omega_{0}\) (when the crust froze). Unfortunately, we do not know the degree of anisotropy of the neutron star crust \(\langle\phi\rangle\). Even a small average can lead to a significant \(\epsilon\) and produce observable gravitational waves. As an example we consider the innermost inner core of the Earth because we have no neutron star observations. 
There is a few percent anisotropy in the Earth that extends over the innermost inner core [74]. This region has a radius of 300 km and contains about \(3\times 10^{-4}\) of the Earth's mass. If we ignore anisotropies in the rest of the Earth, this corresponds to \(\langle\phi\rangle\approx 10^{-5}\) when averaged over the Earth's total mass. Of course this value is not directly relevant for neutron stars. Nevertheless, if material in a neutron star has an anisotropy of \(\langle\phi\rangle\approx 10^{-5}\) when averaged over the crust mass, this would yield \(\epsilon\approx 10^{-7}(\Omega^{2}-\Omega_{0}^{2})/\Omega_{K}^{2}\), or \(\epsilon\approx 10^{-8}\) for a rapidly rotating object with \((\Omega^{2}-\Omega_{0}^{2})/\Omega_{K}^{2}\approx 0.1\). Gravitational waves from a nearby star could be detectable for this \(\epsilon\) value. The braking index \(n=d\ln\dot{\omega}/d\ln\omega\) describes how the spin-down rate \(\dot{\omega}\) depends on rotational frequency. For simplicity we neglect spin down from electromagnetic radiation. Spin down from gravitational wave radiation alone, with a frequency-independent \(\epsilon\), gives \(n=5\). However, the strong spin dependence of \(\epsilon\) in Eq. 13 leads to \(n=5+4\Omega^{2}/(\Omega^{2}-\Omega_{0}^{2})\). This is very different from 5, as shown in Fig. 2. In the limit \(\Omega\gg\Omega_{0}\), \(n=9\), and \(n\) changes rapidly for \(\Omega\) near \(\Omega_{0}\). Neutron stars that have been spun up since crust formation have \(n>9\), while stars that have spun down since crust formation have \(n<5\). Finally, \(n=0\) at \(\Omega=\sqrt{5/9}\,\Omega_{0}\). For constant \(\epsilon\), \(|\dot{\omega}|\) decreases with decreasing \(\Omega\). However, as \(\Omega\) decreases \(\epsilon\) increases (given \(\Omega<\Omega_{0}\)), and this increases \(|\dot{\omega}|\). At \(\Omega=\sqrt{5/9}\,\Omega_{0}\) the two effects cancel and \(n=0\). Figure 2: Braking index \(n\) (solid black curve) and ellipticity \(\epsilon\) (dashed red curve) vs rotational frequency \(\Omega\), assuming the crust froze while the star was rotating at \(\Omega_{0}=300\) Hz. Torque from gravitational wave radiation could balance the spin up from accretion and limit neutron star spins [52]. Using \(\epsilon\) from Eq. 13, this torque \(N_{gw}\) rises very rapidly with \(\Omega\), so \(N_{gw}\propto\Omega^{5}(\Omega^{2}-\Omega_{0}^{2})^{2}\langle\phi\rangle^{2}\). If \(N_{gw}\) balances the accretion torque \(N_{a}\approx\dot{M}(GMR)^{1/2}\), then the equilibrium spin, \[\Omega_{eq}\approx 300\;{\rm Hz}\,(\frac{\dot{M}}{10^{-8}{\rm M}_{\odot}\,{\rm yr}^{-1}})^{1/9}(\frac{10^{-4}}{\langle\phi\rangle})^{2/9}\,, \tag{14}\] could agree with observed values. Note that \(\Omega_{eq}\) depends only very weakly on the accretion rate \(\dot{M}\) or on \(\langle\phi\rangle\). Here we assume \(\Omega_{eq}\gg\Omega_{0}\), \(M=1.4M_{\odot}\), and \(R\approx 10\) km. Because our ellipticity rises strongly with \(\Omega\), this torque balance can be achieved with a modest \(\langle\phi\rangle\approx 10^{-4}\). Furthermore, our mechanism, with a somewhat smaller \(\langle\phi\rangle\approx 10^{-6}\), could explain a possible minimum ellipticity \(\epsilon\approx 10^{-9}\) suggested by an observed minimum spin-down rate for millisecond pulsars [83]. In conclusion, "mountains" or non-axisymmetric deformations of rotating neutron stars (NS) efficiently radiate gravitational waves (GW). There are many ongoing searches for continuous GW from such stars.
Present detectors are sensitive, in the best cases, to mountains that are 1000 times smaller than the maximum mountain that the crust can support. Unfortunately, we do not know the size of NS mountains. We consider analogies between NS mountains and surface features of solar system (SS) bodies. Here SS observations can provide "ground truth" for complex mountain building physics. Both NS and moons such as Europa or Enceladus have thin crusts over deep oceans while Mercury has a thin crust over a large metallic core. Thin sheets may wrinkle in universal ways. Europa has linear features, Enceladus has "Tiger" stripes, and Mercury has lobate scarps. NS may have analogous features. The innermost inner core of the Earth is anisotropic with a shear modulus that depends on direction. If NS crust material is also anisotropic this could produce a significant ellipticity that grows rapidly with increasing rotational frequency. Gravitational wave emission torques from this ellipticity may limit the spin rate of neutron stars. ###### Acknowledgements. Matt Caplan, Cole Miller, Jing Ming, and Ruedi Widmer-Schmidrig are thanked for helpful comments. This work is partially supported by the US Department of Energy grant DE-FG02-87ER40365 and National Science Foundation grant PHY-2116686.
2309.12789
Insights into the properties of GRBs with TeV emission
This study investigates the environments and characteristics of Gamma-Ray Bursts (GRBs) exhibiting very high energy (VHE) emission. Recent detections of VHE emission, up to TeV energies, challenge synchrotron-only emission models and particle acceleration concepts in GRBs. Until now, only a handful of GRBs have been detected in the VHE range. We compare the number densities of the circumburst medium of VHE-detected GRBs to check if the environment impacts the VHE emission. This shows that these GRBs have environments similar to the larger population of GRBs. We employ machine learning algorithms to create two-dimensional embeddings of GRB prompt emission light curves from the {\it Swift}-BAT catalog. VHE-detected GRBs are located across the map, indicating that VHE emission does not favour any particular cluster. These findings indicate that VHE-detected GRBs do not show any peculiar characteristics other than the observational detection of VHE photons. Future detections will increase the sample size required for a rigorous understanding of the origin of VHE emission in GRBs.
Kuntal Misra, Dimple, Ankur Ghosh
2023-09-22T11:00:09Z
http://arxiv.org/abs/2309.12789v1
# Insights into the properties of GRBs with TeV emission ###### Abstract This study investigates the environments and characteristics of Gamma-Ray Bursts (GRBs) exhibiting very high energy (VHE) emission. Recent detections of VHE emission, up to TeV energies, challenge synchrotron-only emission models and particle acceleration concepts in GRBs. Until now, only a handful of GRBs have been detected in the VHE range. We compare the number densities of the circumburst medium of VHE-detected GRBs to check if the environment impacts the VHE emission. This shows that these GRBs have environments similar to the larger population of GRBs. We employ machine learning algorithms to create two-dimensional embeddings of GRB prompt emission light curves from the _Swift_-BAT catalog. VHE-detected GRBs are located across the map, indicating that VHE emission does not favour any particular cluster. These findings indicate that VHE-detected GRBs do not show any peculiar characteristics other than the observational detection of VHE photons. Future detections will increase the sample size required for a rigorous understanding of the origin of VHE emission in GRBs. GRBs, VHE emission, emission mechanisms, environments, Machine Learning ## 1 Introduction Gamma-Ray Bursts (GRBs) are brief, luminous flashes of gamma-rays arising from extreme cataclysmic events in the universe. Though GRBs have been studied extensively for decades, the emission mechanisms powering them remain uncertain. The relativistic fireball shock model is widely accepted, where a relativistic outflow dissipates its kinetic energy through internal or external shocks, producing GRB emission (Piran, 2005; Kumar and Zhang, 2015). But details of the microphysical dissipation and radiation processes are still debated. Suggested emission mechanisms invoke dissipation through internal or external shocks. Broadband spectra and variability of GRB prompt emission indicate that non-thermal radiation mechanisms involving relativistic particles play a crucial role. Recently, TeV emission has been detected from GRB afterglows using the High Energy Stereoscopic System (H.E.S.S.) and Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescopes. Observations reveal the complex nature of the afterglow, which does not follow expected relations. In the case of GRB 190114C, the model required the shock microphysical parameters to evolve with time to explain the afterglow evolution (Misra et al., 2021). Detection of VHE emission from some GRBs, extending into the TeV range, presents novel challenges for GRB models and particle acceleration concepts. Existing afterglow models cannot explain the production of TeV photons. Proposed scenarios for the VHE emission include synchrotron self-Compton, proton synchrotron radiation, or the decay of secondary particles from photohadronic interactions (Zhang and Meszaros, 2001; Beniamini et al., 2015; Vietri, 1997; Bottcher and Dermer, 1998). It remains unclear if these mechanisms can fully explain VHE-detected GRBs and if they possess exceptionally distinct properties. Comparing the environments and cluster structure of GRBs with and without VHE emission could determine what makes these VHE bursts unique. In this work, we study whether GRBs that exhibit VHE emission have distinguishing characteristics by probing their environments and examining similarities between their light curves using machine learning techniques. ## 2 Are VHE-detected GRBs similar? 
Thus far, VHE emission has been reported in six GRBs (Fraija et al., 2019; MAGIC Collaboration et al., 2019; H. E. S. S. Collaboration et al., 2021; Blanch et al., 2020, 2020, 2022). The redshift \(z\), the \(T_{90}\) duration in sec, isotropic equivalent energy (\(E_{iso}\)) in erg, the maximum energy of the photon detected, and the facility which made the detection are listed in Table 1. Understanding the physical mechanisms that generate VHE emission in these sources can provide valuable insights into the phenomenon of this emission from GRBs in general. A key question is whether GRBs that exhibit emission at such extreme energies share characteristics that distinguish them from the overall GRB population. To investigate this, we studied and compared the properties and environments of the VHE-detected GRBs summarised in Table 1. Further, machine learning algorithms can examine the fine structures in the GRB light curves and cluster them together based on similarities and dissimilarities between them. We analysed the locations of these GRBs in two-dimensional embeddings created using machine learning algorithms applied to _Swift_-BAT catalog data1(Lien et al., 2016). The locations of the GRBs in these embeddings allow us to determine if GRBs with VHE emission tend to lie within the same cluster. Any preferential clustering would suggest common underlying factors that can enable their VHE emission (Dimple et al., 2023). Footnote 1: [https://swift.gsfc.nasa.gov/results/batgrbcat/](https://swift.gsfc.nasa.gov/results/batgrbcat/) ### Environments The densities of the circumburst medium in GRB environments can vary greatly in the VHE regime. GRBs 180720B, 190114C and 221009A have circumburst medium densities around or below 0.1 cm\({}^{-3}\)(Guarini et al., 2023). Tentative interpretation of radio, optical, and X-ray data suggest even lower circumburst medium densities of approximately 0.1 cm\({}^{-3}\). In contrast, GRB 201015A has a mild-relativistic jet surrounded by a very dense medium (n=1202.3 cm\({}^{-3}\), Zhang et al. 2023), while GRB 201216C has an ultra-relativistic jet surrounded by a moderately dense medium (n=5 cm\({}^{-3}\), Zhang et al. 2023). GRB 190829A also has **a** moderate environmental density (n=15 cm\({}^{-3}\), Zhang et al. 2021). Figure 1 shows the relation between the number density and kinetic energy of GRBs with the VHE detected GRBs highlighted with different colors. We notice that these GRBs follow the general trend of the GRB population, do not exhibit any peculiar behavior, and show no preference for a specific environment. This suggests that VHE GRBs behave similarly to other GRBs regarding their number density and kinetic energy relationship. These findings emphasise the need for further observations and analysis to comprehend the properties of their environment and the implications for understanding these captivating astrophysical phenomena. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline GRB & redshift & \(T_{90}\) & \(E_{\rm iso}\) & Maximum photon & TeV detection & References \\ & (z) & (s) & (erg) & (TeV) & & \\ \hline GRB 180720B & 0.654 & \(51.1\pm 3.0\) & \(6.82^{+0.24}_{-0.22}\times 10^{53}\) & 0.44 & H.E.S.S. & Vreeswijk et al. (2018); Fraija et al. (2019) \\ GRB 190114C & 0.425 & \(\sim 116\) & \(3.5\pm 0.1\times 10^{53}\) & 1 & MAGIC & MAGIC Collaboration et al. (2019) \\ GRB 190829A & 0.078 & \(57\pm 3\) & \(\sim 2.0\times 10^{40}\) & 3.3 & H.E.S.S. & H. E. S. S. Collaboration et al. 
(2021) \\ GRB 201015A & 0.426 & \(9.78\pm 3.47\) & \(\sim 3.86\times 10^{51}\) & - & MAGIC & Blanch et al. (2020a) \\ GRB 201216C & 1.1 & \(29.95\pm 0.57\) & \(\sim 6.32\times 10^{53}\) & - & MAGIC & Blanch et al. (2020b) \\ GRB 221009A\({}^{\dagger}\) & 0.151 & \(1068.40\pm 13.38\) & \(\sim 1.2\times 10^{55}\) & 251 & Capet-2 & Dzhappuev et al. (2022) \\ \hline \end{tabular} \({}^{\dagger}\) Also detected by MAGIC \end{table} Table 1: Properties of the TeV detected GRBs Figure 1: The relation between the number density and kinetic energy of GRBs is shown in this plot, where the VHE-detected GRBs are indicated with different colors. These GRBs are seen to follow the general trend along with the larger GRB population. ### Locations in two-dimensional embeddings Studying populations of GRBs detected by an instrument and analysing their corresponding prompt emission light curves may help identify any structures within GRB populations. This could provide clues on different GRB progenitor classes. Analyzing large numbers of GRB light curves makes the analysis difficult due to high dimensionality. Machine learning algorithms like t-distributed Stochastic Neighbor Embedding (tSNE; van der Maaten and Hinton, 2008; van der Maaten, 2014) and Uniform Manifold Approximation and Projection (UMAP; McInnes et al., 2018) with Principal Component Analysis (PCA; Hotelling, 1933) initialisation can be used to reduce the dimensionality of the data as well as to visualise the local and global structures within the data. tSNE is a nonlinear dimensionality reduction method that visualises data by embedding high-dimensional neighborhoods stochastically. It minimises the differences between probability distributions in high and lower-dimensional spaces using Kullback-Leibler divergence. On the other hand, UMAP utilises concepts from topology to construct a high-dimensional representation of the data using a similarity matrix. It then leverages these topological relationships to find a low-dimensional projection that preserves distances between points, effectively capturing both local and global structures in the data. We employ both tSNE and UMAP with PCA initialisation (PCA-tSNE/PCA-UMAP) to cluster GRB prompt emission light curves. For this, we use the light curves from the Swift-BAT catalog 2022 available in four energy bins (15-25, 25-50, 50-100, and 100-350 keV) with a temporal resolution of 64 ms. This catalog includes light curves of 1525 GRBs, detected between December 17, 2004, and July 15, 2022. The catalog for 2022 consists of light curves of 1525 GRBs detected between December 17, 2004, and July 15, 2022. The data show variations in burst duration, fluence, and start times that can impact machine learning algorithms. Therefore, we standardised the data, which involves normalising the light curves with fluence, Figure 2: The location of VHE-detected GRBs in two-dimensional embedding obtained using PCA-UMAP over the _Swift_-BAT sample. The different colors indicate the clusters identified by AutoGMM. shifting them to the same start time, padding them with zeros to match length, and performing the Discrete-time Fourier Transform (DTFT) to retain time delay information between light curves. We could standardise the data only for 1450 GRBs since fluence information was sometimes missing. Next, we performed PCA on the standardised dataset and estimated the number of PCs, preserving around 99% of the variance, and utilised those for further dimensionality reduction using tSNE and UMAP algorithms. 
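A schematic sketch of this standardisation and dimensionality-reduction pipeline is given below. It assumes scikit-learn and umap-learn, uses the hyperparameter values quoted below, and its toy normalisation and array layout are illustrative rather than the exact procedure of Dimple et al. (2023).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap  # umap-learn (assumed dependency)

def standardise(lc, fluence, t0_index, target_len):
    """Illustrative standardisation: fluence-normalise, shift to a common start, zero-pad, Fourier transform."""
    lc = np.asarray(lc, dtype=float) / fluence                        # normalise by fluence
    lc = lc[t0_index:]                                                # shift to a common start time
    lc = np.pad(lc, (0, max(0, target_len - len(lc))))[:target_len]   # zero-pad to a fixed length
    spec = np.fft.rfft(lc)                                            # discrete Fourier transform
    return np.concatenate([spec.real, spec.imag])                     # keep phase (time-delay) information

def embed(light_curves, target_len=4096):
    """light_curves: list of (counts, fluence, start_index) for the 64 ms BAT light curves."""
    X = np.vstack([standardise(lc, f, i0, target_len) for lc, f, i0 in light_curves])
    X_pca = PCA(n_components=0.99).fit_transform(X)                   # keep ~99% of the variance
    emb_umap = umap.UMAP(n_neighbors=25, min_dist=0.01).fit_transform(X_pca)
    emb_tsne = TSNE(n_components=2, perplexity=25, init="pca").fit_transform(X_pca)
    return emb_umap, emb_tsne
```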
These algorithms were optimised by tuning their key hyperparameters, such as perplexity in tSNE and n_neighbors and min_dist in UMAP. We used a perplexity=25, n_neighbors=25, and min_dist=0.01, obtained after several iterations, to generate two-dimensional embeddings. The embeddings were further subjected to the Auto Gaussian Mixture Model (AutoGMM, Athey et al., 2019) algorithm to identify different clusters present. The detailed description can be found in Dimple et al. (2023). Figure 2 shows the two-dimensional embeddings obtained using PCA-UMAP with the locations of VHE-detected GRBs. Similar results are found with PCA-tSNE. The GRBs are located all over the map, giving indications that these GRBs do not cluster in a particular subgroup and do not bear similarities with each other. However, their spread in location all over the map hints that these could be distributed amongst the GRB population, with the only difference being their detection in very high energy regimes. ## 3 Summary In this work, we investigated the properties and environments of GRBs that exhibit VHE emission, a phenomenon that challenges current GRB models and particle acceleration concepts. We have compared the number density and kinetic energy of these GRBs with the overall GRB population and found no significant difference or any peculiar behavior. We have applied machine learning techniques tSNE and UMAP with PCA initialisation (PCA-tSNE/PCA-UMAP) to cluster GRB prompt emission light curves. We created two-dimensional embeddings of GRB prompt emission light curves from the _Swift_-BAT catalog and located the VHE-detected GRBs on these maps. We have found that these GRBs are distributed across the map and do not cluster in any particular subgroup, indicating that they do not share similarities among themselves or with any specific progenitor class. Our findings suggest that VHE-detected GRBs behave similarly to other GRBs in terms of their number density, kinetic energy, and light curve morphology and that any distinctive environmental or physical factors do not influence their VHE emission. A larger sample of VHE-detected GRBs is needed to confirm these results and to explore further the origin and mechanism of their high energy emission extending out to TeV bands. Future detections of VHE in GRBs with sensitive detectors will provide more data to advance our understanding of these GRBs. The authors thank the referee for providing constructive comments on the manuscript. The authors thank Prof. K. G. Arun and Prof. L. Resmi for their useful discussions. ## Further Information ### ORCID identifiers of the authors 0000-0003-1637-267X (Kuntal Misra) 0000-0001-9868-9042 (Dimple ) 0000-0003-2265-0381 (Ankur Ghosh) ### Author contributions All authors in this work have made significant contributions. ### Conflicts of interest The authors declare no conflict of interest.
2309.06924
Contrast-Phys+: Unsupervised and Weakly-supervised Video-based Remote Physiological Measurement via Spatiotemporal Contrast
Video-based remote physiological measurement utilizes facial videos to measure the blood volume change signal, which is also called remote photoplethysmography (rPPG). Supervised methods for rPPG measurements have been shown to achieve good performance. However, the drawback of these methods is that they require facial videos with ground truth (GT) physiological signals, which are often costly and difficult to obtain. In this paper, we propose Contrast-Phys+, a method that can be trained in both unsupervised and weakly-supervised settings. We employ a 3DCNN model to generate multiple spatiotemporal rPPG signals and incorporate prior knowledge of rPPG into a contrastive loss function. We further incorporate the GT signals into contrastive learning to adapt to partial or misaligned labels. The contrastive loss encourages rPPG/GT signals from the same video to be grouped together, while pushing those from different videos apart. We evaluate our methods on five publicly available datasets that include both RGB and Near-infrared videos. Contrast-Phys+ outperforms the state-of-the-art supervised methods, even when using partially available or misaligned GT signals, or no labels at all. Additionally, we highlight the advantages of our methods in terms of computational efficiency, noise robustness, and generalization. Our code is available at https://github.com/zhaodongsun/contrast-phys.
Zhaodong Sun, Xiaobai Li
2023-09-13T12:50:21Z
http://arxiv.org/abs/2309.06924v3
Contrast-Phys+: Unsupervised and Weakly-supervised Video-based Remote Physiological Measurement via Spatiotemporal Contrast ###### Abstract Video-based remote physiological measurement utilizes facial videos to measure the blood volume change signal, which is also called remote photoplethysmography (rPPG). Supervised methods for rPPG measurements have been shown to achieve good performance. However, the drawback of these methods is that they require facial videos with ground truth (GT) physiological signals, which are often costly and difficult to obtain. In this paper, we propose Contrast-Phys+, a method that can be trained in both unsupervised and weakly-supervised settings. We employ a 3DCNN model to generate multiple spatiotemporal rPPG signals and incorporate prior knowledge of rPPG into a contrastive loss function. We further incorporate the GT signals into contrastive learning to adapt to partial or misaligned labels. The contrastive loss encourages rPPG/GT signals from the same video to be grouped together, while pushing those from different videos apart. We evaluate our methods on five publicly available datasets that include both RGB and Near-infrared videos. Contrast-Phys+ outperforms the state-of-the-art supervised methods, even when using partially available or misaligned GT signals, or no labels at all. Additionally, we highlight the advantages of our methods in terms of computational efficiency, noise robustness, and generalization. Remote Photoplethysmography, Face Video, Unsupervised Learning, Weakly-supervised Learning, Semi-supervised Learning, Contrastive Learning ## 1 Introduction In the realm of traditional physiological measurement, skin-contact sensors are commonly employed to capture physiological signals. Examples of such sensors include contact photoplethysmography (PPG) and electrocardiography (ECG). These sensors enable the derivation of crucial physiological parameters such as heart rate (HR), respiration frequency (RF), and heart rate variability (HRV). However, the reliance on skin-contact sensors necessitates specialized biomedical equipment like pulse oximeters, which can lead to discomfort and skin irritation. An alternative approach is remote physiological measurement, which employs cameras to record facial videos for the measurement of remote photoplethysmography (rPPG). This technique harnesses the ability of cameras to capture subtle color changes in the human face, from which multiple physiological parameters including HR, RF, and HRV can be extracted [1]. Unlike traditional methods, video-based physiological measurement relies on readily available cameras rather than specialized biomedical sensors. This approach offers the advantage of not being constrained by physical proximity, rendering it particularly promising for applications in remote healthcare [2, 3, 4], emotion analysis [5, 6, 7], and facial recognition security [8, 9]. In earlier studies related to rPPG [1, 10, 11, 12], researchers devised handcrafted features to extract rPPG signals. Subsequently, several deep learning (DL)-based methods [13, 14, 15, 16, 17, 18, 19, 20, 21, 22] were introduced. These DL-based approaches utilize supervised techniques and diverse network architectures to measure rPPG signals. Under certain conditions, such as when head movements are present or the videos exhibit heterogeneity, DL-based methods tend to exhibit greater robustness compared to traditional handcrafted approaches. 
However, it's important to note that DL-based rPPG methods heavily rely on extensive datasets comprising face videos and ground truth (GT) physiological signals. Acquiring GT physiological signals, typically measured by contact sensors and synchronized with facial videos, can be a costly endeavor. Issues like missing GT signals or misalignment with facial videos during data collection are common challenges encountered in this context. Considering the cost and challenges associated with obtaining GT physiological signals, we propose an unsupervised and weakly-supervised method for rPPG measurement, particularly when dealing with data that lacks complete or high-quality labels. The unsupervised method can effectively process facial videos that lack GT signals, while the weakly-supervised method can be employed when dealing with data containing incomplete or low-quality labels, where GT signals may be missing or misaligned. In our approach, we leverage four key rPPG observations as foundational knowledge. 1) **rPPG spatial similarity**: rPPG signals obtained from different facial regions tend to exhibit similar power spectrum densities (PSDs). 2) **rPPG **temporal similarity**: Segments of rPPG data taken within short time intervals (e.g., two consecutive 5-second clips) typically display similar PSDs, as HR tends to transition smoothly in most cases. 3) **Cross-video rPPG dissimilarity**: PSDs of rPPG signals from different videos often exhibit variations. 4) **HR range constraint**: The HR typically falls within the range of 40 to 250 beats per minute (ppm). In our prior ECCV 2022 publication [23], we introduced Contrast-Phys, an unsupervised learning framework. The present work, referred to as Contrast-Phys+, represents an extension of our earlier research. This work contributes significantly in the following ways: * We propose Contrast-Phys+, a versatile model capable of adapting to diverse data conditions, including scenarios with no labels, partial labels, or misaligned labels. Importantly, Contrast-Phys+ operates effectively in both unsupervised and weakly-supervised settings. To the best of our knowledge, Contrast-Phys+ is the first work to train an rPPG model in both weakly-supervised and unsupervised settings. * We showcase the efficacy of Contrast-Phys+ in weakly-supervised scenarios, where some ground truth signals may be missing or lack synchronization. Remarkably, Contrast-Phys+ with missing labels exhibits performance that can surpass that of fully supervised methods employing complete label sets. Moreover, Contrast-Phys+ demonstrates significantly enhanced robustness when faced with ground truth signal desynchronization, outperforming other fully supervised methods. * We conduct extensive experiments and analyses pertaining to Contrast-Phys+. A comprehensive performance comparison is also offered, contrasting the capabilities of Contrast-Phys+ against recent state-of-the-art baselines. Additional experiments also demonstrate that Contrast-Phys+ can use unlabeled data to expand and diversify the training dataset for improved generalization. We also offer a thorough analysis of the reasons why Contrast-Phys+ can be effective in unsupervised and weakly-supervised scenarios. Besides, we present statistical analysis to validate the proposed rPPG observations and include detailed ablation studies to substantiate the effectiveness of Contrast-Phys+. 
## 2 Related Work ### _Video-Based Remote Physiological Measurement_ The concept of measuring remote photoplethysmography (rPPG) from facial videos via the green channel was initially introduced by Verkruysse et al. [10]. Subsequently, various handcrafted methods [1, 12, 24, 25, 26, 27] were proposed to enhance the quality of rPPG signals. These methods, predominantly developed in the earlier years, relied on manual procedures and did not necessitate training datasets, earning them the label of "traditional methods." In recent years, deep learning (DL) techniques have surged in rPPG measurement. Some studies [13, 14, 18, 22, 28] employed a 2D convolutional neural network (2DCNN) with two consecutive video frames as input for rPPG estimation. Another category of DL-based methods [19, 20, 21, 29, 30] utilized spatial-temporal signal maps extracted from various facial regions as input for 2DCNN models. Additionally, 3DCNN-based methods [15, 16, 31] were introduced to achieve high performance, particularly on compressed videos [16]. These DL-based approaches, categorized as supervised methods, demand both facial videos and ground truth (GT) physiological signals for training. More recently, Wang et al. [32] proposed a self-supervised rPPG method to acquire rPPG representations, although it still necessitates heart rate (HR) labels for fine-tuning the rPPG model. Gideon et al. [31] introduced the first unsupervised rPPG method, which does not rely on GT physiological signals for training. However, this method, while pioneering, exhibits lower accuracy compared to state-of-the-art supervised methods and can be sensitive to external noise. Subsequent to these developments, multiple unsupervised rPPG techniques have emerged [33, 34, 35, 23]. These unsupervised rPPG methods have gained attention because they solely require facial videos for training, eliminating the need for GT signals, yet they achieve performance levels similar to those of supervised methods. This is particularly advantageous given the expense associated with collecting GT signals alongside facial videos. However, none of the methods above considered utilizing partial or low-quality labels to further refine rPPG signal quality. ### _Contrastive Learning_ Contrastive learning, a widely adopted self-supervised learning technique in computer vision tasks, empowers deep learning models to map high-dimensional images or videos into lower-dimensional feature embeddings without the need for labeled data [36, 37, 38, 39, 40, 41, 42, 43, 44]. Its primary objective is to ensure that features derived from different perspectives of the same sample (referred to as positive pairs) are brought closer together, while features from different samples (referred to as negative pairs) are pushed apart. This approach finds extensive utility in pre-training models, thereby facilitating subsequent task-specific training in domains such as image classification [39], video analysis [44, 45], face recognition [37], and face detection [46]. This is particularly advantageous in situations characterized by limited access to labeled data. In our research, we leverage prior knowledge related to remote photoplethysmography (rPPG) to generate suitable positive and negative pairs of rPPG signal instances for contrastive learning. Diverging from prior methodologies that focus on feature embedding, our proposed method, Contrast-Phys+, possesses the capability to directly generate rPPG signals without the need for labeled data, thereby enabling unsupervised learning. 
Additionally, we harness ground truth (GT) signals to construct positive/negative pairs for contrastive learning, thus facilitating end-to-end weakly-supervised training even in scenarios where labels are missing or of suboptimal quality. ## 3 Observations about rPPG This section describes four observations about rPPG, which are the preconditions to designing Contrast-Phys+ and enabling unsupervised and weakly-supervised learning. ### _rPPG Spatial Similarity_ rPPG signals originating from various facial regions exhibit analogous waveforms, accompanied by the similarity in their Power Spectrum Densities (PSDs). This spatial coherence in rPPG signals has been leveraged in the design of multiple methodologies, as demonstrated in prior works [25, 26, 27, 47, 48, 49, 50]. While subtle phase and amplitude disparities may exist in the temporal domain when comparing rPPG signals from distinct skin areas [51, 52], these distinctions become inconsequential when rPPG waveforms are analyzed in the frequency domain, where PSDs are normalized. As illustrated in Fig. 1, the rPPG waveforms derived from four distinct spatial regions share a striking resemblance, characterized by identical peaks in their respective PSDs. ### _rPPG Temporal Similarity_ The heart rate (HR) undergoes gradual changes within short time frames, as noted by Gideon et al. [31]. A similar finding was reported by Stricker et al. [53], who observed slight HR variations in their dataset over short time intervals. Given that HR is prominently represented by a dominant peak in the PSD, it follows that the PSD experiences minimal fluctuations as well. Therefore, when randomly selecting small temporal windows from a brief rPPG segment (e.g., 10 seconds), one can anticipate that the PSDs of these windows will exhibit similarity. As depicted in Fig. 2, we illustrate this by sampling two 5-second windows from a 10-second rPPG signal and comparing the PSDs of these windows. Indeed, the two PSDs demonstrate similarity, with dominant peaks occurring at identical frequencies. It is important to note that this observation holds true when dealing with short-term rPPG signals. We will delve into the impact of signal duration on our model's performance in the forthcoming ablation study. We can summarize spatiotemporal rPPG similarity using the following equation. \[\small\texttt{PSD}\{G\big{(}v(t_{1}\to t_{1}+\Delta t,\mathcal{H}_{1}, \mathcal{W}_{1})\big{)}\}\approx\texttt{PSD}\{G\big{(}v(t_{2}\to t_{2}+\Delta t,\mathcal{H}_{2},\mathcal{W}_{2})\big{)}\} \tag{1}\] In this equation, \(v\in\mathbb{R}^{T\times H\times W\times 3}\) represents a facial video, and \(G\) signifies an rPPG measurement algorithm. We can select a facial region defined by height \(\mathcal{H}_{1}\) and width \(\mathcal{W}_{1}\) and a time interval \(t_{1}\to t_{1}+\Delta t\) from video \(v\) to derive one rPPG signal. A similar rPPG signal can be obtained from the same video, utilizing parameters \(\mathcal{H}_{2}\), \(\mathcal{W}_{2}\), and \(t_{2}\to t_{2}+\Delta t\). To meet the criteria for short-term rPPG signals, the temporal separation \(|t_{1}-t_{2}|\) should remain small. ### _Cross-video rPPG Dissimilarity_ rPPG signals obtained from different facial videos exhibit distinct PSDs. This divergence arises from the fact that each video features distinct individuals with varying physiological conditions, such as physical activity and emotional states, which are known to influence HRs [54]. 
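The spatial and temporal similarities above (Eq. 1), and this cross-video dissimilarity, all reduce to comparisons of normalised PSDs restricted to the plausible heart-rate band. A minimal sketch of such a comparison, assuming a 30 fps video and SciPy's Welch estimator (illustrative only, not the implementation used in this work):

```python
import numpy as np
from scipy.signal import welch

FS = 30.0                        # assumed video frame rate [Hz]
F_LO, F_HI = 40 / 60, 250 / 60   # HR range constraint: 40-250 bpm, i.e. roughly 0.66-4.16 Hz

def band_psd(x, fs=FS):
    """Normalised power spectrum of a signal segment, restricted to the HR band."""
    f, p = welch(x, fs=fs, nperseg=min(len(x), 256))
    keep = (f >= F_LO) & (f <= F_HI)
    p = p[keep]
    return f[keep], p / (p.sum() + 1e-12)

def psd_distance(x1, x2, fs=FS):
    """Mean squared difference of two normalised band-limited PSDs (the comparison behind Eq. 1)."""
    _, p1 = band_psd(x1, fs)
    _, p2 = band_psd(x2, fs)
    return float(np.mean((p1 - p2) ** 2))

# Toy illustration: two 5 s windows of one noisy ~72 bpm pulse vs a window of a ~100 bpm pulse.
t = np.arange(0, 5, 1 / FS)
same_a = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
same_b = np.sin(2 * np.pi * 1.2 * t + 0.7) + 0.3 * np.random.randn(t.size)
other = np.sin(2 * np.pi * 1.7 * t) + 0.3 * np.random.randn(t.size)
print("same-video distance :", psd_distance(same_a, same_b))   # expected to be small
print("cross-video distance:", psd_distance(same_a, other))    # expected to be larger
```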
Even in cases where HRs between two videos may appear similar, disparities in the PSDs can persist. This is due to the presence of additional physiological factors within the PSDs, such as respiration rate [55] and HRV [56], which are unlikely to align entirely across different videos. To substantiate this observation, we conducted an analysis involving the calculation of mean squared errors for cross-video PSD pairs within the OBF dataset [57]. The results, illustrated in Fig. 3, underscore the primary dissimilarity in cross-video PSDs as being centered around the heart rate peak. This cross-video rPPG dissimilarity is described by the following equation: \[\small\texttt{PSD}\{G\big{(}v(t_{1}\to t_{1}+\Delta t,\mathcal{H}_{1}, \mathcal{W}_{1})\big{)}\}\neq\texttt{PSD}\{G\big{(}v^{\prime}(t_{2}\to t_{2}+ \Delta t,\mathcal{H}_{2},\mathcal{W}_{2})\big{)}\} \tag{2}\] where \(v\) and \(v^{\prime}\) represent two distinct videos. By selecting facial areas and time intervals from these videos, one can expect the PSDs of the two resulting rPPG signals to exhibit noticeable differences. ### _HR Range Constraint_ The typical HR range for the majority of individuals falls within the interval of 40 to 250 beats per minute (bpm) [58]. In line with established practices [1, 59], this HR range serves as the basis for rPPG signal filtering, with the highest peak identified within this range to estimate HR. Consequently, our method will primarily concentrate on PSD within the frequency band of 0.66 Hz to 4.16 Hz. ## 4 Method In this section, we propose Contrast-Phys+ for weakly-supervised and unsupervised rPPG learning as shown in Fig. 4. We describe the face preprocessing in Sec. 4.1, the ST-rPPG block representation in Sec. 4.2, the rPPG spatiotemporal sampling in Sec. 4.3, and the contrastive loss function in Sec. 4.5. ### _Preprocessing_ The initial step involves preprocessing the original video, and the primary task is facial cropping. Utilizing OpenFace [60], we generate facial landmarks. To determine the central facial point for each frame, we compute the minimum and maximum horizontal and vertical coordinates of these landmarks. Subsequently, a bounding box is established, sized at 1.2 times the vertical coordinate range of the landmarks observed in the initial frame, and this size remains constant for all subsequent frames. With the central facial point and bounding box size determined for each frame, we proceed to crop the face in every frame. These cropped facial regions are then resized to dimensions of \(128\times 128\), rendering them ready for input into our model. ### _Spatiotemporal rPPG (ST-rPPG) Block Representation_ We have adapted the 3DCNN-based PhysNet [15] to compute the ST-rPPG block representation. Our modified model takes as input an RGB video with dimensions \(T\times 128\times 128\times 3\), where \(T\) represents the number of frames. In the final stage of our model, we employ adaptive average pooling to perform downsampling along spatial dimensions, enabling control over the output spatial size. This alteration facilitates the generation of a spatiotemporal rPPG block with dimensions \(T\times S\times S\), where \(S\) denotes the length of the spatial dimension, as depicted in Fig. 5. Further elaboration on the 3DCNN model is available in the supplementary material. The ST-rPPG block is essentially a collection of rPPG signals embedded within spatiotemporal dimensions. To denote this ST-rPPG block, we use \(P\in\mathbb{R}^{T\times S\times S}\). 
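A minimal PyTorch sketch of how the tail of such a 3DCNN can produce an ST-rPPG block of shape \(T\times S\times S\) via adaptive average pooling is shown below; the channel count and feature-map sizes are placeholders rather than the exact PhysNet-based architecture.

```python
import torch
import torch.nn as nn

class STrPPGHead(nn.Module):
    """Illustrative head: map 3DCNN features (B, C, T, H', W') to an ST-rPPG block (B, T, S, S)."""
    def __init__(self, in_channels=64, spatial_size=2):
        super().__init__()
        self.spatial_size = spatial_size
        self.to_rppg = nn.Conv3d(in_channels, 1, kernel_size=1)  # 1x1x1 conv to a single rPPG channel

    def forward(self, feats):
        b, c, t, h, w = feats.shape
        # keep the full temporal length T, pool the spatial dims down to S x S
        pooled = nn.functional.adaptive_avg_pool3d(feats, (t, self.spatial_size, self.spatial_size))
        return self.to_rppg(pooled).squeeze(1)                    # (B, T, S, S)

# Example: toy features for a 10 s clip at 30 fps, yielding an ST-rPPG block of shape (300, 2, 2).
feats = torch.randn(1, 64, 300, 8, 8)
print(STrPPGHead()(feats).shape)   # torch.Size([1, 300, 2, 2])
```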
When selecting a specific spatial location \((h,w)\) within the ST-rPPG block, the corresponding rPPG signal \(P(\cdot,h,w)\) is extracted from the receptive field associated with that spatial position in the input video. It is worth noting that when the spatial dimension length \(S\) is small, each spatial position within the ST-rPPG block encompasses a larger receptive field, albeit with fewer rPPG signals contained within the block. Importantly, the receptive field of each spatial position in the ST-rPPG block encompasses a portion of the facial region, ensuring that all spatial positions in the ST-rPPG block encompass valuable rPPG information. ### _rPPG Spatiotemporal Sampling_ **Input**: ST-rPPG block: \(P\) with shape \(T\times S\times S\), Number of rPPG samples per spatial location: \(K\), The default rPPG sample length length \(\Delta t=T/2\) ``` 1:Initialze an empty list \(H\) for storing all rPPG samples 2:for\(h,w\in\{1,...,S\},\{1,...,S\}\)do\(\triangleright\) Loop over all spatial locations 3:for\(k\in\{1,...,K\}\)do\(\triangleright\)\(K\) rPPG samples per spatial location 4: Randomly choose a starting time \(t\) between 0 and \(T-\Delta t\) 5: Append the rPPG sample \(P(t\to t+\Delta t,h,w)\) into the list \(H\) 6:endfor 7:endfor 8:The list \(H=[p_{1},...,p_{N}]\) containing rPPG samples ``` **Output**: The list \(H=[p_{1},...,p_{N}]\) containing rPPG samples **Algorithm 1** rPPG Spatiotemporal Sampling ### _rPPG Spatiotemporal Sampling_ In the process of generating rPPG samples from the ST-rPPG block, as depicted in Fig. 5, which is the spatial and temporal sampler illustrated in Fig. 4, we employ both spatial and temporal sampling techniques. For spatial sampling, we extract the rPPG signal denoted as \(P(\cdot,h,w)\) from a specific spatial position. In the case of temporal sampling, we select a short time interval from \(P(\cdot,h,w)\), resulting in the final spatiotemporal sample, denoted as \(P(t\to t+\Delta t,h,w)\), where \(h\) and \(w\) represent the spatial position, \(t\) signifies the starting time, and \(\Delta t\) signifies the duration of the time interval. Given an ST-rPPG block, we iterate through all spatial positions and extract \(K\) rPPG clips, each with a randomly selected starting time \(t\), for each spatial position as shown in Algorithm 1. Consequently, we obtain a total of \(N=S\cdot S\cdot K\) rPPG clips from the ST-rPPG block. It is important to note that these sampling procedures are employed to generate multiple rPPG samples for use in contrastive learning during the model training phase. During inference, the ST-rPPG block is spatially averaged to yield the final rPPG signal. ### _GT Signal Temporal Sampling_ Unlike ST-rPPG blocks, which encompass spatiotemporal signals, GT signals, which are one-dimensional temporal Fig. 1: Illustration of rPPG spatial similarity. The rPPG signals from four facial areas (A, B, C, D) have similar waveforms and power spectrum densities (PSDs). Fig. 3: The most similar (left) and most different (right) cross-video PSD pairs in the OBF dataset. Fig. 2: Illustration of rPPG temporal similarity. The rPPG signals from two temporal windows (A, B) have similar PSDs. signals, necessitate different sampling approaches. Given the dimensional disparity between GT signals and ST-rPPG blocks, we employ temporal sampling for GT signals and spatiotemporal sampling for ST-rPPG blocks. The GT signal temporal sampling process, as depicted in Fig. 
6, entails selecting a short time interval from the GT signal \(y\), resulting in a temporal sample denoted as \(y(t\to t+\Delta t)\), where \(t\) represents the starting time, and \(\Delta t\) signifies the duration of the time interval. For a single GT signal, we sample \(N\) GT clips, each with a randomly determined starting time \(t\). As illustrated in both the top and bottom branches of Fig. 4, the GT signals \(y\) and \(y^{\prime}\) corresponding to the facial videos \(v\) and \(v^{\prime}\) undergo temporal sampling, generating two sets of GT samples, namely \([q_{1},...,q_{N}]\) and \([q^{\prime}_{1},...,q^{\prime}_{N}]\), respectively. Subsequently, these two sets of GT samples are transformed into two sets of PSDs, denoted as \([g_{1},...,g_{N}]\) and \([g^{\prime}_{1},...g^{\prime}_{N}]\), respectively. ### _Contrastive Loss for Contrast-Phys+_ As illustrated in Fig. 4, our process begins with the selection of two distinct videos randomly chosen from a dataset as input. For each video, we derive an ST-rPPG block denoted as \(P\), a set of rPPG samples \([p_{1},\ldots,p_{N}]\), and their corresponding rPPG Power Spectral Densities (PSDs) \([f_{1},\ldots,f_{N}]\). If the GT signal is available for this video, we additionally obtain a set of GT signal samples \([q_{1},\ldots,q_{N}]\) and their corresponding GT PSDs \([g_{1},\ldots,g_{N}]\). This procedure is repeated for the second video. As shown in Fig. 4 (right), the underlying principle of our contrastive loss lies in the alignment of GT-rPPG or rPPG-rPPG PSD pairs stemming from the same video, while simultaneously pushing apart GT-rPPG or rPPG-rPPG PSD pairs originating from different videos. Importantly, it should be noted that we exclusively consider PSDs within the frequency range of 0.66 Hz to 4.16 Hz, in accordance with the HR range constraint outlined in Section 3.4. **rPPG-rPPG Positive Loss.** In accordance with the rPPG spatiotemporal similarity, it is expected that the rPPG PSDs Fig. 4: The diagram of Contrast-Phys+ for weakly-supervised or unsupervised learning. Fig. 5: Spatial and temporal Sampler for an ST-rPPG Block. Fig. 6: Temporal Sampler for a GT Signal. resulting from spatiotemporal sampling of the same ST-rPPG block should exhibit similarity. The following equations outline this property for the two input videos: For one video: \[\text{PSD}\{P(t_{1}\to t_{1}+\Delta t,h_{1},w_{1})\}\approx\text{ PSD}\{P(t_{2}\to t_{2}+\Delta t,h_{2},w_{2})\} \tag{3}\] \[\implies f_{i}\approx f_{j},i\neq j\] For the other video: \[\text{PSD}\{P^{\prime}(t_{1}\to t_{1}+\Delta t,h_{1},w_{1})\}\approx\text{ PSD}\{P^{\prime}(t_{2}\to t_{2}+\Delta t,h_{2},w_{2})\} \tag{4}\] \[\implies f_{i}^{\prime}\approx f_{j}^{\prime},i\neq j\] To bring together the rPPG PSDs from the same video, we employ the mean squared error as the loss function for rPPG-rPPG positive pairs, denoted as \((f_{i},f_{j})\). The rPPG-rPPG positive loss term, \(L_{p}^{RR}\), is presented below, and it is normalized with respect to the total number of rPPG-rPPG positive pairs. \[L_{p}^{RR}=\sum_{i=1}^{N}\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{N}\frac{\parallel f_{i}-f_{j}\parallel^{2}+\parallel f_{ i}^{\prime}-f_{j}^{\prime}\parallel^{2}}{2N(N-1)} \tag{5}\] **rPPG-rPPG Negative Loss.** In accordance with the cross-video rPPG dissimilarity, it is expected that the rPPG PSDs resulting from spatiotemporal sampling of two different ST-rPPG blocks should differ. 
We can employ the following equation to describe this property for the two input videos: \[\text{PSD}\{P(t_{1}\to t_{1}+\Delta t,h_{1},w_{1})\}\neq\text{ PSD}\{P^{\prime}(t_{2}\to t_{2}+\Delta t,h_{2},w_{2})\} \tag{6}\] \[\implies f_{i}\neq f_{j}^{\prime}\] To separate the rPPG PSDs originating from two different videos, we utilize the negative mean squared error as the loss function for rPPG-rPPG negative pairs, represented as \((f_{i},f_{j}^{\prime})\). The rPPG-rPPG negative loss term, denoted as \(L_{n}^{RR}\), is presented below, and it is normalized with respect to the total number of rPPG-rPPG negative pairs. \[L_{n}^{RR}=-\sum_{i=1}^{N}\sum_{j=1}^{N}\parallel f_{i}-f_{j}^{\prime} \parallel^{2}/N^{2} \tag{7}\] **GT-rPPG Positive Loss.** Inspired by the rPPG temporal similarity, it is expected that the rPPG PSDs from temporal sampling of the ST-rPPG block and the GT PSDs from temporal sampling of the corresponding GT signal should be similar since GT signals are the reference of rPPG signals. The following equations outline this property. For one input video and the corresponding GT signal: \[\text{PSD}\{P(t_{1}\to t_{1}+\Delta t,h_{1},w_{1})\}\approx\text{ PSD}\{y(t_{2}\to t_{2}+\Delta t)\} \tag{8}\] \[\implies f_{i}\approx g_{j}\] For the other video and the corresponding GT signal: \[\text{PSD}\{P^{\prime}(t_{1}\to t_{1}+\Delta t,h_{1},w_{1})\}\approx\text{ PSD}\{y^{\prime}(t_{2}\to t_{2}+\Delta t)\} \tag{9}\] \[\implies f_{i}^{\prime}\approx g_{j}^{\prime}\] The GT-rPPG positive loss \(L_{p}^{GR}\) is to pull together rPPG PSDs from one ST-rPPG block and GT PSDs from the corresponding GT signal (GT-rPPG positive pairs, e.g., \((f_{i},g_{j})\) where \(f_{i}\) is from ST-rPPG block \(P\) and \(g_{j}\) is from the corresponding GT signal \(y\)) so that the model is encouraged to output rPPG signals similar to the corresponding GT signals. Note that this GT-rPPG positive loss does not require exactly synchronized GT signals since rPPG PSDs and GT PSDs are from rPPG samples and GT samples which are randomly temporally sampled from the ST-rPPG block and the GT signal. This indicates that GT-rPPG positive loss does not need the alignment information between the GT signal and the video. Since it is assumed that some videos may not have GT signals in weakly-supervised learning, the function \(\phi\) is defined below to return whether a video has a GT signal. \[\phi(v)=\begin{cases}1,&\text{video $v$ has a GT signal}\\ 0,&\text{otherwise}\end{cases} \tag{10}\] The GT-rPPG positive loss term \(L_{p}^{GR}\) is defined below, which is normalized by the number of GT-rPPG positive pairs. \[L_{p}^{GR}=\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\phi(v)\parallel f_{i}-g_{j} \parallel^{2}+\phi(v^{\prime})\parallel f_{i}^{\prime}-g_{j}^{\prime}\parallel ^{2}}{\big{(}\phi(v)+\phi(v^{\prime})\big{)}N^{2}} \tag{11}\] **GT-rPPG Negative Loss.** Like the cross-video rPPG dissimilarity, it is expected that the rPPG PSDs sampled from the ST-rPPG block and the GT PSDs from temporal sampling of the non-corresponding GT signal should be different. The following equations illustrate this property. 
For one input video and the non-corresponding GT signal: \[\text{PSD}\big{\{}P(t_{1}\to t_{1}+\Delta t,h_{1},w_{1})\big{\}}\neq\text{ PSD}\big{\{}y^{\prime}(t_{2}\to t_{2}+\Delta t)\big{\}} \tag{12}\] \[\implies f_{i}\neq g_{j}^{\prime}\] For the other input video and the non-corresponding GT signal: \[\text{PSD}\big{\{}P^{\prime}(t_{1}\to t_{1}+\Delta t,h_{1},w_{1}) \big{\}}\neq\text{PSD}\big{\{}y(t_{2}\to t_{2}+\Delta t)\big{\}} \tag{13}\] \[\implies f_{i}^{\prime}\neq g_{j}\] The GT-rPPG negative loss term \(L_{n}^{GR}\) pushes away PSDs from one ST-rPPG block and a non-corresponding GT signal (GT-rPPG negative pairs, e.g., \((f_{i},g_{j}^{\prime})\) where \(f_{i}\) is from ST-rPPG block \(P\) and \(g_{j}^{\prime}\) is from the non-corresponding GT signal \(y^{\prime}\)) so that more negative pairs can be involved during the contrastive learning. [39] has demonstrated that more negative samples in contrastive learning can improve performance and facilitate convergence. The GT-rPPG negative loss \(L_{n}^{GR}\) is defined below, which is normalized by the number of GT-rPPG negative pairs. \[L_{n}^{GR}=-\sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\phi(v)\parallel f_{i}^{\prime}-g_{ j}\parallel^{2}+\phi(v^{\prime})\parallel f_{i}-g_{j}^{\prime}\parallel^{2}}{ \big{(}\phi(v)+\phi(v^{\prime})\big{)}N^{2}} \tag{14}\] **Overall Loss.** The overall loss function for Contrast-Phys+ is the combination of the four losses, which can adapt to both unsupervised and weakly-supervised settings. \[L_{CP+}=L_{p}^{RR}+L_{n}^{RR}+L_{p}^{GR}+L_{n}^{GR} \tag{15}\] ### _Why Contrast-Phys+ Works with Missing or Unsynchronized Labels_ The four rPPG observations are used as constraints to make the model learn the target rPPG signal and exclude noises since noises do not satisfy all observations. Noises that appear in a small local region, such as periodical eye blinking, are excluded since the noises violate rPPG spatial similarity. Noises such as head motions/facial expressions that do not have a temporal constant frequency are excluded since they violate rPPG temporal similarity. The rPPG spatiotemporal similarity is satisfied by minimizing rPPG-rPPG positive loss \(L_{p}^{RR}\). Cross-video rPPG dissimilarity can make two videos' PSDs discriminative and show distinguishable heart rate peaks between two videos' PSDs since heart rate peaks are the main features to distinguish two videos' PSDs as shown in Fig. 3. Cross-video rPPG dissimilarity is fulfilled by minimizing rPPG-rPPG negative loss \(L_{n}^{RR}\). In addition, PSD values during the heart rate range are used so that noises such as light flickering exceeding the heart rate range are excluded due to the heart rate range constraint. The loss function \(L_{CP+}\) can always be used even though some GT signals are missing. rPPG-rPPG positive loss \(L_{p}^{RR}\) and rPPG-rPPG negative loss \(L_{n}^{RR}\) using rPPG observations do not require GT signals. GT-rPPG positive loss \(L_{p}^{GR}\) and GT-rPPG negative loss \(L_{n}^{GR}\) using GT signals can be adapted to different situations (e.g., Both videos have GT signals, only one video has a GT signal, or neither video has a GT signal). Contrast-Phys+ is also robust to unsynchronized GT signals. GT-rPPG negative loss \(L_{n}^{GR}\) is only intended to increase negative pairs using GT samples and rPPG samples for improved contrastive learning [39], so the loss does not require synchronization between facial videos and GT signals. GT-rPPG positive loss \(L_{p}^{GR}\) encourages the rPPG PSD to be similar to the GT PSD. 
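For concreteness, the four terms of Eq. 15, together with the indicator \(\phi\) of Eq. 10, can be assembled from the sampled PSDs roughly as follows. This is an illustrative PyTorch sketch, not the released implementation; each row of f, fp, g, gp is assumed to be a PSD already restricted to the 0.66 Hz to 4.16 Hz band.

```python
import torch

def _pair_mse(a, b, exclude_diag=False):
    """Mean of ||a_i - b_j||^2 over all (i, j) pairs; optionally skip i == j."""
    d2 = ((a.unsqueeze(1) - b.unsqueeze(0)) ** 2).sum(-1)   # (N, N) squared distances
    if exclude_diag:
        mask = ~torch.eye(d2.shape[0], dtype=torch.bool, device=d2.device)
        return d2[mask].mean()
    return d2.mean()

def contrast_phys_plus_loss(f, fp, g=None, gp=None):
    """Eq. 15 sketch. f, fp: (N, F) rPPG PSDs of the two videos; g, gp: GT PSDs, or None if missing."""
    L_p_rr = 0.5 * (_pair_mse(f, f, exclude_diag=True) + _pair_mse(fp, fp, exclude_diag=True))  # Eq. 5
    L_n_rr = -_pair_mse(f, fp)                                                                  # Eq. 7
    phi, phip = float(g is not None), float(gp is not None)                                     # Eq. 10
    L_p_gr = torch.tensor(0.0)
    L_n_gr = torch.tensor(0.0)
    if phi + phip > 0:
        pos = phi * _pair_mse(f, g) if phi else 0.0
        pos = pos + (phip * _pair_mse(fp, gp) if phip else 0.0)
        neg = phi * _pair_mse(fp, g) if phi else 0.0
        neg = neg + (phip * _pair_mse(f, gp) if phip else 0.0)
        L_p_gr = pos / (phi + phip)             # Eq. 11
        L_n_gr = -neg / (phi + phip)            # Eq. 14
    return L_p_rr + L_n_rr + L_p_gr + L_n_gr    # Eq. 15
```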
When the GT signal is not precisely synchronized with the facial video, temporally sampled GT/rPPG for the same video can still share similar PSDs since PSDs do not change rapidly in a short time interval as shown in Fig. 7. The temporal sampling of GT/rPPG also removes alignment between the GT signal and the video to some extent, making GT-rPPG positive loss \(L_{p}^{GR}\) independent of the exact synchronization. Therefore, temporally sampled GT/rPPG for the same video can be pulled together in the unsynchronized case. We can also use the following equations to demonstrate that GT-rPPG positive loss \(L_{p}^{GR}\) is robust to GT signal misalignment. Suppose that the GT signal \(y(t)\) has a small misalignment \(u\), resulting \(y(t+u)\). The PSDs of temporal samples of \(y(t)\) and \(y(t+u)\) are \(\text{PSD}\{y(t_{2}\to t_{2}+\Delta t)\}\) and \(\text{PSD}\{y(t_{2}+u\to t_{2}+u+\Delta t)\}\), respectively. According to the temporal similarity in Sec. 3.2, \[\text{PSD}\{y(t_{2}\to t_{2}+\Delta t)\}\approx\text{PSD}\{y(t_{2}+u\to t_{2}+u+ \Delta t)\} \tag{16}\] holds if \(|t_{2}+u-t_{2}|=|u|\) is small where \(u\) is the small misalignment. Combine the above equation with Equation 8, we get \[\text{PSD}\{P(t_{1}\to t_{1}+\Delta t,h_{1},w_{1})\} \approx\text{PSD}\{y(t_{2}\to t_{2}+\Delta t)\}\] \[\approx\text{PSD}\{y(t_{2}+u\to t_{2}+u+\Delta t)\} \tag{17}\] which indicates that rPPG samples from the ST-rPPG block \(P\) are similar to the GT samples from the misaligned GT signal \(y(t+u)\). Therefore, our method is robust to GT signal misalignment. ## 5 Experiments ### _Experimental Setup and Metrics_ #### 5.1.1 Datasets We conducted experiments using five common rPPG datasets, encompassing RGB and NIR videos recorded under diverse scenarios. Specifically, we employed the PURE dataset [53], UBFC-rPPG dataset [61], OBF dataset [57], and MR-NIRP dataset [62] for intra-dataset evaluations. Additionally, we employed the MMSE-HR dataset [63] for both intra-dataset and cross-dataset evaluations. **PURE** comprises facial videos from ten subjects recorded in six distinct setups, encompassing both static and dynamic tasks. To ensure consistency, we followed the same experimental protocol used in previous studies [14, 21] for partitioning the training and test sets. **UBFC-rPPG** comprises facial videos from 42 subjects who participated in a mathematical game designed to elevate their heart rates. For evaluation, we adhered to the protocol outlined in [21] for train-test split. **OBF** encompasses 200 videos from 100 healthy subjects recorded both before and after exercise sessions. To facilitate a fair comparison with prior work, we conducted subject-independent ten-fold cross-validation, as previously described in [15, 16, 20]. **MR-NIRP** contains NIR videos of eight subjects, capturing instances of subjects remaining stationary as well as engaging in motion tasks. Due to its limited scale and the inherent challenge of weak rPPG signals in NIR [64, 65], we employed a leave-one-subject-out cross-validation protocol for our experiments. Finally, **MMSE-HR** includes 102 videos from 40 subjects recorded during emotion elicitation experiments. Given the Fig. 7: Illustration showing that temporally sampled GT/rPPG are similar and independent of the exact synchronization. presence of spontaneous facial expressions and head movements, we conducted subject-independent 5-fold cross-validation for intra-dataset testing on the MMSE-HR dataset. 
Further details regarding these datasets are available in the supplementary material. #### 5.1.2 Experimental Setup During each training iteration, the model receives two 10-second clips from two different videos as inputs. If available, ground truth (GT) signals are incorporated; for instance, 20% of the videos contain GT signals, or in some cases, all videos possess unsynchronized GT signals. To train Contrast-Phys+ effectively, we employ the AdamW optimizer [66] with a learning rate of \(10^{-5}\), training the model for 30 epochs on a single NVIDIA Tesla V100 GPU. Following the approach in [31], we select the model with the lowest irrelevant power ratio (IPR) on the training set to achieve model selection (for further insights into IPR, refer to the supplementary materials). During the testing phase, we segment each test video into non-overlapping 30-second clips and extract the rPPG signal from each clip. To compute the heart rate (HR), we identify the HR peak in the PSD of the rPPG signal. Additionally, we employ Neurokit2 [67] to locate systolic peaks within the rPPG signals, allowing us to derive heart rate variability (HRV) metrics. According to our ablation study for the ST-rPPG blocks in Sec. 5.5.1, we set the spatial resolution of the ST-rPPG block to be \(2\times 2\), with the time duration of 10 seconds. For the rPPG spatiotemporal sampling process, we use \(K=4\), indicating that, for each spatial position within the ST-rPPG block, four rPPG samples are randomly selected. The time interval \(\Delta t\) between each rPPG sample is set to 5 seconds, which is half of the time duration of the ST-rPPG block. Consequently, we obtain 16 rPPG samples (\(N=16\)) from each ST-rPPG block. Regarding the temporal sampling of GT signals, we maintain the same \(\Delta t\) of 5 seconds, resulting in the selection of 16 GT samples (\(N=16\)) from a GT signal. #### 5.1.3 Evaluation Metrics In line with prior research [16, 20, 59], we use three metrics to assess the accuracy of heart rate (HR) measurement: the mean absolute error (MAE), root mean squared error (RMSE), and Pearson correlation coefficient (R). Additionally, we utilize the signal-to-noise ratio (SNR) [11] to evaluate the quality of the rPPG signal. For the evaluation of HRV features, which encompass respiration frequency (RF), low-frequency power (LF) in normalized units (n.u.), high-frequency power (HF) in normalized units (n.u.), and the LF/HF power ratio, we follow the approach outlined in [21] and employ the standard deviation (STD), RMSE, and R as evaluation metrics. In the context of MAE, RMSE, and STD, smaller values indicate lower errors, whereas for R, higher values approaching one denote reduced errors. For SNR, larger values indicate higher-quality rPPG signals. For a more comprehensive understanding of these evaluation metrics, please refer to the supplementary material. ### _Intra-dataset Testing_ #### 5.2.1 HR Estimation We conducted intra-dataset testing for HR estimation on four datasets: PURE, UBFC-rPPG, OBF, and MR-NIRP. Contrast-Phys+ was trained under various conditions, including scenarios where 0%, 20%, or 60% of the videos contain GT signals. These settings represent the unsupervised and semi-supervised paradigms, with the semi-supervised setup encompassing partially available labels. Additionally, Contrast-Phys+ was trained with 100% of the labels, representing the supervised setting. The results of HR estimation for Contrast-Phys+ are presented in Table I and compared against multiple baseline methods. 
These baselines include traditional methods, supervised methods, semi-supervised methods, and recent unsupervised methods. Notably, Contrast-Phys+ (0%) outperforms several unsupervised baselines [31, 33, 35] and comes remarkably close to the performance of supervised methods [20, 21, 22]. In the semi-supervised setting, when partial GT signals are available (Contrast-Phys+ 20% and 60%), the performance improves further, often surpassing recent supervised methods [20, 21, 22]. In the supervised setting (Contrast-Phys+ (100%)), Contrast-Phys+ achieves the best performance among supervised methods across most evaluation metrics. This underscores the advantage of Contrast-Phys+, as it learns from both labels and rPPG observations, whereas previous supervised methods rely only on labels. The consistently superior performance of Contrast-Phys+ holds across all four datasets, including the MR-NIRP dataset containing NIR videos.

#### 5.2.2 HRV Estimation

Intra-dataset testing for heart rate variability (HRV) evaluation was conducted on the UBFC-rPPG dataset, and the results are presented in Table II. HRV analysis demands precisely measured, high-quality rPPG signals for accurate systolic peak detection. Notably, Contrast-Phys+ significantly outperforms traditional methods and the previous unsupervised baseline [31] in terms of HRV results. When partial GT signals are incorporated, the performance of Contrast-Phys+ closely approaches that of supervised methods. In the case of Contrast-Phys+ utilizing all labels (100%), it achieves the best results across most HRV metrics. These findings underscore the capability of Contrast-Phys+ to yield high-quality rPPG signals with accurate systolic peaks, enabling the derivation of HRV features. This makes it a promising candidate for applications in emotion understanding [5, 6, 7] and healthcare [2, 3]. Additionally, Contrast-Phys+ has the potential to further refine its understanding of rPPG signals by leveraging GT information, as illustrated in Section 5.8.2.

### _Cross-dataset Testing_

We perform cross-dataset testing on MMSE-HR to test the generalization of the proposed methods. We train recent supervised methods [15, 18, 71, 72], the unsupervised baseline [31], and Contrast-Phys+ on UBFC and test the models on MMSE-HR. In addition, we also provide intra-dataset results by training and testing the models on MMSE-HR as a reference to be compared with the cross-dataset results. Table III shows the cross-dataset and intra-dataset results on MMSE-HR, which can be summarized in the four aspects below. **1)** First, Contrast-Phys+ achieves good cross-dataset results compared with other supervised and unsupervised baselines, which means the proposed method can generalize well to a new dataset. The results are very promising, as in practical applications, we might potentially use large numbers of facial videos from different sources with no/partial GT signals to train Contrast-Phys/Contrast-Phys+ and then apply them to the target data. **2)** Second, more labels from Contrast-Phys+ (0%, unsupervised) to Contrast-Phys+ (100%, fully supervised) provide better performance for both cross- and intra-dataset results, which means additional GT signals can help fit rPPG signals and improve generalization. **3)** Third, for both cross- and intra-dataset results, Contrast-Phys+ (100%), which uses both label information and rPPG observations, achieves better performance than other supervised methods that only utilize label information.
Therefore, rPPG observations as the prior knowledge play an important role in improving rPPG measurement performance in the fully supervised setting. **4)** Last, comparing cross- and intra-dataset results, performance for intra-dataset is generally better than for cross-dataset for each deep learning-based method, so training and testing on the same dataset are preferred to keep good performance. Compared with previous supervised methods, Contrast-Phys+ lowers the requirement of intra-dataset training since it only needs facial videos with no or partial labels. Contrast-Phys+ exhibits the capability to adapt to both labeled and unlabeled videos during training, allowing for the expansion and diversification of the training dataset by incorporating unlabeled videos from other sources. This augmentation strategy aims to enhance the model's generalization. To this end, we employed all labeled UBFC videos alongside additional unlabeled videos from PURE or OBF to train Contrast-Phys+ and evaluated the model's performance on MMSE-HR. The results in Table 4 demonstrate that the inclusion \begin{table} \begin{tabular}{l l c c c c c c c c c c c c c} \hline \hline \multirow{3}{*}{Method Types} & \multirow{3}{*}{Methods} & \multicolumn{3}{c}{UBFC-rPPG} & \multicolumn{3}{c}{PURE} & \multicolumn{3}{c}{OBF} & \multicolumn{3}{c}{MR-NIRP (NIR)} \\ \cline{3-14} & & MAE & RMSE & & MAE & RMSE & & & MAE & RMSE & & & MAE & RMSE & \\ & & (ppm) & (ppm) & R & (ppm) & R & (ppm) & (ppm) & R & (ppm) & (ppm) & R & \\ \hline \multirow{4}{*}{Traditional} & GREEN [10] & 7.50 & 14.41 & 0.62 & - & - & - & - & 2.162 & 0.99 & - & - & - \\ & ICA [1] & 5.17 & 11.76 & 0.65 & - & - & - & - & - & - & - & - & - \\ & CHROM [11] & 2.37 & 4.91 & 0.89 & 2.07 & 9.92 & 0.99 & - & 2.733 & 0.98 & - & - & - \\ & 25R [68] & - & - & - & 2.44 & 3.06 & 0.98 & - & - & - & - & - & - \\ & POS [12] & 4.05 & 8.75 & 0.78 & - & - & - & - & 1.906 & 0.991 & - & - & - \\ \hline \multirow{4}{*}{Super-vised} & CAN [13] & - & - & - & - & - & - & - & - & - & 7.78 & 16.8 & -0.03 \\ & HR-CNN [14] & - & - & - & 1.84 & 2.37 & 0.98 & - & - & - & - & - & - & - \\ & SynRhythm [69] & 5.59 & 6.82 & 0.72 & - & - & - & - & - & - & - & - & - \\ & PhysNet [15] & - & - & - & 2.1 & 2.6 & 0.99 & - & 1.812 & 0.992 & 3.07 & 7.55 & 0.655 \\ & rPPGNet [16] & - & - & - & - & - & - & - & - & 1.8 & 0.992 & - & - & - \\ & CVD [20] & - & - & - & - & - & - & - & 1.26 & 0.996 & - & - & - \\ & PulseGAN [70] & 1.19 & 2.10 & 0.98 & - & - & - & - & - & - & - & - & - \\ & Dual-GAN [21] & 0.44 & **0.67** & **0.99** & 0.82 & 1.31 & 0.99 & - & - & - & - & - & - \\ & Nowara2021 [22] & - & - & - & - & - & - & - & - & 2.34 & 4.46 & 0.85 \\ & **Contrast-Phys+ (100%)** & **0.21** & 0.80 & **0.99** & **0.48** & **0.98** & 0.99 & **0.34** & **0.75** & **0.998** & **1.96** & **3.02** & **0.93** \\ \hline \multirow{4}{*}{Semi-supervised} & **Contrast-Phys+ (60%)** & 0.22 & 0.81 & **0.99** & 0.52 & 1.02 & 0.99 & 0.35 & 0.79 & 0.997 & 2.58 & 3.65 & 0.89 \\ & **Contrast-Phys+ (20%)** & 0.24 & 0.87 & **0.99** & 0.61 & 1.18 & 0.99 & 0.37 & 0.84 & 0.997 & 2.57 & 4.02 & 0.88 \\ \hline \multirow{4}{*}{Unsupervised} & **Contrast-Phys+ (0%)** & 0.64 & 1.00 & **0.99** & 1.00 & 1.40 & 0.99 & 0.51 & 1.39 & 0.994 & 2.68 & 4.77 & 0.85 \\ & Gideon2021 [31] & 1.85 & 4.28 & 0.93 & 2.3 & 2.9 & 0.99 & 2.83 & 7.88 & 0.825 & 4.75 & 9.14 & 0.61 \\ \cline{1-1} & SiNC [33] & 0.59 & 1.83 & **0.99** & 0.61 & 1.84 & **1.00** & - & - & - & - & - & - \\ \cline{1-1} & Yue _et al._[35] & 0.58 & 0.94 & **0.99** & 1.23 & 2.01 & 
0.99 & - & - & - & - & - & - \\ \hline \hline \end{tabular} \end{table} TABLE I: Intra-dataset HR results. The best results are in bold, and the second-best results are underlined. \begin{table} \begin{tabular}{l l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method Types} & \multirow{2}{*}{Methods} & \multicolumn{3}{c}{LF (n.u.)} & \multicolumn{3}{c}{HF (n.u.)} & \multicolumn{3}{c}{LF/HF} & \multicolumn{3}{c}{RF(Hz)} \\ \cline{3-14} & & STD & RMSE & R & STD & RMSE & R & STD & RMSE & R & STD & RMSE & R \\ \hline \multirow{4}{*}{Traditional} & GREEN [10] & 0.186 & 0.186 & 0.280 & 0.186 & 0.186 & 0.280 & 0.361 & 0.365 & 0.492 & 0.087 & 0.086 & 0.111 \\ & ICA [1] & 0.243 & 0.240 & 0.159 & 0.243 & 0.240 & 0.159 & 0.655 & 0.645 & 0.226 & 0.086 & 0.089 & 0.102 \\ & POS [12] & 0.171 & 0.169 & 0.479 & 0.171 & 0.169 & 0.479 & 0.405 & 0.399 & 0.518 & 0.109 & 0.107 & 0.087 \\ \hline \multirow{4}{*}{Super-vised} & CVD [20] & 0.053 & 0.065 & 0.740 & 0.053 & 0.065 & 0.740 & 0.169 & 0.168 & 0.812 & 0.017 & 0.018 & 0.252 \\ & Dual-GAN [21] & 0.034 & 0.035 & 0.891 & 0.034 & 0.035 & 0.891 & 0.131 & 0.136 & 0.881 & **0.010** & **0.010** & 0.395 \\ & **Contrast-Phys+ (100%)** & **0.025** & **0.025** & **0.947** & **0.025** & **0.035** & **0.947** & **0.064** & **0.066** & **0.963** & 0.029 & 0.029 & **0.803** \\ \hline \multirow{4}{*}{Semi-supervised} & **Contrast-Phys+ (60%)** & 0.035 & 0.035 & 0.098 & 0.035 & 0.035 & 0.908 & 0.100 & 0.105 & 0.906 & of additional unlabeled videos for training results in improved performance compared to training solely with labeled UBFC data. When additional unlabeled training data is introduced, the cross-dataset testing performance even approaches the levels achieved by the best intra-dataset testing performance, as demonstrated in Table III. This suggests that Contrast-Phys+ can seamlessly expand its training dataset by incorporating unlabeled videos from different domains, thereby enhancing generalization and achieving performance levels close to intra-dataset results. Such a capability was not feasible with previous supervised methods, highlighting the strengths of Contrast-Phys+. ### _Training with Unsynchronized GT Signals_ In our experiments, we explored scenarios where ground truth (GT) signals are desynchronized from the facial videos, which is a noisy label case in weakly-supervised learning. We introduced a parameter, the maximum desynchronization \(D_{\text{max}}\), and temporally shifted each GT signal by a random offset within the range of \(-D_{\text{max}}\) to \(D_{\text{max}}\), ensuring that the GT signals were no longer synchronized with the corresponding facial videos. This desynchronization was applied to GT signals in the training set, after which we trained the model and evaluated its performance on the test set. This experiment was tested on a single fold of the MMSE-HR dataset and trained on the other 4 folds. As depicted in Fig. 8, we analyzed the RMSE and SNR across various levels of maximum desynchronization. Notably, as the maximum desynchronization increased, the performance of previous supervised methods exhibited significant deterioration. Even a small maximum desynchronization of 0.25 seconds, which is realistic and likely to occur during data collection, considerably impacted their performance. In contrast, Contrast-Phys+ (100%) demonstrated robust and stable performance in terms of both RMSE and SNR across different maximum desynchronization values. 
These results underscore the robustness of Contrast-Phys+ to GT signal desynchronization, while previous supervised methods proved to be highly susceptible to even minor misalignments. This robustness can be attributed to Contrast-Phys+'s use of PSD instead of pulse curves in the temporal domain, which is comparatively stable over short time intervals. Consequently, learning an rPPG signal with misaligned GT signals in the frequency domain, aided by the rPPG observation constraint, is a viable approach as demonstrated in Sec. 4.6. These results indicate that Contrast-Phys+ offers greater tolerance when facial videos and GT signals are not perfectly aligned, streamlining the rPPG data collection process. ### _Ablation Study_ #### 5.5.1 ST-rPPG Block Parameters In our ablation study, we investigated the impact of two key parameters of the ST-rPPG block: spatial resolution (\(S\)) and temporal length (\(T\)). Table V(a) presents the heart rate (HR) results for Contrast-Phys+ (0%) on UBFC-rPPG when varying the spatial resolution of the ST-rPPG block across four levels: 1x1, 2x2, 4x4, and 8x8. It's important to note that 1x1 implies that rPPG spatial similarity is not considered. As evident from the results, the performance with a spatial resolution \begin{table} \begin{tabular}{c|c c|c c c c} \hline \hline \multicolumn{3}{c|}{Training Sets} & \multicolumn{4}{c}{Cross-dataset Results} \\ \multicolumn{3}{c|}{(test on MMSE-HR)} \\ \hline Labeled & Unlabeled & Unlabeled & MAE & RMSE & R & SNR \\ UBFC & PURE & OBF & (BPM) & (ppm) & R & (dB) \\ \hline ✓ & & & 1.76 & 5.34 & 0.92 & 1.37 \\ \hline ✓ & ✓ & & 1.13 & **3.71** & **0.96** & 2.37 \\ ✓ & & ✓ & 1.47 & 4.55 & 0.94 & **3.17** \\ \hline \hline \end{tabular} \end{table} TABLE IV: Cross-dataset results of Contrast-Phys+ when additional unlabeled videos (PURE and OBF) are used for training. The best results are in bold. Fig. 8: rPPG measurement performance ((a) RMSE and (b) SNR) with respect to maximum desynchronization of GT signals. \begin{table} \begin{tabular}{c l c c c c c c c c} \hline \hline \multirow{2}{*}{Method Types} & \multirow{2}{*}{Methods} & \multicolumn{4}{c}{Cross-dataset (UBFC \(\rightarrow\) MMSE-HR)} & \multicolumn{4}{c}{Intra-dataset (MMSE-HR \(\rightarrow\) MMSE-HR)} \\ \cline{3-10} & & MAE (ppm) & RMSE (ppm) & R & SNR (dB) & MAE (ppm) & RMSE (ppm) & R & SNR (dB) \\ \hline \multirow{4}{*}{Traditional} & Li2014 [1] & - & 19.95 & 0.38 & - & - & 19.95 & 0.38 & - \\ & CHROM [11] & - & 13.97 & 0.55 & - & - & 13.97 & 0.55 & - \\ & SAMC [26] & - & 11.37 & 0.71 & - & - & 11.37 & 0.71 & - \\ \hline \multirow{4}{*}{Supervised} & PhysNet [15] & 2.04 & 6.85 & 0.86 & 1.17 & 1.22 & 4.49 & 0.94 & 2.8 \\ & TS-CAN [18] & 3.41 & 9.29 & 0.76 & -1.18 & 2.89 & 7.18 & 0.86 & -2.01 \\ & PhysFormer [71] & 2.68 & 7.01 & 0.86 & 1.2 & 1.48 & 4.22 & 0.95 & 2.55 \\ & **Contrast-Phys+ (100\%)** & **1.76** & **5.34** & **0.92** & **1.37** & **1.11** & **3.83** & **0.96** & **3.72** \\ \hline \multirow{2}{*}{Semi-supervised} & **Contrast-Phys+ (60\%)** & 2.30 & 6.32 & 0.89 & 1.25 & 1.20 & 3.89 & **0.96** & 3.51 \\ & **Contrast-Phys+ (20\%)** & 2.28 & 6.51 & 0.88 & 1.15 & 1.51 & 4.15 & 0.95 & 2.93 \\ \hline \multirow{2}{*}{Unsupervised} & **Contrast-Phys+ (0\%)** & 2.43 & 7.34 & 0.86 & 1.09 & 1.82 & 6.69 & 0.87 & 2.64 \\ & Gideon2021 [31] & 4.10 & 11.55 & 0.70 & 0.26 & 3.98 & 9.65 & 0.85 & 0.67 \\ \hline \hline \end{tabular} \end{table} TABLE III: Cross-dataset and intra-dataset HR results for MMSE-HR. 
The best results are in bold, and the second-best results are underlined. of 1x1 is inferior to the other resolutions, indicating that rPPG spatial similarity enhances performance. Furthermore, a spatial resolution of 2x2 yields satisfactory results, and larger resolutions do not substantially improve HR estimation. This is because larger resolutions, such as 8x8 or 4x4, provide more rPPG samples, but each block has a smaller receptive field, leading to noisier rPPG samples. Table V(b) demonstrates the HR results for Contrast-Phys+ (0%) on UBFC-rPPG while varying the temporal length of the ST-rPPG block across three levels: 5 seconds, 10 seconds, and 30 seconds. The results highlight that a temporal length of 10 seconds yields the best performance. A shorter time length (5 seconds) results in coarse PSD estimation, while a longer time length (30 seconds) might violate the conditions for rPPG temporal similarity. As a result, we opted for \(S=2\) and \(T=10\) seconds in our experiments, as these settings strike a balance and offer optimal performance. #### 5.5.2 rPPG Observations In our ablation study, we examined the individual impact of each of the four rPPG observations on the performance of Contrast-Phys+. These observations include rPPG spatial and temporal similarity (represented by rPPG spatial and temporal sampling), rPPG cross-video dissimilarity (represented by the rPPG-rPPG negative loss \(L_{n}^{RR}\)), and the HR range constraint (utilizing rSDs in the HR frequency range). Table VI showcases the results for Contrast-Phys+ (0%) when one of the rPPG observations is removed, as well as the results when all observations are utilized. The findings indicate that Contrast-Phys+ achieves its best performance when all rPPG observations are enabled. When rPPG spatial or temporal similarity is disabled, the performance experiences a slight decrease. However, when rPPG cross-video dissimilarity or the HR range constraint is disabled, the performance deteriorates significantly. The HR range constraint plays a crucial role in preventing the model from learning irrelevant periodic noises, such as light flickering, which can interfere with accurate HR estimation. Additionally, rPPG cross-video dissimilarity, represented by the rPPG-rPPG negative loss \(L_{n}^{RR}\), is essential in contrastive learning as it prevents the model from collapsing into trivial solutions, as discussed in [36]. These results underscore the importance of all four rPPG observations in enhancing the performance of Contrast-Phys+ and emphasize their individual contributions to accurate and robust rPPG signal extraction. #### 5.5.3 The Influence of GT Signals **GT-related Losses.** We conducted an ablation study to assess the influence of GT-related losses on our model's performance. Table VII presents the results of the ablation study performed on the MMSE-HR dataset using Contrast-Phys+ (100%). When we exclude all GT-related terms, the model effectively undergoes unsupervised training, resulting in the lowest performance. However, when we include only the GT-rPPG negative term, the model's performance improves, as it generates more negative pairs from both GT signals and ST-rPPG blocks. Subsequently, utilizing solely the GT-rPPG positive term further enhances performance, as it enforces consistency between ST-rPPG blocks and their corresponding GT signals, effectively incorporating GT information into the model's training. 
The combined use of both terms yields the highest performance, which is the top-performing configuration. **GT Signal Ratios.** Since Contrast-Phys+ is capable of adapting to different availability of data labels, we conducted an ablation study to examine the impact of different GT signal ratios. Specifically, we trained Contrast-Phys+ using 0%, 20%, 40%, 60%, 80%, and 100% labels from the MMSE-HR dataset. The performance variation of Contrast-Phys+ under different label ratios is illustrated in Fig. 9. Regarding RMSE, the performance reaches a plateau at 40% label ratio, and the HR error does not significantly decrease when using more than 40% labels. On the other hand, SNR, which serves as a metric for rPPG signal quality, exhibits continuous improvement with an increasing number of labels. These findings suggest that while employing more labels (beyond 40%) may not lead to a substantial reduction in HR measurement error, they do contribute to refining the quality of the output rPPG signals. We will further demonstrate this through waveform visualization in Sec. 5.8.2. ### _Statistical Validation for rPPG Observations_ We conducted a statistical analysis to validate both the spatiotemporal similarity of rPPG signals within the same video (referred to as "intra-video") and the dissimilarity of \begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{1}{c}{(a)} & \\ \hline Spatial Resolution & MAE (bpm) & RMSE (bpm) & R \\ \hline \(1\times 1\) & 3.14 & 4.06 & 0.963 \\ \(\mathbf{2\times 2}\) & **0.64** & **1.00** & **0.995** \\ \(4\times 4\) & 0.55 & 1.06 & 0.994 \\ \(8\times 8\) & 0.60 & 1.09 & 0.993 \\ \hline \hline & \multicolumn{1}{c}{(b)} & \\ \hline Time Length & MAE (bpm) & RMSE (bpm) & R \\ \hline 5s & 0.68 & 1.36 & 0.990 \\ **10s** & **0.64** & **1.00** & **0.995** \\ 30s & 1.97 & 3.58 & 0.942 \\ \hline \hline \end{tabular} \end{table} TABLE V: Ablation study for ST-rPPG block parameters: (a) HR results of Contrast-Phys+ on UBFC-rPPG with different ST-rPPG block spatial resolutions. (b) HR results of Contrast-Phys+ on UBFC-rPPG with different ST-rPPG block time lengths. (The best results are in bold.) Fig. 9: rPPG measurement performance ((a) RMSE and (b) SNR) with respect to label ratios. rPPG signals between different videos (referred to as "cross-video"). The spatiotemporal similarity of rPPG signals refers to the similarity in the PSDs of rPPG signals measured at different spatiotemporal locations within the same video. Conversely, the cross-video rPPG dissimilarity refers to the differences in the PSDs of rPPG signals measured at different spatiotemporal locations between two different videos. To quantify these observations, we calculated the mean squared errors (MSE) of PSD pairs for both the intra-video and cross-video cases. Figure 10 illustrates that the PSD pair MSE for the intra-video case is significantly smaller compared to the PSD pair MSE for the cross-video case. To assess the significance of these differences, we employed the two-sample Kolmogorov-Smirnov test [6, 73]. The results indicate that the PSD pair MSE for the cross-video case is significantly higher than for the intra-video case (\(p<0.001\)) across all five rPPG datasets. These statistical test results provide solid evidence supporting the validity of both the rPPG spatiotemporal similarity and the cross-video rPPG dissimilarity observations. ### _Running Speed_ We conducted experiments to compare the running speed of Contrast-Phys+ and Gideon2021 [31]. 
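The two-sample Kolmogorov-Smirnov comparison used in the statistical validation above can be sketched in a few lines; the synthetic MSE values below merely stand in for the measured intra-video and cross-video PSD-pair MSEs.

```python
import numpy as np
from scipy.stats import ks_2samp

# mse_intra / mse_cross: MSEs between PSD pairs sampled within the same video
# and across different videos.  Synthetic stand-in values are used here; in
# practice these arrays come from the rPPG datasets themselves.
rng = np.random.default_rng(0)
mse_intra = rng.gamma(shape=2.0, scale=0.5, size=500)
mse_cross = rng.gamma(shape=2.0, scale=2.0, size=500)

stat, p_value = ks_2samp(mse_intra, mse_cross)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3e}")
# A very small p-value indicates that the two MSE distributions differ, supporting
# the intra-video similarity / cross-video dissimilarity observations.
```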
During training, the running speed of Contrast-Phys+ (0%) was measured at **802.45** frames per second (fps), while Gideon2021 achieved a speed of **387.87** fps, which is approximately half of Contrast-Phys+'s speed. This significant difference in speed can be attributed to the different method designs employed by the two models. In Gideon2021, the input video is fed into the model twice, first as the original video and then as a temporally resampled video, resulting in double computation. On the other hand, Contrast-Phys+ only requires the input video to be fed into the model once, leading to a substantial decrease in computational cost. Additionally, the running speed of Contrast-Phys+ for label ratios of 60% and 100% was measured at **792.70** fps and **776.19** fps, respectively. When compared to Contrast-Phys+ (0%) (802.45 fps), incorporating GT signals in Contrast-Phys+ (60%, 100%) only resulted in a slight decrease in speed. Furthermore, we compared the convergence speed using the metric of Irrelevant power ratio (IPR). IPR is used in [31] to evaluate signal quality during training with lower values indicating higher signal quality. More details about IPR can be found in the supplementary materials. Figure 12 illustrates the IPR values over time during training on the OBF dataset. The results demonstrate that Contrast-Phys+ achieves faster convergence to a lower IPR compared to Gideon2021. While Contrast-Phys+ (60%, 100%) takes slightly longer to reach the lowest IPR compared to Contrast-Phys+ (0%), it ultimately achieves a lower IPR due to its ability to utilize GT signals to further enhance the rPPG signal quality. ### _Result Visualization_ #### 5.8.1 Saliency Maps To demonstrate the interpretability of Contrast-Phys+, we present saliency maps. These saliency maps are generated using a gradient-based method proposed in [74]. We keep the weights of the trained model fixed and calculate the \begin{table} \begin{tabular}{c c c c c c} \hline \hline rPPG Spatial & rPPG Temporal & rPPG Cross-video & HR Range Constraint & MAE (bpm) & RMSE (bpm) & R \\ Similarity & Similarity & Dissimilarity & & & 39.66 & 44.49 & -0.401 \\ \hline ✓ & ✓ & ✓ & ✓ & 22.11 & 33.84 & 0.281 \\ ✓ & ✓ & ✓ & ✓ & 1.26 & 3.64 & 0.948 \\ & ✓ & ✓ & ✓ & 3.14 & 4.06 & 0.963 \\ ✓ & ✓ & ✓ & ✓ & **0.64** & **1.00** & **0.995** \\ \hline \hline \end{tabular} \end{table} TABLE VI: Ablation Study for rPPG Observations on UBFC-rPPG dataset. The best results are in bold. Fig. 10: Boxplots of PSD pair MSE for intra-video and cross-video for rPPG datasets: (a) PURE, (b) UBFC, (c) OBF, (d) MR-NIRP, and (e) MMSE-HR. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(L_{p}^{GR}\) & \(L_{n}^{GR}\) & MAE (bpm) & RMSE (bpm) & R \\ \hline & & 1.82 & 6.69 & 0.87 \\ & ✓ & 1.77 & 5.30 & 0.88 \\ ✓ & & 1.39 & 4.46 & 0.91 \\ ✓ & ✓ & **1.11** & **3.83** & **0.96** \\ \hline \hline \end{tabular} \end{table} TABLE VII: Ablation Study for GT-related Positive Loss Term \(L_{p}^{GR}\) and Negative Loss Term \(L_{n}^{GR}\) on MMSE-HR dataset. The best results are in bold. gradient of the Pearson correlation with respect to the input video. More detailed information can be found in the supplementary materials. Saliency maps are useful for highlighting the spatial regions that contribute to the estimation of rPPG signals by the model. A saliency map of a good rPPG model should exhibit a strong response in skin regions, as demonstrated in previous works such as [13, 15, 16, 22, 31]. Fig. 
13 presents saliency maps in two scenarios to showcase the robustness of our method against interferences: 1) when periodic noise is manually injected, and 2) when head motion is involved. In the presence of a periodic noise patch injected into the upper-left corner of the videos, Contrast-Phys+ remains unaffected by the noise and continues to focus on skin areas. In contrast, Gideon2021 is completely \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Methods} & \begin{tabular}{c} Injected \\ Periodic \\ Noise \\ \end{tabular} & \begin{tabular}{c} MAE \\ (bpm) \\ \end{tabular} & \begin{tabular}{c} RMSE \\ (bpm) \\ \end{tabular} & R \\ \hline [31] & \begin{tabular}{c} w/o \\ w/ \\ \end{tabular} & \begin{tabular}{c} 1.85 \\ 22.47 \\ \end{tabular} & \begin{tabular}{c} 4.28 \\ 25.41 \\ \end{tabular} & \begin{tabular}{c} 0.939 \\ 0.244 \\ \end{tabular} \\ \hline \multirow{2}{*}{**Contrast-Phys+ (0\%)**} & w/o & \begin{tabular}{c} 0.64 \\ 0.74 \\ \end{tabular} & \begin{tabular}{c} 1.00 \\ 1.34 \\ \end{tabular} & \begin{tabular}{c} 0.995 \\ 0.991 \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} TABLE VIII: HR results trained on UBFC+rPPG with/without injected periodic noise shown in Fig. 13(a). Fig. 11: rPPG waveforms from Contrast-Phys+ trained with different label ratios: (a) 0%, (b) 20%, (c) 60%, and (d) 100%. The ambiguous/wrong peaks from rPPG are highlighted in gray areas. Fig. 12: Irrelevant power ratio (IPR) with respect to training time. distracted by the noise block. We also evaluate the performance of both methods on UBFC-rPPG videos with the injected noise, and the results are summarized in Table VIII. These results align with the saliency map analysis, confirming that Contrast-Phys+ is not impacted by the periodic noise, while Gideon2021 fails to handle it effectively. The robustness of Contrast-Phys+ to noise can be attributed to the rPPG spatial similarity constraint, which helps to filter out noise. Fig. 13(b) displays the saliency maps when head motion is involved. The saliency maps of Contrast-Phys+ primarily focus on and activate skin areas, indicating its ability to handle head motions effectively. In contrast, the saliency maps of Gideon2021 exhibit noise and are scattered, covering only partial facial areas during head motions. #### v.8.2 rPPG Waveforms Fig. 11 displays the rPPG waveforms obtained from Contrast-Phys+ trained with different label ratios on the MMSE-HR dataset. As more labels are available during training, ranging from 0% to 100%, the rPPG waveforms become more similar to the ground truth (GT) signal and exhibit fewer ambiguous or incorrect peaks. The waveform corresponding to the 0% label ratio contains noisy components highlighted by gray areas indicating ambiguous or incorrect peaks. In contrast, the waveform at the 100% label ratio is well aligned with the GT signal, with almost all peaks clearly distinguishable. The presence of distinguishable peaks in the rPPG waveform also facilitates the accurate calculation of HRV. The visualization of rPPG waveforms demonstrates that incorporating more labels during the training of Contrast-Phys+ improves the quality of the rPPG signal. This finding is consistent with the signal-to-noise ratio (SNR) results discussed in Sec. V.5.3. ## 6 Conclusion We propose Contrast-Phys+, which can be trained in unsupervised and weakly-supervised settings and achieve accurate rPPG measurement. 
Contrast-Phys+ is based on four rPPG observations and utilizes spatiotemporal contrast to enable unsupervised and weakly-supervised learning, covering settings with missing or unsynchronized GT signals, or even no labels at all. By combining rPPG prior knowledge with additional GT information, Contrast-Phys+ outperforms both unsupervised and supervised state-of-the-art methods and achieves good generalization to unseen data. In addition, the proposed method is robust against noise interference and computationally efficient. In future studies, the proposed method could be extended to learn other periodic signals such as respiration signals.

## Acknowledgments

The study was supported by the Academy of Finland (Project 323287 and 345948) and the Finnish Work Environment Fund (Project 200414). The authors also acknowledge CSC-IT Center for Science, Finland, for providing computational resources.
2309.06930
Modeling Dislocation Dynamics Data Using Semantic Web Technologies
Research in the field of Materials Science and Engineering focuses on the design, synthesis, properties, and performance of materials. An important class of materials that is widely investigated are crystalline materials, including metals and semiconductors. Crystalline material typically contains a distinct type of defect called "dislocation". This defect significantly affects various material properties, including strength, fracture toughness, and ductility. Researchers have devoted a significant effort in recent years to understanding dislocation behavior through experimental characterization techniques and simulations, e.g., dislocation dynamics simulations. This paper presents how data from dislocation dynamics simulations can be modeled using semantic web technologies through annotating data with ontologies. We extend the already existing Dislocation Ontology by adding missing concepts and aligning it with two other domain-related ontologies (i.e., the Elementary Multi-perspective Material Ontology and the Materials Design Ontology) allowing for representing the dislocation simulation data efficiently. Moreover, we show a real-world use case by representing the discrete dislocation dynamics data as a knowledge graph (DisLocKG) that illustrates the relationship between them. We also developed a SPARQL endpoint that brings extensive flexibility to query DisLocKG.
Ahmad Zainul Ihsan, Said Fathalla, Stefan Sandfeld
2023-09-13T13:03:44Z
http://arxiv.org/abs/2309.06930v1
# Modeling Dislocation Dynamics Data Using Semantic Web Technologies ###### Abstract Research in the field of Materials Science and Engineering focuses on the design, synthesis, properties, and performance of materials. An important class of materials that is widely investigated are crystalline materials, including metals and semiconductors. Crystalline material typically contains a distinct type of defect called "dislocation". This defect significantly affects various material properties, including strength, fracture toughness, and ductility. Researchers have devoted a significant effort in recent years to understanding dislocation behavior through experimental characterization techniques and simulations, e.g., dislocation dynamics simulations. This paper presents how data from dislocation dynamics simulations can be modeled using semantic web technologies through annotating data with ontologies. We extend the already existing Dislocation Ontology by adding missing concepts and aligning it with two other domain-related ontologies (i.e., the Elementary Multi-perspective Material Ontology and the Materials Design Ontology) allowing for representing the dislocation simulation data efficiently. Moreover, we show a real-world use case by representing the discrete dislocation dynamics data as a knowledge graph (DisLocKG) that illustrates the relationship between them. We also developed a SPARQL endpoint that brings extensive flexibility to query DisLocKG. **Keywords:** Ontology, Knowledge Graph, Reasoning, Dislocation, Crystallographic Defects, Semantic Web ## 1 Introduction Plastic deformation in metals and other crystalline materials can be attributed to a one-dimensional lattice defect type known as dislocation. The concept of the dislocation was introduced in the 1930s by Taylor [1] and Polanyi [2]. Dislocations determine mechanical properties of materials, such as strength, hardness, and ductility. For instance, materials engineers have discovered the strengthening mechanism of crystalline materials by studying the relationship between dislocation motion and the mechanical behavior of metals [3]. By controlling the motion of dislocations in crystalline materials, materials engineers can build, for example, an airplane turbine blade that can withstand an operation temperature of \(\sim 1000^{\circ}C\) and creep deformation due to centrifugal forces while the turbine is rotating [4]. Significant efforts have been made to understand dislocation systems using dedicated microscopy techniques and simulation methods. These simulation methods along with other techniques have been created to predict dislocation evolution. In recent years, data-driven approaches have brought new methods and tools for analyzing and understanding the evolution of dislocation systems [5, 6, 7, 8, 9, 10]. This intensely transforms the Materials Science and Engineering (MSE), combining simulations, data mining, and experiments, making the digital transformation possible [11, 12]. However, a digital transformation without being supported by the appropriate data infrastructure often ends up with isolated and inaccessible data repositories, the so-called "data silos". In this regard, materials informatics plays a significant role in materials science research to overcome the data silos problem. This is because materials informatics combines two discourses of materials science and information technologies to tackle the major problems in materials science, such as data management and analysis. 
Moreover, it helps to develop intelligent systems to, e.g., explore materials, find novel materials properties, or study the behavior of a specific materials phenomenon. To fully understand the behavior of materials and in particular dislocations, aspects from different length scales need to be considered. This variety of length scales makes the knowledge representation of systems of dislocations challenging, even though this has yet to be perceived as a significant research hindrance in materials science. Generally, the schematic representation of knowledge, i.e. the representation through ontologies, can significantly boost data management and analysis; it helps to extract knowledge from data. Ontologies also allow the domain knowledge to be machine-understandable, meaning that machines can read and interpret this knowledge efficiently. Furthermore, it has become an essential part of achieving FAIR (Discoverable, Accessible, Interoperable, and Reusable) data [13]. This paper presents how Discrete Dislocation Dynamics (DDD) data can be enriched using Semantic Web technologies, such as the Resource Description Framework (RDF) [14], the Web Ontology Language (OWL) [15] and SPARQL [16]. The first step we have taken is to adapt and extend the dislocation ontology (DISO) [17, 18] so that it can model various concepts and relationships in the DDD domain. The adaption includes adding missing concepts, improving class definition, exploring additional relationships between concepts, and finally aligning it with other domain-related ontologies, including the Elementary Multi-perspective Material Ontology (EMMO) and the Materials Design Ontology (MDO). This allows for representing the dislocation simulation data efficiently. DISO is one of the Dislocation Ontology Suite (DISOS)1 ontologies that represent the concepts and relationships of linear defects in crystalline materials. In fact, DISOS comprises several modules describing materials scientific concepts, representations of dislocations, and different simulation models in the dislocation domain. The adapted version of DISO is developed and maintained in the DISOS GitHub repository. The ontology is available in several RDF serializations via a persistent identifier (i.e., [https://purls.helmholtz-metadata.de/disos/diso](https://purls.helmholtz-metadata.de/disos/diso)) provided by PIDA (Persistent Identifiers for Digital Assets)2. PIDA employs content negotiation [19] to serve different versions of the ontology (i.e., the HTML documentation or an RDF representation) via its IRI. DISO has been syntactically validated by the W3C RDF validation service3 to conform with the W3C RDF standards. The documentation of the ontology is available via its IRI. Footnote 1: [https://purls.helmholtz-metadata.de/disos](https://purls.helmholtz-metadata.de/disos) Footnote 2: [https://purls.helmholtz-metadata.de/](https://purls.helmholtz-metadata.de/) Footnote 3: [https://www.w3.org/RDF/Validator/](https://www.w3.org/RDF/Validator/) The next step after adapting the ontology is to annotate the data gathered from multiple DDD simulations with the adapted version of DISO resulting in a knowledge graph (DisLocKG) of DDD data (more details can be found in section 6). This knowledge graph connects DDD data concepts via the relationship between them, thus enabling machine actionability [20], allowing for semantic querying, inferring implicit knowledge that does not exist, and ensuring data consistency and integrity. 
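To give a flavor of the semantic querying that DisLocKG enables, the sketch below runs a simple SPARQL query against an endpoint using Python's SPARQLWrapper. The endpoint URL is a placeholder (the actual DisLocKG endpoint address is not reproduced here), and the query uses only generic RDF vocabulary, so it makes no assumptions about DISO's term IRIs.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint URL; replace it with the actual DisLocKG SPARQL endpoint.
endpoint = SPARQLWrapper("https://example.org/dislockg/sparql")

# Count how many instances of each class the knowledge graph contains.
endpoint.setQuery("""
    SELECT ?class (COUNT(?instance) AS ?n)
    WHERE { ?instance a ?class }
    GROUP BY ?class
    ORDER BY DESC(?n)
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["class"]["value"], row["n"]["value"])
```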
The objective is to convert the unstructured DDD data to linked data with dereferenceable IRIs that adheres to W3C standards and best practices. This will enable not only reasoning about the dislocation data but also to integrate it into other MSE-related fields. We have made DisLocKG publicly available via its GitHub repository4. Footnote 4: [https://purls.helmholtz-metadata.de/dislockg](https://purls.helmholtz-metadata.de/dislockg) ## 2 Related work Over the past few years, many researchers gave a particular attention on developing ontologies to represent scientific data in different fields of science, such as physics [21], agriculture [22], and pharmaceutical science [23]. Specifically in the MSE field, several efforts have been up in creating ontologies representing materials-related notions or semantically presenting actual materials data as knowledge graphs. This section will discuss related works of knowledge graphs with and without semantic web technology (including RDF, OWL, and SPARQL). Two examples from the latter are the _Propnet_ Knowledge Graph [24] and the Materials Knowledge Graph (_MatKG_) [25]. The Propnet is a knowledge graph enhancing materials properties data from the Materials Project [26] Repository5. It augments base properties data (e.g., lattice, basis, chemical formula, band gap, and total energy) resulting from the ab-initio calculation into derived properties, e.g., Debye temperature, bulk modulus, and shear modulus. The workflow and input for generating the augmented data are subsequently stored in the knowledge graph. Footnote 5: [https://next-gen.materialsproject.org](https://next-gen.materialsproject.org) On the other hand, the MatKG stores metadata from over 2.9 million materials science articles. This metadata includes abstracts, titles, keywords, and author data (e.g., name, email, affiliation, and ORCID). By accessing the MatKG, we can retrieve information such as the milestones of a material developed by multiple authors. Ashino [27] have developed a "Materials Ontology" which is an ontology describing substances, processes, environments, and properties. This ontology also has been used to exchange data between three different thermal property databases. In the solid-state physics domain, Li et al. [28] have developed the _Materials Design Ontology_ (MDO) which is an ontology covering knowledge in the field of materials design, e.g., with regards to ab-initio methods. MDO is used to represent materials' data related to ab-initio calculations over disparate materials data repositories as RDF triples. At the time of writing the paper, a total of \(\approx\) 4.3K RDF triples have been collected in their repository6. While the work is related to the representation of crystalline material by means of the crystal structure, MDO does not represent data related to crystalline defects. Footnote 6: [https://github.com/LiUSemWeb/materials-design-ontology](https://github.com/LiUSemWeb/materials-design-ontology) Another effort in the experimental materials science community that uses semantic web technologies is the NanoMine Knowledge Graph7[29]. It is a knowledge graph for polymer nanocomposite materials integrating diverse data from more than 1,700 polymer nanocomposite experiments. Moreover, the authors of the NanoMine knowledge graph have developed the NanoMine ontology8, which is a backbone ontology to describe polymer nanocomposite experiments. 
Footnote 7: [http://nanomine.org/](http://nanomine.org/) Footnote 8: [https://github.com/tetherless-world/nanomine-ontology](https://github.com/tetherless-world/nanomine-ontology) In conclusion, it is evident that even though several efforts and groups utilizing the semantic web in various MSE-related fields have progressed significantly, work for semantically representing dislocations simulation data is still missing. We believe that this work is the first attempt at creating a knowledge graph in an MSE-related domain that deals with data governing details of linear crystallographic defects, i.e. dislocation data. As a result, the unstructured dislocation data is transformed into linked data with dereferenceable IRIs using persistent URLs, adhering to W3C standards and best practices. This enables not only to annotation of dislocation data by an ontology but also to integration of dislocation data into other MSE-related fields. ## 3 Description of the Domain This section briefly describes the relevant notions and concepts of line defects within the crystalline materials domain. ### Representation of Crystalline Materials Most of the metals and metallic materials have a crystalline structure, which implies that the atoms are arranged in a periodic structure with a high degree of symmetry. This periodic arrangement is at the basis of the _crystal structure_ model, idealizing the physical concept of crystalline materials. For example, in Figure 1 atoms are shown in an idealized manner as small spheres. The crystal structure is represented by the _lattice_ together with a _motif_: the lattice is a mathematical concept of an infinite, repeating arrangement of points in space (3D), in a plane (2D), or on a line (1D), in which all points have the same surrounding and coincide with atom positions. The motif (or base) consists of an arrangement of chemical species, which can be atoms, ions, or molecules in crystalline materials. By putting a motif of one or more atoms at every lattice point, the crystal structure can be represented. It is now possible to identify the smallest atom pattern that can be repeated along all spatial directions to cover the entire structure. This pattern is called a _unit cell_, shown as the black cube in Figure 1. The lattice parameters of the unit cell consist of the angles between the edges and the edge lengths. Figure 2 shows the six lattice parameters needed to characterize the unit cell: three lengths \((a,b,c)\) and three angles \((\alpha,\beta,\gamma)\). These parameters also constitute the basis vectors in the crystal coordinate system; they are not necessarily mutually perpendicular. Unit cells are often classified into a systematic based on the lattice parameters (cf. Figure 3). For instance, the cubic system has \(a=b=c,\ \alpha=\beta=\gamma=90^{\circ}\) and the orthorhombic system has \(a\neq b\neq c,\ \alpha=\beta=\gamma=90^{\circ}\). Seven crystal systems are often ordered according to the increasing symmetry: cubic, tetragonal, orthorhombic, hexagonal, rhombohedral, monoclinic, and triclinic. In the unit cell, we can also define _lattice points_, _lattice directions_, and _lattice planes_: A lattice consists of lattice points where the atoms, ions, or molecules are located (the leftmost cube in Figure 4). 
The vector position of lattice points, \(\overrightarrow{R}\), is described by the equation \[\overrightarrow{R}=n_{1}\mathbf{a}+n_{2}\mathbf{b}+n_{3}\mathbf{c}\, \tag{1}\] where \(n_{i}\) are arbitrary integers and \(\mathbf{a},\mathbf{b},\mathbf{c}\) are basis vectors (pointing along the axes in Figure 2) derived from the lattice parameters. As illustrated in Figure 4, a lattice direction or lattice vector is a vector connecting two lattice points, whereas a lattice plane forms an infinitely stretched plane (characterized through a plane normal) that cuts through lattice points such that a regular arrangement of lattice points in the plane occurs. Figure 1: The crystal structure of face-centered cubic comprises an aggregate of atoms within one unit cell in crystalline materials. Figure 4: On the very left, a unit cell is shown that corresponds to a ”face-centered cubic” structure. The black points indicate the lattice points, i.e. the positions of the atoms. Second from left: a direction vector that connects to these lattice points. Lastly, two different lattice planes are shown. Figure 3: The seven crystal systems. These seven crystal systems are also seven primitive Bravais lattices. Each of them only corresponds to a single lattice point. Figure 2: The geometry of a unit cell is exactly defined through the three lengths, \(a\), \(b\), and \(c\), and the three angles \(\alpha\), \(\beta\), and \(\gamma\). ### Description of Linear Defects In crystalline materials, atoms are not always arranged or positioned perfectly. Typically, different kinds of crystallographic defects lead to disruption of the local order in a material (in addition to thermal fluctuations affecting the atomic positions). A common type of such defect is the dislocation, which causes a strongly localized, tube-like region of disorder (illustrated by the dashed circle in the right panel of Figure 5; this tube-like region stretches along the \(z\)-direction). This region contains the highly disordered dislocation core at the center. Further away from the dislocation core, the perfect lattice structure is restored, even though there is now a row of atoms shifted into the new position as indicated by the red spheres in Figure 5. The "Burgers vector" of a dislocation can be defined through the "Burgers circuit", as shown in 6. A Burgers circuit is an atom-to-atom path that is closed in a perfect crystal (left panel of the figure). The length of the path is given as multiple of the atomic distances in two directions. In the presence of a dislocation, a path of the same lengths would not be closed (right panel of the figure). The step-by-step procedure is as follows: We define a reference point \(C\) as the start point of the path. The line sense of the path is given by \(\boldsymbol{\xi}\) assuming the "right-hand convention" (the thumb points along the vector \(\boldsymbol{\xi}\) into the picture plane and the other fingers curl around that vector; their fingertips indicate the direction of the path). The symbol \(\otimes\) in the figure indicates that the vector \(\boldsymbol{\xi}\) points into the picture plane. In the perfect crystal, this circuit goes up five atoms from the reference point, four atoms to the left, five atoms down, and four atoms to the left to close a circuit at point \(C\) again. With the same reference point and the same number of atomic spacings as in the circuit, the perfect crystalline material can be used with the crystal containing a dislocation. 
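The relation between the lattice parameters \((a,b,c,\alpha,\beta,\gamma)\), the basis vectors, and the lattice points of Equation 1 can be made concrete with a short numerical sketch. The orientation convention (\(\mathbf{a}\) along \(x\), \(\mathbf{b}\) in the \(x\)-\(y\) plane) and the example lattice parameter are assumptions chosen for illustration.

```python
import numpy as np

def unit_cell_basis(a, b, c, alpha, beta, gamma):
    """Cartesian basis vectors a, b, c of a (possibly triclinic) unit cell.
    Angles are given in degrees; the orientation convention (a along x,
    b in the x-y plane) is one common choice, assumed here."""
    al, be, ga = np.radians([alpha, beta, gamma])
    av = np.array([a, 0.0, 0.0])
    bv = np.array([b * np.cos(ga), b * np.sin(ga), 0.0])
    cx = c * np.cos(be)
    cy = c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
    cz = np.sqrt(max(c**2 - cx**2 - cy**2, 0.0))
    return av, bv, np.array([cx, cy, cz])

def lattice_point(n1, n2, n3, basis):
    """R = n1*a + n2*b + n3*c, i.e., Equation 1."""
    av, bv, cv = basis
    return n1 * av + n2 * bv + n3 * cv

# Example: a cubic cell (a = b = c, all angles 90 degrees) with an assumed
# lattice parameter of 3.6 Angstrom.
basis = unit_cell_basis(3.6, 3.6, 3.6, 90, 90, 90)
print(lattice_point(1, 2, 0, basis))   # approximately [3.6, 7.2, 0.0]
```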
However, the circuit does not end at the same point as in the perfect crystal, but rather at point \(G\) as shown in the right panel of Figure 6. The vector connecting the start point \(C\) with the finish point \(G\) is the Burgers vector \(\mathbf{b}\). There are two fundamentally different types of dislocations: "screw" and "edge" dislocation. A screw dislocation has a line sense parallel to its Burgers vector, \(\boldsymbol{\xi}||\boldsymbol{b}\), whereas for an edge dislocations the line sense is perpendicular to its Burgers vector, \(\boldsymbol{\xi}\perp\boldsymbol{b}\). Thus, Figure 6 shows an edge dislocation. In reality, screw and edge dislocations are extreme cases, while the most general case of a dislocation type in a crystalline material is a "mixed dislocation." This is a dislocation with the line sense, \(\boldsymbol{\xi}\), neither parallel nor perpendicular to its Burgers vector, \(\mathbf{b}\). Since the atoms around dislocations are not positioned at the perfect lattice points, the lattice is distorted near a dislocation. This distortion results in a stress field in the crystalline material around the dislocation which is the reason why dislocations move: they try to minimize energy. In the context of plastic deformation, a dislocation is defined as the boundary of a slipped area within which atoms are displaced by the size of an elementary unit translation given by the Burgers vector. In materials science, often the question arises on which "granularity level" a dislocation should be defined. Clearly, if we are interested in phenomena on the nanometer scale then we should resolve individual atoms (e.g., through high-resolution transmission electron microscopy or molecular Figure 5: The left figure shows the perfect order of atoms in a crystalline material. The right crystal contains a dislocation that destroys the local order of the crystal structure. Note that red-colored atoms only show the ‘irregularity”, and are not different atom types. Figure 6: The Burgers circuit in the crystalline materials. While the left panel has the Burgers circuit in the perfect crystal material. The right panel has the Burgers circuit around the dislocation in the crystalline materials. Due to the closure failure in the defective crystalline materials, we can define the Burgers vector, \(\mathbf{b}\). dynamics simulations). When taking the _mesoscopic_ perspective, typically the individual atoms can not be seen anymore and are not of interest (as, e.g., done through regular transmission electron microscopy or dislocations dynamics simulations). However, the dislocation line itself is still observable: the tube-like defect "region" is reduced to an idealized mathematical line, as demonstrated in the right panel of Figure 7. Therefore, the transition from the atomic scale to the mesoscale requires a conceptual and mathematical idealization that significantly reduces the amount of information. These idealizations require to be accompanied by further details and definitions from the atomic scale, including the crystal structure, the lattice, the lattice plane, and the lattice direction information, all of which have an impact on the dislocation's motion. For example, the motion of a dislocation line through a crystal is constrained to a specific crystallographic plane or lattice plane. Thus, it still requires crystallographic information, even though the "defect region" is now only represented as a mathematical line. 
These two different levels of information require particular attention when designing the dislocation ontology. The particular crystallographic or lattice plane constraining the dislocation motion is called the _slip plane_ (see the green plane in Figure 8). There are specific _slip directions_, which are lattice directions along which plastic deformation occurs within the slip plane, given by the Burgers vector. A _slip system_ is a set of slip planes with the same unit normal vector and the same slip direction. Thus, the unit normal vector and the slip direction or the Burgers vector (where the latter is not a unit vector) determine the slip system. The mathematical representation of a mesoscale dislocation, as shown in Figure 8, is an oriented curve with a start point and an endpoint. The local line orientation changes along the line, while the Burgers vector is constant for each point. Since the dislocation is a directed curve, it has a line sense. Unlike the local line orientation, it is a property of the whole line. Various computational and experimental techniques are leveraged to predict and observe dislocations in crystalline materials, some of which were already mentioned above. For instance, high-resolution transmission electron microscopy (or field ion microscopy) is used on the atomic scale to image the arrangement of atoms. On the mesoscale, the focus is on examining the characteristics of individual dislocations and analyzing the distribution, arrangement, and density of dislocation in materials. Transmission Electron Microscopy (TEM) and Discrete Dislocation Dynamics (DDD) simulations are techniques for investigating these properties and simulating the dislocation behaviour, respectively. TEM is a microscopy technique that generates a highly-magnified image of a material specimen. This technique involves an electron beam passing through the specimen and several lenses. In strongly simplified terms, if the electron beam hits an atom, then it is deflected. As a result of the deflection, the intensity of the transmitted beam is reduced, and the intensity of the diffracted beam is increased. The dislocation can be seen as a dark line in such a bright-field image. Figure 8: Depiction of the mathematical dislocation line on the mesoscale as a mathematical object that has start and end points. The object is characterized by the Burgers vector and the line sense. Furthermore, the dislocation motion is constrained by the slip plane. Figure 7: The idealization represents the dislocation in the mesoscale. Here, the individual atoms are no longer visible. This idealization reduced the tube-like defect “region” to a mathematical line. Note that the line on the right does not correspond to the dislocation in the left (this would be a vertical, straight line). Above, it was already mentioned that the displaced atoms around a dislocation result in stresses, because the atoms are no longer in their preferred equilibrium position. Dislocations move in such stress fields which are mainly described by the governing equations of (linear) elasticity theory. DDD simulations employ mathematical lines (polygons or splines) to represent dislocations, which are moved based on elastic interactions and further "local rules". The numerical schemes used in DDD simulations require to numerically discretize the mathematical line, e.g., by a number of straight line segments. The discretization steps are illustrated in Figure 9. Further details can be found in [30]. 
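Since the character of a dislocation follows directly from the angle between the line sense \(\boldsymbol{\xi}\) and the Burgers vector \(\mathbf{b}\) (parallel: screw, perpendicular: edge, otherwise mixed), this classification can be sketched for an oriented polyline as follows. The angular tolerance and the toy geometry are illustrative assumptions.

```python
import numpy as np

def dislocation_character(line_sense, burgers, tol_deg=5.0):
    """Classify a straight dislocation segment as screw, edge, or mixed from
    the angle between its line sense and its Burgers vector.
    The 5-degree tolerance is an illustrative choice."""
    xi = np.asarray(line_sense, float)
    b = np.asarray(burgers, float)
    cosang = abs(np.dot(xi, b)) / (np.linalg.norm(xi) * np.linalg.norm(b))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    if angle < tol_deg:
        return "screw"        # line sense parallel to b
    if abs(angle - 90.0) < tol_deg:
        return "edge"         # line sense perpendicular to b
    return "mixed"

# A curved line can be checked segment by segment; here a toy polyline
# (three vertices) with the Burgers vector along [1, 0, 0]:
vertices = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0]], float)
b = np.array([1.0, 0.0, 0.0])
for p0, p1 in zip(vertices[:-1], vertices[1:]):
    print(dislocation_character(p1 - p0, b))
# prints 'screw' for the first segment and 'mixed' for the second
```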
The discretization process can be easiest described based on the example of a polygonal chain. There, the smooth mathematical line is approximated by a polygonal chain, \(\mathbf{C}\). \(\mathbf{C}\) is a curve defined by a sequence of points (\(\mathbf{P}_{0}\), \(\mathbf{P}_{1}\),..., \(\mathbf{P}_{n}\)), and these points are called vertices. In addition, the curve consists of segments connecting consecutive pairs of vertices. In general, we can define the shape of a segment through the _shape function_ which allows to have not only straight line segments but also spline curves of different order. ## 4 The Dislocation Ontology The dislocation ontology, DISO, is developed using several well-known ontology development methodologies, such as [31]. The process is iterative, starting with an initial version and continuously revising and refining the evolving ontology. The development process is outlined in Figure 10, which includes the main phases, their sub-tasks, and the roles involved. ### Metadata It is essential to provide a systematic and comprehensive description of the ontology, also known as ontology metadata, thus supporting its reusability and findability [32]. When ontology metadata is missing, several potential issues can occur. These include reduced accessibility for potential users, decreased reusability, and ontologies not being recognized as relevant for specific use cases. Accordingly, several DCMI Metadata Terms9 have been added to the ontology, involving terms:contributor, terms:created, terms:title, and vann:preferredNamespacePrefix. Footnote 9: [https://www.dublincore.org/specifications/dublin-core/dcmi-terms/](https://www.dublincore.org/specifications/dublin-core/dcmi-terms/) ### Reuse of existing models When developing an ontology, one of the first steps is to utilize or reuse terms (i.e., classes or properties) from existing ontologies that describe the same domain or subject matter. Deciding which ontologies are appropriate for reuse is a challenging task for ontology engineers. Ontology reuse involves several activities, including merging, extending, specializing, or adapting other ontologies. In DISO, we reuse concepts from two related ontologies in the MSE domain: the _Crystal Structure Ontology10_ (CSO) and the _Crystalline Defect Ontology11_ (CDO). CSO describes crystallographic data related to dislocations, while CDO links physical material entities to crystal structures and different defect types within a crystal, such as point defect, dislocation, and planar defect. Footnote 10: [https://purls.helmholtz-metadata.de/disos/cso](https://purls.helmholtz-metadata.de/disos/cso) Footnote 11: [https://purls.helmholtz-metadata.de/disos/cdo](https://purls.helmholtz-metadata.de/disos/cdo) Footnote 12: [https://github.com/emmo-repo/domain-crystallography](https://github.com/emmo-repo/domain-crystallography) In CDO, the EMMO:Crystal class (from the EMMO13 ontology) is reused to describe the physical entity of crystalline materials. The CDO:CrystallineMaterial class is defined as a subclass of EMMO:Crystal which is used to represent crystalline materials. Footnote 13: [https://purls.helmholtz-metadata.de/disos/cdo](https://purls.helmholtz-metadata.de/disos/cdo) In CSO, several MDO [28] classes are reused to describe the crystal coordinate system, the motif in a crystal structure, point groups, and space groups. Furthermore, the CSO defines the unit quantity of a property by reusing several Figure 9: The discretization of dislocation to a numerical representation. 
The oriented curve dislocation line shown in the left panel is discretized into a number of segments shown in the right panel. classes from QUDT (Quantities, Units, Dimensions and Data Types Ontologies) [33]. Overall, the semantic data value of the developed ontology increases as more ontologies are included, making the reuse of terms from other ontologies a worthwhile undertaking [34]. ### Classes Our ontology classes are separated into two groups: (i) those imported from existing ontologies (as explained in subsection 4.2) and (ii) newly created classes that are not already defined in any existing ontologies. _Imported classes_. DISO reuses several classes from CSO: CSO:Lattice represents the periodic arrangement of one or more atoms, and CSO:Vector represents quantities with both magnitude and direction. Additionally, DISO reuses classes from CDO, including CDO:CrystallographicDefect, which represents lattice irregularity or lattice defects. _Newly defined classes_. For new classes, we focus on specific classes of crystalline materials and line defects, including 1) Dislocation, the focal class in the DISO which represents a linear or one-dimensional defect that causes some atoms to be displaced, 2) SlipPlane, which models the lattice plane to which the dislocation is constrained to move in, 3) SlipDirection, which models the lattice direction where the slip occurs in the crystalline materials, 4) LatticePlane, which represents the lattice plane where it forms an infinitely stretched plane that cuts through the lattice points, 5) LatticeDirection, which models the direction inside the lattice that connects two lattice points, and 6) DiscretizedLine, which provides a numerical representation of the dislocation line as a mathematical line, such as an oriented curve, that is discretized into several segments. ### Properties Similarly, both data and object properties in DISO are divided into two categories which are newly defined properties and reused ones. _Newly defined properties_. Object properties constitute the relationship between various concepts in the ontology. For instance, the relationship between TransmissionElectronMicroscopy and Dislocation classes can be represented through the observedBy object property. Similarly, the hasLineSense object property represents the relationship between Dislocation and LineSense. Additionally, a number of data properties, including directionMillerIndice and planeMillerIndice are defined, which typically provide a relation to attaching an entity instance to some literal datatype value, such as a string or a date. Figure 10: The workflow of the dislocation ontology development, illustrating the main phases, subprocesses, and roles involved in the whole process. _Reused properties_. Several properties from the reused ontologies have been used, e.g., cso:hasPositionVector, cdo:hasCrystallographicDefect, mdo:hasComposition, and emmo:hasProperty from the CSO ontology, CDO ontology, MDO ontology, and EMMO ontology, respectively. Moreover, we reused several data properties from DCterms for adding ontology metadata (see subsection 4.1). After defining new properties and identifying reused ones, the domain and range for each property using rdfs:domain and rdfs:range are defined, respectively. For instance, the domain of the data property diso:planeMillerIndice is diso:LatticePlane and the range is xsd:string. while the domain of the object property diso:hasLatticePoint is diso:LatticePlane and the range is diso:LatticePoint. _Restricting properties_. 
In the DISO, several classes use property restrictions, e.g., value constraints. For example, the resultsIn property which connects Dislocation and LatticeDisplacement is restricted by a value constraint of owl:someValuesFrom representing the fact that every dislocation individual results in _some_ or at least one lattice displacement individual(s). The hasLineSense property which connects Dislocation and LineSense is restricted by a value constraint of owl:allValuesFrom representing that every dislocation individual can _only_ have a line sense individual. ### Reasoning DISO's inference capability is increased through the use of several property characteristics, such as functional relations, transitivity, and the inverse property [35]. hasMathematicalRepresentation is a functional property because it means that a dislocation can be represented by exactly one mathematical line, i.e., it can not have any different mathematical representation than that. The transitive property can be demonstrated through the hasRepresentation relationship. This relationship refers to the connection between a dislocation and its representation. For instance, if a dislocation has a line representation, and this line has a discretized line representation, it can Figure 11: Core concepts and interconnected relationships in the DISO ontology. Arrows with open arrowheads denote rdfs:subClassOf properties between classes. Regular arrows represent _rdfs:domain_ and _rdfs:range_ restrictions on properties and coloured boxes represent classes belonging to different ontologies, e.g., yellow boxes represent DISO’s classes. be inferred that the dislocation also has a discretized line representation. In order to enable bidirectional navigation between two classes in the ontology, inverse properties are established for each corresponding property. For instance, the isSegmentOf property is the inverse property of hasSegment. This means that if a discretized line A _has a segment_ B, then B _is a segment of_ A. ## 5 Ontology Alignment Ontology alignment is the process of identifying relations between entities among different ontologies in order to establish connections between them [36]. These entities include classes, properties, and individuals. For successful ontology alignment, it is crucial to identify similarities between source and target ontologies. The analysis entails examining concepts that overlap but may have different names (i.e. synonyms) or types in the ontologies [37]. This section will cover the extension of DISO, which involves aligning two ontologies, namely EMMO and MDO. This alignment plays a crucial role in allowing DISO to annotate the DDD data and transform it into linked data while also facilitating knowledge graph generation. ### Alignment with EMMO EMMO is a continuous initiative aimed at establishing semantic standards that can be implemented at the highest level of abstraction. This makes it possible for all potential domain ontologies, especially in the MSE field, to be integrated and to work together seamlessly. Currently, EMMO consists of two modules: a top-level and a mid-level module. The former includes the fundamental axioms that constitute the philosophical foundation of the EMMO, while the latter consists of a set of perspectives to develop more specialized domain ontologies. These two ontologies serve as Figure 12: DISO alignment with EMMO. The starting point to align the DISO with the EMMO by importing the domain ontology crystallography developed by EMMO. 
the basis for building further domain and applications ontologies, e.g., the application of EMMO in the domain of mechanical testing [38]. The starting point to align the DISO with EMMO is by aligning with the Crystallography Domain Ontology13, a domain ontology based on EMMO and the CIF core dictionary14. As shown in Figure 12, CDO:CrystallographicDefect subsumes Dislocation, while also being a subclass of EMMO:Crystallographical class. Similarly, EMMO:CrystalStructure, an equivalent class to CSO:CrystalStructure, is also a subclass of EMMO:Crystallographical. Overall, EMMO:Crystallographical is a class that ideally represents the physical concepts associated with crystalline materials. Footnote 13: [https://github.com/emmo-repo/domain-crystallography](https://github.com/emmo-repo/domain-crystallography) Footnote 14: [https://www.iucr.org/resources/cif/dictionaries/cif_core](https://www.iucr.org/resources/cif/dictionaries/cif_core) As we mentioned in section 3, on the mesoscale, a dislocation is represented by a mathematical line, which can be further idealized as a pixel or discretized line depending on the application (e.g., microscopy or simulation). To align the dislocation mathematical line concept with an EMMO class, EMMO:MathematicalModel subsumes Line and the discretized representation of the mathematical dislocation line, DiscretizedLine, is subsumed EMMO:Numerical. ### Alignment with MDO The MDO is a domain ontology that defines concepts and relations to cover the knowledge of materials design, especially in the ab-initio calculation. MDO consists of several modules, a _Core_, the _Provenance_ module, and two domain-specific modules: _Structure_ and _Calculation_. To align the DISO with the MDO, we reused several classes in the MDO Core module. The MDO Core module describes the structure or the virtual specimen of interest via MDO:Structure class. As shown in Figure 13, we defined a DislocationStructure class as a subclass of MDO:Structure. This class describes a dislocation (micro)structure, which is a virtual specimen used by a DDD simulation to study the mechanical properties of a crystalline material. Furthermore, DislocationStructure as an idealized representation relates to a physical concept called CDO:CrystallineMaterial. Figure 13: DISO alignment with the MDO Core and Provenance module. In the MDO Core module, an instance of the MDO:Structure class is used as a virtual specimen input or output for a simulation. Here, the simulation concept is represented as the MDO:Calculation class. We subsumed the MDO:Calculation class to define the DDDSimulation class, which is a class to describe the DDD simulation. Thus, the DDDSimulation can have DislocationStructure as an input or an output. Moreover, the DDDSimulation has an input and output relationship with MDO:Property to run a calculation. In addition, the DDDSimulation is related to the DDDSimulationParameter, a simulation parameter concept configuring the DDD simulation, e.g., the activation parameter for cross-slip, junction formation, and external load. To preserve the provenance information of a DDD simulation, we reused several classes from the MDO Provenance module and the PROV ontology [39]. Running a DDD simulation requires specific software to solve materials science problems. It is quite helpful to store information about the software used and its version, as this can help scientists reuse data through post-processing methods specific to DDD software. 
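To illustrate what this kind of provenance-aware annotation might look like in practice, the following rdflib sketch creates a handful of triples in the spirit of the alignment just described. It is a minimal sketch only: the example IRIs and the hasInputStructure/wasAssociatedWith property choices are assumptions made for illustration, not normative DISO modelling; the specific provenance properties used by DISO are described next.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

# Namespace IRIs below are placeholders chosen for illustration; the published
# DISO, MDO, and PROV namespaces should be used in real annotations.
DISO = Namespace("https://purls.helmholtz-metadata.de/disos/diso#")
MDO = Namespace("https://w3id.org/mdo/core/#")
PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/ddd/")

g = Graph()
g.bind("diso", DISO)

simulation = EX["simulation_001"]
initial_structure = EX["initial_structure_001"]
software = EX["modelib"]

# A DDD simulation takes a dislocation (micro)structure as input ...
g.add((simulation, RDF.type, DISO.DDDSimulation))
g.add((initial_structure, RDF.type, DISO.DislocationStructure))
g.add((simulation, MDO.hasInputStructure, initial_structure))  # property name assumed

# ... and records which software agent produced it, and when it started.
g.add((software, RDF.type, PROV.SoftwareAgent))
g.add((simulation, PROV.wasAssociatedWith, software))          # generic PROV link
g.add((software, MDO.softwareName, Literal("MoDELib")))
g.add((simulation, PROV.startedAtTime,
       Literal("2023-01-01T00:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```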
In this regard, DDDSimulation has a relationship with PROV:SoftwareAgent, which has two data properties: softwareVersion and MDO:softwareName. Furthermore, to preserve the provenance information related to when a DDD simulation starts and ends, PROV:Activity subsumed the DDDSimulation and inherited two data properties: PROV:startedAtTime and PROV:endedAtTime. Apart from that, we reused PROV:Person to annotate the person running or responsible for the simulation. It has three data properties to define a person: FOAF:firstName, Figure 14: The core concepts in DISO ontology after the alignment with MDO and EMMO. Arrows with open arrowheads denote rdfs:subClassOf properties between classes, while regular arrows represent the relationships between them. Classes that belong to the same ontology share the same color. GRAF:family_name, and MWO:hasORCID. The latter is a data property that we reused from the MatWerk Ontology (MWO)15. Footnote 15: [http://purls.helmholtz-metadata.de/mwo/](http://purls.helmholtz-metadata.de/mwo/) To summarize, core concepts and interconnected relationships in DISO after the alignment can be seen in Figure 14. The advantages of ontology alignment for DISO are promoting knowledge transfer from other ontologies when describing the DDD simulation. Furthermore, ontology alignment fosters interoperability between ontologies in MSE-related domains. The objective is to assist in building a knowledge graph for the dislocation domain. ## 6 The Dislocation Knowledge Graph In the field of materials science, researchers use a numerical method called DDD simulation to analyze the behaviour of dislocations within crystalline materials. This technique helps identify the specific characteristics of each dislocation, as well as their interaction, arrangement, and collective behaviour within the material. The simulation observes the motion and interaction of many dislocations which ultimately creates the relationship between the microstructure, loading conditions, and the mechanical properties of a crystalline material. For simulations of dislocations, there are various software options available such as MoDELib [40], ParaDiS [41], and microMegas [42]. Every software has a distinct collection of metadata that organizes the inputs and outputs of the simulation. DISO was utilized in this specific scenario to accurately annotate the information collected from various DDD simulations. The ultimate goal is to generate a comprehensive dislocation knowledge graph (DisLocKG) using this data. The DDD data used in this work was generated through the MoDELib software and took different initial dislocation densities and specimen sizes into account. The cube-shaped Copper specimen, with an edge length of either 50 or 100 nanometers, was randomly filled with dipolar edge loops on all slip systems until the initial density of either \(1\cdot 10^{16}\) m\({}^{-2}\) or \(5\cdot 10^{16}\) m\({}^{-2}\) was reached. A sample of the generated cube-shaped Copper specimen can be seen on the left panel of the Figure 15. During the simulation, the dislocation microstructure was allowed to relax without any external load, meaning that internal stress and image forces solely influenced the dislocation evolution. The simulation resulted in the relaxed dislocation microstructure shown on the right-hand side of Figure 15. An important aspect for a materials scientist is that some simulations do not have cross-slip or junction formation. This has significant implications when it comes to analyzing simulation results. 
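As a brief aside on the numbers quoted above: because dislocation density is defined as total dislocation line length per unit volume, a target initial density translates directly into the amount of line that has to be seeded into the specimen. A quick back-of-the-envelope check in Python (values taken from the setup above):

```python
# Dislocation density rho = L / V (total line length per volume, units m^-2),
# so the line length needed to reach a target density is L = rho * V.
edge_length = 100e-9   # cube edge in metres (the 100 nm specimen)
rho_initial = 5e16     # target initial dislocation density in m^-2

volume = edge_length ** 3                 # specimen volume in m^3
total_line_length = rho_initial * volume  # required dislocation line length in m
print(f"{total_line_length * 1e6:.1f} micrometres of dislocation line")  # ~50.0
```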
E.g., Demirci et al. [43] investigated the influence of cross-slip on the evolution of dislocation structures and therefore could benefit from this information. Such relaxation calculations are also important for creating a realistic microstructure. For example, Motz et al. [44] investigated how the relaxed dislocation microstructure influences the plasticity in subsequent tensile test simulations. Furthermore, several authors [45, 46, 47, 9] conducted machine learning and data mining studies utilizing the dislocation relaxed microstructure to classify the structure and express the strain energy density of a dislocation microstructure, respectively. This explains why there is a strong need for a detailed and formal representation of such simulations - here, contained in the class of DDDSimulation. For our example, we have collected a total of 25 data points - where each data point is Figure 15: Sample of dislocation microstructures used as input for simulation as well as yielded by the simulation as output. one DDD simulation consisting of initial and final microstructure. Each of those was annotated with DISO. Any data point gives information about the simulation details, such as parameters used for a simulation, initial dislocation microstructure used as input, and the resulting dislocation microstructure produced by the simulation. Additionally, each dislocation microstructure includes information about the crystal structure, Bravais lattice, dislocation, slip plane, Burgers vector, and numerical representation of dislocation. The results of the simulations are stored and parsed in the HDF516 format. Subsequently, we utilize our in-house Python scripts (using the rdflib 6.0 [48] Python library) to create a knowledge graph called _DisLocKG_ from this data using DISO as a reference ontology (cf. Figure 16). The DisLocKG is a semantic network that holds information about dislocations in crystalline materials. Note that, DisLocKG also stores the provenance information related to the data, particularly the creator data, software, and software version used to generate the data. In total, we have generated a number of \(\sim 2.2\)M triples that are stored as RDF files which are available via its persistent identifier17. Footnote 16: [https://www.hdfgroup.org/solutions/hdf5/](https://www.hdfgroup.org/solutions/hdf5/) Footnote 17: [https://purls.helmholtz-metadata.de/dislockg](https://purls.helmholtz-metadata.de/dislockg) Publishing DDD data as linked data has several benefits [49], including 1) establishing links between dislocation-related datasets, enabling machines to understand and discover new information, 2) supporting semantic querying via the SPARQL query language, 3) supporting data enrichment, where machines can infer implicit knowledge that does not exist, and 4) promoting semantic validation of the data, ensuring consistency and accuracy. We have listed some competency questions in Table 1 to give an idea of the vast information available in DisLocKG. For instance, CQ1 can retrieve the history and origin information of DDD data generated by the MoDELib software, and CQ2 and CQ3 can retrieve information on the specimen geometry and the initial dislocation of each dislocation simulation. These CQs are important if one wants to query a dislocation simulation to be reused for the processing step if they need a specific density of a dislocation structure and information concerning the geometry. 
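To give a flavour of how such a competency question can be posed against DisLocKG programmatically, the following rdflib sketch runs a CQ3-style SPARQL query over a local RDF export. It is illustrative only: the file name and the hasInputStructure/hasDensity predicates are assumptions standing in for the actual DISO terms, which should be taken from the ontology and the queries in the GitHub repository.

```python
from rdflib import Graph

g = Graph()
g.parse("dislockg.ttl", format="turtle")  # hypothetical local export of DisLocKG

# CQ3-style query: simulations whose initial structure has density 5e16 m^-2.
query_cq3 = """
PREFIX diso: <https://purls.helmholtz-metadata.de/disos/diso#>
SELECT ?simulation ?structure ?density
WHERE {
  ?simulation a diso:DDDSimulation ;
              diso:hasInputStructure ?structure .
  ?structure  diso:hasDensity ?density .
  FILTER (?density = 5e16)
}
"""

for row in g.query(query_cq3):
    print(row.simulation, row.structure, row.density)
```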
CQ4 retrieves the input parameters to run the simulation, while CQ5 queries all dislocation structures generated by the relaxation calculation. The SPARQL query corresponding to CQ3 is shown in Listing 1, and the complete set of the competency questions and the corresponding SPARQL queries can also be found in the DISO GitHub repository. Figure 17 visualizes the results of CQ3, which contains three individuals (shown as the red markers) of the DDD simulation class. Each of the DDD simulation individuals has a relationship with dislocation structure individuals. Moreover, the dislocation density data relates to the dislocation structure individual. Figure 16: DDD simulation data as linked data. Colored rectangles on the left depict data types in the DDD simulation: dislocation structure, provenance, simulation parameters, and crystal structure data. The data subsequently is linked using the DISO as a reference ontology. The rdflib Python module supports data linking and generates the _DisLocKG_. Via the SPARQL Endpoint, end users can query the data to retrieve the information in the DisLocKG. ## 7 Evaluation Employing predefined metrics that evaluate an ontology's richness through criteria-based assessment is one way of evaluating its quality [34]. In this section, we evaluate the adapted version of DISO using the OntoQA [50] evaluation model. This model can assess an ontology based on two dimensions: Schema and Instances. Here, we focus on the schema evaluation which evaluates the quality of the ontology's design. We determine the effectiveness of the ontology and its ability to represent rich knowledge using the following metrics: * _Relationship richness (RR)_ shows the diversity of relations and placement of relations in the ontology (Equation 2). \[RR=\frac{|P|}{|SC|+|P|}\] (2) where \(P\) is the number of relationships and _SC_ is the number of sub-classes. The more relations an ontology owns, the richer it is (_is-a_ relations are not considered). * _Attribute richness (AR)_ shows that the more attributes are defined the more knowledge the ontology delivers (Equation 3). \[AR=\frac{|AT|}{|C|}\] (3) where _AT_ is the number of attributes for all classes and \(C\) is the number of classes. * _Inheritance richness (IR)_ describes the distribution of information across different levels of the ontology inheritance tree. IR indicates how knowledge is classified into different classes and subclasses in an ontology. (Equation 4). \[IR=\frac{|SC|}{|C|}\] (4) \begin{table} \begin{tabular}{c p{341.4pt}} \hline **No.** & **Question** \\ \hline CQ1 & Provide detailed information on the dislocation structures simulated using the MODELIB software, including the software version and creator associated with these simulations. \\ CQ2 & Which dislocation structures possess a specimen shape resembling a cube with an edge length greater than 30 nanometers? \\ CQ3 & List all DDD simulations that have an initial density of dislocation = 5\(e\)16 m\({}^{-2}\) \\ CQ4 & List all DDD simulations that do not activate the cross slip formation and junction formation \\ CQ5 & What are dislocation structures generated by the relaxation calculation? List also the initial density of a dislocation structure used for a relaxation calculation, simulation parameters: cross-slip activation, junction formation activation, and external load activation. \\ \hline \end{tabular} \end{table} Table 1: A sample of competency questions for DisLocKG. Figure 17: A visual representation of CQ3 results. 
Colored boxes represent classes and the red dot represents an individual belonging to that class. Each individual is defined by a directed arrow having the rdf:type relationship to the respective class and connected to other individuals by object properties. In Table 2, we compare the evaluation outcomes of DISO with MDO [28], CSO18 and the previous version of DISO. DISO has the most significant value of _RR_, which implies that it has a greater relation diversity. Moreover, DISO has the highest _IR_ value, representing a more comprehensive knowledge range than MDO, CSO, and the previous version of DISO. The _AR_ value of DISO is lower than that of MDO and higher than CSO and the previous version of DISO. To conclude, DISO possesses the most extensive knowledge representation and diversity in terms of relationships, achieving the highest IR and RR, respectively. Moreover, the adapted version of DISO surpasses its predecessor in all evaluation metrics. Footnote 18: [https://purls.helmholtz-metadata.de/disos/cso](https://purls.helmholtz-metadata.de/disos/cso) ## 8 Conclusion and Outlook This paper showcases how semantic web technologies can be utilized to transform unstructured DDD data into well-organized and structured data. Furthermore, we extended the dislocation ontology by aligning it with commonly used materials science ontologies (i.e., EMMO and MDO core) to be able to model simulation data efficiently. Moreover, we presented a real-world use case that utilized the DISO to construct a semantic network of DDD data (i.e., linked data) called DisLocKG, where individual entities are connected, enabling semantic query and supporting intelligent tasks. To support querying DisLocKG, the graph has been made publicly available and steps to set up a SPARQL endpoint are described in its GitHub repository. The evaluation results indicate that the adapted version of DISO is the most comprehensive and diverse knowledge representation among the state-of-the-art ontologies. In the future, we plan to improve DISO by modelling the linear elasticity theory of dislocations and extending the real-world use case, e.g., another DDD simulation software and TEM data. In addition, developing the DisLocKG Application Programming Interface (API) will also be a worthwhile undertaking. The idea is to develop several interactive features, e.g., querying, data mining, visualizing, updating, and deleting data with the DisLocKG via its API. DISO and DisLocKG will continue to be maintained and extended in the context of the Helmholtz Metadata Collaboration (HMC) and NFDI-MatWerk efforts of facilitating machine readability and reuse of research data. \begin{table} \begin{tabular}{c c c c c|c c c} \hline \hline Ontology & C & SC & AT & P & RR & AR & IR \\ \hline MDO & 37 & 49 & 32 & 32 & 0.40 & 0.86 & 1.32 \\ CSO & 30 & 49 & 19 & 25 & 0.34 & 0.63 & 1.63 \\ DISO v1.0 & 33 & 62 & 12 & 33 & 0.35 & 0.32 & 1.63 \\ \hline **DISO v1.1** & **70** & **116** & **47** & **80** & **0.41** & **0.67** & **1.66** \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation of DISO compared to DISO v1.0, MDO and CSO using the OntoQA model. C is the number of classes, SC is the number of sub-classes, AT is the number of attributes, and P denotes the number of relationships. Acknowledgments.AI and SS acknowledge financial support from the European Research Council through the ERC Grant Agreement No. 759419 MuDiLingo ("A Multiscale Dislocation Language for Data-Driven Materials Science"). 
AI, SF, and SS acknowledge the Helmholtz Metadata Collaboration (HMC) within the Hub Information at the Forschungszentrum Jülich (FZJ). We are grateful to Aytekin Demirci for the MoDELib parser that we used. ## Declarations Conflict of interest. The authors have no conflicts of interest to declare.
2309.03848
Bipartite Friends and Strangers Walking on Bipartite Graphs
Given $n$-vertex simple graphs $X$ and $Y$, the friends-and-strangers graph $\mathsf{FS}(X, Y)$ has as its vertices all $n!$ bijections from $V(X)$ to $V(Y)$, where two bijections are adjacent if and only if they differ on two adjacent elements of $V(X)$ whose mappings are adjacent in $Y$. We consider the setting where $X$ and $Y$ are both edge-subgraphs of $K_{r,r}$: due to a parity obstruction, $\mathsf{FS}(X,Y)$ is always disconnected in this setting. Modestly improving a result of Bangachev, we show that if $X$ and $Y$ respectively have minimum degrees $\delta(X)$ and $\delta(Y)$ and they satisfy $\delta(X) + \delta(Y) \geq \lfloor 3r/2 \rfloor + 1$, then $\mathsf{FS}(X,Y)$ has exactly two connected components. This proves that the cutoff for $\mathsf{FS}(X,Y)$ to avoid isolated vertices is equal to the cutoff for $\mathsf{FS}(X,Y)$ to have exactly two connected components. We also consider a probabilistic setup in which we fix $Y$ to be $K_{r,r}$, but randomly generate $X$ by including each edge in $K_{r,r}$ independently with probability $p$. Invoking a result of Zhu, we exhibit a phase transition phenomenon with threshold function $(\log r)/r$: below the threshold, $\mathsf{FS}(X,Y)$ has more than two connected components with high probability, while above the threshold, $\mathsf{FS}(X,Y)$ has exactly two connected components with high probability. Altogether, our results settle a conjecture and completely answer two problems of Alon, Defant, and Kravitz.
Ryan Jeong
2023-09-07T17:03:21Z
http://arxiv.org/abs/2309.03848v3
# Bipartite Friends and Strangers Walking on Bipartite Graphs ###### Abstract. Given \(n\)-vertex simple graphs \(X\) and \(Y\), the friends-and-strangers graph \(\mathsf{FS}(X,Y)\) has as its vertices all \(n!\) bijections from \(V(X)\) to \(V(Y)\), where two bijections are adjacent if and only if they differ on two adjacent elements of \(V(X)\) whose mappings are adjacent in \(Y\). We consider the setting where \(X\) and \(Y\) are both edge-subgraphs of \(K_{r,r}\): due to a parity obstruction, \(\mathsf{FS}(X,Y)\) is always disconnected in this setting. Sharpening a result of Bangachev, we show that if \(X\) and \(Y\) respectively have minimum degrees \(\delta(X)\) and \(\delta(Y)\) and they satisfy \(\delta(X)+\delta(Y)\geq\lfloor 3r/2\rfloor+1\), then \(\mathsf{FS}(X,Y)\) has exactly two connected components. This proves that the cutoff for \(\mathsf{FS}(X,Y)\) to avoid isolated vertices is equal to the cutoff for \(\mathsf{FS}(X,Y)\) to have exactly two connected components. We also consider a probabilistic setup in which we fix \(Y\) to be \(K_{r,r}\), but randomly generate \(X\) by including each edge in \(K_{r,r}\) independently with probability \(p\). Invoking a result of Zhu, we exhibit a phase transition phenomenon with threshold function \((\log r)/r\): below the threshold, \(\mathsf{FS}(X,Y)\) has more than two connected components with high probability, while above the threshold, \(\mathsf{FS}(X,Y)\) has exactly two connected components with high probability. Altogether, our results settle a conjecture and completely answer two problems of Alon, Defant, and Kravitz. ## 1. Introduction ### Background Let \(X\) and \(Y\) be \(n\)-vertex simple graphs. Interpret the vertices of \(X\) as positions, and the vertices of \(Y\) as people. Two people in the vertex set of \(Y\) are friends if they are adjacent and strangers if they are not. Each person chooses a position, producing a starting configuration. From here, at any point in time, two friends standing on adjacent positions may switch places: we call this operation a friendly swap. Our main interest in this paper will be to understand, in terms of assumptions on the structure of the graphs \(X\) and \(Y\), which configurations are reachable from which other configurations via some sequence of friendly swaps. We may formalize this setup using the following definition, illustrated in Figure 1. **Definition 1.1** ([1]).: Let \(X\) and \(Y\) be simple graphs on \(n\) vertices. The _friends-and-strangers graph_ of \(X\) and \(Y\), denoted \(\mathsf{FS}(X,Y)\), is a graph with vertices consisting of all bijections from \(V(X)\) to \(V(Y)\), with bijections \(\sigma,\tau\in\mathsf{FS}(X,Y)\) adjacent if and only if there exists an edge \(\{a,b\}\) in \(X\) such that 1. \(\{\sigma(a),\sigma(b)\}\in E(Y)\), 2. \(\sigma(a)=\tau(b),\ \sigma(b)=\tau(a)\), 3. \(\sigma(c)=\tau(c)\) for all \(c\in V(X)\setminus\{a,b\}\). In other words, \(\sigma\) and \(\tau\) differ on two adjacent vertices of \(X\) whose images under \(\sigma\) are adjacent in \(Y\). For any such bijections \(\sigma,\tau\), we say that \(\tau\) is reached from \(\sigma\) by an _\((X,Y)\)-friendly swap_. Since they were defined by Defant and Kravitz [1], the study of friends-and-strangers graphs has been a productive area of research. Indeed, Definition 1.1 lends itself to several natural directions of inquiry. 
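Before turning to these directions, it may help to see Definition 1.1 in executable form. The following brute-force Python sketch (illustrative only, and feasible only for very small graphs, since \(\mathsf{FS}(X,Y)\) has \(n!\) vertices) constructs \(\mathsf{FS}(X,Y)\) and checks a small instance of the two-component phenomenon for bipartite graphs discussed below.

```python
from itertools import permutations
import networkx as nx

def friends_and_strangers(X: nx.Graph, Y: nx.Graph) -> nx.Graph:
    """Construct FS(X, Y) by brute force, following Definition 1.1.

    A bijection sigma: V(X) -> V(Y) is encoded as a tuple whose i-th entry is
    the image of the i-th vertex of X (in sorted order); two bijections are
    adjacent exactly when they differ by a single (X, Y)-friendly swap.
    """
    xs = sorted(X.nodes())
    position = {v: i for i, v in enumerate(xs)}
    bijections = list(permutations(sorted(Y.nodes())))
    FS = nx.Graph()
    FS.add_nodes_from(bijections)
    for sigma in bijections:
        for a, b in X.edges():
            i, j = position[a], position[b]
            if Y.has_edge(sigma[i], sigma[j]):   # the swap is friendly
                tau = list(sigma)
                tau[i], tau[j] = tau[j], tau[i]
                FS.add_edge(sigma, tuple(tau))
    return FS

# Sanity check of Proposition 2.2 in the smallest case r = 2:
# FS(K_{2,2}, K_{2,2}) has exactly two connected components.
K22 = nx.complete_bipartite_graph(2, 2)
print(nx.number_connected_components(friends_and_strangers(K22, K22)))  # -> 2
```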
One such direction is to assume (without loss of generality, as we will see in Proposition 2.1) \(X\) to be some highly structured graph, and study the structure of \(\mathsf{FS}(X,Y)\) for arbitrary graphs \(Y\); this direction has been pursued in a substantial body of prior work. It is also worth noting that many other works in combinatorics and theoretical computer science may be recast using the language of friends-and-strangers. For an incomplete listing of examples: studying the famous 15-puzzle is equivalent to studying \(\mathsf{FS}(X,Y)\) where we let \(X\) be the 4-by-4 grid and \(Y\) be a star graph (this setup was later generalized and studied by [13]); [14, 15] consider the setting in which \(X\) is a path; asking if \(X\) and \(Y\) pack [1, 2] is equivalent to asking if there exists an isolated vertex in \(\mathsf{FS}(X,Y)\); and the token swapping problem [1, 2, 3, 13, 14, 15] on the graph \(X\) corresponds to studying distances between configurations in \(\mathsf{FS}(X,K_{n})\). As suggested in the first paragraph, however, the most fundamental issue concerning friends-and-strangers graphs that one can study is their connectivity. Under what conditions on \(X\) and \(Y\) will \(\mathsf{FS}(X,Y)\) be connected?
If we proceed under a regime in which \(\mathsf{FS}(X,Y)\) cannot be connected, how small can the number of connected components get, and what conditions on \(X\) and \(Y\) ensure that we achieve the smallest possible number of connected components? Of course, the resolution of these questions depends on the assumptions on \(X\) and \(Y\) under which we work. In this paper, we assume that \(X\) and \(Y\) are both edge-subgraphs of a complete bipartite graph whose partite sets have the same size \(r\), and investigate what happens as \(r\) grows large. Part of the motivation for studying this setting comes from the observation that if \(X\) and \(Y\) are both bipartite, \(\mathsf{FS}(X,Y)\) cannot be connected: see the discussion around [1, Proposition 2.7] and [1, Subsection 2.3] for a parity obstruction which demonstrates why this is the case.1 Additionally, this particular setup was of interest to many: it was also studied in [1, 2, 13, 14, 15]. Footnote 1: This is for reasons akin to why the 15-puzzle is unsolvable. ### Notation and Conventions We assume in this paper that all graphs are simple. For the sake of completeness, we list the following standard conventions, which we make use of throughout the article. For a graph \(G\), * we let \(V(G)\) and \(E(G)\) respectively denote the vertex and edge sets of \(G\); * if \(S\subseteq V(G)\), then we let \(G|_{S}\) denote the induced subgraph of \(G\) on \(S\); * if \(v\in V(G)\), then we let \(N(v)=\{u\in V(G):\{u,v\}\in E(G)\}\) denote the (open) neighborhood of \(v\); Figure 1. A sequence of \((X,Y)\)-friendly swaps in \(\mathsf{FS}(X,Y)\). Configurations in the bottom row are vertices in \(V(\mathsf{FS}(X,Y))\). Two consecutive configurations differ by an \((X,Y)\)-friendly swap, so the corresponding vertices are adjacent in \(\mathsf{FS}(X,Y)\). The figure and subcaptions are adapted from [1]. * we let \(\delta(G)\) and \(\Delta(G)\) respectively denote the minimum degree and maximum degree of \(G\); * we let \(\mathcal{G}(G,p)\) denote the probability space of edge-subgraphs of \(G\) in which each edge appears with probability \(p\) and the events that different edges appear are independent. We let \(K_{r,r}\) denote the complete bipartite graph whose two partite sets both have size \(r\). If \(\Sigma\) is a finite sequence, then \(\operatorname{rev}(\Sigma)\) denotes the reverse of \(\Sigma\). Finally, for two sets \(S\) and \(T\), we let \(S\bigtriangleup T=(S\setminus T)\cup(T\setminus S)\) denote their symmetric difference. ### Main Results As previously mentioned, when \(X\) and \(Y\) are both edge-subgraphs of \(K_{r,r}\), the best we can hope for is that \(\mathsf{FS}(X,Y)\) has exactly two connected components. A natural extremal problem results from asking for conditions on the minimum degrees of \(X\) and \(Y\) ensuring that \(\mathsf{FS}(X,Y)\) has exactly two connected components. In this direction, we let \(d_{r,r}\) be the smallest nonnegative integer such that whenever \(X\) and \(Y\) are edge-subgraphs of \(K_{r,r}\) with \(\delta(X)\geq d_{r,r}\) and \(\delta(Y)\geq d_{r,r}\), \(\mathsf{FS}(X,Y)\) has exactly two connected components. This problem was first studied in [1, Sections 5 and 6], which proved bounds on \(d_{r,r}\) that were tight up to additive constants. Asymmetrizing the problem by dropping the assumption that \(X\) and \(Y\) must satisfy the same minimum degree condition, [1, Sections 6 and 7] generalized the results of [1], again obtaining bounds that were tight up to additive constants. 
In the present article, we sharpen the results of [1] by shaving off the additive constants, producing completely tight conditions concerning when \(\mathsf{FS}(X,Y)\) has exactly two connected components. In a different direction (and in the spirit of prior work on the topic, as mentioned earlier), instead of varying both edge-subgraphs \(X\) and \(Y\), we may fix \(Y=K_{r,r}\), and ask for conditions on \(X\) which ensure that \(\mathsf{FS}(X,K_{r,r})\) has exactly two connected components. A stochastic analogue of this problem is obtained by letting \(X\in\mathcal{G}(K_{r,r},p)\) and by asking for both conditions on \(p=p(r)\) which ensure that \(\mathsf{FS}(X,K_{r,r})\) has exactly two connected components with high probability (that is, with probability tending to \(1\) as \(r\to\infty\)) and for conditions on \(p\) which ensure that \(\mathsf{FS}(X,K_{r,r})\) has more than two connected components with high probability. This problem was raised in [1, Question 7.6]. Here, we invoke a recent result of [10] to completely answer this problem, and find a phase transition phenomenon with threshold function \(p(r)=(\log r)/r\). We now state the specific results that we will prove. In Section 3, we prove the following result. **Theorem 1.2**.: _Let \(r\geq 4\), and let \(X\) and \(Y\) be edge-subgraphs of \(K_{r,r}\) such that_ \[\delta(X)+\delta(Y)\geq\lfloor 3r/2\rfloor+1.\] _Then \(\mathsf{FS}(X,Y)\) has exactly two connected components._ Theorem 1.2 sharpens [1, Theorem 1.10], which had a lower bound of \(3r/2+1\). Together with [1, Theorem 1.11] and a computer check for the \(r=3\) case, Theorem 1.2 in the settings \(\delta(X)=\delta(Y)\) and \(\delta(Y)=r\) respectively implies the following statements. **Corollary 1.3**.: _We have \(d_{r,r}=\lceil(3r+1)/4\rceil\)._ **Corollary 1.4**.: _For each \(r\geq 2\), let \(d_{r,r}^{*}\) be the smallest nonnegative integer such that whenever \(X\) is an edge-subgraph of \(K_{r,r}\) with \(\delta(X)\geq d_{r,r}^{*}\), \(\mathsf{FS}(X,Y)\) has exactly two connected components. We have that_ \[d_{r,r}^{*}=\begin{cases}\left\lfloor r/2\right\rfloor+1&r\neq 3,\\ 3&r=3.\end{cases}\] Corollary 1.3 settles [1, Conjecture 7.4], while Corollary 1.4 sharpens [1, Corollary 1.12] and provides a complete answer to [1, Problem 7.7]. We note that it follows from the proof of [1, Theorem 1.11] that \(\left\lfloor 3r/2\right\rfloor+1\) is the cutoff to avoid isolated vertices. Thus, Theorem 1.2 tells us that if \(X\) and \(Y\) are both edge-subgraphs of \(K_{r,r}\), the cutoff for \(\mathsf{FS}(X,Y)\) to avoid isolated vertices is exactly the same as the cutoff for \(\mathsf{FS}(X,Y)\) to have the smallest possible number of connected components. Our results here may thus be interpreted as providing further motivation for [1, Question 7.5], which asks whether there exists an analogue for friends-and-strangers graphs of the well-known phenomenon that for a random graph process, the hitting times for no isolated vertices and the graph being connected are asymptotically almost surely the same. In Section 4, we prove the following result. **Theorem 1.5**.: _Let \(X\) be a random graph in \(\mathcal{G}(K_{r,r},p)\), where \(p=p(r)\) depends on \(r\). Let \(\omega\) be a function of \(r\) such that \(\omega\to\infty\). If_ \[p=\frac{\log r-\omega}{r}\] _and \(p\geq 0\), then \(\mathsf{FS}(X,K_{r,r})\) has more than two connected components with high probability. 
If_ \[p=\frac{\log r+\omega}{r},\] _and \(p\leq 1\), then \(\mathsf{FS}(X,K_{r,r})\) has exactly two connected components with high probability._ Theorem 1.5, which identifies \(p(r)=(\log r)/r\) as a threshold function, exhibits a phase transition and provides an essentially complete answer to [1, Question 7.6].2 Footnote 2: The arXiv version of [1, Question 7.6], but not the journal version, contains a mistake in its statement. Specifically, if \(X\) is a random graph in \(\mathcal{G}(K_{r,r},p)\), the arXiv version asks for conditions on \(p\) ensuring that \(\mathsf{FS}(X,K_{r,r})\) is disconnected with high probability and conditions on \(p\) ensuring that \(\mathsf{FS}(X,K_{r,r})\) is connected with high probability. From our discussion, that problem is trivial, since \(\mathsf{FS}(X,K_{r,r})\) is always disconnected in this setting when \(r\geq 2\). ## 2. Preliminaries In this section, we list results from prior work that will be relevant later in the paper. We mention that some of the results below are special cases of what is stated in the corresponding cited result. **Proposition 2.1** ([1, Proposition 2.6]).: _Definition 1.1 is symmetric with respect to \(X\) and \(Y\): if \(X\) and \(Y\) are both \(n\)-vertex graphs, we have that \(\mathsf{FS}(X,Y)\cong\mathsf{FS}(Y,X)\)._ The following Proposition 2.2 presents an obstruction to the connectivity of \(\mathsf{FS}(X,Y)\) when \(X\) and \(Y\) are edge-subgraphs of \(K_{r,r}\), and also shows that the smallest number of connected components that \(\mathsf{FS}(X,Y)\) may have in this setting is two. **Proposition 2.2** ([1, Proposition 2.6]).: _For \(r\geq 2\), \(\mathsf{FS}(K_{r,r},K_{r,r})\) has exactly two connected components._ We now introduce what might be thought of as an extension of an \((X,Y)\)-friendly swap. Proposition 2.4 demonstrates how this notion will be useful in the proof of Theorem 1.2. **Definition 2.3** ([1, Subsection 2.4]).: Take \(n\)-vertex graphs \(X\) and \(Y\), a bijection \(\sigma:V(X)\to V(Y)\), and distinct vertices \(u,v\in V(Y)\). We say that \(u\) and \(v\) are \((X,Y)\)-_exchangeable from \(\sigma\)_ if \(\sigma\) and \((u\ v)\circ\sigma\) are in the same connected component of \(\mathsf{FS}(X,Y)\). If \(\Sigma\) is a sequence of \((X,Y)\)-friendly swaps that transforms \(\sigma\) into \((u\ v)\circ\sigma\), then we say that applying \(\Sigma\) to \(\sigma\)_exchanges \(u\)_ and \(v\). **Proposition 2.4** ([1, Proposition 2.8]).: _Let \(X\), \(Y\), and \(\tilde{Y}\) be \(n\)-vertex graphs such that \(Y\) is an edge-subgraph of \(\tilde{Y}\). Suppose that for every \(\{u,v\}\in E(\tilde{Y})\) and every bijection \(\sigma\) satisfying \(\{\sigma^{-1}(u),\sigma^{-1}(v)\}\in E(X)\), the vertices \(u\) and \(v\) are \((X,Y)\)-exchangeable from \(\sigma\). Then the number of connected components of \(\mathsf{FS}(X,\tilde{Y})\) is equal to the number of connected components of \(\mathsf{FS}(X,Y)\)._ Finally, we introduce a result of [16], which will be our main tool in proving Theorem 1.5. **Definition 2.5** ([13]).: A path \(v_{1},v_{2},\ldots,v_{k}\) in a graph is a _\(k\)-bridge_ if each edge in the path is a cut edge, \(v_{2},\ldots,v_{k-1}\) have degree \(2\) in the graph, and \(v_{1}\) and \(v_{k}\) do not have degree \(1\). **Theorem 2.6** ([23, Theorem 1.7]).: _Suppose \(r\geq 5\). Let \(X\) be an edge-subgraph of \(K_{r,r}\).
It holds that \(\mathsf{FS}(X,K_{r,r})\) has exactly two connected components if and only if \(X\) is connected, is not a cycle, and does not contain an \(r\)-bridge._ ## 3. Minimum Degree Theorem 1.2 is given by [22, Theorem 1.10] for even values of \(r\), so we assume throughout this section (and the statements and proofs of all results within it), unless stated otherwise, that \(r\geq 5\) and is odd. To begin, we establish the following generalization of [1, Proposition 6.2]. The proof of this lemma is inspired by the proofs of [1, Proposition 6.2] and [22, Lemma 6.2], but in order to prove this sharper statement, we need a more winding argument. **Lemma 3.1**.: _Let \(X\) and \(Y\) be edge-subgraphs of \(K_{r,r}\) such that \(\delta(X)\geq\delta(Y)\) and \(\delta(X)+\delta(Y)\geq(3r+1)/2\). Let \(\sigma:V(X)\to V(Y)\) be a bijection. If \(u,v\) are in different partite sets of \(Y\) and are such that \(\{\sigma^{-1}(u),\sigma^{-1}(v)\}\in E(X)\), then \(u\) and \(v\) are \((X,Y)\)-exchangeable from \(\sigma\)._ Proof.: We assume that \(\{u,v\}\notin E(Y)\), as \(u\) and \(v\) are trivially \((X,Y)\)-exchangeable otherwise. We also assume that \(\delta(X)+\delta(Y)=(3r+1)/2\), since the lemma follows from [22, Section 6] otherwise. For later use, we note that the condition \(r\geq\delta(X)\geq\delta(Y)\) implies that \[\delta(X)\geq\lceil(3r+1)/4\rceil, \delta(Y)\geq(r+1)/2. \tag{3.1}\] Let \(\{A_{X},B_{X}\}\) and \(\{A_{Y},B_{Y}\}\) respectively denote the bipartitions of \(X\) and \(Y\). Without loss of generality, we may assume that \(u\in A_{Y}\) and \(v\in B_{Y}\). Let \(u^{\prime}=\sigma^{-1}(u)\) and \(v^{\prime}=\sigma^{-1}(v)\). Our goal is to show that \(\sigma\) and \((u\ v)\circ\sigma\) are in the same connected component of \(\mathsf{FS}(X,Y)\). We may thus assume that the partite set of \(X\) containing \(v^{\prime}\) contains at least \((r+1)/2\) elements of \(\sigma^{-1}(B_{Y})\), as we may simply switch the roles of \(\sigma\) and \((u\ v)\circ\sigma\) otherwise. Without loss of generality, we may assume that \(u^{\prime}\in A_{X}\) and \(v^{\prime}\in B_{X}\), so that \[|\sigma(B_{X})\cap B_{Y}|\geq(r+1)/2. \tag{3.2}\] Now, let \(\mu:V(X)\to V(Y)\) be a bijection such that * \(\mu\) can be obtained from \(\sigma\) by applying a sequence of swaps not involving \(u\) or \(v\); * \(|\mu(A_{X})\cap A_{Y}|\) is maximal amongst all bijections in the same connected component as \(\sigma\). The first condition implies that \(\mu(u^{\prime})=u\) and \(\mu(v^{\prime})=v\). Let \(\Sigma\) be a sequence of \((X,Y)\)-friendly swaps not involving \(u\) or \(v\) that transforms \(\sigma\) into \(\mu\). We will demonstrate that there is a sequence \(\tilde{\Sigma}\) of \((X,Y)\)-friendly swaps such that applying \(\tilde{\Sigma}\) to \(\mu\) exchanges \(u\) and \(v\). It will then follow that \(\Sigma^{*}=\Sigma,\tilde{\Sigma},\mathrm{rev}(\Sigma)\) is a sequence of \((X,Y)\)-friendly swaps such that applying \(\Sigma^{*}\) to \(\sigma\) exchanges \(u\) and \(v\). We break into two cases. **Case 1: We have \(|\mu(A_{X})\cap A_{Y}|=r\).** Here, we have that \(\mu(A_{X})=A_{Y}\) and \(\mu(B_{X})=B_{Y}\). Since \[|B_{X}\setminus N(u^{\prime})|\leq r-\delta(X), |B_{Y}\setminus N(u)|\leq r-\delta(Y),\] and we have that \[(r-\delta(X))+(r-\delta(Y))=2r-(3r+1)/2=(r-1)/2<r,\] there exists \(w\in N(u)\) such that \(w^{\prime}=\mu^{-1}(w)\in N(u^{\prime})\). 
Since \(|N(v^{\prime})\cap N(w^{\prime})|\geq 2\delta(X)-r\) and \(|N(v)\cap N(w)|\geq 2\delta(Y)-r\) imply \[|A_{X}\setminus(N(v^{\prime})\cap N(w^{\prime}))| \leq r-(2\delta(X)-r)=2r-2\delta(X),\] \[|A_{Y}\setminus(N(v)\cap N(w))| \leq r-(2\delta(Y)-r)=2r-2\delta(Y),\] respectively, and \[(2r-2\delta(X))+(2r-2\delta(Y))=4r-2(\delta(X)+\delta(Y))=4r-(3r+1)=r-1<r,\] there exists \(x\in N(v)\cap N(w)\subseteq A_{Y}\) such that \(x^{\prime}=\mu^{-1}(x)\in N(v^{\prime})\cap N(w^{\prime})\subseteq A_{X}\). Also, \(x\neq u\), since \(u\notin N(v)\). We denote \[D_{A}=A_{X}\setminus(N(v^{\prime})\cap N(w^{\prime})),\quad D_{B}=B_{X} \setminus(N(u^{\prime})\cap N(x^{\prime})),\] \[E_{A}=A_{X}\setminus(D_{A}\cup\{u^{\prime},x^{\prime}\}),\quad E_{B}=B_{X} \setminus(D_{B}\cup\{v^{\prime},w^{\prime}\}).\] Assume that there exists \(y\in N(v)\cap N(w)\) such that \(\mu^{-1}(y)\in N(v^{\prime})\cap N(w^{\prime})\) and \(y\neq x\): a visualization of these vertices and edges is given in Figure 2. Applying the sequence \[\tilde{\Sigma}=xv,yw,xw,uw,yw,xw,xv,yv,yw\] to \(\mu\) exchanges \(u\) and \(v\). An entirely symmetric argument yields that if there exists \(z\in N(u)\cap N(x)\) such that \(\mu^{-1}(z)\in N(u^{\prime})\cap N(x^{\prime})\) and \(z\neq w\), then we may exchange \(u\) and \(v\) from \(\sigma\). Now assume that there is no such \(y\) and no such \(z\). The assumption implies that \[N(v)\cap N(w)\setminus\{x\}\subseteq\mu(D_{A}),\qquad\qquad N(u)\cap N(x) \setminus\{w\}\subseteq\mu(D_{B}). \tag{3.3}\] Furthermore, we have that \[2\delta(Y)-r-1\leq|N(v)\cap N(w)\setminus\{x\}|\leq|\mu(D_{A})| \leq 2r-2\delta(X), \tag{3.5}\] \[2(\delta(X)+\delta(Y))=3r+1\implies 2\delta(Y)-r-1=2r-2\delta(X). \tag{3.4}\] It follows from (3.5) that all inequalities in (3.4) are equalities, so the first subset inclusion in (3.3) holds with equality. We may argue entirely analogously to study \(N(u)\cap N(x)\setminus\{w\}\). Altogether, \[N(v)\cap N(w)\setminus\{x\}=\mu(D_{A}),\quad N(u)\cap N(x)\setminus\{w\}=\mu( D_{B}),\quad|D_{A}|=|D_{B}|=2r-2\delta(X). \tag{3.6}\] The final statement of (3.6) easily implies \(N(v^{\prime})\cup N(w^{\prime})=A_{X}\) and \(N(u^{\prime})\cup N(x^{\prime})=B_{X}\), so that \[N(v^{\prime})\bigtriangleup N(w^{\prime})=D_{A},\qquad\qquad\qquad N(u^{ \prime})\bigtriangleup N(x^{\prime})=D_{B}. \tag{3.7}\] It is also easy to see that \[(N(v)\bigtriangleup N(w))\setminus\{u\}\subseteq\mu(E_{A}),\qquad\qquad(N(u) \bigtriangleup N(x))\setminus\{v\}\subseteq\mu(E_{B}). \tag{3.8}\] Furthermore, we have that \[2r-2\delta(Y)-1\leq|(N(v)\bigtriangleup N(w))\setminus\{u\}| \leq|\mu(E_{A})|=r-(2+|D_{A}|)=2\delta(X)-r-2, \tag{3.10}\] \[3r+1=2(\delta(X)+\delta(Y))\implies 2r-2\delta(Y)-1=r-2-(2r-2 \delta(X)). \tag{3.9}\] It follows from (3.10) that all inequalities in (3.9) are equalities, so the first subset inclusion in (3.8) holds with equality. We may argue entirely analogously to prove that the second subset inclusion in (3.8) also holds with equality, so we have \[(N(v)\bigtriangleup N(w))\setminus\{u\}=\mu(E_{A}),\qquad\qquad\qquad(N(u) \bigtriangleup N(x))\setminus\{v\}=\mu(E_{B}). \tag{3.11}\] Furthermore, both of these sets have exactly \(2\delta(X)-r-2\) vertices. From here, (3.1) implies that both sets are nonempty. We take \(y^{\prime}\in E_{A}\), and denote \(y=\mu(y^{\prime})\). By (3.1), we have that \[|\mu(E_{B}\cap N(y^{\prime}))|\geq(2\delta(X)-r-2)-(r-\delta(X))=3\delta(X)-2r +1\geq 3(3r+1)/4-2r+1>0. \tag{3.12}\] Figure 2. The vertices and edges used in Case 1. 
Thus, there exists \(z\in\mu(E_{B}\cap N(y^{\prime}))\). Let \(z^{\prime}=\mu^{-1}(z)\). We break into two subcases. **Subcase 1.1: We have \(\delta(X)<r\).** Here, we also have that \[|\mu(D_{A}\cap N(z^{\prime}))|\geq(2r-2\delta(X))-(r-\delta(X))=r-\delta(X)>0.\] Therefore, we may take \(s^{\prime}\in D_{A}\cap N(z^{\prime})\). We let \(s=\mu(s^{\prime})\). Similarly, we may take \(t^{\prime}\in D_{B}\cap N(y^{\prime})\). We let \(t=\mu(t^{\prime})\). Now, (3.7) and (3.11) imply that \[|N(y)\cap\{v,w\}|=|N(z)\cap\{u,x\}|=|N(s^{\prime})\cap\{v^{\prime},w^{\prime} \}|=|N(t^{\prime})\cap\{u^{\prime},x^{\prime}\}|=1. \tag{3.13}\] Figure 2(a) depicts these vertices and edges. In the order they are listed from left to right, denote the four sets with cardinality \(1\) in (3.13) by \(S_{1},S_{2},S_{3},S_{4}\). There are several further subcases induced by (3.13). In Table 1, contained in Appendix A, we present a sequence of \((X,Y)\)-friendly swaps \(\tilde{\Sigma}\) for each of these subcases such that applying \(\tilde{\Sigma}\) to \(\mu\) exchanges \(u\) and \(v\). **Subcase 1.2: We have \(\delta(X)=r\).** Here, \(\delta(Y)=(r+1)/2\). We may adapt (3.12) to observe that \[|\mu(E_{B}\cap N(y^{\prime}))\cap N(y)|\geq(2\delta(X)-r-2)-(r-\delta(Y))= \delta(Y)-2>0,\] so we may also assume that \(z\in N(y)\). We take \(s^{\prime}\in A_{X}\setminus\{u^{\prime},x^{\prime},y^{\prime}\}\) and \(t^{\prime}\in B_{X}\setminus\{v^{\prime},w^{\prime},z^{\prime}\}\) such that \(s=\mu(s^{\prime})\) and \(t=\mu(t^{\prime})\). It follows from the assumption of this subcase that \(s^{\prime}\in E_{A}\) and \(t^{\prime}\in E_{B}\). Now, (3.11) implies that \[|N(y)\cap\{v,w\}|=|N(z)\cap\{u,x\}|=|N(s)\cap\{v,w\}|=|N(t)\cap\{u,x\}|=1. \tag{3.14}\] Figure 2(b) depicts these vertices and edges. In the order they are listed from left to right, denote the four sets with cardinality \(1\) in (3.14) by \(T_{1},T_{2},T_{3},T_{4}\). There are several further subcases induced by (3.14). In Table 2, contained in Appendix A, we present a sequence of \((X,Y)\)-friendly swaps \(\tilde{\Sigma}\) for each of these subcases such that applying \(\tilde{\Sigma}\) to \(\mu\) exchanges \(u\) and \(v\). **Case 2: We have \(|\mu(A_{X})\cap A_{Y}|<r\).** Let \(\tilde{X}=X|_{V(X)\setminus\{u^{\prime},v^{\prime}\}}\) and \(\tilde{Y}=Y|_{V(Y)\setminus\{u,v\}}\). Let the partite sets of \(\tilde{X}\) corresponding to \(A_{X}\) and \(B_{X}\) respectively be \(A_{\tilde{X}}\) and \(B_{\tilde{X}}\), and let the partite sets of \(\tilde{Y}\) corresponding to \(A_{Y}\) and \(B_{Y}\) respectively be \(A_{\tilde{Y}}\) and \(B_{\tilde{Y}}\). We denote \(s=|\mu(A_{\tilde{X}})\cap A_{\tilde{Y}}|\). It follows from (3.2) and the assumption for this case that \((r-1)/2\leq s\leq r-1\). It also follows that \[|\mu(A_{\tilde{X}})\cap B_{\tilde{Y}}|=|\mu(B_{\tilde{X}})\cap A_{\tilde{Y}}| =r-1-s,\quad|\mu(B_{\tilde{X}})\cap B_{\tilde{Y}}|=s.\] There cannot exist vertices \(p\in\mu(A_{\tilde{X}})\cap B_{\tilde{Y}}\) and \(q\in\mu(B_{\tilde{X}})\cap A_{\tilde{Y}}\) satisfying \(\{p,q\}\in E(Y)\) and \(\{\mu(p),\mu(q)\}\in E(X)\), since the \((X,Y)\)-friendly swap \(pq\) would then result in a bijection contradicting the maximality of \(|\mu(A_{X})\cap A_{Y}|\). Let \(m\) be the number of edges between \(\mu(A_{\tilde{X}})\cap B_{\tilde{Y}}\) and \(\mu(B_{\tilde{X}})\cap A_{\tilde{Y}}\) Figure 3. The vertices and edges used in Subcases 1.1 and 1.2. For each subfigure, exactly one edge of a particular color is present. 
so that there are at most \((r-1-s)^{2}-m\) edges between \(\mu^{-1}(\mu(A_{\tilde{X}})\cap B_{\tilde{Y}})=A_{\tilde{X}}\cap\mu^{-1}(B_{ \tilde{Y}})\) and \(\mu^{-1}(\mu(B_{\tilde{X}})\cap A_{\tilde{Y}})=B_{\tilde{X}}\cap\mu^{-1}(A_{ \tilde{Y}})\). We let \(a\) be a vertex in \(\mu(A_{\tilde{X}})\cap B_{\tilde{Y}}\) with the fewest neighbors in \(\mu(B_{\tilde{X}})\cap A_{\tilde{Y}}\), and let \(b^{\prime}\) be a vertex in \(B_{\tilde{X}}\cap\mu^{-1}(A_{\tilde{Y}})\) with the fewest neighbors in \(A_{\tilde{X}}\cap\mu^{-1}(B_{\tilde{Y}})\). We let \(a^{\prime}=\mu^{-1}(a)\in A_{X}\) and \(b=\mu(b)\in A_{\tilde{Y}}\). It follows that \[|N(a)\cap(\mu(B_{\tilde{X}})\cap A_{\tilde{Y}})|\leq\frac{m}{r-1-s}=:t, \tag{3.15}\] \[|N(b^{\prime})\cap(A_{\tilde{X}}\cap\mu^{-1}(B_{\tilde{Y}}))|\leq\frac{(r-1- s)^{2}-m}{r-1-s}=(r-1-s)-\frac{m}{r-1-s}=r-1-s-t. \tag{3.16}\] We observe that \[|N(a^{\prime})\cap(B_{\tilde{X}}\cap\mu^{-1}(B_{\tilde{Y}}))| =|N(a^{\prime})\cap B_{\tilde{X}}|-|N(a^{\prime})\cap(B_{\tilde{X }}\cap\mu^{-1}(A_{\tilde{Y}}))| \tag{3.17}\] \[\geq(\delta(X)-1)-|B_{\tilde{X}}\cap\mu^{-1}(A_{\tilde{Y}})|=( \delta(X)-1)-(r-1-s),\] and that (3.17) may hold with equality only if \(v^{\prime}\in N(a^{\prime})\) and \(B_{\tilde{X}}\cap\mu^{-1}(A_{\tilde{Y}})\subseteq N(a^{\prime})\). Similarly, \[|N(b)\cap(\mu(B_{\tilde{X}})\cap B_{\tilde{Y}})| =|N(b)\cap B_{\tilde{Y}}|-|N(b)\cap(B_{\tilde{Y}}\cap\mu(A_{ \tilde{Y}}))| \tag{3.18}\] \[\geq(\delta(Y)-1)-|B_{\tilde{Y}}\cap\mu(A_{\tilde{Y}})|\geq( \delta(Y)-1)-(r-1-s),\] and (3.18) may hold with equality only if \(v\in N(b)\) and \(B_{\tilde{Y}}\cap\mu(A_{\tilde{X}})\subseteq N(b)\). If both (3.17) and (3.18) held with equality, then we would have that \(\{a^{\prime},b^{\prime}\}\in E(X)\) and \(\{a,b\}\in E(Y)\), and the \((X,Y)\)-friendly swap \(ab\) would result in a bijection contradicting the maximality of \(|\mu(A_{X})\cap A_{Y}|\). Thus, we assume that either (3.17) or (3.18) is strict, so that \[|N(a^{\prime})\cap(B_{\tilde{X}}\cap\mu^{-1}(B_{\tilde{Y}}))|+|N(b )\cap(\mu(B_{\tilde{X}})\cap B_{\tilde{Y}})|\] \[>(\delta(X)-1)-(r-1-s)+(\delta(Y)-1)-(r-1-s)\] \[=(3r+1)/2-2r+2s\geq(1-r)/2+(r-1)/2+s=|B_{\tilde{X}}\cap\mu^{-1}(B _{\tilde{Y}})|.\] It follows that there exists \(c^{\prime}\in B_{\tilde{X}}\cap\mu^{-1}(B_{\tilde{Y}})\) such that \(\{a^{\prime},c^{\prime}\}\in E(X)\) and \(\{b,\mu(c^{\prime})\}\in E(Y)\). Arguing similarly, there exists \(d^{\prime}\in A_{\tilde{X}}\cap\mu^{-1}(A_{\tilde{Y}})\) such that \(\{b^{\prime},d^{\prime}\}\in E(X)\) and \(\{a,\mu(d^{\prime})\}\in E(Y)\). We let \(c=\mu(c^{\prime})\) and \(d=\mu(d^{\prime})\). Now, we observe that \[|(N(a)\cap A_{\tilde{Y}})\cap(N(c)\cap A_{\tilde{Y}})|\geq 2(\delta(Y)-1)-|A_{ \tilde{Y}}|=2(\delta(Y)-1)-(r-1),\] and since \(a\) has at most \(t\) neighbors in \(\mu(B_{\tilde{X}})\cap A_{\tilde{Y}}\), the number of common neighbors in \(A_{\tilde{Y}}\setminus(\mu(B_{\tilde{X}})\cap A_{\tilde{Y}})=\mu(A_{\tilde{X }})\cap A_{\tilde{Y}}\) that \(a\) and \(c\) have satisfies \[|N(a)\cap N(c)\cap(\mu(A_{\tilde{X}})\cap A_{\tilde{Y}})|\geq 2(\delta(Y)-1)-(r-1)-t. \tag{3.19}\] Furthermore, (3.19) may hold with equality only if \(\{a,c\}\subseteq N(u)\) and (3.15) holds with equality. 
Similarly, we observe that \[|(N(b^{\prime})\cap A_{\tilde{X}})\cap(N(c^{\prime})\cap A_{\tilde{X}})|\geq 2 (\delta(X)-1)-|A_{\tilde{X}}|=2(\delta(X)-1)-(r-1),\] and since \(b^{\prime}\) has at most \(r-1-s-t\) neighbors in \(A_{\tilde{X}}\cap\mu^{-1}(B_{\tilde{Y}})\), the number of common neighbors in \(A_{\tilde{X}}\setminus(A_{\tilde{X}}\cap\mu^{-1}(B_{\tilde{Y}}))=A_{\tilde{X }}\cap\mu^{-1}(A_{\tilde{Y}})\) that \(b^{\prime}\) and \(c^{\prime}\) have satisfies \[|N(b^{\prime})\cap N(c^{\prime})\cap(A_{\tilde{X}}\cap\mu^{-1}(A_{\tilde{Y}}))| \geq 2(\delta(X)-1)-(r-1)-(r-1-s-t). \tag{3.20}\] Furthermore, (3.20) may hold with equality only if \(\{b^{\prime},c^{\prime}\}\subseteq N(u^{\prime})\) and (3.16) holds with equality. Now assume (towards a contradiction) that either (3.19) or (3.20) is strict. Then \[|N(a)\cap N(c)\cap(\mu(A_{\tilde{X}})\cap A_{\tilde{Y}})|+|N(b^{ \prime})\cap N(c^{\prime})\cap(A_{\tilde{X}}\cap\mu^{-1}(A_{\tilde{Y}}))|\] \[>(2(\delta(Y)-1)-(r-1)-t)+(2(\delta(X)-1)-(r-1)-(r-1-s-t))\] \[=2(\delta(X)+\delta(Y))-3r-1+s=(3r+1)-(3r+1)+s=|\mu(A_{\tilde{X}} )\cap A_{\tilde{Y}}|.\] Thus, there exists \(w\in\mu(A_{\tilde{X}})\cap A_{\tilde{Y}}\) such that, letting \(w^{\prime}=\mu^{-1}(w)\), \(\{a,c\}\subseteq N(w)\) and \(\{b^{\prime},c^{\prime}\}\subseteq N(w^{\prime})\). Figure 4 depicts these vertices and edges. Now, the sequence of \((X,Y)\)-friendly swaps \[cw,aw,bc\] results in a bijection contradicting the maximality of \(|\mu(A_{X})\cap A_{Y}|\). Therefore, (3.15) and (3.16) both hold with equality. This implies \(\{a,c\}\subseteq N(u)\) and \(\{b^{\prime},c^{\prime}\}\subseteq N(u^{\prime})\): we may argue similarly to deduce that \(\{b,d\}\subseteq N(v)\) and \(\{a^{\prime},d^{\prime}\}\subseteq N(v^{\prime})\). This also implies that for any \(y\in\mu(A_{\tilde{X}})\cap B_{\tilde{Y}}\) and \(z\in\mu(B_{\tilde{X}})\cap A_{\tilde{Y}}\), exactly one of the two edges \(\{\mu^{-1}(y),\mu^{-1}(z)\}\) and \(\{y,z\}\) is present. In particular, exactly one of \(\{a^{\prime},b^{\prime}\},\{a,b\}\) is an edge, and exactly one of \(\{c^{\prime},d^{\prime}\},\{c,d\}\) is an edge. Figure 5 depicts these vertices and edges. Here, we have that \[\tilde{\Sigma}=\begin{cases}uc,dv,da,bc,dc,dv,uc,ua,bv&\{a^{\prime},b^{\prime} \}\in E(X),\{c,d\}\in E(Y)\\ uc,dv,bv,ua,ba,bc,da,ua,bv&\{a,b\}\in E(X),\{c^{\prime},d^{\prime}\}\in E(Y)\\ uc,dv,da,bv,ba,bc,dc,da,uc,ua,bv&\{a,b\},\{c,d\}\in E(Y)\end{cases} \tag{3.21}\] is a sequence of \((X,Y)\)-friendly swaps such that, when assuming the existence of the edges in a particular row of (3.21), applying the corresponding sequence to \(\mu\) exchanges \(u\) and \(v\). The only remaining setting is that where \(\{a^{\prime},b^{\prime}\},\{c^{\prime},d^{\prime}\}\in E(X)\). Assuming this, we break into two subcases. **Subcase 2.1: We have \(s<r-2\).** The assumption of this subcase implies that there exist \(w\in\mu(A_{\tilde{X}})\cap B_{\tilde{Y}}\setminus\{a\}\) and \(x\in\mu(B_{\tilde{X}})\cap A_{\tilde{Y}}\setminus\{b\}\). Let \(w^{\prime}=\mu^{-1}(w)\) and \(x^{\prime}=\mu^{-1}(x)\). Assume that upon tracing the preceding argument with the pair \((w,x)\) playing the role of of the pair \((a,b)\), the exchangeability of \(u\) and \(v\) from \(\mu\) still remains to be shown. 
Note that the argument carries over to the pair \((w,x)\) without issue, since (3.15) and (3.16) both holding with equality (which we deduced while studying the pair \((a,b)\)) implies that all vertices in \(\mu(A_{\tilde{X}})\cap B_{\tilde{Y}}\) have the same number of neighbors in \(\mu(B_{\tilde{X}})\cap A_{\tilde{Y}}\), and all vertices in \(B_{\tilde{X}}\cap\mu^{-1}(A_{\tilde{Y}})\) have the same number of neighbors in \(A_{\tilde{X}}\cap\mu^{-1}(B_{\tilde{Y}})\). This implies the existence of \(y^{\prime}\in B_{\tilde{X}}\cap\mu^{-1}(B_{\tilde{Y}})\) and \(z^{\prime}\in A_{\tilde{X}}\cap\mu^{-1}(A_{\tilde{Y}})\) such that, letting \(y=\mu(y^{\prime})\) and \(z=\mu(z^{\prime})\), we have \[\{w^{\prime},y^{\prime}\},\{x^{\prime},z^{\prime}\},\{u^{\prime},x^{\prime}\},\{u^{\prime},y^{\prime}\},\{w^{\prime},v^{\prime}\},\{z^{\prime},v^{\prime}\},\{w^{\prime},x^{\prime}\},\{y^{\prime},z^{\prime}\}\in E(X),\] \[\{w,z\},\{x,y\},\{u,w\},\{u,y\},\{x,v\},\{z,v\}\in E(Y).\] We split into three further subcases. Figure 4. The vertices and edges used to raise a contradiction. Figure 5. The vertices and edges in Case 2. Exactly one red edge and one blue edge are present. **Subcase 2.1.1: We have \(c=y\) and \(d=z\).** Figure 5(a) depicts these vertices and edges. We may exchange \(u\) and \(v\) from \(\mu\) by applying the sequence \[\tilde{\Sigma}=uc,dv,bv,dw,dv,xv,bv,dv,bc,bv,xc,uc,uw.\] **Subcase 2.1.2: Exactly one of \(c=y\) and \(d=z\) holds.** We assume that \(c=y\) and \(d\neq z\). The argument is entirely analogous if \(c\neq y\) and \(d=z\) holds instead. Figure 5(b) depicts these vertices and edges. We may exchange \(u\) and \(v\) from \(\mu\) by applying the sequence \[\tilde{\Sigma}=uc,dv,bv,xc,xv,uw,zv,bv,dv,zv,zw,uw,xv,xc,bc,bv.\] **Subcase 2.1.3: We have \(c\neq y\) and \(d\neq z\).** Figure 5(c) depicts these vertices and edges. We may exchange \(u\) and \(v\) from \(\mu\) by applying the sequence \[\tilde{\Sigma}=uc,zv,zw,xv,zv,bv,xy,ua,uw,da,zw,zv,uy,uw,uc,uy,bc,bv,\] \[xy,xv,dv,uc,bv,zv,dv,da,bv,dv,bc,bv,zv,zw,uw,dv,da,bv,dv,bc,bv.\] **Subcase 2.2: We have \(s=r-2\).** The assumption for this subcase implies \(\mu(A_{X})\cap B_{Y}=\{a\}\), \(\mu(B_{X})\cap A_{Y}=\{b\}\), and \(|\mu(A_{X})\cap A_{Y}|=|\mu(B_{X})\cap B_{Y}|=r-1\). We split into two further subcases. **Subcase 2.2.1: We have \(\delta(X)<r\).** We observe that \[|A_{Y}\setminus(N(v)\cap N(c))|\leq 2r-2\delta(Y),\quad|A_{Y}\setminus N (d)|\leq r-\delta(Y),\] \[|A_{X}\setminus(N(a^{\prime})\cap N(w^{\prime}))|\leq 2r-2\delta(X), \quad|A_{X}\setminus N(c^{\prime})|\leq r-\delta(X).\] Since \[2(r-\delta(Y))+(r-\delta(X))=3r-2(\delta(X)+\delta(Y))+\delta(X)=\delta(X)-1<r -1,\] there exists \(w\in N(v)\cap N(c)\) such that \(w^{\prime}=\mu^{-1}(w)\in N(c^{\prime})\). Since \(u\notin N(v)\) and \(d\notin N(c)\), we have \(w\notin\{u,d\}\). Similarly, there exists \(x\in N(u)\cap N(d)\setminus\{v,c\}\) such that \(x^{\prime}=\mu^{-1}(x)\in N(d^{\prime})\). Also, since \[2(r-\delta(X))+(r-\delta(Y))=3r-2(\delta(X)+\delta(Y))+\delta(Y)=\delta(Y)-1 \leq\delta(X)-1<r-1,\] there exists \(y\in N(d)\) such that \(y^{\prime}=\mu^{-1}(y)\in N(a^{\prime})\cap N(w^{\prime})\). Figure 7 depicts these vertices and edges. 
If \(y\in\{v,x\}\), we may exchange \(u\) and \(v\) from \(\mu\) by applying the sequence \[\tilde{\Sigma}=\begin{cases}dv,wc,bv,da,dv,ua,wv,da,dv,uc,bc,bv,wv,dv,wc,uc,da, wv,dv,wc,bv&y=v,\\ uc,dx,bc,ua,uc,ux,ua,uc,da,ua,dv,da,dx,dv,ux,ua,bv&y=x.\end{cases}\] If \(y\notin\{v,x\}\), we may exchange \(u\) and \(v\) from \(\mu\) by applying the sequence \[\tilde{\Sigma}=uc,bc,ua,ad,bv,dx,ux,dy,ua,uc,ux,ua,wc,da,dx,dv,wv,bv,dv,dx,\] Figure 6. The vertices and edges used in Subcase 2.1. \[da,ua,dy,da,dv,bv,wv,wc,bc,uc,dx,bv,dv,ua,da,dx,dv,da,ux,uc,ua.\] **Subcase 2.2.2: We have \(\delta(X)=r\).** Here, \(X=K_{r,r}\) and \(\delta(Y)=(r+1)/2\). If there existed \(w\in N(v)\cap N(a)\) such that \(w^{\prime}=\mu^{-1}(w)\in A_{X}\setminus\{u^{\prime},a^{\prime},d^{\prime}\}\), then we may exchange \(u\) and \(v\) from \(\mu\) by applying the sequence \[\tilde{\Sigma}=uc,dv,bv,ua,wa,da,ua,dv,wv,bv,dv,bc,bv.\] Figure (a)a depicts these vertices and edges. We can argue the exchangeability of \(u\) and \(v\) from \(\mu\) similarly if we replace \(N(v)\cap N(a)\) in the preceding argument with \(N(v)\cap N(c)\) or \(N(a)\cap N(c)\). In an analogous manner, we can argue the exchangeability of \(u\) and \(v\) from \(\mu\) if we replace the condition \(\mu^{-1}(w)\in A_{X}\setminus\{u^{\prime},a^{\prime},d^{\prime}\}\) with the condition \(w^{\prime}=\mu^{-1}(w)\in B_{X}\setminus\{v^{\prime},b^{\prime},c^{\prime}\}\) and also replace \(N(v)\cap N(a)\) in the preceding argument with \(N(u)\cap N(b),N(u)\cap N(d)\), or \(N(b)\cap N(d)\). Therefore, we may further assume that none of these six conditions hold. It follows from \(|N(a)|\geq(r+1)/2>2\) that there exists \(w\in N(a)\setminus\{u,d\}\). Since \(b\notin N(a)\), we have that \(w^{\prime}=\mu^{-1}(w)\in A_{X}\setminus\{u^{\prime},a^{\prime},d^{\prime}\}\). It now follows from \(|B_{Y}\setminus N(b)|\leq r-\delta(Y)\), \(|B_{Y}\setminus N(w)|\leq r-\delta(Y)\), and \((r-\delta(Y))+(r-\delta(Y))=r-1<r\) that there exists \(x\in N(b)\cap N(w)\). Furthermore, Figure 8. The vertices and edges used in Subcases 2.2.2. Figure 7. The vertices and edges used in Subcase 2.2.1. it follows from our most recent assumption and \(a\notin N(b)\) that \(x^{\prime}=\mu^{-1}(x)\in B_{X}\setminus\{v^{\prime},b^{\prime},c^{\prime}\}\). It now follows from \(|N(x)\setminus\{b,w\}|\geq(r+1)/2-2>0\) and our most recent assumption that there exists \(y\in N(x)\setminus\{b,w\}\). Since \(y\neq b\), we have that \(y^{\prime}=\mu^{-1}(y)\in A_{X}\). Similarly, there exists \(z\in N(w)\setminus\{a,x\}\) such that \(z^{\prime}=\mu^{-1}(w)\in B_{X}\). Figure (b)b depicts these vertices and edges. We may now exchange \(u\) and \(v\) from \(\mu\) by applying the sequence \[\tilde{\Sigma}=uc,wz,ua,da,wa,wx,bx,bv,bc,dv,uc,bv,bc,bx,wx,wa,wz.\] This completes the proof of the lemma. Proof of Theorem 1.2.: Suppose \(X\) and \(Y\) are edge-subgraphs of \(K_{r,r}\) such that \(\delta(X)\geq\delta(Y)\) and \(\delta(X)+\delta(Y)\geq(3r+1)/2\). Lemma 3.1 implies that the hypothesis of Proposition 2.4 is satisfied with \(\tilde{Y}=K_{r,r}\), so it follows that \(\mathsf{FS}(X,Y)\) and \(\mathsf{FS}(X,K_{r,r})\) have the same number of connected components. 
Since \(K_{r,r}\) and \(X\) are both edge-subgraphs of \(K_{r,r}\) with \(\delta(K_{r,r})\geq\delta(X)\) and \(\delta(K_{r,r})+\delta(X)\geq(3r+1)/2\), Lemma 3.1 implies that the hypothesis of Proposition 2.4 is satisfied with the pair \((K_{r,r},X)\) playing the role of \((X,Y)\) and with \(\tilde{Y}=K_{r,r}\), so \(\mathsf{FS}(K_{r,r},X)\) and \(\mathsf{FS}(K_{r,r},K_{r,r})\) have the same number of connected components. Propositions 2.1 and 2.2 respectively imply that \(\mathsf{FS}(X,K_{r,r})\cong\mathsf{FS}(K_{r,r},X)\) and that \(\mathsf{FS}(K_{r,r},K_{r,r})\) has two connected components. Altogether, it follows that \(\mathsf{FS}(X,Y)\) also has two connected components. The theorem follows for edge-subgraphs \(X\) and \(Y\) of \(K_{r,r}\) such that \(\delta(X)<\delta(Y)\) by invoking Proposition 2.1. Since \(2\lceil(3r+1)/4\rceil\geq\lfloor 3r/2\rfloor+1\), Corollary 1.3 follows immediately from Theorem 1.2. We have confirmed via a computer check that for a \(2\)-regular bipartite graph \(Y\) whose partite sets each have three vertices, \(\mathsf{FS}(K_{3,3},Y)\) has exactly \(12\) connected components. Corollary 1.4 follows from this observation together with Theorem 1.2. ## 4. Random Edge-Subgraphs We will invoke the following result in the proof of Theorem 1.5. **Proposition 4.1**.: _Let \(X\in\mathcal{G}(K_{r,r},p)\). Then \(X\) is disconnected with high probability if \(p=\frac{\log r-\omega}{r}\), and \(X\) is connected with high probability if \(p=\frac{\log r+\omega}{r}\)._ The latter statement of Proposition 4.1 is [11, Exercise 4.3.8]. Proposition 4.1 follows from standard techniques in random graph theory, so we have deferred its proof to Appendix B. Proof of Theorem 1.5.: The first part of Theorem 1.5 follows immediately from Proposition 4.1, since \(\mathsf{FS}(X,K_{r,r})\) has more than two connected components whenever \(r\geq 2\) and \(X\) is disconnected. We assume for the rest of the argument that \(r\) is large, and that \(p=\frac{\log r+\omega}{r}\). All asymptotic notation will be with respect to \(r\). If \(p>1/2\), it is clear that \(X\) has no \(r\)-bridge with high probability, since \(\delta(X)\geq r/2\) with high probability; this is straightforward to prove by the Chernoff bound together with a union bound over the vertices of \(K_{r,r}\), for instance. Thus, we also assume that \(\omega\) is such that \(p\leq 1/2\). We let * \(\mathcal{C}_{1}(r)\) denote the collection of all edge-subgraphs of \(K_{r,r}\) with a path \(v_{1},\ldots,v_{r}\) such that \(v_{2},\ldots,v_{r-1}\) have degree \(2\) in \(X\); * \(\mathcal{C}_{2}(r)\) denote the collection of all edge-subgraphs of \(K_{r,r}\) which have a connected component that is a path on a number of vertices in \(\{r-2,r-1,\ldots,2r-2\}\); * \(\mathcal{C}_{2}(r,k)\) denote the subcollection of \(\mathcal{C}_{2}(r)\) consisting of such edge-subgraphs whose largest path component is on \(k\) vertices, so that \(\mathcal{C}_{2}(r)=\bigsqcup_{k=r-2}^{2r-2}\mathcal{C}_{2}(r,k)\); * \(\mathcal{C}_{3}(r)\) denote the collection of all edge-subgraphs of \(K_{r,r}\) which have a connected component that is a tree on a number of vertices in \(\{r-2,r-1,\ldots,2r-2\}\); * \(\mathcal{C}_{3}(r,k)\) denote the subcollection of \(\mathcal{C}_{3}(r)\) consisting of such edge-subgraphs whose largest tree component is on \(k\) vertices, so that \(\mathcal{C}_{3}(r)=\bigsqcup_{k=r-2}^{2r-2}\mathcal{C}_{3}(r,k)\). 
Notice that the event \(X\in\mathcal{C}_{1}(r)\) contains the event that \(X\) is a cycle and the event that \(X\) contains an \(r\)-bridge. Take \(X\in\mathcal{C}_{1}(r)\), with \(k\)-bridge \(v_{1},\ldots,v_{k}\) such that \(k\) is maximal; in particular, \(r\leq k\leq 2r\). We may remove the edges \(\{v_{1},v_{2}\}\) and \(\{v_{k-1},v_{k}\}\) from \(X\) to form an edge-subgraph with a component that is a path on \(k-2\) vertices, and thus lies in \(\mathcal{C}_{2}(r)\). This defines a map \(\varphi:\mathcal{C}_{1}(r)\to\mathcal{C}_{2}(r)\). Now, any edge-subgraph \(Y\in\mathcal{C}_{2}(r)\) has that \(|\varphi^{-1}(Y)|\leq(r+2)^{2}\). This is immediate from the definition of \(\varphi\) if \(Y\) has exactly one component that is a path on at least \(r-2\) vertices. If \(Y\) has two components that are paths on at least \(r-2\) vertices, then there are at most four remaining vertices which may have been incident to the edges removed from \(\varphi^{-1}(Y)\): it is easy to see that if \(X\in\varphi^{-1}(Y)\) had an edge between these two paths, this would contradict the maximality of \(k\). Also, since \(p\leq 1/2\), any edge-subgraph in \(\mathcal{C}_{1}(r)\) is no more likely in \(\mathcal{G}(K_{r,r},p)\) than any edge-subgraph in \(\mathcal{C}_{2}(r)\). Fix some \(k\in\{r-2,\ldots,2r-2\}\). The number of paths on \(k\) vertices is \(k!/2\), and by Cayley's formula, the number of trees on \(k\) vertices is \(k^{k-2}\). We may partition the collections \(\mathcal{C}_{2}(r,k)\) and \(\mathcal{C}_{3}(r,k)\) based on the number of (respectively) path and tree components on \(k\) vertices and their vertex sets. By fixing some block of the partition and comparing the sizes of the corresponding subsets of \(\mathcal{C}_{2}(r,k)\) and \(\mathcal{C}_{3}(r,k)\), it follows quickly that uniformly over such values of \(k\),3 Footnote 3: We say that \(A\ll B\) if \(A/B\to 0\) as \(r\to\infty\). We say that \(A\lesssim B\) if there exists a constant \(C>0\) such that \(A\leq CB\) for all large \(r\). \[\frac{|\mathcal{C}_{2}(r,k)|}{|\mathcal{C}_{3}(r,k)|}\leq\frac{k!/2}{k^{k-2}} \ll 1/k^{2}\lesssim 1/r^{2}.\] where the \(\ll\) is from (for example) Stirling's approximation. Since all graphs in \(\mathcal{C}_{2}(r,k)\) and \(\mathcal{C}_{3}(r,k)\) have the same number of edges, they all have the same probability of being realized under \(\mathcal{G}(K_{r,r},p)\). Altogether, letting \(X\in\mathcal{G}(K_{r,r},p)\), it now follows that \[\Pr[X\in\mathcal{C}_{1}(r)] \lesssim r^{2}\Pr[X\in\mathcal{C}_{2}(r)]=r^{2}\sum_{k=r-2}^{2r- 2}\Pr[X\in\mathcal{C}_{2}(r,k)]\ll r^{2}\cdot(1/r^{2})\sum_{k=r-2}^{2r-2}\Pr[ X\in\mathcal{C}_{3}(r,k)]\] \[=\Pr\left[X\in\mathcal{C}_{3}(r)\right]\leq\Pr\left[X\text{ is disconnected}\right].\] The desired result now follows easily from Theorem 2.6 and Proposition 4.1. ## Appendix A Sequences of \((X,y)\)-Friendly Swaps in the Proof of Lemma 3.1 In Table 1 and Table 2, we isolate those sequences of swaps \(\tilde{\Sigma}\) corresponding to the relevant subcases in the proof of Lemma 3.1. On the right columns, we write the unique element in each set rather than the corresponding singleton set. Blanks indicate that \(\tilde{\Sigma}\) works regardless of what the unique vertex in the set is. These sequences of \((X,Y)\)-friendly swaps were all found using computer assistance. In particular, all of these sequences are shortest possible. 
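The computer assistance mentioned above boils down to a small verifier: given \(X\), \(Y\), a bijection \(\mu\), and a candidate sequence, one checks that every swap is \((X,Y)\)-friendly at the moment it is applied and that the composition has the desired effect (here, exchanging \(u\) and \(v\)). The helper below is only a sketch of such a verifier; it assumes the reading of a swap \(pq\) used throughout the proof, namely that \(\{p,q\}\in E(Y)\), that the preimages \(\{\mu^{-1}(p),\mu^{-1}(q)\}\) form an edge of \(X\), and that applying the swap exchanges which vertices are mapped to \(p\) and to \(q\); the data structures (edge sets as sets of frozensets) are illustrative choices.

```python
def apply_friendly_swaps(mu, swaps, EX, EY):
    """Apply a sequence of (X, Y)-friendly swaps to the bijection mu: V(X) -> V(Y).

    A swap (p, q) of vertices of Y is admissible only if {p, q} is an edge of Y and
    {mu^{-1}(p), mu^{-1}(q)} is an edge of X; applying it exchanges which vertices of X
    are mapped to p and to q.
    """
    mu = dict(mu)
    inv = {y: x for x, y in mu.items()}
    for p, q in swaps:
        if frozenset((p, q)) not in EY or frozenset((inv[p], inv[q])) not in EX:
            raise ValueError(f"swap {p}{q} is not (X,Y)-friendly at this point")
        xp, xq = inv[p], inv[q]
        mu[xp], mu[xq] = q, p
        inv[p], inv[q] = xq, xp
    return mu
```

Running such a helper on the sequences listed in Tables 1 and 2, with the vertex identifications of the corresponding subcase, one can compare the returned bijection with \(\mu\) and confirm that only \(u\) and \(v\) have been exchanged.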
## Appendix B Proof of Proposition 4.1 Proof.: For \(k\in\{1,\ldots,2r\}\), let \(X_{k}\) denote the number of connected components of \(X\) on \(k\) vertices. For each \(v\in V(X)\), let \(I_{v}\) be the indicator random variable corresponding to the event that \(v\) is an isolated vertex in \(X\), so that \(X_{1}=\sum_{v\in V(X)}I_{v}\). We have (B.1) \[\mathbb{E}[X_{1}] =2r(1-p)^{r}=2e^{\log r+r\log(1-p)}=2e^{\log r-r(p+O(p^{2}))},\] \[\mathbb{E}[X_{1}^{2}] =\sum_{\begin{subarray}{c}u,v\in V(X),\\ u\neq v\end{subarray}}\Pr[I_{u}I_{v}=1]+\sum_{v\in V(X)}\Pr[I_{v}=1]=\sum_{ \begin{subarray}{c}u,v\in V(X),\\ u\neq v\end{subarray}}\Pr[I_{u}I_{v}=1]+\mathbb{E}[X_{1}]\] \[\leq 2r(2r-1)(1-p)^{2r-1}+\mathbb{E}[X_{1}]\lesssim(2r)^{2}(1-p)^{2r}+ \mathbb{E}[X_{1}]=\mathbb{E}[X_{1}]^{2}+\mathbb{E}[X_{1}].\] Towards proving the first statement, we begin by assuming that \(p=\frac{\log r-\omega}{r}\). We additionally assume that \(\omega\leq\log r\), so that \(p\geq 0\). Here, we may continue from (B.1) to get \[\mathbb{E}[X_{1}]=e^{\omega+O((\log r)^{2}/r)}\gg 1.\] Since \(\Pr[X_{1}>0]\geq\frac{\mathbb{E}[X_{1}]^{2}}{\mathbb{E}[X_{1}^{2}]}=1-o(1)\), it holds with high probability that \(X\) contains an isolated vertex, which implies the first statement. Now assume that \(p=\frac{\log r+\omega}{r}\) and that \(\omega\) is such that \(p\leq 1\). It is clear that the probability that \(X\) is connected increases in \(p\), so it suffices to prove the desired result when \(\omega\ll\log r\): henceforth, we assume this. Here, we may continue from (B.1) to get (B.2) \[\mathbb{E}[X_{1}]=e^{-\omega+O((\log r+\omega)^{2}/r)}\ll 1.\] If \(X\) is disconnected, it must have a component with at most \(r\) vertices. We now closely follow the proof of [16, Theorem 4.1].
We have that \[\Pr\left[X_{1}>0\right]\leq\Pr\left[X\text{ is disconnected}\right]\leq\Pr \left[X_{1}>0\right]+\sum_{k=2}^{r}\Pr\left[X_{k}>0\right].\]

Table 1: Sequences of \((X,Y)\)-friendly swaps \(\tilde{\Sigma}\) for the subcases of (3.13), indexed by the values of the singletons \((S_{1},S_{2},S_{3},S_{4})\).

Now, where the latter inequality in the first line bounds \(\mathbb{E}[X_{k}]\) by the number of spanning trees for components on \(k\) vertices and invokes Cayley's formula, \[\sum_{k=2}^{r}\Pr\left[X_{k}>0\right]
\leq\sum_{k=2}^{r}\mathbb{E}\left[X_{k}\right]\leq\sum_{k=2}^{r} \binom{2r}{k}k^{k-2}p^{k-1}(1-p)^{k(2r-k)}\] \[\leq\sum_{k=2}^{r}\left(\frac{2re}{k}\right)^{k}k^{k-2}\left( \frac{\log r+\omega}{r}\right)^{k-1}e^{-k(2r-k)\frac{\log r+\omega}{r}}\] \[\lesssim\sum_{k=2}^{r}(2re)^{k}\left(\frac{\log r}{r}\right)^{k-1 }e^{-k(2r-k)\frac{\log r+\omega}{r}}\] \[\lesssim\sum_{k=2}^{10}(re)^{k}\left(\frac{\log r}{r}\right)^{k-1 }e^{k(5-r)\frac{\log r+\omega}{r}}+\sum_{k=11}^{r}(2re)^{k}\left(\frac{\log r} {r}\right)^{k-1}e^{-k(\log r+\omega)/2}\] \[\lesssim\sum_{k=2}^{10}e^{-k\omega}\left(\frac{\log r}{r}\right) ^{k-1}+\sum_{k=11}^{r}\frac{r}{4}\left(\frac{e^{1-\omega/2}\log r}{(r/4)^{1/2 }}\right)^{k}=o(1)+\sum_{k=11}^{r}(r/4)^{1+o(1)-k/2}\ll 1.\] It thus follows that \[\Pr\left[X\text{ is connected}\right]=\Pr\left[X_{1}=0\right]+o(1)=1-o(1),\] since it follows from Markov's inequality and (B.2) that \(\Pr[X_{1}\geq 1]\ll 1\).
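The computer check invoked for Corollary 1.4 can be reproduced with a short script of the following shape. It assumes the usual description of the friends-and-strangers graph \(\mathsf{FS}(X,Y)\): vertices are bijections (encoded here as tuples assigning to each vertex of \(X\) a vertex of \(Y\)), and two bijections are adjacent when they differ by an \((X,Y)\)-friendly swap. The graph constructions and the expected outputs noted in the comments only restate the claims made in the text (two components for \(\mathsf{FS}(K_{r,r},K_{r,r})\), twelve for \(\mathsf{FS}(K_{3,3},Y)\) with \(Y\) \(2\)-regular); they are not independent results.

```python
from itertools import permutations

def edges_k33():
    # K_{3,3} with parts {0, 1, 2} and {3, 4, 5}
    return {frozenset((a, b)) for a in range(3) for b in range(3, 6)}

def edges_c6():
    # The 2-regular bipartite graph with parts of size 3 (up to isomorphism): a 6-cycle
    cyc = [0, 3, 1, 4, 2, 5]
    return {frozenset((cyc[i], cyc[(i + 1) % 6])) for i in range(6)}

def fs_component_count(EX, EY, n=6):
    """Count connected components of FS(X, Y) by DFS over all n! bijections."""
    EX, EY = list(EX), set(EY)
    seen, components = set(), 0
    for start in permutations(range(n)):
        if start in seen:
            continue
        components += 1
        stack = [start]
        seen.add(start)
        while stack:
            sigma = stack.pop()
            for e in EX:
                i, j = tuple(e)
                if frozenset((sigma[i], sigma[j])) in EY:  # occupants adjacent in Y
                    tau = list(sigma)
                    tau[i], tau[j] = tau[j], tau[i]
                    tau = tuple(tau)
                    if tau not in seen:
                        seen.add(tau)
                        stack.append(tau)
    return components

if __name__ == "__main__":
    print(fs_component_count(edges_k33(), edges_k33()))  # expected: 2 (Proposition 2.2)
    print(fs_component_count(edges_k33(), edges_c6()))   # expected: 12 (the computer check above)
```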
2309.10834
Communication-Efficient Federated Learning via Regularized Sparse Random Networks
This work presents a new method for enhancing communication efficiency in stochastic Federated Learning that trains over-parameterized random networks. In this setting, a binary mask is optimized instead of the model weights, which are kept fixed. The mask characterizes a sparse sub-network that is able to generalize as well as a smaller target network. Importantly, sparse binary masks are exchanged rather than the floating-point weights in traditional federated learning, reducing communication cost to at most 1 bit per parameter (bpp). We show that previous state-of-the-art stochastic methods fail to find sparse networks that can reduce the communication and storage overhead using consistent loss objectives. To address this, we propose adding a regularization term to local objectives that acts as a proxy of the transmitted masks' entropy, therefore encouraging sparser solutions by eliminating redundant features across sub-networks. Extensive empirical experiments demonstrate significant improvements in communication and memory efficiency of up to five orders of magnitude compared to the literature, with minimal performance degradation in validation accuracy in some instances.
Mohamad Mestoukirdi, Omid Esrafilian, David Gesbert, Qianrui Li, Nicolas Gresset
2023-09-19T14:05:12Z
http://arxiv.org/abs/2309.10834v2
Sparser Random Networks Exist: Enforcing Communication-Efficient Federated Learning via Regularization ###### Abstract This work presents a new method for enhancing communication efficiency in stochastic Federated Learning that trains over-parameterized random networks. In this setting, a binary mask is optimized instead of the model weights, which are kept fixed. The mask characterizes a sparse sub-network that is able to generalize as good as a smaller target network. Importantly, sparse binary masks are exchanged rather than the floating point weights in traditional federated learning, reducing communication cost to at most 1 bit per parameter. We show that previous state of the art stochastic methods fail to find the sparse networks that can reduce the communication and storage overhead using consistent loss objectives. To address this, we propose adding a regularization term to local objectives that encourages sparse solutions by eliminating redundant features across sub-networks. Extensive experiments demonstrate significant improvements in communication and memory efficiency of up to five magnitudes compared to the literature, with minimal performance degradation in validation accuracy in some instances. Federated Learning, Sparse random Networks, Unstructured Sparsity ## I Introduction Federated Learning (FL) was introduced to address privacy concerns associated with centrally training large models that rely on raw data from edge devices [1]. It enables clients to collaboratively train models under the guidance of a parameter server (PS). This approach allows for iterative aggregation of locally optimized models without the need to transfer raw data to a central location. While FL ensures data privacy, it faces challenges in terms of communication efficiency due to the large size of exchanged model updates during each communication round for both uplink (UL) and downlink (DL). Recent efforts have focused on reducing communication overhead leveraging compression via quantization and model sparsification techniques [2, 3, 4] on the exchanged model weights. Despite these efforts, exchanged compressed models are still represented according to float bit-representations (e.g., 32/16 bits per model weight), leading to significant communication overhead as the size of trained models increases (e.g., LLM). A recent work [5] has revealed that in over-parameterized random neural networks, it is possible to find smaller sub-networks that perform just as well as a fully trained target network in terms of generalization. These sub-networks are produced by element-wise multiplication of a sparse binary mask with the initial weights of the over-parameterized network (i.e while fixing the weights). In this case, the binary mask is optimized to identify the initial weights that would constitute a sub-network with similar generalization performance as the target network. Subsequently, the authors in [6] leverage the subset-sum approximation problem [7] to prove the existence of those sub-networks. They show that dense target networks with width \(d\) (neuron count in a layer) and depth \(l\) (number of layers) can be closely approximated by pruning an over-parameterized dense random network with a width \(O(\log(dl))\) times larger and a depth twice as deep. This discovery is particularly interesting for FL training, due to the lower communication overhead associated with exchanging binary masks in the UL and DL instead of float-bit representations of the weight updates. 
In [8], the authors introduce FedMask, a personalized Federated Learning (FL) algorithm based on pruning over-parameterized random networks. FedMask is a deterministic algorithm that involves pruning a random network by optimizing personalized binary masks using Stochastic Gradient Descent (SGD), aiming to approximate the personalized target networks that fit the heterogeneous datasets found at the devices. Their approach has been shown to ensure a 1-bit-per-parameter (1bpp) communication cost per each round of communication to exchange the updates in FL training. This can be attributed to the nature of their algorithm, which optimizes the binary masks within a constrained search space, wherein the masks demonstrate an equiprobable occurrence of ones and zeros. Recently, a stochastic approach called FedPM [9] was introduced as an alternative to the deterministic FedMask. FedPM requires edge devices to identify a global probability mask, as apposed to the deterministic mask in FedMask. Binary masks then are sampled from the global probability mask, characterizing sub-networks with strong generalization capabilities over the diverse datasets of the edge devices. The results of their approach demonstrate state-of-the-art accuracy and communication efficiency compared to FedMask and other baseline methods. However, our subsequent analysis reveals that their method fails to discover sparse networks, leaving a significant amount of unnecessary redundancy in terms of the size of the found sub-networks. This work builds upon the foundation of stochastic masking techniques [5, 9], leveraging their favorable generalization performance and convergence while aiming to enhance communication and memory efficiency. Our main contributions are summarized as follows: * We introduce a new objective function leading to effectively narrow down the search space to discover a limited set of sub-networks within the over-parameterized random network. These sub-networks offer both communication efficiency and strong generalization performance compared to the literature. * Through simulations, we demonstrate that our approach, which enforces non-structural sparsity through regularization, results in significantly sparser solutions compared to state-of-the-art algorithms such as FedPM. Importantly, this sparsity gain is achieved without sacrificing generalization, leading to up to about 5 magnitudes in terms of communication and memory efficiency during training. ## II System Model and Problem Formulation We suppose that there are \(K\) edge-devices in the system with datasets denoted by \(\{\mathcal{D}_{i}\}_{i=1}^{K}\). The training procedure commences as the parameter server sends a randomly initialized network to the edge devices. This is accomplished by providing the devices with both the network's structure and an initialization seed, enabling them to construct the network's layers and weights. We denote the initialized weights of the network by \(\mathbf{w}_{\text{init}}=(w_{1},\cdots,w_{n})\in\mathbb{R}^{n}\). The primary objective is to identify a global binary mask \(\mathbf{m}\), yielding a sub-network \(y_{\mathbf{m}}\) given according to1: Footnote 1: For ease of representation, we use a linear model in (5). \[y_{\mathbf{m}}(\mathbf{x})=(\mathbf{m}\otimes\mathbf{w}_{\text{init}})^{T}\cdot\mathbf{x}, \tag{1}\] where \(\mathbf{x}\) denotes a data point, \(\otimes\) denotes the element-wise multiplication operator, and \((\cdot)\) denotes vector multiplication. 
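As a minimal numerical illustration of (1) (shapes and values are arbitrary, and numpy is used only for convenience), the sub-network is obtained by freezing \(\mathbf{w}_{\text{init}}\) and letting the binary mask decide which of its entries participate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
w_init = rng.standard_normal(n)      # fixed random weights: never updated
m = rng.integers(0, 2, size=n)       # a binary mask: the only trainable object
x = rng.standard_normal(n)           # one data point

y = (m * w_init) @ x                 # eq. (1): element-wise masking, then the linear map
print(m, y)
```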
The produced sub-network minimizes the empirical risk function in accordance to: \[\min_{\mathbf{m}}F(\mathbf{m})=\frac{1}{\sum_{i}|\mathcal{D}_{i}|}\sum_{k=1}^{K}| \mathcal{D}_{k}|\ell(y_{\mathbf{m}},\mathcal{D}_{k}), \tag{2}\] where \(F(\mathbf{m})\) denotes the empirical risk of \(y_{\mathbf{m}}\) over the devices datasets. \(\ell(.,.)\) denotes the local loss function and \(|\mathcal{D}_{k}|\) the dataset size of device \(k\). We denote the target network that we aim at approximating by \(y_{\text{target}}\). The number of sub-networks that can be found within the over-parameterized network to approximate \(y_{\text{target}}\) increases with its size [5, 6, 10]. Accordingly, constructing a sufficiently over-parameterized random network according to the rules derived in [6] guarantees with a high probability that a sub-network \(y\) exists, such that \(y\approx y_{\text{target}}\). To this end, we aim at identifying the individual weights of \(\mathbf{w}_{\text{init}}\) that play a role in producing sub-networks capable of generalizing as effectively as \(y_{\text{target}}\). This is achieved by maximizing the likelihood of these weights while disregarding the weights that do not offer any meaningful contribution towards that objective. Akin to [9], along-side the initialized weights, the users receive a global probability mask vector2\(\mathbf{\theta}\in[0,1]^{n}\): Footnote 2: All mask probabilities are set to 0.5 during the first round \[\mathbf{\theta}_{i}\longleftarrow\mathbf{\theta}(t) \tag{3}\] using the probability mask, each user \(i\) derives a local score vector \(\mathbf{s}_{i}\), according to : \[\mathbf{s}_{i}=\sigma^{-1}(\mathbf{\theta}_{i}) \tag{4}\] where \(\sigma(\mathbf{s})\) denotes the sigmoid function applied to each element of the vector \(\mathbf{s}\). The probability mask represents the likelihood of each particular weight in \(\mathbf{w}_{\text{init}}\) contributing to the chosen sub-network in (5). Once each device \(i\) receive the global probability mask \(\mathbf{\theta}(\mathbf{t})\) from the server, training starts by sampling a binary mask which characterizes the local sub-network \(y_{\mathbf{m}_{i}^{k}}\) to minimize its loss, given by \[y_{\mathbf{m}_{i}^{k}}(\mathbf{x})=\left(\mathbf{m}_{i}^{h}\otimes\mathbf{w}_{\text{init}} \right)^{T}\cdot\mathbf{x}\quad,\mathbf{m}_{i}^{h}\sim\text{Bernoulli}(\mathbf{\theta}_ {i}^{h}), \tag{5}\] Here \(h\) denotes the local mini-batch iterations count, where \(\mathbf{\theta}_{i}^{h=0}=\mathbf{\theta}(t)\). Similar to [5], Instead of directly optimizing \(\mathbf{\theta}_{i}^{h}\), the score vector is employed in the optimization process. This ensures smooth and unbiased3 updates of \(\mathbf{\theta}\). The scores and probability masks are updated at _each_ mini-batch iteration \(h\) according to: Footnote 3: For instance, FedMask relies on optimizing a deterministic mask via SGD, and then thresholding the resultant updated mask. The thresholding operation results in severely biased updates which harms the convergence. \[\mathbf{\theta}_{i}^{h}=\sigma(\mathbf{s}_{i}^{h-1}-\frac{\eta}{|\mathcal{B}^{h}|} \nabla_{\mathbf{s}_{i}^{h-1}}\ell(y_{\mathbf{m}_{i}^{h-1}},\mathcal{B}^{h})), \tag{6}\] where \(\eta\) is the learning rate, \(\mathcal{B}^{h}\subseteq\mathcal{D}_{i}\) is a mini-batch, and \(|\mathcal{B}^{h}|\) denotes its' cardinality. 
\(\nabla_{\mathbf{s}_{i}^{h-1}}\ell(y,\mathcal{B}^{h})\) denotes the gradient of the loss function (i.e., the cross-entropy loss in classification tasks) of the local sub-network \(y_{\mathbf{m}_{i}^{h-1}}\) - sampled during the current iteration \(h\) - over the mini-batch \(\mathcal{B}^{h}\) at device \(i\), with respect to the score vector \(\mathbf{s}_{i}^{h-1}\). Accordingly, each element indexed \(k\) of the score vector \(\mathbf{s}_{i}^{h-1}\) is optimized locally using the chain rule according to: \[s_{i,k}^{h}=s_{i,k}^{h-1}-\eta\left(\frac{\partial\ell}{\partial y_{\mathbf{m}_{i}^{h-1}}}\times\frac{\partial y_{\mathbf{m}_{i}^{h-1}}}{\partial m_{i,k}^{h-1}}\times\frac{\partial m_{i,k}^{h-1}}{\partial\theta_{i,k}^{h-1}}\times\frac{\partial\theta_{i,k}^{h-1}}{\partial s_{i,k}^{h-1}}\right). \tag{7}\] \(m_{i,k}^{h-1}\) and \(\theta_{i,k}^{h-1}\) denote the \(k^{th}\) elements of \(\mathbf{m}_{i}^{h-1}\) and \(\mathbf{\theta}_{i}^{h-1}\) respectively. We omit the local iteration count \(h\) in the following expressions for ease of representation. Note that the sampling operation \(m_{i,k}^{h}\sim\text{Bernoulli}(\theta_{i,k}^{h})\) is not differentiable. Therefore \(\frac{\partial m_{i,k}^{h}}{\partial\theta_{i,k}^{h}}\) can be approximated using straight-through estimators [5, 9]. Next, after optimizing the scores for a number of local iterations, let \(\hat{\mathbf{\theta}}_{i}(t)\) denote the locally produced probability mask at round \(t\). For each client \(i\), a binary mask \(\hat{\mathbf{m}}_{i}\) is sampled according to: \[\hat{\mathbf{m}}_{i}(t)\sim\text{Bernoulli}(\hat{\mathbf{\theta}}_{i}(t)).\] These binary masks are then sent to the server. The sent masks highlight the weights contributing to the best sub-networks. This approach effectively reduces the communication cost (entropy) to a maximum of 1 bit per parameter (1bpp), where the actual entropy depends on the sparsity of the mask. The server then performs averaging to generate a global probability mask according to: \[\boldsymbol{\theta}(t+1)\leftarrow\frac{1}{K}\sum_{i}\hat{\boldsymbol{m}}_{i}(t). \tag{8}\] The resultant global probability mask \(\boldsymbol{\theta}(t+1)\) is redistributed to the devices in the DL to commence the next communication round. The global mask has been demonstrated in [9] to be an unbiased estimate of the true global probability mask \(\bar{\boldsymbol{\theta}}\), which is given by \(\bar{\boldsymbol{\theta}}(t+1)=\frac{1}{K}\sum_{i}\hat{\boldsymbol{\theta}}_{i}(t)\). ## III Intuition and Proposed Algorithm ### _Intuition_ We first delineate the shortcomings of the current state-of-the-art technique [9] with regard to the sparsity level of the networks identified. Accordingly, we first undertake a thorough analysis of the results outlined in [6]. These outcomes serve as guiding directives stipulating the extent of over-parameterization necessary for a random network to approximate a smaller target network. Subsequently, we conduct a comprehensive evaluation of the original optimization algorithm employed within the framework of FedPM. This evaluation is conducted from the vantage point of each individual learner, under the premise of an absence of regularization in the loss function. #### Iii-A1 Lack of a unique solution to the approximated Subset-Sum Problem In [6], Pensia et al. investigated the estimation of a target weight \(w_{t}\) through the lens of the approximated subset-sum problem [7].
Under this context, they proved that a target weight value \(w_{t}\) can be approximated by a subset-sum of \(n\) random variables \(\mathcal{X}=\{X_{1},\ldots,X_{n}\}\) sampled from a uniform distribution, within a specified margin of error \(\epsilon\), with probability \(1-\gamma\). The number of variables \(n\) required is in the order of \(\mathcal{O}(\log(2/\min(\gamma,\epsilon)))\). Formally, Let \(n=\mathcal{O}\left(\log\left(2/\min(\gamma,\epsilon)\right)\right)\), then, \(\exists\,\mathcal{S}\subseteq\mathcal{X}\) w.p. \(1-\gamma\), a feasible solution of: \[\text{find }\mathcal{S}\subseteq\mathcal{X}\] \[\text{subject to }:|\sum_{X\in\mathcal{S}}X-w_{t}|<\epsilon \tag{9}\] Expanding upon these findings, the authors introduce a systematic approach to discern the required size (e.g. width and depth) of an over-parameterized dense random network, in order to effectuate the accurate approximation of a target dense network weights. Note that (9) does not necessarily deem a single feasible solution. Accordingly, the objective of finding a sub-network within an over-parameterized random network by optimizing a mask via SGD using consistant loss functions [9] (e.g. cross entropy loss in classification tasks), is not synonymous to solving (9), but equivelant to solving : \[\min_{\mathcal{S}}|\sum_{X\in\mathcal{S}}X-w_{t}| \tag{10}\] which entails the identification of a solution that aims at reducing the average loss of the sub-network chosen, without factoring in its size and without investigating alternative sparser feasible solutions that can offer a small trade-off of accuracy in response to large sparsity gains. Therefore, we posit the addition of a regularization term over the average number of chosen weights in the global mask, serving to find those sparser sub-networks that can generalize well. #### Iii-A2 FedPM stochasticity results in redundant trained sub-networks During FedPM training, within every local iteration (e.g. mini-batch update), individual devices sample a distinct instance sub-network based on a received probability mask as outlined in equation (5). As a result of the considerable scale of the over-parameterized random network, the sampled sub-networks may be entirely new for the devices at each local iteration. Subsequently, each device calculates the loss specific to the sampled network and then back-propagates the gradients to minimize the loss. This is done by adjusting the scores in directions that activate or deactivate the fixed random weights appropriately. In subsequent local iterations, additional networks are sampled, and their weight scores are tuned to minimize their corresponding loss. From a broader perspective, the local stochastic sub-network sampling step designed in FedPM implicitly promotes the minimization of the average loss of sub-networks sampled from the probability mask at each device, given by: \[\frac{1}{H}\sum_{h=1}^{H}\ell(y_{\boldsymbol{m}_{h}^{h}},\mathcal{B}_{h}), \quad\boldsymbol{m}_{i}^{h}\sim\text{Bernoulli}(\boldsymbol{\theta}_{i}^{h}) \tag{11}\] where \(H\) denotes the number of local mini-batch updates during each round. Due to the substantial number of existing sub-networks that can generalize well, this sampling step results in redundancy in terms of the number of optimized sub-networks and accordingly the number of activated weights. This considerably increases the size of the sampled sub-networks. 
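Both observations can be checked numerically: already for a handful of uniformly sampled weights, a brute-force search typically finds many distinct subsets whose sum approximates a given target within the prescribed tolerance, i.e., problem (9) admits a multitude of feasible solutions of very different sizes, and an unregularized search has no reason to prefer the sparse ones. The values in the following sketch are illustrative:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, w_t, eps = 16, 0.4, 0.01                  # illustrative target and tolerance
X = rng.uniform(-1.0, 1.0, size=n)           # the fixed random weights

feasible = [S for k in range(1, n + 1)
            for S in combinations(range(n), k)
            if abs(X[list(S)].sum() - w_t) < eps]

if feasible:
    sizes = [len(S) for S in feasible]
    print(len(feasible), "feasible subsets; sizes from", min(sizes), "to", max(sizes))
else:
    print("no feasible subset at this tolerance")
```

In runs of this sketch one typically finds hundreds of feasible subsets, the sparsest of which are far smaller than the average one.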
To address this issue, the regularization term is introduced aiming at expediting the deactivation of weights with minimal impact on the current sampled network. This approach reduces the likelihood of sampling entirely new and distinct sub-networks in subsequent iterations, favoring the sampling of sub-networks sharing substantial features with early stages samples. Under regularization, upon transmitting the updated mask to the server, the resulting global probability mask defined in (8) introduces redundant sub-networks once again due to the inherent stochasticity in the local sub-network sampling process on each device. However, this global mask also characterizes a more constrained search space for the devices in the subsequent rounds as the training progress, given the limited number of distinct sub-networks optimized by each device during successive iterations. The training proceeds until a probability mask is found that can produce sub-networks with sparsity guided by the parameter \(\lambda\), thereby ensuring both communication and memory efficiency, alongside achieving good generalization performance. ### _Proposed Loss function_ Particularly, we integrate a regularization term alongside the conventional cross-entropy loss between the predicted output and the ground truth value for classification tasks. Therefore, our loss function imposes unstructured sparsity on the sub-networks independently discovered by each individual device, by accounting to the normalized average number of chosen parameters within the original over-parameterized network through a regularization term. Accordingly, the definition of the local loss function at device \(i\) over a mini-batch \(\mathcal{B}\subseteq\mathcal{D}_{i}\) is given as follows: \[L_{i}(y_{\boldsymbol{m}_{i}},\mathcal{B})=\ell(y_{\boldsymbol{m}_{i}}, \mathcal{B})+\frac{\lambda}{n}\sum_{k=1}^{n}\sigma(s_{i,k}), \tag{12}\] where \(\lambda\) serves as a regularization parameter that governs the level of sparsity exhibited by the resulting sub-networks. ## IV Experiments To assess the effectiveness of our proposed approach in comparison to FedPM, we carry out a series of experiments involving image classification tasks. These experiments are conducted under both homogeneous and heterogeneous conditions, as follows: * In an Independent and Identically Distributed (IID) scenario, we evenly distribute the datasets CIFAR10 [11], CIFAR100 [12], and MNIST [13] across 10 devices. * We distribute the CIFAR10 dataset across 10 devices while introducing heterogeneity by randomly assigning each device a subset of \(c=\{2,4\}\) classes from the available 10 classes. For these experiments, we present the average testing accuracy over the population target distribution (top row) and the average bits per parameter required (lower row) as a function of the number of rounds (e.g an average of three simulation runs). The bits per parameter required represents the average entropy of the binary masks transmitted in the UL by the devices. The number of local epochs is set to three with \(|\mathcal{B}|\) = 128. We utilize three feed-forward convolutional networks (4Conv, 6Conv and 10Conv [10]) to train over MNIST, CIFAR10 and CIFAR100 respectively. The initial score vector is sampled from a standard normal distribution with identity covariance matrix. As in [5], the model random weights are sampled from a uniform distribution over \(\{-\varsigma,\varsigma\}\), where \(\varsigma\) denotes the standard deviation of the Kaiming normal distribution [14]. 
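To make the procedure concrete before turning to the results, the following simplified sketch combines the local score update (6)-(7) (with the straight-through estimator, i.e., \(\partial m/\partial\theta\) treated as \(1\)), the regularized objective (12), and the server averaging (8) on a toy logistic-regression task. The dimensions, learning rate, synthetic data, and single linear layer are illustrative choices and do not correspond to the architectures or datasets listed above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, lam, eta = 64, 4, 1.0, 1.0               # sizes and hyperparameters chosen for illustration

w_init = rng.uniform(-1.0, 1.0, size=n)        # frozen random weights, shared via a common seed
theta_global = np.full(n, 0.5)                 # initial global probability mask

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_round(theta, data, labels, steps=20):
    theta = np.clip(theta, 1e-3, 1.0 - 1e-3)
    s = np.log(theta / (1.0 - theta))          # scores, eq. (4)
    for _ in range(steps):
        theta_loc = sigmoid(s)
        m = (rng.random(n) < theta_loc).astype(float)     # eq. (5): sample a sub-network
        p = sigmoid(data @ (m * w_init))                   # masked forward pass, eq. (1)
        # Chain rule (7) with binary cross-entropy; straight-through: d m / d theta ~ 1.
        grad_m = data.T @ (p - labels) / len(labels) * w_init
        grad_s = (grad_m + lam / n) * theta_loc * (1.0 - theta_loc)   # + regularizer of (12)
        s -= eta * grad_s
    return (rng.random(n) < sigmoid(s)).astype(float)      # the 1-bit mask sent in the uplink

# One communication round over K devices holding synthetic local data.
masks = []
for _ in range(K):
    A = rng.standard_normal((32, n))
    b = (A @ w_init > 0).astype(float)          # a toy target the sub-networks try to mimic
    masks.append(local_round(theta_global, A, b))
theta_global = np.mean(masks, axis=0)           # server averaging, eq. (8)
print("mean mask probability after one round:", theta_global.mean())
```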
Fig. 1: From left to right: CIFAR10, MNIST, CIFAR100 experiments. First row: Validation Accuracy vs Rounds. Second row: The corresponding Average Bit-per-parameter (bpp) required vs Rounds.

Figure 1 illustrates the validation accuracy of FedPM with our proposed regularization term (\(\lambda=1\)) compared to the original algorithm under IID settings. The validation accuracy of both techniques is similar across all simulations. However, FedPM combined with our proposed regularization term achieves a significant improvement in communication efficiency compared to the original algorithm. Specifically, on the CIFAR10 experiments, an average efficiency gain of 0.31 bits per parameter (bpp) is achieved using our proposed modification. On the MNIST experiment, we achieve 0.8 bpp greater efficiency, while on the CIFAR100 experiment, we gain 0.25 bpp higher efficiency relative to the original algorithm. Therefore, our proposed recipe provides notable gains in communication efficiency while maintaining the generalization performance of FedPM in the IID setting. We now examine Fig. 2, which evaluates the performance of the two algorithms on non-IID CIFAR10 datasets. The regularization term value is varied to highlight the potential trade-off between generalization and communication efficiency in this setting. For \(\lambda=1\), the communication efficiency gain trend persists, where we observe substantial improvements of 0.52 bits per parameter (bpp) when label heterogeneity is present with \(c=2\), and 0.44 bpp when \(c=4\). However, unlike the IID setting, a slight loss in generalization performance is observed (around 3% and 4%, respectively). In contrast, when \(\lambda\) is set to 0.1 and 0.2 for \(c=2\) and \(c=4\) respectively, our algorithm converges to a sub-network with comparable generalization to FedPM while ensuring around 0.12 bpp gain in communication efficiency and final model size. In summary, Fig. 2 shows that our approach can identify sub-networks with generalization performance on par with FedPM in non-IID settings too, while still providing moderate gains in communication and memory efficiency. Moreover, it demonstrates that our algorithm allows for a flexible trade-off between accuracy and communication and memory efficiency, if required, by tuning the regularization hyperparameter \(\lambda\). ## V Conclusion In this work, we demonstrate that state-of-the-art training methods for sparse random networks, which rely on consistent objectives, fail to uncover the sparsest sub-networks within the over-parameterized random models. To address this limitation, we propose and validate the incorporation of a regularization term within the local loss functions to discover highly sparse sub-networks. The sparse models obtained through our approach lead to significant improvements in communication and memory efficiency during federated training on resource-constrained edge devices, without sacrificing accuracy. Through extensive experiments, we show that our method outperforms existing state-of-the-art techniques by a large margin in terms of the sparsity and efficiency gains achieved. Additionally, the flexibility of our algorithm enables customizing the trade-off between accuracy and efficiency as per application requirements.
2307.16731
Asynchronous Silent Programmable Matter: Line Formation
Programmable Matter (PM) has been widely investigated in recent years. It refers to some kind of matter with the ability to change its physical properties (e.g., shape or color) in a programmable way. One reference model is certainly Amoebot, with its recent canonical version (DISC 2021). Along this line, with the aim of simplification and to better address concurrency, the SILBOT model has been introduced (AAMAS 2020), which heavily reduces the available capabilities of the particles composing the PM. In SILBOT, in fact, particles are asynchronous, without any direct means of communication (silent) and without memory of past events (oblivious). Within SILBOT, we consider the Line Formation primitive in which particles are required to end up in a configuration where they are all aligned and connected. We propose a simple and elegant distributed algorithm - optimal in terms of number of movements, along with its correctness proof.
Alfredo Navarra, Francesco Piselli
2023-07-31T14:52:35Z
http://arxiv.org/abs/2307.16731v1
# Asynchronous Silent Programmable Matter: ###### Abstract Programmable Matter (PM) has been widely investigated in recent years. It refers to some kind of matter with the ability to change its physical properties (e.g., shape or color) in a programmable way. One reference model is certainly Amoebot, with its recent canonical version (DISC 2021). Along this line, with the aim of simplification and to better address concurrency, the SLIBOT model has been introduced (AAMAS 2020), which heavily reduces the available capabilities of the particles composing the PM. In SILBOT, in fact, particles are asynchronous, without any direct means of communication (silent) and without memory of past events (oblivious). Within SILBOT, we consider the _Line formation_ primitive in which particles are required to end up in a configuration where they are all aligned and connected. We propose a simple and elegant distributed algorithm - optimal in terms of number of movements, along with its correctness proof. Keywords:Programmable Matter Line Formation- Asynchrony- Stigmergy ## 1 Introduction The design of smart systems intended to adapt and organize themselves in order to accomplish global tasks is receiving more and more interest, especially with the technological advance in nanotechnology, synthetic biology and smart materials, just to mention a few. Among such systems, main attention has been devoted in the recent years to the so-called _Programmable Matter_ (PM). This refers to some kind of matter with the ability to change its physical properties (e.g., shape or color) in a programmable way. PM can be realized by means of weak self-organizing computational entities, called _particles_. In the early 90s, the interest in PM by the scientific community was mostly theoretical. In fact, the ideas arising within such a context did not find support in technology that was unprepared for building computational devices at micro/nanoscale. Nowadays, instead, nano-technology has greatly advanced and the pioneering ideas on PM could find a practical realization. The production of nano units that integrate computing, sensing, actuation, and some form of motion mechanism are becoming more and more promising. Hence, the investigation into the computational characteristics of PM systems has assumed again a central role, driven by the applied perspective. In fact, systems based on PM can find a plethora of natural applications in many different contexts, including smart materials, ubiquitous computing, repairing at microscopic scale, and tools for minimally invasive surgery. Nevertheless, the investigation on modeling issues for effective algorithm design, performance analysis and study on the feasibility of foundational tasks for PM have assumed a central and challenging role. Various models have been proposed so far for PM. One that deserves main attention is certainly Amoebot, introduced in [10]. By then, various papers have considered that model, possibly varying some parameters. Moreover, a recent proposal to try to homogenize the referred literature has appeared in [8], with the intent to enhance the model with concurrency. One of the weakest models for PM, which includes concurrency and eliminates direct communication among particles as well as local and shared memory, is SILBOT[6]. The aim has been to investigate on the minimal settings for PM under which it is possible to accomplish basic global tasks in a distributed fashion. 
Actually, with respect to the Amoebot model, in SILBOT particles admit a 2 hops distance visibility instead of just 1 hop distance. Even though this does not seem a generalization of SILBOT with respect to Amoebot, the information that can be obtained by means of communications (and memory) in Amoebot may concern particles that are very far apart from each other. Moreover, there are tasks whose resolution has been shown to require just 1 hop distance visibility even in SILBOT (see, e.g. [18]), perhaps manipulating some other parameters. Toward this direction of simplification and in order to understand the requirements of basic tasks within PM, we aim at studying in SILBOT the _Line formation_ problem, where particles are required to reach a configuration where they are all aligned (i.e., lie on a same axis) and connected. ### Related work The relevance of the Line formation problem is provided by the interest shown in the last decades within various contexts of distributed computing. In graph theory, the problem has been considered in [13] where the requirement was to design a distributed algorithm that, given an arbitrary connected graph \(G\) of nodes with unique labels, converts \(G\) into a sorted list of nodes. In swarm robotics, the problem has been faced from a practical point of view, see, e.g. [14]. The relevance of line or V-shape formations has been addressed in various practical scenarios, as in [1, 3, 23], based also on nature observation. In fact, ants form lines for foraging activities whereas birds fly in V-shape in order to reduce the air resistance. In robotics, line or V-shape formations might be useful for exploration, surveillance or protection activities. Most of the work on robots considers direct communications, memory, and some computational power. For application underwater or in the outerspace, instead, direct communications are rather unfeasible and this motivates the investigation on removing such a capability, see, e.g. [15, 21]. Concerning more theoretical models, the aim has been usually to study the minimal settings under which it is possible to realize basic primitives like Line formation. In [2, 20], for instance, Line formation has been investigated for (semi-)synchronized robots (punctiform or not, i.e., entities occupying some space) moving within the Euclidean plane, admitting limited visibility, and sharing the knowledge of one axis on direction. For synchronous robots moving in 3D space, in [22], the plane formation has been considered, which might be considered as the problem corresponding to Line formation for robots moving in 2D. In [16], robots operate within a triangular grid and Line formation is required as a preliminary step for accomplishing the Coating of an object. The environment as well as the movements of those robots remind PM. Within Amoebot, Line formation has been approached in [11], subject to the resolution of Leader Election, which is based, in turn, on communications and not on movements. ### Outline In the next section, we provide all the necessary definitions and notation, along with the formalization of the Line formation problem. In Section 3, we give some preliminary results about the impossibility to resolve Line formation within SILBOT. Then, in Section 4, we provide a resolution algorithm for the case of particles sharing a common orientation. In Section 5, we show a possible running example about the proposed algorithm. 
In Section 6, we prove the correctness as well as the optimality in terms of number of moves of the proposed algorithm. Finally, in Section 7, we provide some conclusive remarks and possible directions for future work. ## 2 Definitions and notation In this section, we review the SILBOT model for PM introduced in [5, 6], and then we formalize the Line formation problem along with other useful definitions. In SILBOT, particles operate on an infinite triangular grid embedded in the plane. Each node can contain at most one particle. Each particle is an automaton with two states, contracted or expanded (they do not have any other form of persistent memory). In the former state, a particle occupies a single node of the grid while in the latter, the particle occupies one single node and one of the adjacent edges, see, e.g. Figure 1. Hence, a particle always occupies one node, at any time. Each particle can sense its surrounding up to a distance of 2 hops, i.e., if a particle occupies a node \(v\), then it can see the neighbors of \(v\), denoted by \(N(v)\), and the neighbors of the neighbors of \(v\). Hence, within its visibility range, a particle can detect empty nodes, contracted, and expanded particles. Any positioning of contracted or expanded particles that includes all \(n\) particles composing the system is referred to as a _configuration_. Particles alternate between active and inactive periods decided by an adversarial schedule, independently for each particle. In order to move, a particle alternates between expanded and contracted states. In particular, a contracted particle occupying node \(v\) can move to a neighboring node \(u\) by expanding along edge \((v,u)\), and then re-contracting on \(u\). Note that, if node \(u\) is already occupied by another particle then the expanded one will reach \(u\) only if \(u\) becomes empty, eventually, in a successive activation. There might be arbitrary delays between the actions of these two particles. When the particle at node \(u\) has moved to another node, the edge between \(v\) and \(u\) is still occupied by the originally expanded particle. In this case, we say that node \(u\) is _semi-occupied_. _A particle commits itself into moving to node \(u\) by expanding in that direction. At the next activation of the same particle, it is constrained to move to node \(u\), if \(u\) is empty. A particle cannot revoke its expansion once committed._ The SILBOT model introduces a fine grained notion of asynchrony with possible delays between observations and movements performed by the particles. This reminds the so-called Async schedule designed for theoretical models dealing with mobile and oblivious robots (see, e.g. [4, 7, 12]). All operations performed by the particles are non-atomic: there can be delays between the actions of sensing the surroundings, computing the next decision (e.g., expansion or contraction), executing the decision. The well-established fairness assumption is included, where each particle must be activated within finite time, infinitely often, in any execution of the particle system, see, e.g., [12]. Particles are required to take deterministic decisions. Each particle may be activated at any time independently from the others. Once activated, a particle looks at its surrounding (i.e., at 2 hops distance) and, on the basis of such an observation, decides (deterministically) its next _action_. 
If two contracted particles decide to expand on the same edge simultaneously, exactly one of them (arbitrarily chosen by the adversary) succeeds. If two particles are expanded along two distinct edges incident to a same node \(w\), toward \(w\), and both particles are activated simultaneously, exactly one of the particles (again, chosen arbitrarily by the adversary) contracts to node \(w\), whereas the other particle does not change its expanded state according to the commitment constraint described above. A relevant property that is usually required in such systems concerns connectivity. A configuration is said to be _connected_ if the set of nodes occupied by particles induce a connected subgraph of the grid. Definition 1: A configuration is said to be _initial_, if all the particles are contracted and connected. Figure 1: (\(a\)) A possible initial configuration with emphasized the _floor_ (dashed line); (\(b\)) a possible evolution of the configuration shown in (a) with an expanded particle. The shaded parallelogram is the minimum bounding box containing all the particles. **Definition 2**.: _[Line formation] Given an initial configuration, the Line formation problem asks for an algorithm that leads to a configuration where all the particles are contracted, connected and aligned._ **Definition 3**.: _Given a configuration \(C\), the corresponding bounding box of \(C\) is the smallest parallelogram with sides parallel to the West-East and SouthWest-NorthEast directions, enclosing all the particles._ See Figure 1.b for a visualization of the bounding box of a configuration. Note that, in general, since we are dealing with triangular grids, there might be three different bounding boxes according to the choice of two directions out of the three available. As it will be clarified later, for our purposes we just need to define one by choosing the West-East and SouthWest-NorthEast directions. In fact, as we are going to see in the next section, in order to solve Line formation in SILBOT, we need to add some capabilities to the particles. In particular, we add a common orientation to the particles. As shown in Figure 2.a, all particles commonly distinguish among the six directions of the neighborhood that by convention are referred to as the cardinal points \(\mathbb{NW}\), \(\mathbb{NE}\), \(\mathbb{W}\), \(\mathbb{E}\), \(\mathbb{SW}\), and \(\mathbb{SE}\). Furthermore, in order to describe our resolution algorithm, we need two further definitions that identify where the particles will be aligned. **Definition 4**.: _Given a configuration \(C\), the line of the triangular grid containing the southern side of the bounding box of \(C\) is called the floor._ **Definition 5**.: _A configuration is said to be final if all the particles are contracted, connected and lie on floor._ By the above definition, a final configuration is also initial. Moreover, if a configuration is final, then Line formation has been solved. Actually, it might be the case that a configuration satisfies the conditions of Definition 2 but still it is not final with respect to Definition 5. This is just due to the design of our algorithm that always leads to solve Line formation on floor. ## 3 Impossibility results As shown in the previous section, the SILBOT model is very constrained in terms of particles capabilities. 
Since its first appearance [6], where the Leader Election problem has been solved, the authors pointed out the need of new assumptions in order to allow the resolution of other basic primitives. In fact, due to the very constrained capabilities of the particles, it was not possible to exploit the election of a leader to solve subsequent tasks. The parameters that can be manipulated have concerned the type of schedule, the hop distance from which particles acquire information, and the orientation of the particles. Table 1 summarizes the primitives so far approached within SILBOT and the corresponding assumptions.

\begin{table} \begin{tabular}{l l l l l} _Problem_ & _Schedule_ & _View_ & _Orientation_ & _Reference_ \\ \hline Leader Election & Async & 2 hops & no & [5] \\ Scattering & ED-Async & 1 hop & no & [18] \\ Coating & Async & 2 hops & chirality & [19] \\ Line formation & Async & 2 hops & yes & **this paper** \\ \hline \end{tabular} \end{table} Table 1: Literature on SILBOT.

Figure 2: (\(a\)) A representation of the orientation of a particle; (\(b\)) An initial configuration where Line formation is unsolvable within SILBOT; (\(c\)) Enumerated visible neighborhood of a particle; the two trapezoids emphasize two relevant areas for the definition of the resolution algorithm.

Leader Election was the first problem solved when introducing SILBOT [5]. Successively, the Scattering problem has been investigated [18]. It asks for moving the particles in order to reach a configuration where no two particles are neighboring to each other. Scattering has been solved by reducing the visibility range to just 1 hop distance but relaxing on the schedule which is not Async. In fact, the ED-Async schedule has been considered. It stands for Event-Driven Asynchrony, i.e., a particle activates as soon as it admits a neighboring particle, even though all subsequent actions may take different but finite time as in Async. For Coating [19], where particles are required to surround an object that occupies some connected nodes of the grid, the original setting has been considered apart for admitting chirality, i.e., a common handedness among particles. In this paper, we consider the Line formation problem, where particles are required to reach a configuration where they are all aligned and connected. About the assumptions, we add a common orientation to the particles to the basic SILBOT model. The motivation for endowing the particles with such a capability comes by the following result: Theorem 3.1: _Line formation is unsolvable within SILBOT, even though particles share a common chirality._ Proof: The proof simply comes by providing an instance where Line formation cannot be accomplished within the provided assumptions. By referring to Figure 2.b, we note that even if particles share chirality, they are all indistinguishable. No matter the algorithm designed for solving Line formation, an adversary may activate all particles synchronously so that they all behave symmetrically to each other. Hence, any action performed by a particle will be applied by all of them in a symmetric way. It means that any reachable configuration maintains the initial symmetry. Since a configuration solving Line formation for the provided instance requires to distinguish a particle which lies between the other two, we conclude that such a solution cannot be achieved.
Note that the arguments provided in the proof of Theorem 3.1 can be extended to any configuration where the initial symmetry is 'not compatible' with the formation of a line. Motivated by Theorem 3.1, we assume a common orientation to the particles. Consequently, each particle can enumerate its neighborhood, up to a distance of 2 hops, as shown in Figure 2.c. This will be useful for the definition of the resolution algorithm. Actually, it remains open whether it is possible to design an algorithm even when particles share just one direction instead of the full orientation. ## 4 Algorithm _WRain_ The rationale behind the name _WRain_ of the proposed algorithm comes from the type of movements allowed. In fact, the evolution of the system on the basis of the algorithm mimics the behavior of particles that fall down like drops of rain subject to a westerly wind. The Line formation is then reached on the lower part of the initial configuration where there is at least a particle - what we have called _floor_. In order to define the resolution Algorithm _WRain_, we need to define some functions expressing properties related to a node of the grid. We make use of the enumeration shown in Figure 2.c, and in particular of the neighbors enclosed by the two trapezoids. Definition 6: Given a node \(v\), the following Boolean functions are defined: * \(\mathrm{Upper}(v)\) is _true_ if at least one of the visible neighboring nodes from \(v\) at positions \(\{1,2,4,5,6\}\) is occupied by a particle; * \(\mathrm{Lower}(v)\) is _true_ if at least one of the visible neighboring nodes from \(v\) at positions \(\{13,14,15,17,18\}\) is occupied by a particle; * \(\mathrm{Pointed}(v)\) is _true_ if there exists a particle \(p\) occupying a node \(u\in N(v)\) such that \(p\) is expanded along edge \((u,v)\); * \(\mathrm{Near}(v)\) is _true_ if there exists an empty node \(u\in N(v)\) such that \(\mathrm{Pointed}(u)\) is true. For the sake of conciseness, sometimes we make use of the above functions by providing a particle \(p\) as input in place of the corresponding node \(v\) occupied by \(p\). We are now ready to formalize our Algorithm _WRain_. ```
Input:  Node v occupied by a contracted particle p.
Output: Line formation.
1: if ¬Near(v) then
2:   if Pointed(v) then
3:     p expands toward E
4:   else
5:     if ¬Upper(v) ∧ Lower(v) then
6:       p expands toward SE
``` **Algorithm 1**: _WRain_. It is worth noting that Algorithm _WRain_ allows only two types of expansion, toward \(\mathbb{E}\) or \(\mathbb{SE}\). Moreover, the movement toward \(\mathbb{E}\) can happen only when the node \(v\) occupied by a particle is intended to be reached by another particle, i.e., \(\mathit{Pointed}(v)\) holds. Another remarkable property is that the algorithm only deals with expansion actions. This is due to the constraint of the SILBOT model that does not permit to intervene on expanded particles, committed to terminate their movement. An example of execution of _WRain_ starting from the configuration of Figure 1.a is shown in the next section. ## 5 Running example In this section, we show a possible execution of Algorithm _WRain_, starting from the configuration shown in Figure 1.a (or equivalently by starting directly from the configuration shown in Figure 3.a). Being in an asynchronous setting, there are many possible executions that could occur.
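For concreteness, the following Python sketch (our own illustration, not code from the paper) encodes the four predicates of Definition 6 and the two expansion rules of Algorithm _WRain_ for a contracted particle. The explicit offsets chosen for the two trapezoids of Figure 2.c are an assumption, since only the figure fixes the enumeration; `occupied` is the set of nodes holding particles and `expanded` maps the node of an expanded particle to the node it points to.

```python
# Sketch of Definition 6 and Algorithm WRain (illustrative; assumed offsets).
DIRS = {"E": (1, 0), "W": (-1, 0), "NE": (1, -1), "NW": (0, -1),
        "SE": (0, 1), "SW": (-1, 1)}

def add(v, d):
    return (v[0] + d[0], v[1] + d[1])

# Assumed mapping of the Figure 2.c trapezoids onto relative offsets:
# {1,2,4,5,6}      -> NW/NE neighbours plus the 2-hop row above,
# {13,14,15,17,18} -> SW/SE neighbours plus the 2-hop row below.
UPPER = [(0, -1), (1, -1), (0, -2), (1, -2), (2, -2)]
LOWER = [(-1, 1), (0, 1), (-2, 2), (-1, 2), (0, 2)]

def upper(v, occupied):
    return any(add(v, d) in occupied for d in UPPER)

def lower(v, occupied):
    return any(add(v, d) in occupied for d in LOWER)

def pointed(v, expanded):
    """True if some particle is expanded along an edge toward v."""
    return any(target == v for target in expanded.values())

def near(v, occupied, expanded):
    """True if some empty neighbour of v is pointed at by an expanded particle."""
    return any(add(v, d) not in occupied and pointed(add(v, d), expanded)
               for d in DIRS.values())

def wrain_action(v, occupied, expanded):
    """Decision of a contracted particle occupying node v (Algorithm WRain)."""
    if near(v, occupied, expanded):
        return None                                   # line 1: do nothing
    if pointed(v, expanded):
        return "E"                                    # line 3: expand toward E
    if (not upper(v, occupied)) and lower(v, occupied):
        return "SE"                                   # line 6: expand toward SE
    return None

# Example: a contracted particle with a neighbour below-left expands toward SE.
occupied, expanded = {(0, 0), (-1, 1)}, {}
print(wrain_action((0, 0), occupied, expanded))       # -> "SE"
```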
In our example, we consider the case where all the particles that can move according to the algorithm apply the corresponding rule. It is basically an execution subject to the fully synchronous schedule (which is a special case of Async). From the considered configuration of Figure 1.a, Algorithm _WRain_ allows only the particle on top to move. In fact, considering the node \(v\) occupied by such a particle, we have that \(\mathit{Near}(v)\), \(\mathit{Pointed}(v)\) and \(\mathit{Upper}(v)\) are all _false_, whereas \(\mathit{Lower}(v)\) is true. Note that, none of the nodes occupied by the other particles imply function \(\mathit{Upper}\) to be true but the leftmost for which function \(\mathit{Lower}\) is false. Hence, the configuration shown in Figure 1.b is reached, eventually. After the movement of the expanded particle, see Figure 3.a, the configuration is basically like an initial one with contracted and connected particles. The only movement occurring in initial configurations is given by Line 6 of Algorithm _WRain_. In fact, when there are no expanded particles, only Line 6 can be activated, as Line 3 requires function \(\mathit{Pointed}\) to be true for a node occupied by a contracted particle. From the configuration of Figure 3.a, there are two particles - the top ones, that can move according to the algorithm. If both are activated, configuration of Figure 3.b is obtained. Successively, the rightmost expanded particle is free to move, whereas the other expanded particle allows the pointed particle to expand, as shown in Figure 3.c, by means of Line 3 of the algorithm. As already observed, the movement toward \(\mathbb{SE}\) is generated by the rule at Line 6 of Algorithm _WRain_, whereas the movement toward \(\mathbb{E}\) can only be induced by expanded particles as specified by the rule at Line 3. By keep applying the two rules among all particles, the execution shown in the subsequent Figures 3.d-k is obtained, hence leading to the configuration where all particles are contracted and aligned along _floor_. It is worth noting that the configuration shown in Figure 3.g is disconnected. However, as we are going to show, the possible disconnections occurring during an execution are always recovered. In particular, in the specific example, connectivity is recovered right after as shown in Figure 3.i. ## 6 Correctness and Optimality In this section, we prove the correctness of Algorithm _WRain_ as well as its optimality in terms of number of moves performed by the particles. We prove the correctness of Algorithm _WRain_ by showing that the four following claims hold: **Claim 1 - Configuration Uniqueness.**: Each configuration generated during the execution of the algorithm is unique, i.e., non-repeatable, after movements, on the same nodes nor on different nodes; **Claim 2 - Limited Dimension.**: The extension of any (generated) configuration is confined within a finite bounding box of sides \(O(n)\); **Claim 3 - Evolution guarantee.**: If the (generated) configuration is connected and not final there always exists at least a particle that can expand or contract; **Claim 4 - Connectivity.**: If two particles initially neighboring to each other get disconnected, they recover their connection sooner or later (not necessarily becoming neighbors). The four claims guarantee that a final configuration is achieved, eventually, in finite time, i.e., Line formation is solved. In fact, by assuming the four claims true, we can state the next theorem. 
**Theorem 2**.: _Given \(n\) contracted particles forming a connected configuration, Algorithm WRain terminates in a connected configuration where all the particles are aligned along floor._ Proof.: By Claim 3 we have that from any non-final configuration reached during an execution of _WRain_ there is always at least one particle that moves. Hence, by Claim 1, any subsequent configuration must be different from any already reached configuration. However, since Claim 2 states that the area where the particles move is limited, then a final configuration must be reached as the number of achievable configurations is finite. Actually, if we imagine a configuration made of disconnected and contracted particles, all lying on _floor_, then the configuration is not final according to Definition 5 but none of the particles would move. However, by Claim 4, we know that such a type of configurations cannot occur, and in particular, if two particles initially neighboring to each other get disconnected, then they recover their connection, eventually. Since the initial configuration is connected, then we are ensured that also the final configuration is connected as well.

Figure 3: A possible execution when starting from the configuration shown in Figure 1.a.

We now provide a proof for each of the above claims. Proof (of Claim 1 - Configuration Uniqueness): Since the movements allowed by the algorithm are toward either \(\mathbb{E}\) or \(\mathbb{SE}\) only, then the same configuration on the same nodes cannot arise during an execution as it would mean that some particles have moved toward \(\mathbb{W}\), \(\mathbb{NW}\), or \(\mathbb{NE}\). Concerning the case to form the same configuration but on different nodes, it is sufficient to note that a particle lying on a node \(v\) of _floor_ can only move toward \(\mathbb{E}\) (since \(\mathit{Lower}(v)\) is false, cf. Line 6 of Algorithm _WRain_). Hence, either none of the particles on _floor_ move, in which case the same configuration should appear on the same nodes - but this has been already excluded; or the same configuration may appear if all the particles move toward \(\mathbb{E}\). However, based on the algorithm, the only movement that can occur from an initial configuration is toward \(\mathbb{SE}\), hence the claim holds. Proof (of Claim 2 - Limited Dimension): From the arguments provided to prove Claim 1, we already know that any configuration obtained during an execution of _WRain_ never overpasses _floor_, defined by the initial configuration. Moreover, since the movements are toward either \(\mathbb{E}\) or \(\mathbb{SE}\) only, then the northern and the western sides of the bounding box of the initial configuration are never overpassed as well. Concerning the eastern side, we show that this can be shifted toward east in the generated configurations at most \(n\) times. About movements toward \(\mathbb{SE}\) that overpass the eastern side, they cannot happen more than \(n-1\) times according to Algorithm _WRain_. In fact, each time it happens, the northern side moves toward south. About the movement toward \(\mathbb{E}\), it requires a pushing-like process by another particle that either comes from \(\mathbb{W}\) or from \(\mathbb{NW}\). The claim then follows by observing that a particle can be pushed at most \(n-1\) times, one for each other particle.
In fact, if a particle \(p\) is pushed toward \(\mathbb{E}\), then the pushing particle \(p^{\prime}\) either comes from \(\mathbb{W}\) or from \(\mathbb{NW}\), i.e., after the pushing \(p\) and \(p^{\prime}\) are on the same WestEast axis. Hence, in order to push again \(p\) toward \(\mathbb{E}\), it is necessary that a third particle, \(p^{\prime\prime}\) pushes \(p^{\prime}\) that in turn pushes \(p\). This may happen, for instance, if initially the particles are all aligned along the western side of the bounding box. Hence, by making the union of the bounding boxes of all the configurations obtained during an execution of _WRain_, the obtained box has the sides of size upper bounded by \(n\). Proof (of Claim 3 - Evolution guarantee): Let us assume the configuration does contain a particle \(p\), occupying node \(v\), expanded toward node \(u\). If \(u\) is empty, then \(p\) (or possibly another particle) will reach \(u\), eventually. If \(u\) is occupied, then the particle \(p^{\prime}\) in \(u\) - if not already expanded, will be pushed to move toward \(\mathbb{E}\). In any case, there must be a particle at the end of a chain of expanded particles that either expands itself or moves toward the empty node toward which it is expanded. In any case, the configuration evolves. Let us consider then the case where all the particles are contracted and connected. If all the particles lie on _floor_, then the configuration is final. Hence, if the configuration is not final, there must exist a particle \(p\) occupying a node \(v\) which is not on _floor_ such that, \(\neg Near(v)\ \land\ \neg Pointed(v)\ \land\ \neg Upper(v)\ \land\ Lower(v)\) holds, i.e., according to Algorithm _WRain_, \(p\) expands toward \(\mathbb{SE}\). The existence of \(p\) is guaranteed by the fact that \(\neg Near(v)\ \land\ \neg Pointed(v)\) clearly holds since none of the particles are expanded, whereas \(\neg Upper(v)\ \land\ Lower(v)\) holds for at least one of the topmost particles that of course does not admit neighboring particles on top, but admits particles below, due to connectivity. Proof (of Claim 4 - Connectivity): Let us consider two neighboring particles \(p\) and \(p^{\prime}\) of the initial configuration. Without loss of generality, let us assume that the two particles become disconnected due to the movement of \(p\) from node \(v\) to node \(u\) In fact, expansions do not cause disconnections as an expanded particle still maintains the node occupied. If the movement is toward \(\mathbb{E}\), then we are sure there is another particle expanded toward \(v\), i.e., \(v\) remains semi-occupied. Consequently, either \(p^{\prime}\) moves and recovers its connection with \(p\) or another particle moves to \(v\), again recovering the connection between \(p\) and \(p^{\prime}\). Moreover, after its movement, \(p\) cannot move again as long as \(v\) remains semi-occupied since \(Near(p)\) is true during that time; whereas, if \(p^{\prime}\) moves during that time (necessarily toward \(\mathbb{E}\) or \(\mathbb{SE}\)), it becomes neighbor of \(p\) again. Then, the movement of \(p\) must be toward \(\mathbb{SE}\). 
According to Algorithm _WRain_, \(p\) has decided to move toward \(\mathbb{SE}\) because: \(Near(v)\) is false, i.e., none of the nodes in \(N(v)\) is semi-occupied; \(Pointed(v)\) is false; \(Upper(v)\) is false and in particular there are no particles in positions \(\{4,5,6\}\) according to the enumeration of its neighborhood shown in Figure 2.c; whereas there is at least one particle \(p^{\prime\prime}\) among positions \(\{13,15,17,18\}\). In fact, \(14\) must be empty as \(p\) is moving there. Hence, the movement toward \(14\) makes \(p\) a neighbor of \(p^{\prime\prime}\). It follows that, if the movement of \(p\) has caused a disconnection from \(p^{\prime}\), then \(p^{\prime}\) is in position \(9\), with respect to \(v\), that represents the connection to \(p\) before the movement. In fact, we know that positions \(\{5,6\}\) are empty, whereas the movement to \(14\) maintains \(p\) neighboring with \(\{10,13\}\), i.e., only the connection to \(9\) can get lost. Hence, \(p^{\prime}\) makes \(Upper(p)\) true, and \(p\) makes \(Lower(p^{\prime})\) true. It follows that \(p\) won't move anymore unless another particle \(\overline{p}\) (possibly arriving successively) pushes it from \(v\) or from \(13\). In either case, \(\overline{p}\) connects \(p\) with \(p^{\prime}\). If \(p\) doesn't move before \(p^{\prime}\), then \(p^{\prime}\) must move, eventually. In fact, this happens as soon as either it is pushed or the \(Upper\) function evaluated from \(9\) becomes false. By Claims 1, 2 and 3, this must happen, eventually, since the configuration is not final. We are now ready to prove the optimality of Algorithm _WRain_ in terms of the total number of moves performed by the particles. Lemma 1: _Given \(n\) contracted particles forming a connected configuration, Algorithm WRain terminates within \(O(n^{2})\) movements._ Proof: In order to prove the lemma, it suffices to remark that any particle moves at most \(n-1\) times toward \(\mathbb{E}\) and \(n-1\) times toward \(\mathbb{SE}\), hence obtaining a number of total movements upper bounded by \(O(n^{2})\). Theorem 6.1: _Algorithm WRain is asymptotically optimal in terms of number of movements._ Proof: As proven in [11], Line formation requires \(\Omega(n^{2})\) movements. That proof simply comes by assuming the initial configuration formed by \(n\) particles composing a connected structure of diameter at most \(2\sqrt{n}+2\) (e.g., if they form a hexagonal or square shape), and then summing up all the necessary movements required to reach a configuration where particles form a line. Hence, by combining such a result with Lemma 1, the claim holds. ## 7 Conclusion We investigated the Line formation problem within PM on the basis of the SILBOT model. With the aim of considering the smallest set of assumptions, we proved that chirality is not enough for particles to accomplish Line formation. We then endowed particles with a common sense of direction and we proposed _WRain_, an optimal algorithm, in terms of number of movements, for solving Line formation. Actually, it remains open whether assuming just one common direction is enough for solving the problem. Furthermore, although in the original paper about SILBOT [5] it has been pointed out that \(1\) hop visibility is not enough for solving the Leader Election, it is worth investigating what happens for Line formation. Other interesting research directions concern the resolution of other basic primitives, the formation of different shapes or the more general pattern formation problem.
Also variants on the original SILBOT model deserve main attention. As shown in Table 1, small modifications to the original model may allow the resolution of challenging tasks. It would be interesting, for instance, to understand what might change if expanded particles are allowed to revoke from their commitment on moving forward, i.e., if algorithms could deal also with expanded particles. Furthermore, adding a few bits of visible memory like allowing the particles to assume different states other than contracted and expanded, or being endowed with visible lights similar to those studied in robot systems as in [9], might reveal higher potentials for PM.
2306.17841
Domain wall interpretation of the PTA signal confronting black hole overproduction
Recently, Pulsar Timing Array (PTA) collaborations have detected a stochastic gravitational wave background (SGWB) at nano-Hz frequencies, with Domain Wall networks (DWs) proposed as potential sources. To be cosmologically viable, they must annihilate before dominating the universe's energy budget, thus generating a SGWB. While sub-horizon DWs shrink and decay rapidly, causality requires DWs with super-horizon size to continue growing until they reach the Hubble horizon. Those entering the latest can be heavier than a Hubble patch and collapse into Primordial Black Holes (PBHs). By applying percolation theory, we pioneer an estimation of the PBH abundance originating from DW networks. We conduct a Bayesian analysis of the PTA signal, interpreting it as an outcome of SGWB from DW networks, accounting for PBH overproduction as a prior. We included contributions from supermassive black hole binaries along with their astrophysical priors. Our findings indicate that DWs, as the proposed source of the PTA signal, result in the production of PBHs about ten times heavier than the sun. The binary mergers occurring within these PBHs generate a second SGWB in the kilo-Hz domain which could be observable in ongoing or planned Earth-based interferometers if the correlation length of the DW network is greater than approximately 60$\%$ of the cosmic horizon, $L \gtrsim 0.6 t$.
Yann Gouttenoire, Edoardo Vitagliano
2023-06-30T17:58:14Z
http://arxiv.org/abs/2306.17841v2
# Domain wall interpretation of the PTA signal confronting black hole overproduction ###### Abstract Recently, NANOGrav has reported the observation of a stochastic gravitational wave background (SGWB) at nano-Hertz frequencies. String-wall networks and domain walls have been proposed as possible sources. To be cosmologically viable, these topological defect networks must annihilate before they dominate the energy budget of the universe, producing a SGWB. However, a part of the network can copiously produce primordial black holes that exceed current bounds. Performing a Bayesian analysis of pulsar timing residual datasets we find that the SGWB detected in PTA data is therefore hardly compatible with such an origin. This lends credibility to other interpretations, including supermassive black hole mergers, first order phase transitions, Nambu-Goto strings, and curvature-induced gravitational waves. ## I Introduction The North American Nanohertz Observatory for Gravitational Waves (NANOGrav), a pulsar timing array (PTA) part of the International Pulsar Timing Array (IPTA), comprising the European Pulsar Timing Array (EPTA), the Parkes Pulsar Timing Array (PPTA) in Australia, and the Indian Pulsar Timing Array Project (InPTA), has recently reported the observation of a stochastic gravitational wave background (SGWB) with a strain of \(2.7^{+0.7}_{-0.6}\times 10^{-15}\) (median with 90% credible interval) at the frequency \(\mathrm{yr}^{-1}\), corresponding to a total energy density in the sensitivity band of \(\Omega_{\mathrm{GW}}h^{2}=6.5^{+4.1}_{-2.8}\times 10^{-9}\)[1]. This result confirms the hint to a SGWB observed in previous years [2], by EPTA [3], PPTA [4] and IPTA [5]. Sources of nanoHz GWs could be a population of supermassive black hole binaries [6; 7; 8] or could be related to early universe phenomena [9], such as first order phase transitions [10], second-order gravitational waves produced during the formation of primordial black holes [11; 12; 13; 14], and topological defects [15; 16; 17; 18; 19; 20]. The simplest mechanism forming Domain walls (DW) is the spontaneous breaking of a discrete symmetry, e.g. \(\mathcal{Z}_{2}\)[21; 22; 23; 24]. DW dilute slower than radiation in expanding cosmology [25; 26]. To be viable, there must exist an energy bias between distinct vacua so that DW are pulled toward annihilating with each other. Upon annihilation, the DW system can abundantly produce GWs [27; 28; 29; 30; 24] with a nanoHz peak frequency within PTA window if the system annihilates at a temperature \(T_{\mathrm{ann}}\sim 10\,\mathrm{MeV}\)[31]. A another DW formation scenario is when a global \(U(1)\) symmetry is first spontaneously broken to form cosmic strings and then is later explicitly broken when the Goldstone mode receives mass corrections [32]. A network of global strings alone cannot be the source of the observed PTA signal, as the string tension needed to source such a large amplitude would imply the abundant production of Goldstone bosons [33; 34; 35] in conflict with the Big-Bang Nucleosynthesis (BBN) bound on the effective number of neutrino species \(N_{\mathrm{eff}}\). The evolution of the string-DW network can be more complicated and depends on the number \(N\) of minima along the orbit of vacua. For \(N=1\), DW gets bounded by strings and rapidly annihilate. 
The presence of DW only weakly enhances the GW signal from global strings [36] and we conclude that the interpretation of the PTA signal in terms of a string-wall network is excluded by \(N_{\mathrm{eff}}\) bounds [33; 34; 35]. If \(N>1\), a _stable_ string-wall network is produced, and the evolution is similar to the discrete symmetry breaking domain-wall system described above. Such a system, often considered in the context of the QCD axion [37; 38; 39; 40; 41; 42; 43] and more recently in the context of axion-like particles [44; 45] and high-quality QCD axion models [46; 47], has also been considered as a source of a signal compatible with PTA observations (see e.g. [20; 48]). In this _Letter_, we perform a Bayesian analysis of PTA datasets NG12.5 [2] and IPTADR2 [5] in the presence of GW from DW annihilation. We find that the interpretation of pure domain-wall and \(N>1\) string-wall systems as a possible source of the PTA signal is in tension with the overproduction of primordial black holes (PBHs). In both cases, the system can feature spherical domains which collapse to PBHs when they shrink below their Schwarzschild radius [45; 31; 47]. While this mechanism might potentially be related to the production of PBH dark matter [47] or of supermassive black holes [45], we show that the same mechanism would overproduce PBHs if the annihilation temperature and energy stored in the system are tuned to produce the amplitude and frequency of the SGWB observed by PTAs. ## II GWs from DW annihilation _Friction vs scaling regime._ Denoting by \(v\) their typical DW velocity, we can estimate that DW have typical curvature radius \(R\simeq vt\). Initially, the work of their surface tension \(\sigma\) with equivalent pressure \(\mathcal{P}_{T}=\sigma/R\) toward straightening the DW is dampened by the friction pressure \(\mathcal{P}_{V}\simeq\,\beta\,T^{4}\), where the dimensionless \(\beta\) sets the strength of DW interactions with the plasma. DW start moving with relativistic velocity \(v\simeq\mathcal{O}(0.1)\) below the temperature \[T_{\mathrm{rel}}\simeq 0.8g_{\star}^{1/4}\sqrt{\frac{\sigma}{\beta M_{\mathrm{pl}}}}\simeq\frac{530\,\mathrm{MeV}}{\sqrt{10v\beta}}g_{\star}^{\frac{1}{4}}\left(\frac{\sigma^{1/3}}{10^{5}\,\mathrm{GeV}}\right)^{\frac{3}{2}}, \tag{1}\] where \(M_{\mathrm{pl}}\simeq 2.44\times 10^{18}\,\mathrm{GeV}\), \(g_{\star}\) is the number of relativistic degrees of freedom, and where we used Friedmann's equation \(T=1.2\sqrt{M_{\rm pl}/t}/g_{*}^{1/4}\). The size of the friction coefficient \(\beta\) is model dependent [49]. In the present work, we set \(\beta\ll 1\) and briefly discuss its implications at the end. Numerical simulations have shown that the energy stored in friction-less DW reaches the scaling regime as cosmic strings do [23; 26], \[\rho_{\rm DW}=\frac{\sigma}{R},\qquad\text{with }R\simeq t/\mathcal{A}, \tag{2}\] where \(\mathcal{A}\simeq 0.8\pm 0.1\) is fitted on numerical simulations [30]. This results in the DW energy density redshifting slower than the main background fluid \(\rho_{\rm bkg}\simeq M_{\rm pl}^{2}/t^{2}\), such that DW rapidly dominate the energy density of the universe [32] below the temperature \[T_{\rm dom}\simeq\frac{1.4}{g_{\star}^{1/4}}\sqrt{\frac{\mathcal{A}\,\sigma}{M_{\rm pl}}}\simeq 30\ {\rm MeV}\ \mathcal{A}^{1/2}\,\mathrm{g}_{\star}^{1/4}\left(\frac{\sigma^{1/3}}{10^{5}\ {\rm GeV}}\right)^{3/2}. \tag{3}\] _Bias potential terms._ We assume the presence of high dimensional operators that explicitly break \(U(1)\) symmetry.
This transforms the flat direction into a discrete collection of vacua. The vacuum energy difference \(\mathcal{P}_{V}=V_{\rm bias}\) between these new minimum points acts as a source of pressure which makes DW repel or attract each other until their eventual annihilation. DW annihilate when the vacuum pressure surpasses the pressure \(\mathcal{P}_{T}=\sigma/R\) arising from their surface tension \(\sigma\), below the temperature \[T_{\rm ann}\simeq\frac{100\ {\rm MeV}}{\mathcal{A}^{1/2}g_{\star}^{1/4}}\left(\frac{10^{5}\ {\rm GeV}}{\sigma^{1/3}}\right)^{3/2}\left(\frac{V_{\rm bias}^{1/4}}{40\ {\rm MeV}}\right)^{2}. \tag{4}\] _GW signal._ During the annihilation process, DWs are driven to relativistic speed and radiate GW [21; 22; 27; 40]. The GW power spectrum today produced by long-lived DWs annihilating at \(T_{\rm ann}\) can be expressed as [28; 29; 30] \[\Omega_{\rm GW}h^{2}=\Omega_{\rm peak}h^{2}S_{\rm DW}(f) \tag{5}\] where the peak amplitude today follows from the quadrupole formula [23] \[\Omega_{\rm peak}h^{2}\simeq 7.2\times 10^{-10}\tilde{\epsilon}_{\rm gw}\mathcal{A}^{2}\left(\frac{10}{g_{*s}(T_{\rm ann})}\right)^{4/3}\\ \times\left(\frac{\sigma^{1/3}}{100\ {\rm TeV}}\right)^{6}\left(\frac{100\ {\rm MeV}}{T_{\rm ann}}\right)^{4}. \tag{6}\] while \(\tilde{\epsilon}_{\rm gw}\simeq 0.7\pm 0.4\) is fitted on lattice simulations [30]. The peak frequency today is given by \[f_{\rm peak}=\frac{a(t_{\rm ann})}{a(t_{0})}H(t_{\rm ann})\simeq 1.1\ {\rm nHz}\left(\frac{g_{*}(T_{\rm ann})}{10}\right)\\ \times\left(\frac{10}{g_{*s}(T_{\rm ann})}\right)^{1/3}\left(\frac{T_{\rm ann}}{10\ {\rm MeV}}\right). \tag{7}\] We model the spectral function \[S_{\rm DW}(f)=\frac{2}{\left(f/f_{\rm peak}\right)+\left(f_{\rm peak}/f\right)^{3}}, \tag{8}\] where the IR slope is \(\Omega_{\rm GW}\propto f^{3}\) to respect causality [50; 51; 52; 53] and the UV slope is \(\Omega_{\rm GW}\propto f^{-1}\) as suggested by lattice simulation results [30]. ## III Bayesian analysis of PTA data We performed a comprehensive Bayesian analysis of the DW interpretation of the PTA signal. Waiting for NANOGrav \(15\ {\rm yr}\)[1] to release their data publicly, we used the first \(5\) frequency bins of NANOGrav \(12.5\ {\rm yr}\)[2] and the first 13 frequency bins of IPTA DR2 [5]. To extract the GW signal from the various sources of noise, we closely followed the methodologies employed by the NANOGrav [2] and IPTA [5] research groups, with additional insights from other relevant literature [48; 55]. We modified the software tools known as enterprise[56] and enterprise_extensions[57] to include the spectrum from DW annihilation. The parallel-tempering Markov Chain Monte-Carlo sampler PTMCMC[58] was used to explore the posterior distribution (see the mean values in Tab. 1), and the GetDist tool [59] was used to visualize it (see Fig. 3).

\begin{table} \begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{\(\log_{10}\ [{\rm GeV}]\)} & \multicolumn{2}{c|}{**Posterior mean**} \\ \cline{2-3} & NG12.5 & IPTA \\ \hline \hline \(\sigma^{1/3}\) & \(4.37^{+0.25}_{-0.29}\) & \(4.928\pm 0.086\) \\ \hline \(T_{\rm ann}\) & \(-2.21\pm 0.43\) & \(-1.44^{+0.14}_{-0.11}\) \\ \hline \end{tabular} \end{table} Table 1: Mean parameter values of the posterior distribution for the domain wall interpretation of the PTA common red process.

## IV BBN constraints Ref. [48] has shown that the DW interpretation of the PTA GW signal is in slight tension with BBN if DW annihilate into a hidden sector. We now revisit the argument and, modulo slight numerical differences, we reach similar conclusions. The energy density fraction in DWs at temperature \(T_{\rm ann}\), normalized to the radiation background, reads: \[\alpha_{\rm DW}\equiv\frac{\rho_{\rm DW}}{\rho_{\rm rad}}=\sqrt{\frac{g_{*}(T_{\rm ann})}{10.75}}\left(\frac{\sigma^{1/3}}{100\ {\rm TeV}}\right)^{3}\left(\frac{14\ {\rm MeV}}{T_{\rm ann}}\right)^{2}, \tag{9}\] where we approximated \(g_{*,s}=g_{*}\).
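For orientation, the following Python snippet (ours, not from the paper) evaluates the peak amplitude and frequency of Eqs. (6)-(7), the spectral shape of Eq. (8), and the energy fraction of Eq. (9) at the NG12.5 posterior means of Tab. 1; the choices \(\mathcal{A}=0.8\), \(\tilde{\epsilon}_{\rm gw}=0.7\) and \(g_{*}=g_{*s}=10.75\) are fiducial values quoted in the text and should be read as assumptions.

```python
import math

# Fiducial inputs (posterior means of Tab. 1, NG12.5 column) -- assumptions.
sigma13_GeV = 10**4.37      # sigma^{1/3} in GeV
T_ann_GeV   = 10**(-2.21)   # annihilation temperature in GeV
A, eps_gw   = 0.8, 0.7      # area parameter and GW efficiency (lattice fits)
g_star = g_star_s = 10.75   # relativistic d.o.f. around T_ann (assumption)

# Eq. (6): peak amplitude today.
omega_peak_h2 = (7.2e-10 * eps_gw * A**2 * (10.0 / g_star_s)**(4/3)
                 * (sigma13_GeV / 1e5)**6 * (0.1 / T_ann_GeV)**4)

# Eq. (7): peak frequency today, in Hz (1.1 nHz reference).
f_peak_Hz = (1.1e-9 * (g_star / 10.0) * (10.0 / g_star_s)**(1/3)
             * (T_ann_GeV / 0.01))

# Eq. (8): spectral shape around the peak.
def omega_gw_h2(f_Hz):
    x = f_Hz / f_peak_Hz
    return omega_peak_h2 * 2.0 / (x + x**-3)

# Eq. (9): DW energy fraction at T_ann (used for the BBN bound below).
alpha_dw = (math.sqrt(g_star / 10.75) * (sigma13_GeV / 1e5)**3
            * (0.014 / T_ann_GeV)**2)

print(f"Omega_peak h^2 ~ {omega_peak_h2:.2e} at f_peak ~ {f_peak_Hz:.2e} Hz")
print(f"alpha_DW(T_ann) ~ {alpha_dw:.2e}")
```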
The presence of an extra number \(\Delta N_{\rm eff}\) of relativistic degrees of freedom at BBN and CMB would change the expansion rate of the universe and impact the CMB data or the abundance of light elements [60; 61]. In App. A, we show that relics with energy fraction \(\alpha_{\rm DW}\) contribute to the number of relativistic degrees of freedom by: \[\Delta N_{\rm eff}=7.4\,\alpha_{\rm DW}\ \lesssim\ 0.3. \tag{10}\] We must distinguish two scenarios. If DW annihilate into a secluded sector, then Eq. (10) applies for all \(T_{\rm ann}\). Instead, if DW annihilate dominantly into Standard Model (SM) degrees of freedom, then Eq. (10) relaxes for \(T_{\rm ann}\gtrsim 1~{}{\rm MeV}\)[62]. Another bound comes from the possibility for DWs to dominate the energy budget of the universe. In this case the universe starts expanding as \(a\propto t^{2}\)[26] and it is uncertain if DWs can efficiently annihilate in such a rapidly expanding universe. To be conservative, we impose that DW must disappear before dominating the universe: \[T_{\rm ann}\;\gtrsim\;T_{\rm dom}. \tag{11}\] The corresponding \(N_{\rm eff}\) and DW domination constraints are shown in purple and gray in Figs. 2 and 3.

Figure 1: _GW spectra from DW annihilation **(solid)** with surface tension \(\sigma\) and annihilation temperature \(T_{\rm ann}\) set at the mean values of the posterior distribution in Tab. 1. The mean posterior for the GW signal from SMBH binaries **(dotted)** is shown with strain amplitude \(A_{\rm SMBH}\simeq 2.1-3.7\times 10^{-15}\) at \(1~{}{\rm yr}^{-1}\) for NG12.5 and IPTADR2 respectively. The **gray** band is the \(90\%\) CL for the SGWB calculated from a Monte-Carlo generated binary population [54]._

Figure 2: _In **orange** and **blue**, we show the \(68\%\) and \(90\%\) confidence levels of the DW interpretation of the PTA GW signal after performing a Bayesian analysis of NG12.5 and IPTADR2 datasets. The scenario with dark reheating is in slight tension with the \(N_{\rm eff}\lesssim 0.3\) constraints. However, it is in strong tension with PBH overproduction bounds, from DM overclosure **(green)**, microlensing [63] **(purple)**, \({\rm kHz}\) GW interferometers [64, 65, 66, 67, 68] **(blue)** and CMB [69, 70, 71] **(yellow)**._

## V PBH constraints Both pure domain-wall and \(N>1\) string-wall systems feature closed configurations. During the scaling regime, DW have a size comparable to the cosmic horizon \(\simeq t\). Closed DW collapse into PBHs if they shrink below their Schwarzschild radius, which happens once the ratio \[p(t)=\frac{R_{\rm Sch}(t)}{t}=\frac{2GM(t)}{t} \tag{12}\] becomes larger than one [72]. Close to the start of the annihilation process, the mass within a closed wall reads \[M(t)\simeq\frac{4}{3}\pi t^{3}V_{\rm bias}+4\pi t^{2}\sigma. \tag{13}\] DW annihilate at \(t_{\rm ann}\) when the volume term in Eq. (13) dominates over the surface term. Therefore, the ratio \(p(t)\) increases with time as \(t^{2}\). The temperature at which PBH collapse is defined by \(p(T_{\rm PBH})=1\), which implies \[T_{\rm PBH}\simeq 120~{}{\rm MeV}\,\mathcal{A}^{1/4}\,g_{\rm eff,1}^{1/8}\left(\frac{T_{\rm ann}}{1~{}{\rm GeV}}\right)^{1/2}\left(\frac{\sigma^{1/3}}{10^{5}~{}{\rm GeV}}\right)^{3/4} \tag{14}\] where \(g_{\rm eff,1}\equiv g_{\star}^{2}(T_{\rm ann})/g_{\star}(T_{\rm PBH})\), and the corresponding time is \(t_{\rm PBH}=1/2H(T_{\rm PBH})\). The PBH mass is \(\frac{4\pi}{3}V_{\rm bias}t_{\rm PBH}^{3}\): \[M_{\rm PBH}\simeq\frac{19\;{\rm M}_{\odot}}{{\cal A}^{1/2}g_{\star}^{1/4}(T_{\rm ann})}\;\left(\frac{1\;{\rm GeV}}{T_{\rm ann}}\right)\left(\frac{10^{5}\;{\rm GeV}}{\sigma^{1/3}}\right)^{3/2}. \tag{15}\] The PBH contribution to the DM abundance is: \[f_{\rm PBH}=\frac{\rho_{\rm DW}(T_{\rm PBH})}{\rho_{\rm DM}(T_{\rm PBH})}=\frac{\rho_{\rm DW}(T_{\rm PBH})}{\rho_{\rm DW}(T_{\rm ann})}\frac{\rho_{\rm DW}(T_{\rm ann})}{\rho_{\rm DM}(T_{\rm PBH})}. \tag{16}\] The first factor describes how fast the energy stored in the DW network disappears as annihilation proceeds, which according to lattice simulations appears to follow a power-law [43] \[\frac{\rho_{\rm DW}(T)}{\rho_{\rm DW}(T_{\rm ann})}=\left(\frac{T}{T_{\rm ann}}\right)^{\alpha}. \tag{17}\] Results collected in Tab. VI of [43] suggest that \(\alpha\), which parameterizes how fast the network annihilates, can take values between 9 and 28 [43, 47] (though smaller values like \(\alpha=7\) have also been considered in the literature [31]). The second factor in Eq. (16) can be evaluated from evolving DW, DM and radiation energy densities until today and we get \[f_{\rm PBH}\simeq g_{\rm eff,2}^{1/2}\;\frac{T_{\rm PBH}^{(\alpha-3)}\;T_{\rm dom}^{2}}{T_{0}\;T_{\rm ann}^{(\alpha-2)}}\left(\frac{\rho_{\rm rad}}{\rho_{\rm DM}}\right)_{0}, \tag{18}\] where \[g_{\rm eff,2}\equiv\left(\frac{g_{s\star}(T_{0})}{g_{\star}(T_{0})}\right)^{2}\frac{g_{\star}(T_{\rm ann})g_{\star}(T_{\rm dom})}{g_{s\star}^{2}(T_{\rm PBH})}. \tag{19}\] In Fig. 2, we show the constraints due to PBH overclosure \(f_{\rm PBH}\gtrsim 1\), but also from distortion of the Cosmic Microwave Background [69, 70, 71], LIGO-Virgo-Kagra (LVK) bounds [64, 65, 66, 67, 68] and microlensing limits from Eros datasets [63]. We collect all the constraints in Fig. 2 and vary the exponent \(\alpha\) which sets how fast the DW network is annihilating \((\rho_{\rm DW}^{2}\propto 1/t^{\alpha})\). We conclude that the DW interpretation of the PTA signal is excluded by PBH overproduction. _Impact of friction._ We now briefly discuss the impact of friction, which we have neglected in our analysis. The PBH abundance might be strongly impacted in the presence of friction. However, this does not relieve the DW interpretation of the PTA signal. In fact, in the presence of friction the GW signal gets suppressed [49], so that the confidence levels (blue and orange ellipses) rendering the PTA signals will move to the DW domination region in gray in Fig. 3, excluding the DW interpretation of the PTA signal without having to study the PBH abundance in friction-dominated DW networks.

Figure 3: _Same as Fig. 2 with the PBH constraints combined in **brown**. Results from numerical simulations [43] suggest the annihilation rate exponent to be \(9\lesssim\alpha\lesssim 28\), which excludes the DW interpretation of the PTA signal. In theory, only a DW network annihilating faster than \(\rho_{\rm DW}^{2}\propto 1/t^{50}\) could explain the PTA signal, assuming that the GW production does not get suppressed._

## VI Discussion and outlook Several PTAs have reported the observation of a SGWB with an energy fraction of \(5\times 10^{-9}\) at nano-Hertz frequencies. The annihilation of topological defect systems has been listed among the possible sources. In this paper, we have shown that pure domain-wall and \(N>1\) string-wall systems are in tension with the overproduction of primordial black holes (PBHs). Parameterizing the annihilation of the DW network by a power-law \(\rho_{\rm DW}^{2}\propto 1/t^{\alpha}\), values \(\alpha\lesssim 50\) result in a tension between the amplitude and frequency of the SGWB observed in the different PTA datasets and the overproduction of PBHs. This has been missed by previous works [46, 48, 73, 9, 20] claiming a DW interpretation of PTA signals. To further strengthen these results, dedicated simulations of the late evolution of domain-wall and string-wall networks should be realized. Hence, we add the DW to the graveyard of early universe phenomena falling short at explaining the PTA GW signal, together with global strings (see introduction) and scalar-induced GW [55] in the Gaussian limit [75, 76]. Recent works have shown that first-order phase transitions can produce PBHs abundantly in the supercooled limit [77, 78, 79, 80, 81]. Further studies are needed to infer whether the 1stOPT interpretation of the PTA signal [82, 83, 9, 20] is in the PBH graveyard too. Our conclusions suggest that the only viable topological-defect origin of the PTA signal is one arising from Nambu-Goto strings. ###### Acknowledgements. YG thanks Simone Blasi, Alberto Mariotti, Oriol Pujolas and Fabrizio Rompineve for useful discussions. YG is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship. EV acknowledges support by the European Research Council (ERC) under the European Union's Horizon Europe research and innovation programme (grant agreement No. 101040019). ### BBN bound Domain walls (DW) form a component of the total energy density of the universe. As such, they contribute to increase the expansion rate of the universe, which makes neutron freeze-out earlier, increases the \(n/p\) ratio and in turn increases the Helium abundance [84]. The presence of DW can be described in terms of an extra number of neutrino species \[N_{\rm eff}=\frac{8}{7}\left(\frac{\rho_{\rm DW}}{\rho_{\gamma}}\right)\left(\frac{11}{4}\right)^{4/3}, \tag{10}\] where \(\rho_{\gamma}\) is the photon energy density. We introduce the DW energy fraction in units of the radiation energy density at the annihilation temperature \[\alpha_{\rm DW}(T)=\frac{\rho_{\rm DW}(T)}{\frac{\pi^{2}}{30}g_{*}(T)T^{4}}, \tag{11}\] where \(T\) is the SM photon temperature. From Eq. (10) and Eq. (11), the maximal DW contribution to \(N_{\rm eff}\) occurs at the annihilation temperature \[\Delta N_{\rm eff}(T)=2.20g_{*}(T)\alpha_{\rm DW}(T). \tag{12}\] To apply the BBN bound \(\Delta N_{\rm eff}\lesssim 0.3\)[60, 61], the effective number of extra relativistic degrees of freedom must be evaluated below the neutrino decoupling temperature where \(g_{*}(T<T_{\rm dec})\equiv 2+(7/8)\cdot 6\cdot(4/11)^{4/3}\simeq 3.36\). Hence, we obtain \[\Delta N_{\rm eff}=7.4\ \alpha_{\rm DW}\ \lesssim\ 0.3, \tag{13}\] which is slightly different from [48]. As discussed in the main text, we must distinguish the scenario in which DW reheat to dark radiation, in which case Eq. 13 is the BBN constraint, from the scenario in which DW reheat to SM, in which case Eq. 13 applies only if DW annihilate below the neutrino decoupling temperature \(T_{\rm ann}\lesssim 1\ {\rm MeV}\).
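To make the PBH estimate easy to reproduce, here is a short numerical sketch (ours) chaining Eqs. (3), (14), (15) and (18) for a given \((\sigma^{1/3},\,T_{\rm ann},\,\alpha)\); the single common \(g_{*}\) value, the adopted \((\rho_{\rm rad}/\rho_{\rm DM})_{0}\) and the simplification \(g_{\rm eff,2}\simeq 1\) are assumptions, so the output is indicative only.

```python
import math

M_PL = 2.44e18             # reduced Planck mass in GeV
T0_GEV = 2.35e-13          # photon temperature today (~2.7 K), assumption
RAD_OVER_DM_0 = 3.6e-4     # (rho_rad / rho_DM) today, illustrative value

def pbh_estimate(sigma13_GeV, T_ann_GeV, alpha, A=0.8, g=10.75):
    """Chain Eqs. (3), (14), (15), (18) for one parameter point (sketch)."""
    s = sigma13_GeV / 1e5                                   # sigma^{1/3} / 10^5 GeV
    T_dom = 1.4 / g**0.25 * math.sqrt(A * sigma13_GeV**3 / M_PL)          # Eq. (3), GeV
    T_pbh = 0.120 * A**0.25 * g**0.125 * math.sqrt(T_ann_GeV) * s**0.75   # Eq. (14), GeV
    M_pbh = 19.0 / (math.sqrt(A) * g**0.25) / T_ann_GeV / s**1.5          # Eq. (15), M_sun
    f_pbh = (T_pbh**(alpha - 3) * T_dom**2                                # Eq. (18), g_eff,2 ~ 1
             / (T0_GEV * T_ann_GeV**(alpha - 2)) * RAD_OVER_DM_0)
    return T_dom, T_pbh, M_pbh, f_pbh

# Posterior means of Tab. 1 (NG12.5) and a mid-range annihilation exponent:
T_dom, T_pbh, M_pbh, f_pbh = pbh_estimate(10**4.37, 10**(-2.21), alpha=20)
print(f"T_dom ~ {T_dom*1e3:.1f} MeV, T_PBH ~ {T_pbh*1e3:.1f} MeV")
print(f"M_PBH ~ {M_pbh:.1e} M_sun, f_PBH ~ {f_pbh:.1e}  (>1 signals overproduction)")
```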
2302.14667
Bistable electric field control of single-atom magnetocrystalline anisotropy
We reversibly switch the polar environment of an individual magnetic atom with an electric field to control the energy barrier for reversal of magnetization. By applying an electric field in the gap between the tip and sample of a scanning tunneling microscope, we induce bistable changes in the polarization of the region surrounding a chlorine vacancy in a monolayer of sodium chloride on copper terminated by a monolayer of copper nitride. The displacement of the sodium chloride ions alters the local electric polarization and modifies the magnetocrystalline anisotropy experienced by a single cobalt atom. When a cobalt atom is near a chlorine vacancy, spin-sensitive inelastic electron tunneling spectroscopy measurements can reveal the change in anisotropy. The demonstration of atomic-scale control of magnetic properties with electric fields opens new possibilities for probing the origins of magnetoelectric coupling and will stimulate the development of model artificial multiferroic systems.
Jose Martinez-Castro, Cyrus F. Hirjibehedin, David Serrate
2023-02-28T15:32:09Z
http://arxiv.org/abs/2302.14667v1
## Bistable electric field control of single-atom magnetocrystalline anisotropy ### Abstract **We reversibly switch the polar environment of an individual magnetic atom with an electric field to control the energy barrier for reversal of magnetization. By applying an electric field in the gap between the tip and sample of a scanning tunneling microscope, we induce bistable changes in the polarization of the region surrounding a chlorine vacancy in a monolayer of sodium chloride on copper terminated by a monolayer of copper nitride. The displacement of the sodium chloride ions alters the local electric polarization and modifies the magnetocrystalline anisotropy experienced by a single cobalt atom. When a cobalt atom is near a chlorine vacancy, spin-sensitive inelastic electron tunneling spectroscopy measurements can reveal the change in anisotropy. The demonstration of atomic-scale control of magnetic properties with electric fields opens new possibilities for probing the origins of magnetoelectric coupling and will stimulate the development of model artificial multiferroic systems.** ## Main Text ### Introduction Achieving electric field control of magnetic properties is a major challenge in the development of novel materials and devices, offering access to new material properties as well as potential technological improvements such as significantly reduced power consumption [1]. A variety of driving mechanisms to couple a material's electronic and magnetic degrees of freedom have been proposed. For example, an electrostatic gate voltage can force electronic transport in quantum systems to proceed through discrete spin states with a well-defined conductivity [2, 3, 4, 5]. In thin ferromagnetic metals and semiconductors, the charge redistribution near a strong gating electric field can significantly alter the magnetic coercivity [6] or ordering temperatures [7]. In addition, spin-orbit coupling enables magnetization control by electrical currents through spin-torque effects [8, 9, 10]. Alternatively, direct coupling of electrostatic and magnetic degrees of freedom has been achieved in single phase multiferroic materials like BiFeO\({}_{3}\)[11] and hexagonal manganites [12, _13_), or in heterostructures interfacing ferroelectric and magnetic thin films [14, 15]. In spite of the intense research activity on multiferroic phenomena, a framework enabling fundamental studies on the coupling of the electric polarization and the spin moment at interfaces is not yet fully developed [15]. Magnetocrystalline anisotropy energy (MAE) is one of the most relevant parameters for defining the behavior of magnetic materials. It determines the susceptibility of a material's magnetization to thermal activation, external magnetic fields, and electromagnetic radiation [16]. Different studies have demonstrated that MAE in metallic thin films can be continuously tuned at the nanoscale by the application of an electric field [17, 6, 18]. In these cases, the coupling mechanism is restricted to the response of the electronic density of states to the unscreened part of the electric field in the metal. At the nanoscale, the electric field can influence the MAE barrier and modify the magnetization reversal attempt frequency in the superparamagnetic regime [18]. In the fundamental limit of an individual magnetic atom, the atom's MAE is mainly controlled by the structure and charge distribution of the immediate environment [19, 20, 21]. 
At the single molecule level, it is possible to modify the MAE by charging [22] as well as through controlled (reversible or non-reversible) deformation of the bond distance between the ion carrying the magnetic moment and its surrounding ligands [23, 24, 25]. Analogously, an observable change in MAE can be achieved by local strain arising from deformation of the substrate supporting the magnetic moments [26]. This suggests that controlling the arrangement of the atoms surrounding a magnetic ion using an electric field would also modify the single-atom MAE, resulting in an efficient way to implement external electric field control of magnetism. In this work, we show that the MAE of an individual magnetic atom can be manipulated through bistable atomic displacements controlled by an external electric field applied to a supporting dipolar substrate. By depositing a monolayer (ML) of NaCl on the atomically thin polar insulator copper nitride (Cu\({}_{2}\)N) capping bulk Cu(001), we induce a distortion in the NaCl that results in a net out-of-plane dipole similar to what has been observed for a bilayer of NaCl on Cu\({}_{2}\)N [27]. In the presence of a Cl vacancy, the NaCl can be bistably switched between two dipolar orientations using an electric field applied from the tip of the scanning tunneling microscope (STM) used to study the system. By performing spin-sensitive inelastic electron tunneling spectroscopy (IETS) on a Co atom adsorbed on the NaCl-ML, we observe that the characteristic MAE measured for Co on bare Cu\({}_{2}\)N [28, 29] is altered between two distinct values following the bistable electric polarization of the NaCl-ML that can be programmed by opposite electric fields in the tip-sample gap. These results show that electric field control of magnetic properties can be achieved in the limit of single atoms on surfaces. Extending this technique to other materials in which more detailed characterization can be performed would enable the development of model systems for understanding the interplay between MAE and polar order, shedding light on the atomic-scale origins of multiferroic coupling. **Bistable polarization switching in a monolayer.** Figure 1A shows a topographic STM image of a NaCl ML covering Cu\({}_{2}\)N nanoislands on Cu(001). As is seen for ultra-thin films of NaCl on many other substrates [30], the NaCl ML on Cu\({}_{2}\)N also contains Cl vacancies (Fig. 1A). Co atoms can also be deposited on top of the NaCl ML, and as seen in Fig. 1B can be recognized as bright protrusions similar to those observed for Co adsorbed on top of Cu\({}_{2}\)N/Cu(001) [28]. However, the appearance of Co adatoms on the NaCl ML strongly depends on the adsorption site. As seen in Fig. 1C, multiple adsorption sites can be identified for Co, for example on top of Na or Cl sites (Fig. S1A and S1B, respectively). In addition, the Co atom has an unusual appearance near a Cl vacancy (Fig. S1), where it does not have the round characteristic shape of a Co atom adsorbed on Cu\({}_{2}\)N (Fig. 1C) or the four-fold symmetry expected above Cl sites. This irregular shape is attributed to a Co atom because the appearance on high symmetry sites can be systematically and repeatedly recovered by displacing a Co atom using lateral manipulation near a Cl vacancy on the NaCl ML (Fig. S1). Furthermore, its identity can be confirmed because its spectroscopic signature is similar to that of Co on Cu\({}_{2}\)N (Fig. 3).
As has been observed for the NaCl BL on Cu\({}_{2}\)N [27], the polarization of the NaCl ML can be bistably switched in the presence of a Cl vacancy. When the tip is positioned above a Cl vacancy, the strongly self-poled polarization of the NaCl layer, which is induced by the polar orientation of the underlying Cu\({}_{2}\)N, can be switched by ramping the bias to positive values, thus applying a positive electric field (i.e. pointing from the sample to the tip) (Fig. 2A). The polarization can also be reversibly switched back to the original state by applying large enough negative electric fields. The sharp change in the tunneling current is attributed to tunneling electroresistance (TER), where the reversal of the dipole orientation modifies the work function of the substrate and therefore the height of the tunneling barrier [27, 31]. Simultaneously acquired Kelvin probe measurements obtained using atomic force microscopy (AFM) further confirm the change in the work function of the substrate from dipolar reversal. As seen in Fig. 2B, the shift of the resonance frequency as a function of voltage \(\Delta\!f(V)\) shows the expected parabolic behavior [32], with the contact potential difference \(V_{\rm cpd}\) between the tip and substrate work functions marked by the parabola's maximum. The value of \(V_{\rm cpd}\) clearly shifts for the two states, indicating the difference in work function and electric polarization. **Bistable switching of magnetic anisotropy.** Having seen that switching the Cl vacancy in the NaCl ML results in a change of its electric polarization, we explore the impact of this change on the magnetic properties of Co atoms adsorbed nearby. A Co atom on bare Cu\({}_{2}\)N has a quantum spin \(S\)=3/2 [28]. Because of the anisotropic arrangement of charge in the Cu\({}_{2}\)N below the magnetic atom [21], the crystal field splits the states of spin projection along the z-axis \(S_{z}\)=1/2 and \(S_{z}\)=3/2 by an energy \(E_{an}\)[19]. These states further split in an applied magnetic field according to the spin Hamiltonian \[\widehat{H}=-g\mu_{B}\widehat{\mathbf{B}}\cdot\widehat{\mathbf{S}}+DS_{z}^{2}\] where \(g\) is the Lande factor, \(\mu_{B}\) is the Bohr magneton, and \(D\) is the uniaxial anisotropy parameter. For Co on Cu\({}_{2}\)N, \(D>0\) so \(S_{z}\)=\(\pm\)1/2 is the doubly degenerate ground state. Exchange coupling to the underlying conduction electrons results in Kondo screening of this state [28], manifesting as a sharp resonance at the Fermi energy \(E_{F}\) (\(V\)=0). STM-based IETS induces transitions between the \(S_{z}\) states [21], resulting in conductance (d\(I\)/d\(V\)) steps when the sample bias matches \(E_{an}\)/\(e\) at \(\sim\pm\) 5 mV. As seen in Fig. 3, a similar spectrum is observed for a Co atom near a Cl vacancy in the NaCl ML: the relative amplitude of the Kondo resonance is reduced and no additional conductance steps (i.e. inelastic spin excitations) are observed. This suggests that the additional NaCl ML above the Cu\({}_{2}\)N decreases the exchange coupling between the magnetic impurity and the nearby conducting electrode without changing the total spin of the Co atom [28, 29]. Spectroscopic measurements performed on Co atoms adsorbed on Na or Cl sites on top of the NaCl ML but away from Cl vacancies did not show any characteristic IETS features in this energy range.
This may be because the inelastic component of the tunneling current, which induces spin-flip excitations, is too small to be resolved for experimental conditions in which Co atoms remains stable. To study the impact of the polarization switching on the Co atom, we position the tip above a Co atom located near a Cl vacancy and ramp the applied voltage. Current jumps at critical voltages corresponding to critical electric fields (Fig. 4) confirm that polarization switching occurs even with the Co atom present (Fig. 5A). To ensure that no other process happened while setting the conditions for low bias spectroscopy, variations in \(V_{\rm set}\) and \(I_{\rm set}\) before and after the polarization switch were continuously monitored. As seen in Fig. 5B, two different spectroscopic signatures can be distinguished for the two different polarization states (labeled A and B). Note that the electric field conditions during the spectroscopy in both states are identical. As demonstrated by the change in voltage of the IETS step, the MAE of the Co atom is decreased by a factor of two when the underlying NaCl ML is switched from state A to B, while the amplitude of the Kondo resonance is enhanced. The similarity of the spectra (i.e. same number of inelastic excitations and the persistence of a Kondo resonance) implies that the value of \(S\) and the sign of \(D\) remain the same for both states. This excludes a charging process on the Co atom as the origin of the bistable switching. Additional confirmation of the change in MAE experienced by the Co atom for the two different surface polarization states is obtained by observing the evolution of the spectra with magnetic field (Fig. 6), which is illustrated in Fig. 5C. For the relatively small magnetic fields (up to \(B=3\) T) accessible in these studies, only a very small change in the energy of the inelastic tunneling step is expected [28]. The small shift that is observed is consistent with such changes. A much more prominent change, however, can be observed in the splitting of the Kondo resonance, which is much more sensitive to changes in magnetic field [28]. As seen in Fig. 6, the splitting of the Kondo resonance is different for the two different polarization states. Figure S2 shows low energy spectroscopy for three other Co atoms near Cl vacancies as a function of the polarization. The general features are the same in all cases, though the MAE either increases or decreases from state A to B depending on the local environment. Variations are also observed for changes in the Kondo resonance, which may be modified due to changes in the strength of the coupling between the Co atom and the underlying metallic substrate. It is difficult to quantify these variations because the adsorption site of the Co atom near the Cl vacancy as well as the underlying lattice structure nearby cannot be resolved; this may be due to the low symmetry around the Cl vacancy and the relatively weak adsorption energy of the Co atom, which precludes high-resolution imaging at small tip-sample distances. ## Discussion We have demonstrated that electric field induced modification of the polarization of a substrate can bistably switch the magnetocrystalline anisotropy experienced by a single magnetic atom through the rearrangement of the atomic positions of the neighboring ions. 
As is the case for nanoscale magnetic data storage [33], bistable modulation of an individual atom's MAE could have applications for classical and quantum information processing, potentially allowing for switching to an easily modifiable state when writing data and then back to a more stable state for longer-term storage. Further development of atomic manipulation techniques on this or other switchable substrates would also facilitate the construction of coupled spin systems, enabling construction of the smallest possible multiferroic systems in which a collective electric degree of freedom could be used to control the collective magnetic degree of freedom at the atomic scale. If this can be performed on a surface for which the switched polarization state can be fully characterized, then the system would provide an ideal venue for studying the coupling between collective polar and magnetic order at the level of a single atomic spin. This would represent a model system for understanding the fundamentals of multiferroic behavior.
## Materials and Methods
**Scanning Tunneling Microscopy and Atomic Force Microscopy.** Scanning tunneling microscopy (STM) experiments were performed using a Specs JT-STM, a commercial adaptation of the design described by Zhang et al. [34], as well as an Omicron Nanotechnology LT-STM with a qPlus force sensor [35] installed for combined operation of both STM and atomic force microscopy (AFM). The qPlus sensor, with a resonance frequency \(f_{0}=23379.5\) Hz and a stiffness \(k\sim 1800\) N/m [35], was operated in non-contact AFM mode with a phase-locked excitation at a constant oscillation amplitude of \(A=20\) pm. Both systems were operated in ultrahigh vacuum conditions, with typical chamber pressures below 2\(\times 10^{-10}\) mbar, and at base temperatures of 1.1 K and 4.5 K, respectively. In the Specs JT-STM, a magnetic field up to 3 T can be applied perpendicular to the sample surface. The bias voltage \(V\) is quoted in sample bias convention. Topographic images were obtained in the constant current imaging mode with \(V\) and tunnel current \(I\) set to \(V_{\mathrm{set}}\) and \(I_{\mathrm{set}}\) respectively. Differential conductance d\(I\)/d\(V\) measurements are obtained using a lock-in amplifier, with typical modulation voltages of 150 μV at \(\sim\)840 Hz added to \(V\). Spectroscopy is acquired by initially setting \(V\)=\(V_{set}\) and \(I\)=\(I_{set}\), disabling the feedback loop to maintain the position of the tip, and then sweeping \(V\) while recording \(I\), d\(I\)/d\(V\), and/or the shift of the resonance frequency \(\Delta\!f\). Cu(001) samples (MaTeck single crystal with 99.999% purity) were prepared by repeated cycles of sputtering with Ar and annealing to 500 °C. Cu\({}_{2}\)N is prepared on top of clean Cu(001) samples by sputtering with N\({}_{2}\) and annealing to 350 °C [36]. Deposition of NaCl was performed using a Knudsen effusion cell operated at 490 °C and with the Cu\({}_{2}\)N/Cu(001) substrate at room temperature [27]. Topographic images obtained using STM were processed using WSxM [37]. ## Lateral atomic manipulation. Co atoms on top of Na sites on NaCl on Cu\({}_{2}\)N/Cu(001) can be laterally manipulated to the next Na atomic position by approaching the STM tip in closed feedback mode to maintain a constant \(I_{set}\) and (simultaneously) decreasing \(V\). Typical starting conditions are \(V_{\mathrm{set}}\)=-1.3 V and \(I_{\mathrm{set}}\)=10 pA.
\(V\) is then decreased until a sharp jump in the tip position is observed, typically when \(V\)\(<100\) mV. The tip is then moved across the NaCl ML island while the Co follows the tip. The speed of the tip during lateral manipulation is below 10 nm/s.
2309.16302
Forming complex neurons by four-wave mixing in a Bose-Einstein condensate
A physical artificial complex-valued neuron is formed by four-wave mixing in a homogeneous three-dimensional Bose-Einstein condensate. Bragg beamsplitter pulses prepare superpositions of three plane-wave states as an input signal and the fourth wave as an output signal. The nonlinear dynamics of the non-degenerate four-wave mixing process leads to Josephson-like oscillations within the closed four-dimensional subspace and defines the activation function of a neuron. Due to the high number of symmetries, closed-form solutions can be found by quadrature and agree with numerical simulation. The ideal behaviour of an isolated four-wave mixing setup is compared to a situation with additional population of rogue states. We observe a robust persistence of the main oscillation. As an application for neural learning of this physical system, we train it on the XOR problem. After $100$ training epochs, the neuron responds to input data correctly at the $10^{-5}$ error level.
Kai Niklas Hansmann, Reinhold Walser
2023-09-28T09:54:28Z
http://arxiv.org/abs/2309.16302v1
# Forming complex neurons by four-wave mixing in a Bose-Einstein condensate ###### Abstract A physical artificial complex-valued neuron is formed by four-wave mixing in a homogeneous three-dimensional Bose-Einstein condensate. Bragg beamsplitter pulses prepare superpositions of three plane-waves states as an input- and the fourth wave as an output signal. The nonlinear dynamics of the non-degenerate four-wave mixing process leads to Josephson-like oscillations within the closed four-dimensional subspace and defines the activation function of a neuron. Due to the high number of symmetries, closed form solutions can be found by quadrature and agree with numerical simulation. The ideal behaviour of an isolated four-wave mixing setup is compared to a situation with additional population of rogue states. We observe a robust persistence of the main oscillation. As an application for neural learning of this physical system, we train it on the XOR problem. After 100 training epochs, the neuron responds to input data correctly at the \(10^{-5}\) error level. ## I Introduction Neural networks and deep learning methods have evolved dynamically into a far-reaching research field [1]. Nowadays, applications can be found in diverse areas like bio-chemistry [2], medicine [3], image analysis [4], computer-games [5; 6], gravitational-wave detection [7] and sundry more. There exists a variety of implementations of artificial neural networks: electronic implementations using graphical processing units [8; 9], but also other physical implementations can be considered [10; 11]. In particular, optical implementations receive a lot of attention [12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. The key issue in setting up a novel physical implementations of artificial neural networks is the description of their constituents, the artificial neurons. Diverse approaches can realize artificial neurons in photonic systems [22; 23; 24; 25; 26]. In this paper, we consider an artificial neuron using the inherent nonlinearity of ultracold coherent bosonic matter-waves. Coherent matter-waves show a wide range of nonlinear effects, which, for example, have been used to detect the phase transition towards a Bose-Einstein condensate (BEC) experimentally [27; 28]. For our purposes, we investigate the process of four-wave mixing (FWM) in coherent matter-waves, which is well-known from nonlinear optics [29]. If phase-matching conditions are fulfilled in a nonlinear optical medium, three frequencies interact in a way such that an initially absent fourth frequency can be observed. Following the advent of the BEC, theoretical investigations [30; 31; 32; 33; 34], as well as experiments [35; 36] demonstrated the equivalent FWM process. There, momentum components of the BEC took over the role of optical frequencies from the initial scenario. In an idealized homogeneous BEC, we can show that the FWM process of plane waves exhibits Josephson-like oscillations [37; 38; 39; 40; 41; 42; 43]. We utilize this highly nonlinear process to implement a complex-valued neuron, where we identify three momentum components as input and the fourth component as output. The input-output-relations of the neuron are highly nonlinear and will be investigated in detail. As an application, we train the FWM neuron on the benchmark XOR problem. The paper is organized as follows. In Sec. II, we introduce the isolated FWM problem in a three-dimensional homogeneous BEC, revealing the dynamics of populations and phases of the four momentum components. In Sec. 
III, we solve the FWM dynamics analytically in form of Josephson oscillations. After the investigation of FWM under ideal conditions, we look at the influence of additional population in momentum components outside of the FWM manifold in Sec. IV. Finally, we introduce the artificial FWM neuron in Sec. V, discuss the nonlinear activation function of the neuron and introduce the steepest descent learning method for complex-valued neurons. As an application, we train the FWM neuron on the XOR problem. In the appendix, we discuss the preparation of FWM input states using Bragg pulses well known in atom interferometry [44; 45; 46]. ## II Ideal four-wave mixing The dynamics of a weakly interacting BEC described by the order parameter \(\psi(\mathbf{r},t)\) are given by the Gross-Pitaevskii equation [47; 48] \[i\hbar\partial_{t}\psi= \left[-\frac{\hbar^{2}}{2m}\nabla^{2}+U+gn\right]\psi, \tag{1}\] where \(n(\mathbf{r})=|\psi(\mathbf{r})|^{2}\) is the density, \(N=\int\mathrm{d}^{3}r\,n\) is the total particle number, \(U(\mathbf{r})\) describes an external potential and the coupling constant \(g=4\pi\hbar^{2}a_{s}/m\) is proportional to the atomic s-wave scattering length \(a_{s}\) and the mass of the atoms \(m\). The Gross-Pitaevskii Lagrangian func tional for such a system reads [49; 50] \[L= \int\mathrm{d}^{3}r\;(i\hbar\psi^{*}\partial_{t}\psi-\mathcal{E}), \tag{2}\] \[\mathcal{E}= \frac{\hbar^{2}}{2m}|\nabla\psi|^{2}+Un+\frac{g}{2}n^{2}. \tag{3}\] In the following, we consider the case of a homogeneous BEC with \(U=0\) and periodic boundary conditions. Then, a wave function \(|\psi\rangle=|\psi_{\alpha}\rangle+|\psi_{\beta}\rangle\), is a coherent superposition of plane waves \(|\mathbf{k}_{j}\rangle\) with complex amplitudes \(\alpha_{j}\) and \(\beta_{l}\). It consists of a FWM state \[|\psi_{\alpha}\rangle=\sum_{j=1}^{4}\sqrt{N}\alpha_{j}\left|\mathbf{k}_{j}\right\rangle \tag{4}\] and a residual wave \[|\psi_{\beta}\rangle=\sum_{l>4}\sqrt{N}\beta_{l}\left|\mathbf{k}_{l}\right\rangle, \tag{5}\] which is orthogonal \(\langle\psi_{\alpha}|\psi_{\beta}\rangle=0\) to the FWM state. The complex amplitudes \(\alpha_{j}\), in terms of absolute value and phase, are given by \[\alpha_{j}=\sqrt{n_{j}}e^{-i\varphi_{j}}. \tag{6}\] Thus, \(n_{j}=|\alpha_{j}|^{2}\) is the probability to be in the momentum state \(|\mathbf{k}_{j}\rangle\). The mode functions \(\langle\mathbf{r}|\mathbf{k}_{j}\rangle=\mathrm{e}^{i\mathbf{k}_{j}\mathbf{r} }\,/\sqrt{V}\) are normalized in a cuboid with lengths \((L_{1},L_{2},L_{3})\) and a volume \(V=L_{1}L_{2}L_{3}\). For periodic boundary conditions, the wave-numbers \(k_{j}=2\pi\kappa_{j}/L_{j}\) are quantized with \(\kappa_{j}\in\mathbb{Z}\) and the plane wave states are orthonormal \(\langle\mathbf{k}_{i}|\mathbf{k}_{j}\rangle=\delta_{ij}\). The conditions for FWM are energy and momentum conservation [35] \[\omega_{1}+\omega_{2}=\omega_{3}+\omega_{4},\hskip 28.452756pt\mathbf{k}_{1}+ \mathbf{k}_{2}=\mathbf{k}_{3}+\mathbf{k}_{4}, \tag{7}\] for the dispersion relation of massive particles \[\omega_{j}=\omega(\mathbf{k}_{j})=\frac{\hbar|\mathbf{k}_{j}|^{2}}{2m}. \tag{8}\] Experimentally, momentum states fulfilling these conditions can be prepared using atomic beamsplitters based on Bragg diffraction [44; 51], as discussed in App. A. In the ideal FWM scenario, the residual wave is absent, \(\beta_{l}=0\). Consequently, \(\sum_{j=1}^{4}n_{j}=1\). 
In this case, the nondimensionalization of the physical Lagrangian functional (2) is achieved by measuring time \(\tau=\gamma t\) by a clock that ticks with frequency \(\gamma=gN/\hbar V\), by scaling the frequencies \(\bar{\omega}_{j}=\omega_{j}/\gamma\), as well as scaling and shifting the Lagrangian function \(\mathcal{L}=1+VL/gN^{2}\). Thus, the mathematical Lagrangian functional reads \[\mathcal{L} =\sum_{j=1}^{4}i\alpha_{j}^{*}\dot{\alpha}_{j}-\mathcal{E}, \tag{9}\] \[\mathcal{E} =\sum_{j=1}^{4}\varepsilon_{j}+2(\alpha_{1}^{*}\alpha_{2}^{*} \alpha_{3}\alpha_{4}+\mathrm{c.c.}), \tag{10}\] where \(\dot{\alpha}_{j}\) denotes \(\partial_{\tau}\alpha_{j}\) and the mean-field shifted single particle energies \(\varepsilon_{j}\) and chemical potentials \(\mu_{j}\) are defined as \[\varepsilon_{j}=\bar{\omega}_{j}n_{j}-\frac{n_{j}^{2}}{2},\hskip 21.681pt\mu_{j} =\frac{\partial\varepsilon_{j}}{\partial n_{j}}=\bar{\omega}_{j}-n_{j}. \tag{11}\] According to the Euler-Lagrange equations [41] \[\frac{\mathrm{d}}{\mathrm{d}\tau}\frac{\partial\mathcal{L}}{\partial\dot{ \alpha}_{j}}=\frac{\partial\mathcal{L}}{\partial\alpha_{j}}, \tag{12}\] the complex amplitudes evolve as \[\begin{split} i\dot{\alpha}_{1}&=\mu_{1}\alpha_{1}+2 \alpha_{2}^{*}\alpha_{3}\alpha_{4},\\ i\dot{\alpha}_{2}&=\mu_{2}\alpha_{2}+2\alpha_{1}^{* }\alpha_{3}\alpha_{4},\\ i\dot{\alpha}_{3}&=\mu_{3}\alpha_{3}+2\alpha_{4}^{* }\alpha_{1}\alpha_{2},\\ i\dot{\alpha}_{4}&=\mu_{4}\alpha_{4}+2\alpha_{3}^{* }\alpha_{1}\alpha_{2}.\end{split} \tag{13}\] Clearly, these equations are highly symmetric, which can be explored using the polar decomposition of the complex amplitudes (6). Most notably is the interaction term in (10). It will coherently couple the subspaces \(\{\left|\mathbf{k}_{1}\right\rangle,\left|\mathbf{k}_{2}\right\rangle\} \longleftrightarrow\{\left|\mathbf{k}_{3}\right\rangle,\left|\mathbf{k}_{4} \right\rangle\}\) through the relative phase-difference \(\phi=\varphi_{1}+\varphi_{2}-\varphi_{3}-\varphi_{4}\) and population imbalance \(m=n_{1}+n_{2}-n_{3}-n_{4}\). To confirm this first impression, we evaluate the dynamics of the system (13) numerically. In Fig. 1 (a), we show periodic oscillations of the populations \(n_{j}(\tau)\) and quasi-linear evolution of the phases \(\varphi_{j}(\tau)\). In the \(\phi\)-\(m\) phase-space, one finds periodic as well as aperiodic orbits of a mathematical pendulum with a separatrix in between (see Fig. 1 (b)). The population imbalance \(m\) is constrained by the conserved quantities \(m_{12}=n_{1}-n_{2}\) and \(m_{34}=n_{3}-n_{4}\) discussed in Sec. III. ## III Josephson oscillations of four-wave mixing amplitudes ### Coordinate transformation Due to the Lagrangian field theory, the time-independent Hamiltonian energy (2) and the FWM state ansatz (4), we obtain a discrete nonlinear set of four Hamiltonian equations with a number of symmetries. This constrains the dynamics to a two-dimensional phase-space, analogous to the mathematical pendulum. Due to the phase-invariant structure of the self-energy \(gn^{2}\), typical Josephson oscillations [37; 38; 39; 40; 41; 42; 43] emerge. Similar equations appear in the study of semiclassical methods in the theory of Rydberg atoms [52]. 
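To make the four-mode dynamics of Eq. (13) concrete, the following is a minimal numerical sketch (not part of the paper) that integrates the amplitude equations with SciPy and extracts the Josephson variables \(m\) and \(\phi\) discussed around Fig. 1; the initial populations and phases are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

wbar = np.ones(4)                              # dimensionless single-particle frequencies

def rhs(tau, y):
    a = y[:4] + 1j * y[4:]                     # repack real vector into complex amplitudes
    mu = wbar - np.abs(a) ** 2                 # chemical potentials, Eq. (11)
    da = -1j * np.array([                      # four-mode equations, Eq. (13)
        mu[0] * a[0] + 2 * np.conj(a[1]) * a[2] * a[3],
        mu[1] * a[1] + 2 * np.conj(a[0]) * a[2] * a[3],
        mu[2] * a[2] + 2 * np.conj(a[3]) * a[0] * a[1],
        mu[3] * a[3] + 2 * np.conj(a[2]) * a[0] * a[1]])
    return np.concatenate([da.real, da.imag])

# illustrative initial state: n = (0.375, 0.375, 0.125, 0.125), all phases zero
a0 = np.sqrt([0.375, 0.375, 0.125, 0.125]).astype(complex)
sol = solve_ivp(rhs, (0.0, 20.0), np.concatenate([a0.real, a0.imag]),
                rtol=1e-10, atol=1e-12, max_step=0.01)

a = sol.y[:4] + 1j * sol.y[4:]
n = np.abs(a) ** 2
m = n[0] + n[1] - n[2] - n[3]                                  # population imbalance
phi = -np.angle(a[0] * a[1] * np.conj(a[2]) * np.conj(a[3]))   # phi1+phi2-phi3-phi4
print(m[:5], phi[:5])
```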
Guided by this idea, we introduce adapted coordinates \[\alpha_{1}=\sqrt{n_{1}}\,\mathrm{e}^{-i(\Phi+\phi/4+\varphi)}, \alpha_{2}=\sqrt{n_{2}}\,\mathrm{e}^{-i(\Phi+\phi/4-\varphi)}, \tag{14}\] \[\alpha_{3}=\sqrt{n_{3}}\,\mathrm{e}^{-i(\Phi-\phi/4+\theta)}, \alpha_{4}=\sqrt{n_{4}}\,\mathrm{e}^{-i(\Phi-\phi/4-\theta)}\,.\] From the global phase invariance of (2) or (9), one finds that the total occupation \(\sum_{j=1}^{4}n_{j}=\mathrm{const}\). This can be used to construct a generating function \(R(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\Phi,\phi,\varphi,\theta)\) as \[R=\frac{i}{2}\,\mathrm{e}^{2i\Phi}\Big{(}\alpha_{1}^{2}\,\mathrm{ e}^{2i(\phi/4+\varphi)}\,+\alpha_{2}^{2}\,\mathrm{e}^{2i(\phi/4-\varphi)}\\ +\alpha_{3}^{2}\,\mathrm{e}^{2i(-\phi/4+\theta)}\,+\alpha_{4}^{2} \,\mathrm{e}^{2i(-\phi/4-\theta)}\Big{)}. \tag{15}\] According to the rules of Hamiltonian mechanics [53], this generating function relates old coordinates \((\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\) to new coordinates \((\Phi,\phi,\varphi,\theta)\). In turn, one can obtain the old momenta \[\pi_{j}=\frac{\partial R}{\partial\alpha_{j}}=i\alpha_{j}^{*}, \tag{16}\] as well as the new momenta \[P_{\Phi}= -\frac{\partial R}{\partial\Phi}=n_{1}+n_{2}+n_{3}+n_{4}, \tag{17}\] \[P_{\phi}= -\frac{\partial R}{\partial\phi}=\frac{n_{1}+n_{2}-n_{3}-n_{4}}{ 4}\equiv\frac{m}{4},\] (18) \[P_{\varphi}= -\frac{\partial R}{\partial\varphi}=n_{1}-n_{2}\equiv m_{12},\] (19) \[P_{\theta}= -\frac{\partial R}{\partial\theta}=n_{3}-n_{4}\equiv m_{34}. \tag{20}\] In terms of the new coordinates the dimensionless Lagrangian \(\mathcal{L}\) reads \[\mathcal{L}= \dot{\Phi}+\frac{m}{4}\dot{\phi}+m_{12}\dot{\varphi}+m_{34}\dot{ \theta}-H(m,\phi), \tag{21}\] with a generic Josephson Hamiltonian energy \[H(m,\phi)=\frac{\eta}{4}\cos\phi-\frac{m^{2}}{8}+\mathcal{C}, \tag{22}\] \[\eta=\sqrt{[(1+m)^{2}-4m_{12}^{2}]\,[(1-m)^{2}-4m_{34}^{2}]}. \tag{23}\] Here, we have denoted an energy offset \(\mathcal{C}=(m_{12}^{2}+m_{34}^{2}+2(\bar{\omega}_{12}m_{12}+\bar{\omega}_{34 }m_{34})-7/2+\sum_{j=1}^{4}\bar{\omega}_{j})/4\) and transition energies \(\bar{\omega}_{12}=\bar{\omega}_{1}-\bar{\omega}_{2}\) and \(\bar{\omega}_{34}=\bar{\omega}_{3}-\bar{\omega}_{4}\). As \(\mathcal{L}\) does not depend on \(\Phi\), \(\varphi\) or \(\theta\), these phases are cyclic [54]. Therefore, the conjugate momenta, total particle number \(N\) and population differences \(m_{12}\) (19), \(m_{34}\) (20) are conserved. Consequently, the equations of motion for \(\Phi\), \(\varphi\) and \(\theta\) can be solved by quadrature. Clearly \(H\) (22) is the Legendre transform of \(\mathcal{L}\) (21). Accordingly, the dynamics of the system, using (18) reads \[\dot{\phi}=4\partial_{m}H=\cos\phi\partial_{m}\eta-m, \tag{24}\] \[\dot{m}=-4\partial_{\phi}H=\eta\sin\phi.\] These are Josephson-like differential equations [38; 39; 40]. ### General solution In simple classical mechanics problems of particles with position \(x\) and momentum \(p\), Hamiltonian energies \(H(x,p)=T(p)+V(x)\) separate into kinetic \(T(p)\) and potential \(V(x)\) energy. At the turning points \(\dot{x}=\partial_{p}H=0\), the Hamilton function is purely determined by potential energy \(H(x,p=0)=V(x)\). A similar investigation can be performed in the given case [55; 52]. Through a canonical transformation, we can exchange the role of position and momentum and consider \(m\) as the position and \(\phi\) as the momentum variable. 
Thus, at the turning points \(\dot{m}=-4\partial_{\phi}H=0\), remarkably, two momenta \[\phi^{+}= 0, \phi^{-}= \pi. \tag{25}\] are possible. In turn, this defines two potentials \[H(m,\phi^{\pm})=V^{\pm}(m)=\pm\frac{\eta}{4}-\frac{m^{2}}{8}+\mathcal{C}. \tag{26}\] Physical solutions with energies \(\varepsilon=H(m,\phi)\) must be constrained by these two potentials, \(V^{-}<\varepsilon<V^{+}\). This limits the value range of \(m\) and \(\phi\) depending on the system parameters \(m_{12}\) and \(m_{34}\) (see Fig. 2). As the energy of the system is conserved, the equation of motion (24) for \(m(\tau)\) can be expressed using the potentials \(V^{\pm}\) as \[\dot{m}=\pm 4\sqrt{(V^{+}(m)-\varepsilon)(\varepsilon-V^{-}(m))}. \tag{27}\] Thus, the dynamical solution \(\tau(m)\) can be calculated as \[\tau(m)-\tau_{0}=\int_{m_{0}}^{m}\frac{\pm\mathrm{d}\zeta}{4\sqrt{(V^{+}(\zeta)-\varepsilon)(\varepsilon-V^{-}(\zeta))}}. \tag{28}\] This relation can be inverted piecewise to obtain \(m(\tau)\). ### Analytical solution for \(m_{12}=m_{34}=0\) For the special case \(m_{12}=m_{34}=0\), implying \(n_{1}=n_{2}\) and \(n_{3}=n_{4}\), an analytical expression for the dynamical solution \(m(\tau)\) can be given in terms of the elliptic cosine \(\mathrm{cn}(u)\)[56] as \[m(\tau)=\pm\sqrt{\frac{\mu+2}{3}}\ \mathrm{cn}\left(\xi(\tau-\tau_{0}),\rho^{2}\right), \tag{29}\] where \(\mu=m_{0}^{2}+2(m_{0}^{2}-1)\cos\phi_{0}\), \(\xi=\sqrt{6-3\mu}/2\) and \(\rho^{2}=(\mu+2)/(6-3\mu)\). With that, the dynamical solution of the phase \(\phi(\tau)\) can be calculated by integration of (24), yielding \[\phi(\tau)=2\arctan\big\{\sqrt{3}\tanh\big[\ln(1-\rho)-\ln\big(\mathrm{dn}\left(\xi(\tau-\tau_{0}),\rho^{2}\right)-\rho\,\mathrm{cn}\left(\xi(\tau-\tau_{0}),\rho^{2}\right)\big)+\mathrm{arctanh}\big(\tan(\phi_{0}/2)/\sqrt{3}\big)\big]\big\}, \tag{30}\] with the delta amplitude \(\mathrm{dn}(u)\)[56]. The analytical solutions for \(m(\tau)\) and \(\phi(\tau)\) as well as visualizations of the potentials \(V^{+}\) and \(V^{-}\) can be seen in Fig. 2. The period of the motion can be calculated as \[T=\frac{4\ \mathrm{K}(\rho^{2})}{\xi}. \tag{31}\] There, K is the complete elliptic integral of the first kind [56]. The basic oscillation period \(T_{0}=T(m_{0}=0)\) can be calculated as \[T_{0}=4\pi/\sqrt{12}. \tag{32}\] As can be seen in Fig. 3, the period of the FWM oscillation diverges when nearing the regime of aperiodic solutions. ## IV Four-wave mixing with background population In the ideal FWM setting, the residual wave \(|\psi_{\beta}\rangle\) is absent. However, additional momentum states might be populated accidentally during the initialization procedure or system evolution. To investigate this scenario, we simulate the dynamics of the system, described by the Gross-Pitaevskii equation (1), using a Runge-Kutta scheme and Fast Fourier Transforms (FFT) on a discrete periodic lattice. We use a two-dimensional lattice with \(16\times 16\) sites while setting \(\gamma=1/\mathrm{s}\) and discretizing dimensionless time with \(\Delta\tau=10^{-6}\). For implementation we choose the geometry of FWM states described in App. A, yielding \(\bar{\omega}_{j}=1\) for \(j=1,\ldots,4\). The populations are set to \(n_{1}=n_{2}=0.375\) and \(n_{3}=n_{4}=0.125\), resulting
Figure 3: Period of the FWM oscillation \(T\) versus initial population imbalance \(m_{0}\), normalized to \(T_{0}\). The period diverges when nearing aperiodic solutions.
Figure 2: (a)-(c) Potentials \(V^{+}(m)\) (blue) and \(V^{-}(m)\) (orange) versus population imbalance \(m\) (26) for \(\bar{\omega}_{j}=1\). For \(m_{12}=m_{34}\) equal to (0.1 in (a), 0.3 in (b)), the potentials are symmetric around the origin. Otherwise, this symmetry is broken (\(m_{12}=0.1\), \(m_{34}=0.3\) in (c)). (d) The potentials can be shifted in dimensionless energy by varying the value of the recoil frequencies (\(\bar{\omega}_{j}=1.1\)). (e) For \(m_{12}=m_{34}=0\), \(V^{+}\) and \(V^{-}\) are symmetric in \(m\) and \(m\in[-1,1]\). For initial values \(m_{0}=0.5\), \(\phi_{0}=0\) and \(\tau_{0}=0\), the dynamics of the Josephson variables described by (29) and (30) are shown in (f) and (g). The energy is constant during the oscillation (green in (e)). in \(m_{12}=m_{34}=0\) and \(m_{0}=0.5\). All phases are set to \(\varphi_{j}=0\), yielding \(\phi_{0}=0\). As can be seen in Fig. 4, the numerical results of the GP simulation start to deviate from the four-mode approximation (13) already after a few cycles, noticeably. Looking at \(m(\tau)\) and \(\phi(\tau)\), the numerical results show a larger period of the oscillation. However, the general shape of the oscillations remains unchanged. This behaviour is caused by an instability of the simulation due to numerical noise of the FFT producing population on the grid outside of the FWM states. As depicted in Fig. 5 (a), the system is prepared at \(\tau=0\) with population only present in the FWM states. However, the histogram in Fig. 5 (b) at \(\tau=5\) clearly shows that additional states in the vicinity of the FWM states have been populated. As this background population is located at the center of the lattice, the chosen grid is large enough such that no edge effects occur during the simulation. Yet, the instability caused by accidental population of additional momentum states is not destructive in nature. Looking at Fig. 5 (c), the total background population \[n_{B}=\sum_{l>4}|\beta_{l}|^{2} \tag{33}\] grows rapidly at the beginning of the oscillation. Subsequently, the dynamics of \(n_{B}(\tau)\) stabilize and show oscillations with a maximum value of around \(n_{B}\simeq 5\cdot 10^{-4}\). As can be seen in Fig. 5 (d), the frequency of the ensuing oscillation is about 50 times larger than the FWM frequency \[\nu_{F}=\frac{1}{T(m_{0}=0.5)}\simeq 0.244. \tag{34}\] The non-negligible background population is the cause of change in the dynamics of the FWM process. Because of \[n_{F}+n_{B}= 1, n_{F}= \sum_{j=1}^{4}|\alpha_{j}|^{2}, \tag{35}\] growing \(n_{B}\) reduces the population in the FWM states \(n_{F}\) in comparison to the ideal case. As the FWM process is caused by the density-density-interaction terms in the Gross-Pitaevskii equation (1), even small changes in the particle number participating in the process have profound effects on the dynamics. The analytical solution can be recovered by eliminating all numerical noise produced by FFT after each simulation step. Using such masks in \(k\)-space, the numerical simulation and analytical solution agree within about Figure 4: Population imbalance \(m(\tau)\) and relative phase \(\phi(\tau)\) versus dimensionless time \(\tau\) for analytical (orange, dashed) and numerical GP simulation on a discrete periodic lattice (blue, solid). Figure 5: Histograms of populations on discrete \(16\times 16\) lattice in \(k_{x}\)-\(k_{y}\)-plane at \(\tau=0\) (a) and \(\tau=5.0\) (b). (c) Background population \(n_{B}\) starts oscillating and quickly reaches maximum value. 
(d) Oscillation frequency of \(n_{B}\) is about 50 times bigger than the FWM frequency \(\nu_{F}\). \(10^{-5}\) (see Fig. 6). However, this procedure yields a loss in total particle number of about \(\Delta N/N=10^{-6}\), far surpassing typical numerical noise. For the implementation of the FWM neuron, we are interested in rather short time scales and more qualitative behaviour of the system. Therefore, we accept the change in frequency of the FWM oscillations and use the simulation on a discrete periodic lattice in the investigations without additionally applying a filter mask in \(k\)-space. This is beneficial due to the high flexibility of the simulation regarding initial conditions of the FWM states. However, the deviation between the ideal case and with present background population should be kept in mind, especially when looking at increasing simulation times. ## V Four-wave mixing neuron Artificial neurons are the basic computation units in neural networks [57]. In addition to the processing of real numbers, such systems are also able to operate with complex-valued inputs and outputs [58]. As the FWM process is described in terms of complex amplitudes \(\alpha_{j}\), the presented implementation of the FWM neuron constitutes a complex-valued neuron. Due to the experimental accessibility of particle numbers and phases, we choose to describe the nonlinear activation function and the learning process in terms of absolute values and phases, rather than using real and imaginary parts of the complex amplitudes \(\alpha_{j}\)[58]. In general, complex-valued artificial neurons process an \(n\)-dimensional input \(x_{j}=|x_{j}|e^{i\kappa_{j}}\), \(j=1,\ldots,n\), by multiplying individually with weights \(w_{j}=|w_{j}|e^{i\theta_{j}}\), summing up the weighted inputs \(v_{j}=w_{j}x_{j}\) and yielding an output \(y\) via a nonlinear activation function \(\Omega\), \[y= \Omega(u), u=\sum_{j=1}^{n}v_{j}. \tag{36}\] We implement such a computational unit with the FWM process on coherent matter-waves. The phase-flow \[\tilde{\mathbf{\alpha}}=\Phi(\mathbf{\alpha};\tau_{F}), \tag{37}\] maps the initial state \(\mathbf{\alpha}=(\alpha_{1},\ldots,\alpha_{4})\) to the evolved state \(\tilde{\mathbf{\alpha}}=(\tilde{\alpha}_{1},\ldots,\tilde{\alpha}_{4})\) after the duration \(\tau_{F}\) of the FWM process (13). Identifying the three amplitudes \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\) as weighted inputs \(v_{j}\) and \(\tilde{\alpha}_{4}\) as output \(y\), a similar, though not identical, rule to (36) can be established \[\tilde{\alpha}_{4}=\Phi_{4}(\alpha_{1},\alpha_{2},\alpha_{3},0;\tau_{F}). \tag{38}\] The fourth component of the phase-flow map constitutes a nonlinear activation function of a complex-valued FWM neuron with three input channels. In an experiment, we use externally stored weights \(w_{j}\) for the neuron, the classical input data \(x_{j}\) and prepare the weighted input amplitude \[\alpha_{j}=w_{j}x_{j} \tag{39}\] by a sequence of Bragg pulses (see App. A). ### Nonlinear activation function In order to quantify the nonlinear activation function, \(\tau_{F}\) has to be determined. To do so, we choose \(n_{1}=n_{2}=0.45\) and \(n_{3}=0.1\), while setting \(\varphi_{j}=0\). The resulting FWM oscillation can be seen in Fig. 7. To maximize the output in terms of \(\tilde{n}_{4}\) for this scenario, we set \[\tau_{F}=T/2, \tag{40}\] where \(T\) is the oscillation period as in (31). The FWM neurons response is calculated for varying weighted inputs numerically (cf. Sec. IV). 
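As a rough illustration of the neuron rule (36)–(40) (a sketch reusing the same four-mode integrator as in the earlier snippet, not the lattice simulation actually used in the paper), the phase flow can be propagated for a duration \(\tau_{F}\) and the fourth amplitude read out as the output; the weights, inputs and the value of \(\tau_{F}\) below are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

def phase_flow(alpha_in, tau_F, wbar=np.ones(4)):
    """Propagate the four-mode equations (13) and return the state at tau_F."""
    def rhs(tau, y):
        a = y[:4] + 1j * y[4:]
        mu = wbar - np.abs(a) ** 2
        da = -1j * np.array([
            mu[0] * a[0] + 2 * np.conj(a[1]) * a[2] * a[3],
            mu[1] * a[1] + 2 * np.conj(a[0]) * a[2] * a[3],
            mu[2] * a[2] + 2 * np.conj(a[3]) * a[0] * a[1],
            mu[3] * a[3] + 2 * np.conj(a[2]) * a[0] * a[1]])
        return np.concatenate([da.real, da.imag])
    y0 = np.concatenate([alpha_in.real, alpha_in.imag])
    y = solve_ivp(rhs, (0.0, tau_F), y0, rtol=1e-10, atol=1e-12).y[:, -1]
    return y[:4] + 1j * y[4:]

def fwm_neuron(x, w, tau_F):
    """Complex FWM neuron: weighted inputs on modes 1-3, output read from mode 4."""
    alpha = np.array([w[0] * x[0], w[1] * x[1], w[2] * x[2], 0.0 + 0.0j])  # Eq. (39)
    return phase_flow(alpha, tau_F)[3]                                     # Eq. (38)

# illustrative call: n1 = n2 = 0.45, n3 = 0.1, unit weights, tau_F near T0/2
x = np.sqrt([0.45, 0.45, 0.10]).astype(complex)
w = np.ones(3, dtype=complex)
print(fwm_neuron(x, w, tau_F=1.8))
```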
We tune \(n_{j}\) from 0 to 1 subject to the constraint \(\sum_{j=1}^{4}n_{j}=1\). Due to probability (number) conservation, all admissible combinations of \(n_{j}\) form a plane in \(n_{1}\)-\(n_{2}\)-\(n_{3}\)-space. The input phases \(\phi_{j}\) are varied from 0 to \(2\pi\). The results can be seen in Fig. 8. The output particle number \[\tilde{n}_{4}=|\Phi_{4}(\alpha_{1},\alpha_{2},\alpha_{3},0;\tau_{F})|^{2} \tag{41}\] is independent of the input phases \(\varphi_{j}\). Hence, only the input particle numbers \(n_{j}\) determine this part of the output. While there is no analytical expression for the relation, it can be extracted from Fig. 8, that there has to be an exchange symmetry regarding \(n_{1}\) and \(n_{2}\). Figure 7: Initialisation sequence for a FWM neuron. Classical inputs \(x_{j}\) are weighted with \(w_{j}\), yielding amplitudes \(\alpha_{j}\). The nonlinear relation \(\Phi_{4}(\alpha_{1},\alpha_{2},\alpha_{3},0;\tau_{F})\) yields output \(\tilde{\alpha}_{4}\). The duration \(\tau_{F}=T/2\) is determined for \(n_{1}=n_{2}=0.45\) (blue, solid; green, dashed) and \(n_{3}=0.1\) (orange, dash-dotted; \(n_{4}\): red, dotted), while \(\varphi_{j}=0\), as a half-oscillation period leading to maximal response. The output phase \[\tilde{\varphi}_{4}=\arg\left[\Phi_{4}(\alpha_{1},\alpha_{2},\alpha_{3},0;\tau_{F} )\right] \tag{42}\] exhibits a remarkable simple behaviour. By analyzing Fig. 8 (b), we find \[n_{\varphi}= 3n_{1}+3n_{2}+5n_{3},\ \ \ \ \varphi_{\varphi}= \varphi_{1}+\varphi_{2}-\varphi_{3}. \tag{43}\] Accordingly, the input-output-relation reads \[\tilde{\varphi}_{4}=sn_{\varphi}+\varphi_{\varphi}+d, \tag{44}\] where the slope and offset of phase were determined from a fit as \(s=(-1.77\pm 0.01)\) and \(d=(2.67\pm 0.04)\). The numerical results in Fig. 8 can be used to determine the partial derivatives \(\partial\tilde{n}_{4}/\partial n_{j}\), \(\partial\tilde{\varphi}_{4}/\partial n_{j}\) and \(\partial\tilde{\varphi}_{4}/\partial\varphi_{j}\). These are needed to be able to train the neuron according to a steepest descent method. ### Steepest descent learning for complex-valued neurons Steepest descent methods are common procedures in optimization, as well as in supervised learning in neural networks [59]. We consider the case of a single output neuron. In so-called error-correction learning, this neuron is stimulated by an input vector \(\mathbf{x}^{(i)}=(x_{1}^{(i)},x_{2}^{(i)},x_{3}^{(i)})\), where \(i\) denotes an instant in time at which the excitation is applied to the system. The training dataset is described by \[\mathcal{T}:\left\{\mathbf{x}^{(i)},\hat{\alpha}_{4}^{(i)};i=1,\ldots, \mathcal{M}\right\}, \tag{45}\] where \(\hat{\alpha}_{4}^{(i)}\) is the desired response associated with \(\mathbf{x}^{(i)}\) and \(\mathcal{M}\) is the size of the dataset. In response to this stimulus, the neuron produces an output \(\tilde{\alpha}_{4}^{(i)}\). Starting from initial weights \(\mathbf{w}=(w_{1},\ldots,w_{n})\), the goal of the learning procedure is to adjust the weights to minimize the difference between the desired and actual outputs, described by means of a cost function \(\mathcal{F}\). A typical cost function is the squared error averaged over the training sample set [60] \[\mathcal{F}=\frac{1}{\mathcal{M}}\sum_{i=1}^{\mathcal{M}}\mathcal{F}^{(i)}, \ \ \ \ \ \mathcal{F}^{(i)}=\frac{1}{2}\left|\tilde{\alpha}_{4}^{(i)}-\hat{\alpha}_{4}^{ (i)}\right|^{2}. 
\tag{46}\] This yields an unconstrained optimization problem with the necessary condition for optimality \(\nabla\mathcal{F}=0\), where \(\nabla\) denotes the gradient operator in weight space. In a steepest descent method, adjustments applied to the weight vector are performed in the direction of the negative gradient \[\Delta\mathbf{w}(n)=\mathbf{w}(n+1)-\mathbf{w}(n)=-\lambda\nabla\mathcal{F}, \tag{47}\] where \(n\) symbolizes one iteration and \(\lambda\) is a positive learning rate. In the on-line learning approximation [61], adjustments to the weights are performed on an example-by-example basis. The cost function to minimize is therefore the instantaneous error energy \(\mathcal{F}^{(i)}\). An epoch consists of \(\mathcal{M}\) training samples. At an instant \(i\), a pair \(\{\mathbf{x}^{(i)},\hat{\alpha}_{4}^{(i)}\}\) is presented to the neuron and weight adjustments are performed. Subsequently, the next sample is presented to the network until all \(\mathcal{M}\) samples have been evaluated. The absolute values and phases of the weights can be updated independently [58] \[\Delta|w_{j}^{(i)}|= -\lambda_{a}\partial_{|w_{j}|}\mathcal{F}^{(i)},\quad\Delta\vartheta_{j}^{(i)}=-\lambda_{p}\partial_{\vartheta_{j}}\mathcal{F}^{(i)}, \tag{48}\] where \(\lambda_{a}\) and \(\lambda_{p}\) are the learning rates for absolute value and phase, respectively. The required gradients for the update rules (48), keeping in mind the variable dependencies of the nonlinear activation function, are calculated using the chain rule as \[\frac{\partial\tilde{n}_{4}}{\partial|w_{j}|}= \frac{\partial\tilde{n}_{4}}{\partial n_{j}}\frac{\partial n_{j}}{\partial|w_{j}|}=|x_{j}|\frac{\partial\tilde{n}_{4}}{\partial n_{j}},\] \[\frac{\partial\tilde{\varphi}_{4}}{\partial|w_{j}|}= \frac{\partial\tilde{\varphi}_{4}}{\partial n_{j}}\frac{\partial n_{j}}{\partial|w_{j}|}=|x_{j}|\frac{\partial\tilde{\varphi}_{4}}{\partial n_{j}}, \tag{49}\] \[\frac{\partial\tilde{\varphi}_{4}}{\partial\vartheta_{j}}= \frac{\partial\tilde{\varphi}_{4}}{\partial\varphi_{j}}\frac{\partial\varphi_{j}}{\partial\vartheta_{j}}=\frac{\partial\tilde{\varphi}_{4}}{\partial\varphi_{j}}.\] Hence, the update rules for \(|w_{j}|\) and \(\vartheta_{j}\) are \[\Delta|w_{j}^{(i)}|= -\lambda_{a}\Big{[}\Big{(}\tilde{n}_{4}^{(i)}-\hat{n}_{4}^{(i)}\cos\Big{(}\tilde{\varphi}_{4}^{(i)}-\hat{\varphi}_{4}^{(i)}\Big{)}\Big{)}\,\partial_{n_{j}}\tilde{n}_{4}^{(i)}\] \[+\tilde{n}_{4}^{(i)}\sin\Big{(}\tilde{\varphi}_{4}^{(i)}-\hat{\varphi}_{4}^{(i)}\Big{)}\partial_{n_{j}}\tilde{\varphi}_{4}^{(i)}\Big{]}|x_{j}^{(i)}|, \tag{50}\] \[\Delta\vartheta_{j}^{(i)}= -\lambda_{p}\tilde{n}_{4}^{(i)}\hat{n}_{4}^{(i)}\sin\Big{(}\tilde{\varphi}_{4}^{(i)}-\hat{\varphi}_{4}^{(i)}\Big{)}\partial_{\varphi_{j}}\tilde{\varphi}_{4}^{(i)}. \tag{51}\] ### Application: XOR problem To investigate the calculation and learning abilities of the FWM neuron, we use it to solve the XOR problem. The input-output mapping for this problem is shown in Tab. 1. The XOR problem consists of two real-valued binary inputs. The output is supposed to be 0 if the two inputs are identical and 1 if they are different. It has been shown that this problem is not solvable for a single real-valued neuron, i.e. hidden layers are required [62]. However, a single complex-valued neuron is able to solve this problem [63].
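The following is a compact sketch (not from the paper) of one on-line steepest-descent step in the polar parametrization of Eq. (48). Since the true activation (41)–(42) is only available numerically, the particle-number response used here is an assumed toy function; only the phase response follows the fitted relation (43)–(44) with \(s=-1.77\), \(d=2.67\), and the gradients of Eq. (49) are taken by finite differences.

```python
import numpy as np

def activation(n, phi):
    """Stand-in for the numerically determined FWM response (41)-(42)."""
    n4 = 0.5 * n[0] * n[1] + 0.25 * n[2]        # placeholder toy response, NOT Eq. (41)
    phi4 = -1.77 * (3 * n[0] + 3 * n[1] + 5 * n[2]) + (phi[0] + phi[1] - phi[2]) + 2.67
    return n4, phi4

def cost(output, target):
    """Instantaneous error (46) written in polar variables."""
    (n4, p4), (nt, pt) = output, target
    return 0.5 * (n4 + nt - 2.0 * np.sqrt(n4 * nt) * np.cos(p4 - pt))

def online_step(w_abs, w_arg, x_abs, x_arg, target, lr_a=1e-3, lr_p=1e-8, eps=1e-6):
    """One on-line update of |w_j| and theta_j, Eq. (48), with numerical gradients."""
    def F(wa, wp):
        n = (wa * x_abs) ** 2                   # Eq. (39): sqrt(n_j) = |w_j||x_j|
        return cost(activation(n, wp + x_arg), target)
    base = F(w_abs, w_arg)
    grad_a, grad_p = np.zeros(3), np.zeros(3)
    for j in range(3):
        d = np.zeros(3); d[j] = eps
        grad_a[j] = (F(w_abs + d, w_arg) - base) / eps
        grad_p[j] = (F(w_abs, w_arg + d) - base) / eps
    return w_abs - lr_a * grad_a, w_arg - lr_p * grad_p

w_abs, w_arg = np.ones(3), np.zeros(3)
x_abs, x_arg = np.array([0.45, 0.45, 0.30]), np.zeros(3)   # illustrative input
print(online_step(w_abs, w_arg, x_abs, x_arg, target=(0.155, 2.0)))
```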
#### Input and output encoding
To use the full value range of the nonlinear activation function of the FWM neuron to solve the XOR problem, an encoding scheme for the inputs and the output has to be developed. The inputs \(x_{1,2}\) are chosen to lie on the positive real axis (\(\kappa_{j}=0\)). While an input 0 is identified by \(|x_{j}|=0.3\), an input 1 is given by \(|x_{j}|=0.45\). The weights \(w_{j}\) of the neuron are still allowed to possess non-vanishing phases \(\vartheta_{j}\). Therefore, the weighted inputs presented to the FWM neuron will be given by \[\sqrt{n_{j}}= |w_{j}||x_{j}|, \varphi_{j}= \vartheta_{j}. \tag{52}\] As two input particle numbers, chosen to be \(n_{1}\) and \(n_{2}\), of the FWM neuron are set using this encoding, the third, in this case \(n_{3}\), is automatically determined to ensure \(\sum_{j}n_{j}=1\). Consequently, the combinations of inputs \(n_{1}\) and \(n_{2}\) are constrained by \(0\leq n_{1}+n_{2}\leq 1\).
\begin{table} \begin{tabular}{|c c|c c|c||c c|} \hline Input 1 & Input 2 & \(|x_{1}|\) & \(|x_{2}|\) & Output & \(\tilde{n}_{4}\) & \(\tilde{\varphi}_{4}\) \\ \hline 0 & 0 & 0.3 & 0.3 & 0 & 0.125 & 1.5 \\ 0 & 1 & 0.3 & 0.45 & 1 & 0.155 & 2.0 \\ 1 & 0 & 0.45 & 0.3 & 1 & 0.155 & 2.0 \\ 1 & 1 & 0.45 & 0.45 & 0 & 0.435 & 2.5 \\ \hline \end{tabular} \end{table} Table 2: Encoded input-output mapping for the XOR problem using the FWM neuron.
Figure 9: Input-output relations (a) \(\tilde{n}_{4}(n_{1},n_{2})\) and (b) \(\tilde{\varphi}_{4}(n_{1},n_{2},0,0)\) of the FWM neuron to solve the XOR problem. (c) By choosing the outputs of the individual cases according to Tab. 2 (green, output 0; red, output 1), the XOR problem is solvable using a single FWM neuron.
The particle number response of the FWM neuron to the inputs is completely determined by the input particle numbers \(\tilde{n}_{4}(n_{1},n_{2})\). The neuron response in terms of the phase follows \[\tilde{\varphi}_{4}(n_{1},n_{2},\varphi_{1},\varphi_{2})=\tilde{\varphi}_{4}(n_{1},n_{2},0,0)+\varphi_{1}+\varphi_{2}. \tag{53}\] These input-output relations can be seen in Fig. 9. The possible outputs of the XOR problem are encoded in a similar fashion. An output \(0\) is encoded via \(\tilde{n}_{4}=0.125\) and \(\tilde{\varphi}_{4}=1.5\) or \(\tilde{n}_{4}=0.435\) and \(\tilde{\varphi}_{4}=2.5\) for the input cases \([0,0]\) and \([1,1]\), respectively. The output \(1\) is always encoded as \(\tilde{n}_{4}=0.155\) and \(\tilde{\varphi}_{4}=2\). The complete encoding of the XOR problem for the FWM neuron can be seen in Tab. 2. The presented encoding is completely equivalent to the original XOR problem. Hence, it can be used to solve the problem by means of the FWM neuron.
#### Training results
Starting from random initial weights, the update rules (50) and (51) are used to train the FWM neuron to solve the XOR problem. Training epochs are performed with \(\mathcal{M}=1000\) random samples. The learning rate of the phase \(\lambda_{p}=10^{-8}\) is kept constant for all epochs while the absolute value learning rate \(\lambda_{a}\) is gradually reduced from \(10^{-3}\) to \(10^{-4}\) during the training. After each epoch, the performance of the neuron is evaluated by calculating the averaged squared error \(\mathcal{F}\) according to (46) using all four possible input-output pairs of the XOR problem. As can be seen in Fig. 10, the FWM neuron is able to learn to solve the XOR problem. After 100 training epochs, the initial error is reduced to \(\mathcal{F}=7.8\cdot 10^{-6}\).
A sample is categorized as being identified correctly if the neuron output is within \(\pm 0.005\) in terms of particle number and within \(\pm 0.05\) in terms of phase of the desired value. At the end of the training procedure, every test sample is identified correctly.
Figure 10: Averaged squared error \(\mathcal{F}\) (46) over all four possible input-output pairs of the XOR problem versus number of training epochs.
## VI Conclusion & perspectives We investigated the ideal FWM process in a three-dimensional homogeneous BEC. By introducing appropriate coordinates, we showed that the dynamics of the system exhibit Josephson-like oscillations, which can be described analytically by means of elliptic functions. These analytical expressions agree with numerical simulations of the Gross-Pitaevskii equation on a discrete periodic lattice. We investigated the influence of additional population outside of the FWM states on these dynamics. While the frequency of the oscillations changes, the main characteristics of the dynamics persist. Identifying three complex amplitudes of the FWM setup as input and the fourth amplitude as output, we introduced a new implementation for a complex-valued artificial neuron. We investigated the nonlinear activation function of the FWM neuron and showed its learning capabilities using steepest descent learning for complex-valued neurons. These are demonstrated by solving the XOR problem using the FWM neuron. After completing 100 learning epochs, the FWM neuron was able to identify every test sample presented to it correctly. Looking ahead, we aim to implement the FWM neuron in a deep neural network. For this, two key aspects have to be investigated: parallelization ability and communication between layers of the network. Preliminary investigations showed that multiple FWM neurons can be run in parallel by stacking the FWM setup in momentum space. Furthermore, light sheets can be used to stack multiple FWM neurons in real space and run them in parallel. However, more effort has to be put into investigating whether and how such stacked FWM neurons influence each other. Transporting information through a FWM neural network would require a delicate sequence of Bragg pulses to initialize the output of one layer as input for the subsequent one. The exact nature of these pulse sequences has to be developed and investigated in detail. ###### Acknowledgements. This work is supported by the DLR German Aerospace Center with funds provided by the Federal Ministry for Economic Affairs and Energy (BMWi) under Grant No. 50WM2250E. ## Appendix A Four-wave mixing state preparation The desired state after initialization for FWM is a superposition of plane waves with wave vectors \(\mathbf{k}_{1}\), \(\mathbf{k}_{2}\), \(\mathbf{k}_{3}\) and \(\mathbf{k}_{4}\), fulfilling the conditions (7). There, all possible combinations for populations \(n_{1}\), \(n_{2}\), \(n_{3}\) and \(n_{4}\) with \(\sum_{j=1}^{4}n_{j}=1\) should be realizable. We suggest using atomic beamsplitters based on Bragg diffraction to populate the momentum states. This method is based on the interaction between the BEC in its internal ground state and two counterpropagating laser beams.
In this scenario, energy and momentum conservation have to hold [44], \[\hbar\omega_{1}+\frac{\hbar^{2}k_{i}^{2}}{2m}= \hbar\omega_{2}+\frac{\hbar^{2}k_{f}^{2}}{2m},\ \ \ \mathbf{k}_{i}+\mathbf{k}_{1}=\mathbf{k}_{f}+\mathbf{k}_{2}, \tag{10}\] with the initial \(\mathbf{k}_{i}\) and the final \(\mathbf{k}_{f}\) wave vector of the BEC and the frequencies \(\omega_{1}\) and \(\omega_{2}\) and wave vectors \(\mathbf{k}_{1}\) and \(\mathbf{k}_{2}\) of the laser beams respectively. If the two laser beams are perfectly anti-collinear, the momentum transfer in the BEC can be characterized as \[\mathbf{k}_{f}-\mathbf{k}_{i}=\mathbf{k}_{1}-\mathbf{k}_{2}=2\mathbf{k}_{L}, \tag{11}\] where \(\mathbf{k}_{L}=(\mathbf{k}_{1}+\mathbf{k}_{2})/2\). For a shallow lattice (\(U(\mathbf{r})=0\)), the ground state energy \(\hbar\omega_{g}\) of the BEC scales quadratically with the wave number, \(\omega_{g}\propto k^{2}\). Hence, the laser frequencies have to be chosen carefully, such that population transfer between momentum states is energetically permitted (see Fig. 11). A controlled initialization of momentum states can be performed, as initial states can be targeted individually and final states are given by the momentum and energy conditions (10). In Bragg diffraction, the portion of the population \(0\leq p_{j}\leq 1\) transferred between the momentum states can be controlled via the interaction duration between the BEC and the laser beams [44]. To avoid unwanted transitions outside of the FWM states, the preparation sequence shown in Fig. 11 and Table 3 is developed. For visualization, we choose \(\mathbf{k}_{1}=\hat{\mathbf{k}}_{x}\), \(\mathbf{k}_{2}=-\hat{\mathbf{k}}_{x}\), \(\mathbf{k}_{3}=\hat{\mathbf{k}}_{y}\) and \(\mathbf{k}_{4}=-\hat{\mathbf{k}}_{y}\). However, all combinations fulfilling (7) can be prepared by the described procedure. After the pulse sequence, the total particle number is transferred into the FWM states, \[(1-p_{3})(1-p_{2})+p_{3}(1-p_{2})+(1-p_{4})p_{2}+p_{4}p_{2}=1, \tag{12}\] and all combinations can be realized (see Fig. 12).
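A small numerical sketch of this preparation bookkeeping is given below; it assumes that the four terms of Eq. (12) correspond, in order, to the target populations \(n_{1},\ldots,n_{4}\) (an interpretation of the pulse sequence, not stated explicitly above), and inverts them to obtain the Bragg transfer fractions \(p_{2}\), \(p_{3}\), \(p_{4}\).

```python
def bragg_fractions(n):
    """Invert the decomposition implied by Eq. (12), assuming
    n1=(1-p3)(1-p2), n2=p3(1-p2), n3=(1-p4)p2, n4=p4*p2."""
    n1, n2, n3, n4 = n
    assert abs(sum(n) - 1.0) < 1e-9
    p2 = n3 + n4                  # fraction transferred to the (k3, k4) pair
    p3 = n2 / (n1 + n2)           # splitting within the (k1, k2) pair
    p4 = n4 / (n3 + n4)           # splitting within the (k3, k4) pair
    return p2, p3, p4

# illustrative target used in Sec. IV: n = (0.375, 0.375, 0.125, 0.125)
p2, p3, p4 = bragg_fractions([0.375, 0.375, 0.125, 0.125])
print(p2, p3, p4)                 # 0.25, 0.5, 0.5
# check that the four terms reproduce the targets and sum to one, Eq. (12)
terms = [(1 - p3) * (1 - p2), p3 * (1 - p2), (1 - p4) * p2, p4 * p2]
print(terms, sum(terms))
```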
2309.14515
High efficiency muon registration system based on scintillator strips
Experiments such as mu2e (FNAL, USA) and COMET (KEK, Japan), seeking the direct muon-to-electron conversion as part of the study of Charged Lepton Flavor Violation processes, should have an extremely high efficiency (up to 99.99\%) muon detection system with a view to the subsequent suppression of such muons as background. In this article, the possibility to achieve such efficiency in the short and long term is discussed for modules based on 7- or 10-mm-thick, 40-mm-wide plastic scintillation strips with a single 1.2 mm WLS fiber glued into a groove along the strip and MPPC/SiPM light detection. A Simplified Light Yield Distribution method to estimate the efficiency of the module is proposed, and the simulation results obtained with GEANT 4 for a system based on a 4-by-4 array of 7x40x3000 mm strips are compared with the experimental data. We find that, for systems requiring a registration efficiency of 99.99\% or more, it is important to improve the light yield as much as possible and to make the gap between neighboring scintillation volumes as small as possible.
A. Artikov, V. Baranov, A. Boikov, D. Chokheli, Yu. I. Davydov, V. Glagolev, A. Simonenko, Z. Tsamalaidze, I. Vasilyev, I. Zimin
2023-09-25T20:22:16Z
http://arxiv.org/abs/2309.14515v2
# High efficiency muon registration system based on scintillator strips ###### Abstract Experiments such as mu2e (FNAL, USA) and COMET (KEK, Japan), seeking the direct muon-to-electron conversion as part of the study of Charged Leptons Flavor Violation processes, should have a extremely high, up-to 99.99%, efficiency muon detection system with a view to their subsequent suppression as background. In this article, the possibility to achieve such efficiency for a short and long term is discussed for modules based on 7- or 10-mm-thick but same 40-mm-wide plastic scintillation strips with single 1.2 mm WLS fiber glued into the groove along the strip and using MPPC/SiPM for light detection. A Simplified Light Yield Distribution method to estimate the efficiency of the module was proposed and the simulation results obtained with GEANT 4 for a system based on a 4-by-4 array of 7x40x3000 mm strips compared with the experimental data. Found that for the systems required the high level registration efficiency at the 99.99% and more, it is important to improve the light yield as much as possible and achieve the gap between neighbor scintillation volumes as small as possible. keywords: plastic scintillator, light yield, scintillation strip counter with WLS fiber, Cosmic Ray Veto Msc: [2010] 00-01, 99-00 + Footnote †: journal: Journal of Nucl. Instrum. Methods Phys. Res. A [inst=] ###### Contents * 1 Introduction * 2 Theoretical basement for calculation of the efficiency of Cosmic Ray Veto (CRV) system * 2.1 Charged particle registration probability for 4-layer CRV module * 2.2 Combined efficiency for one layer consisted with N-strips * 2.3 Charged particle registration probability for the particular strip * 3 The simplified method for light yield simulation * 3.1 Strip transverse scan as an essential part of SLYD method * 3.2 Transverse scan with cosmic rays for 7 mm and 10 mm thick and 40 mm wide strips * 3.3 Strip transversal scan using collimated \({}^{90}Sr/^{90}Y\)\(\beta-\)source * 3.4 Traverse scan results for 7 mm and 10 mm thick and 40 mm wide strips * 4 Simulation of the CRV module charged particle registration probability by GEANT-4 * 4.1 Calculation for charged particle registration probability for 4 layers by 15 strips on each CRV module * 4.2 Influence of the natural aging on charged particle registration probability for CRV module * 5 Experimental study of the 4x4 CRV module prototype with cosmic muons * 5.1 Testing the geometry of the strips * 5.2 Efficiency calculation for a 4x4 CRV module on cosmic muons * 5.3 Efficiency simulation for a 4x4 CRV module * 6 Conclusions * 7 Acknowledgments ## 1 Introduction Detectors based on a plastic scintillator were widely used in most modern experiments in High Energy Physics. Typically, the detection efficiency of minimum ionizing particles for such systems is sufficient to be more than 90%. However, some experiments need to have so-called active shield system from background muons, particularly for cosmic muons. This means that the cosmic muon should first be detected and then rejected through online/offline data processing. We call such a system as a Cosmic Ray Veto (CRV) system. CRV system always requires a higher efficiency for muon registration compared to regular muon systems. 
For instance, both the Mu2e experiment [1] (FNAL, USA) and the COMET experiment [2] (KEK, Japan) require that the CRV system registration efficiency be on the 99.99% level to establish a sufficient suppression of the cosmic background and thus achieve the required sensitivity of these experiments, on the order of \(10^{-17}\), for a so-called single event, i.e., the direct conversion of a muon into an electron.
## 2 Theoretical basement for calculation of the efficiency of Cosmic Ray Veto (CRV) system Usually, the muon registration system is a hodoscope/array of several layers of plastic scintillation bars and/or drift/proportional/resistive plate chambers and can be split into modules. In this article, we are considering a 4-layer CRV module based on 40-mm-wide plastic scintillation strips with the same layout but different strip thicknesses: 7 mm for the first and 10 mm for the second. ### Charged particle registration probability for 4-layer CRV module Considering a CRV module that is an array of four layers of strips with 16 strips in each layer, each layer identical to the others, and requiring the coincidence of any 3 layers out of 4, we can estimate the charged particle registration efficiency of this system using combinatorial mathematics as follows: \[P_{CRV}=\mathrm{C}_{4}^{3}\times(P_{L})^{3}\times(1-P_{L})+(P_{L})^{4} \tag{1}\] here \(\mathrm{C}_{4}^{3}\) is calculated as \(\mathrm{C}_{4}^{3}=\frac{4!}{(4-3)!3!}=4\). Using equation (1), if the charged particle registration efficiency of the CRV module is required to be at the level of \(P_{CRV}=99.99\%\), the efficiency of each layer should be better than \(P_{L}=99.65\%\). Unfortunately, the efficiency of each layer is not constant and varies from case to case due to various factors (production quality, variation of component properties, etc.). So, to calculate the registration probability for a real module, we should use an equation that accounts for the variation of the registration probability of each layer: \[\begin{split} P_{m}=\sum\limits_{i=0}^{3}P_{L(i\%4)}\times P_{L((i+1)\%4)}\times P_{L((i+2)\%4)}\times(1-P_{L((i+3)\%4)})+\\ +P_{L0}\times P_{L1}\times P_{L2}\times P_{L3}\end{split} \tag{2}\] In this equation, \(P_{L0};P_{L1};P_{L2};P_{L3}\) are the registration probabilities for layers 1, 2, 3 and 4, respectively1. Footnote 1: here: \(\sum\limits_{i=0}^{3}P_{L(i\%4)}\times P_{L((i+1)\%4)}\times P_{L((i+2)\%4)}\times(1-P_{L((i+3)\%4)})=\) \(=P_{L0}\times P_{L1}\times P_{L2}\times(1-P_{L3})+P_{L0}\times P_{L1}\times(1-P_{L2})\times P_{L3}+\) \(+P_{L0}\times(1-P_{L1})\times P_{L2}\times P_{L3}+(1-P_{L0})\times P_{L1}\times P_{L2}\times P_{L3}\) ### Combined efficiency for one layer consisted with N-strips Each layer of the CRV module represents a 1D array of plastic scintillation strips. Since a cosmic muon can pass through multiple strips depending on its incidence angle on the module surface, we need to combine the registration probabilities of neighboring strips in order to calculate the registration probability of each layer properly.
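Before combining strips within a layer, the following is a small numerical sketch of the module-level combination of Eqs. (1)–(2) (a cross-check, not from the article), confirming that identical layers at the 99.65% level reach the required 99.99% module efficiency for the 3-out-of-4 coincidence condition.

```python
from math import comb

def module_efficiency(p_layers):
    """Probability that at least 3 of the 4 layers fire, Eq. (2);
    reduces to Eq. (1) when all layer efficiencies are equal."""
    p = p_layers
    miss_one = sum(
        p[i % 4] * p[(i + 1) % 4] * p[(i + 2) % 4] * (1 - p[(i + 3) % 4])
        for i in range(4))
    return miss_one + p[0] * p[1] * p[2] * p[3]

pl = 0.9965
print(module_efficiency([pl] * 4))                 # ~0.99993
print(comb(4, 3) * pl**3 * (1 - pl) + pl**4)       # same value via Eq. (1)
```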
To facilitate the calculation and make it simple, we can calculate the inefficiency \(\overline{P_{L}}\) (the probability to miss the detection of a particle's passage) of each strip first and then find the inefficiency of the layer as the product of the registration inefficiencies of the strips with the following equation: \(\overline{P_{L}}=\overline{P_{S_{1}}}\times\overline{P_{S_{2}}}\times\ldots\times\overline{P_{S_{N}}}=\prod\limits_{i=1}^{N}\overline{P_{S_{i}}}\) (here \(\overline{P_{S_{1}}}\), \(\overline{P_{S_{2}}}\), \(\overline{P_{S_{N}}}\) are the inefficiencies for strip 1, strip 2, ..., strip N, respectively). And now we can easily calculate the registration probability for the layer as: \[P_{L}=1-\overline{P_{L}}=1-\prod\limits_{i=1}^{N}\overline{P_{S_{i}}}=1-\prod\limits_{i=1}^{N}(1-P_{S_{i}}) \tag{3}\] For instance, if a layer consists of 16 strips and a charged particle passes only through the neighboring \(5^{th}\) and \(6^{th}\) strips, the inefficiency for those strips can be calculated as \(\overline{P_{5}}=(1-P_{(S_{5})})\) and \(\overline{P_{6}}=(1-P_{(S_{6})})\), while the inefficiency for the other strips is equal to 1 since they never fired. The overall efficiency of this layer, according to equation (3), can then be expressed as \(P_{L}=1-\overline{P_{L}}=1-(1-P_{(S_{5})})(1-P_{(S_{6})})\). The next step is to calculate the registration probability for each strip. ### Charged particle registration probability for the particular strip The values of the light yield collected on a photosensitive detector (PMT, SiPM/MPPC, etc.) fluctuate around some mean value and their probabilities can be calculated using the Poisson distribution [3] \[P(x)=\frac{\mu^{x}e^{-\mu}}{x!} \tag{4}\] here \(x\) is the light yield collected on the PMT or SiPM in photoelectrons and \(\mu\) is the expected/mean value of the light collected on the photosensor in photoelectrons. For high light yield values, the Poisson distribution tends to a Gaussian distribution with \(\sigma=\sqrt{\mu}\), and the probability distribution of the light yield can be calculated as: \[G(x)=\frac{1}{\sqrt{2\pi\mu}}e^{-\frac{(x-\mu)^{2}}{2\mu}} \tag{5}\] Usually, to distinguish the real signal from the noise, the incoming signal should be processed by a discriminator with some threshold level (in charge, pulse voltage amplitude, current, etc.). But the pulse discrimination suppresses not only noise but some useful events as well, thus decreasing the particle registration efficiency of the whole system. In this case, the probability of registering a charged particle that passed through the detector can be calculated as an integral from the discrimination threshold to infinity [7]: \[P(\mu)=\int_{T_{ph.e.}}^{+\infty}f_{eff}(x)dx=\frac{1}{\sqrt{2\pi\mu}}\int_{T_{ph.e.}}^{+\infty}e^{-\frac{(x-\mu)^{2}}{2\mu}}dx \tag{6}\] Of course, for minimum ionizing particles the more appropriate formalization is the Landau distribution. But in some cases, for instance when only the number of events is important and not the shape, and the threshold value is less than the mean value (in other words, the threshold cuts part of the left wing of the distribution), we can use the Gaussian formalization as an approximate mimic of the left wing of the Landau distribution.
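The two ingredients above, the Gaussian threshold integral (6) and the layer combination (3), are easy to evaluate numerically; the sketch below (not from the article) uses the normal survival function as the antiderivative and illustrative numbers matching the worked example quoted in the next section (mean yield 15.7 ph.e., threshold 5 ph.e.).

```python
from scipy.stats import norm

def strip_efficiency(mu, threshold):
    """Gaussian approximation of Eq. (6): probability that the collected
    light exceeds the discriminator threshold, for mean yield mu (ph.e.)."""
    return norm.sf(threshold, loc=mu, scale=mu ** 0.5)

def layer_efficiency(strip_probs):
    """Eq. (3): combine only the strips actually crossed by the particle."""
    ineff = 1.0
    for p in strip_probs:
        ineff *= (1.0 - p)
    return 1.0 - ineff

p = strip_efficiency(15.7, 5.0)
print(p)                          # ~0.9965 per strip
print(layer_efficiency([p, p]))   # particle crossing two strips of one layer
```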
To estimate the registration probability in practice, one can use the error function (\(erf(y)=\frac{2}{\sqrt{\pi}}\int_{0}^{y}e^{-x^{2}}dx\)), which gives the antiderivative needed to evaluate (6), so the final expression for the probability to register a charged particle passing through the strip is

\[P(\mu)=\frac{erf\left(\frac{+\infty-\mu}{\sqrt{2\mu}}\right)}{2}-\frac{erf\left(\frac{T_{ph.e.}-\mu}{\sqrt{2\mu}}\right)}{2}=\frac{1}{2}+\frac{erf\left(\frac{\mu-T_{ph.e.}}{\sqrt{2\mu}}\right)}{2} \tag{7}\]

For instance, if the mean light yield collected on the photodetector is 15.7 ph.e. and the threshold is set at the 5 ph.e. level, the registration probability of the strip is \(P(\mu)=99.65\%\). This is the theoretical minimum amount of light (in ph.e.) that must be collected from a strip with the selected threshold in order to satisfy the required 99.99% registration efficiency of a 4-layer CRV system operated in coincidence of any 3 layers out of 4 (see 2.1).

## 3 A simplified method for light yield simulation

To predict the light yield collected on the photodetector, one can directly simulate the propagation of a charged particle through the scintillator body, counting every light-emitting process. However, this method consumes a lot of processor time because of the huge number of secondary particles, requires a very precise simulation model with correct coefficients for the particular scintillator type, and needs very careful validation. We therefore looked for simplified models that consume less computing time and introduce less uncertainty in the light yield calculation. In this chapter we discuss how to simplify the light yield estimation and thus drastically reduce the computing time needed. One such approach is based on measuring the direct dependence of the light yield on the charged-particle path for the scintillator strips to be used in the future. We call it the "Simplified Light Yield Distribution" (SLYD) method.

### Strip transverse scan as an essential part of the SLYD method

This study is based on finding the light yield per unit path length (for instance, in photoelectrons per mm) for a real detector and then using the obtained results to predict the overall CRV efficiency with Geant4 modelling [4]. The main idea of the method is as follows. A charged particle creates light while passing through the scintillation strip; in other words, the light yield of the strip depends on the particle's path and on where this path lies. For instance, the light collected near the strip edges is smaller than in the central area across the strip, and the amount of collected light increases with the length of the particle path. In a first approximation, we can slice the propagation path into separate areas, so the overall light yield can be represented as the sum of the light yields obtained in each crossed area (Fig. 1). The blue arrow shows the muon direction, and the red curve shows the light yield distribution \(F_{\mu}(y)\) across a real scintillation strip with a WLS fiber, obtained by fitting the experimental data. Another important assumption is the homogeneity of the light collection inside each selected area; the precision of this model therefore largely depends on the quality of the strip production. In general, the light yield differs from area to area.

To find the light yield in each area (Fig. 1), data were collected and analyzed for particles entering the strip in the selected area perpendicular to the strip surface. In this case the particle passes within a single area, so the mean light yield for this area can be found. Since the path length in this case is roughly equal to the strip thickness, the light yield per mm is easily calculated. Once the light yield of each area is obtained, the total light yield can be calculated as

\[\mu=\int_{-w/2}^{+w/2}F_{\mu}(y)\,dy\approx\sum_{i=1}^{N}\mu_{i}L_{i} \tag{8}\]

Here \(w\) is the strip width, \(F_{\mu}(y)\) is the light yield distribution along the path, \(N\) is the total number of areas, \(\mu_{i}\) is the light yield per mm inside a particular area between two neighbouring dashed lines of the strip, and \(L_{i}\) is the muon path length in mm inside this area.

Figure 1: The cross-section of the strip sliced into areas separated by dashed lines; a muon path (blue arrow) and the relative light yield distribution (red curve, a 6th-degree polynomial fit) found by the transverse scan of the strips.
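As a rough illustration of equation (8), the sketch below integrates a light-yield profile \(F_{\mu}(y)\) along an inclined track. The polynomial shape, the 3 ph.e./mm scale (chosen so that a vertical track through a 7-mm strip gives about 21 ph.e.) and the slicing step are placeholders, not the fitted profile of Fig. 1.

```python
import math

def f_mu(y_mm: float) -> float:
    """Placeholder relative light-yield profile across the 40-mm width
    (flat in the middle, dropping towards the edges)."""
    return 1.0 - 0.3 * (abs(y_mm) / 20.0) ** 6

def light_yield(entry_y_mm: float, angle_deg: float,
                thickness_mm: float = 7.0, mu_per_mm: float = 3.0) -> float:
    """Equation (8): sum of (light yield per mm) x (path length) over thin
    slices of the track inside the strip."""
    n = 200
    tan_a = math.tan(math.radians(angle_deg))
    dl = (thickness_mm / n) / math.cos(math.radians(angle_deg))   # path per slice
    total = 0.0
    for k in range(n):
        y = entry_y_mm + (k + 0.5) * (thickness_mm / n) * tan_a   # lateral position
        if abs(y) <= 20.0:                                        # still inside the strip
            total += mu_per_mm * f_mu(y) * dl
    return total

print(light_yield(0.0, 0.0))     # vertical track through the centre: ~21 ph.e.
print(light_yield(15.0, 45.0))   # inclined track near the edge: only partially contained
```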
### Transverse scan with cosmic rays for 7-mm- and 10-mm-thick, 40-mm-wide strips

A long-term run with cosmic muons using a segmented cosmic-ray telescope allows us to mimic a transverse scan of the strips (Fig. 2a). For this purpose, a cosmic telescope with an active area of 40x40 \(mm^{2}\) was built. The telescope consists of two 16-channel cosmic-muon hodoscopes placed above and below the strips under test (Fig. 2b). Each channel is formed by a 2x2 \(mm^{2}\) Kuraray SCSF-81J scintillating fiber [5] placed with a 2.5 mm pitch and directed along the strips. This hodoscope geometry divides the telescope into 16 areas with a step of 2.5 mm. The muon hodoscope was installed at a 250 cm distance from the strip readout by SiPMs. Four 3-m-long scintillator strips for this study were produced by "Uniplast" (Vladimir, Russia) [6]. Two of them had a cross-section of 7x40 \(mm^{2}\) and the other two had a cross-section of 10x40 \(mm^{2}\). Each strip was equipped with a single 1.2 mm WLS Kuraray Y11(200) fiber [7] glued into the groove (Fig. 2a). All strips were stacked vertically, and the telescope was positioned above and below the stack to allow a simultaneous cosmic run for the strips under study. The DAQ was based on the CAEN DT5702 32-channel MPPC/SiPM readout front-end [8]. Hamamatsu S13360-1350CS SiPMs [9] with a 1.3x1.3 \(mm^{2}\) active surface were used for the light collection from the strips and from the hodoscopes.

Figure 2: Scintillation strip with the segmented cosmic-muon telescope for the transverse scan (a) and layout of the experimental setup for the cosmic run with the segmented telescope (b).

Data with cosmic muons were collected during 2 weeks, which allowed us to accumulate about 150 "vertical" muons per position (corresponding hodoscope channels in coincidence, see Fig. 2a). The transverse light yield distribution (in photoelectrons) for each strip is presented in Table 1, and the averaged light yield distribution for each type of strip is shown in Figure 3. Since the number of SiPMs available for this test was limited, cosmic data were collected only for one half of each strip, keeping in mind that the other half of the strip should be symmetric to the examined one.

### Strip transverse scan using a collimated \({}^{90}Sr/^{90}Y\) \(\beta\)-source

The precision of the cosmic-muon study described in the previous chapter is not sufficient for further use of the obtained data, because of the low statistics and the large pitch.
To reach the necessary precision, the statistics have to be increased by an order of magnitude and the pitch has to be decreased from 2.5 mm to at least 0.5 mm. Such an improvement would drastically complicate the cosmic-muon telescope structure and, more importantly, would increase the data-collection time to several months. A transverse scan of the strip with a \(\beta\)-source mimicking cosmic muons is another way to obtain the necessary data in a reasonable time. A \(\beta\)-particle beam issued by a \({}^{90}Sr/^{90}Y\) source and collimated to 1 mm in diameter is suitable for this purpose. To examine this proposal, we simulated the energy spectrum of such a beam (Fig. 4(a)). The obtained distribution was then transformed into the distribution of the relative energy deposition in the strip (which corresponds to the light yield collected from the strip) as a function of the \(\beta\)-particle energy (Fig. 4(b)). Note that the average \(\beta\)-particle energy is about 0.205 MeV for the \({}^{90}Sr\rightarrow{}^{90}Y\) decay and about 0.93 MeV for the \({}^{90}Y\rightarrow{}^{90}Zr\) decay.

Figure 4: Simulation (done in Geant4) of the energy distribution of the \(\beta\)-particle beam issued by the \({}^{90}Sr/^{90}Y\) source and collimated to 1 mm at the output (a), and the distribution of the relative light yield (in percent) as a function of the energy deposited in the strip (b).

According to the distribution of the energy deposition in the strip (Fig. 4(b)), the largest contribution to the light yield comes from \(\beta\)-particles above 1 MeV, with a maximum for 1.5 MeV particles. The range of 1 MeV \(\beta\)-particles is about 3.6 mm, and that of 1.5 MeV particles is about 5.4 mm. So, for scintillators thinner than 10 mm, such a source mimics cosmic muons reasonably well, drastically speeding up the transverse scan compared to the same test with cosmic rays. First, we simulated in the Geant4 [4] environment the \(\beta\)-beam divergence as a function of the distance between the collimator and the strip entrance surface (Fig. 5(a)): for a 2 mm distance (black curve) and for 10 mm (blue curve). The collimator is an aluminum disk with a diameter of 10 cm and a thickness of 3 mm, with a 1-mm-diameter hole in the center aligned with the source. A dedicated setup was built to perform such a transverse scan using a \({}^{90}Sr/^{90}Y\) \(\beta\)-source with an activity of about 0.03 mCi (Fig. 6). The beam was formed by the collimator made from the 3-mm-thick aluminum disk with the 1-mm-diameter hole in the center (Fig. 5(b)). The scanning step was set to 0.5 mm. The light from the strip was collected with an EMI9814 PMT [10], and the PMT anode current was measured with a Keithley 6847 picoammeter [11].

Figure 5: Simulation of the beam size (for the \({}^{90}Sr/^{90}Y\) \(\beta\)-source) at different distances after the 1-mm-diameter collimator (a) and the Al collimator design (b).

### Transverse scan results for 7-mm- and 10-mm-thick, 40-mm-wide strips

To obtain the distribution of the light collection over the strip width, we scanned the strip across its width with the collimated \(\beta\)-source as described in the previous chapter. Two types of strips were scanned at this stage: 7x40 \(mm^{2}\) and 10x40 \(mm^{2}\) in cross-section. These strips were also produced by Uniplast (Vladimir, Russia). Both had a single 1.2 mm WLS Kuraray Y11(200) fiber (double cladding, C-type) [7] glued into a 2-mm-deep groove along the strip.
The scans were performed for two strip orientations: with the WLS fiber close to the beam ("top") and with the strip turned over (WLS fiber on the "bottom"). For each strip, the "top" and "bottom" distributions were then combined into one plot, normalized to 1, and fitted with a 6th-degree polynomial to achieve the best approximation. The distributions for the 7x40 \(mm^{2}\) strip are presented in Figure 7, and the distributions for the 10x40 \(mm^{2}\) strip are presented in Figure 8. A relative light loss near the WLS fiber, at the middle of the strip, appears for the "top" position of the fiber, as expected. In the opposite orientation, when the WLS fiber is far from the source, the light distribution is smooth at the middle of the strip, since most of the light is collected at 5 mm depth. Combining the "top" and "bottom" distributions, we can mimic the light yield collected from cosmic muons. The expected \(\beta\)-beam divergence (Fig. 5(a)) causes some uncertainty in the transverse distribution near the strip edges. One can see that the distributions obtained by the transverse scan with the collimated \(\beta\)-source have a shape close to that of the similar distributions obtained with cosmic rays.

Figure 6: Layout of the experimental setup for the transverse scan of the strip (a) and a photograph of this setup (b).

Figure 7: The distributions of the light yield across a 7-mm-thick strip for the "top" (blue) and "bottom" (green) orientations (a). Normalized distribution (b) of the sum of the "top" and "bottom" distributions; the blue rectangle illustrates the strip geometry.

Figure 8: The distributions of the light yield across a 10-mm-thick strip for the "top" (blue) and "bottom" (green) orientations (a). Normalized distribution (b) of the sum of the "top" and "bottom" distributions; the blue rectangle illustrates the strip geometry.

## 4 Simulation of the CRV module charged particle registration probability with Geant4

Now, having the light yield distribution function, it is possible to calculate the charged-particle registration probability of a CRV module using Geant4. In this chapter we calculate the registration probability for two types of CRV modules: the first consists of 4 layers of 15 strips each, and the second consists of 4 layers of 4 strips each. Calculations were done for models with 7x40 \(mm^{2}\) and 10x40 \(mm^{2}\) strips in cross-section. The scintillation light yield from cosmic muons at a 2500 mm distance from the SiPM was measured in advance for 7x40x3200 \(mm^{3}\) and 10x40x3200 \(mm^{3}\) strips and amounted on average to 21 ph.e. and 30 ph.e., respectively. These values were used in the simulation to create the maps of the charged-particle registration probability.

### Calculation of the charged particle registration probability for a CRV module of 4 layers with 15 strips each

The Geant4 model of the CRV module discussed in this chapter consists of 4 layers with 15 strips each. The 2-mm-thick Al sheets are placed between the layers, and a 0.5-mm-thick gap between the sheet and the strips is included as well, so in total there is a 3 mm gap (0.5 mm + 2 mm + 0.5 mm) between the layers (Fig. 9(a)). The model also includes gaps between the scintillation (so-called working) volumes of neighbouring strips. This gap includes the physical gap between neighbouring strips as well as the non-scintillating part of the strip created by its reflective cover. The thickness of the strip reflective cover discussed in this article is 0.2 mm.
Thus, the total gap between the scintillation working volumes of neighbouring strips is 0.5 mm (0.2 mm + 0.1 mm + 0.2 mm) (see Fig. 9(b)). Each layer is shifted with respect to the next one by some step, and the set of such steps for all layers defines a so-called shift pattern of the CRV module geometry (Fig. 9(a)).

Figure 9: Layout of the CRV module and an example of the "8-8-8" shift pattern (a). Gaps between the strips (b).

The charged-particle registration probability was studied for different layer shift patterns and for particles crossing the CRV module at various angles. Note that this calculation does not include the real angular distribution of the cosmic muons; in other words, the angular distribution is kept flat. The cosmic-muon angular intensity distribution can be folded in later with the CRV module registration probability, depending on the module orientation (horizontal, vertical, etc.), to obtain a realistic estimate of the overall module efficiency for detecting cosmic muons. The light yield along the muon path is calculated according to the distribution found in the previous chapter from the transverse scan of the strips. To calculate the charged-particle registration probability of the CRV module for cosmic muons, we created a so-called registration probability map covering the different muon paths as a function of the angle and of the entry point (Fig. 10). The "0" position is set at the middle of the 8th strip of the top layer (the middle of the layer in cross-section), and the "0" angle is the vertical to the strips. An area from -40 to +40 mm (shown by the red line in the figure) was studied with a step of 0.1 mm. At each step, the angle was varied within a cone from -75 to +75 degrees with a 1 degree step (the orange lines in Figure 10 show the borders of the cone). The simulations of the muon registration probability were performed for CRV modules with different patterns of layer shifts. The average light yield was set to 21 ph.e. and the threshold (discrimination level) to 5 ph.e.

Figure 10: Layout used to create the registration probability map for the CRV module.

The registration probability maps (2D distributions over the angle and the entry point) were created for more than 60000 CRV module patterns; overall, the calculations took about 1 month. Examples of registration probability maps for various shift patterns of CRV modules based on strips with a 7x40 \(mm^{2}\) cross-section are shown in Figure 11. White areas in the plots represent regions with a registration probability below 99.5%. One can see that it is difficult to achieve 99.99% efficiency for the module with the current configuration of the 7-mm-thick strip.

Figure 11: Examples of registration probability maps for various CRV module shift patterns: the original "20-20-20" shift pattern from TDR2020 (a).

Table 2 presents the overall efficiency (with no cosmic-muon angular distribution included) for the best shift patterns, with the average light yield set to 21 ph.e. and the threshold (discrimination level) set to 5 ph.e. We also performed a similar simulation for CRV modules based on strips with a 10x40 \(mm^{2}\) cross-section. In this case the gaps, the way the particle propagation is varied, etc. were kept the same, but the average light yield was set to 30 ph.e., while the threshold (discrimination level) remained the same, 5 ph.e. The results for the 10-mm-thick case show that the charged-particle registration probability is better than 99.99%. The comparison of the maps for the 7-mm-thick and 10-mm-thick strips is shown in Fig. 12.
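A minimal sketch of how one cell of such a registration probability map can be assembled is given below; it chains equations (8), (7), (3) and (2) for a single (entry point, angle) pair. The simplified flat light-yield profile, the geometry constants and the layer spacing are placeholders, not the actual Geant4 model used in this study.

```python
import math

STRIP_W, STRIP_T, GAP = 40.0, 7.0, 0.5      # mm; simplified placeholder geometry
N_STRIPS, LAYER_PITCH = 15, 10.0            # strips per layer, vertical layer spacing (mm)
MU_PER_MM, THRESHOLD = 3.0, 5.0             # placeholder ph.e./mm and the 5 ph.e. threshold

def p_strip(mu):
    """Equation (7); a strip that collected no light is counted as not fired."""
    return 0.0 if mu <= 0.0 else 0.5 + 0.5 * math.erf((mu - THRESHOLD) / math.sqrt(2.0 * mu))

def layer_probability(x_entry, angle_deg, layer_shift):
    """Slice the track inside one layer, accumulate light per strip (eq. 8),
    then combine the strip probabilities with equation (3)."""
    tan_a = math.tan(math.radians(angle_deg))
    cos_a = math.cos(math.radians(angle_deg))
    n_slices = 100
    light = [0.0] * N_STRIPS
    for k in range(n_slices):
        z = (k + 0.5) * STRIP_T / n_slices             # depth inside the layer
        x = x_entry + z * tan_a - layer_shift           # lateral position
        idx = int(x // (STRIP_W + GAP))
        if 0 <= idx < N_STRIPS and x - idx * (STRIP_W + GAP) < STRIP_W:
            light[idx] += MU_PER_MM * (STRIP_T / n_slices) / cos_a
    miss = 1.0
    for mu in light:
        miss *= 1.0 - p_strip(mu)
    return 1.0 - miss

def module_probability(x_entry, angle_deg, shifts=(0.0, 8.0, 16.0, 24.0)):
    """Equation (2): coincidence of any 3 layers out of 4 (here an '8-8-8' pattern)."""
    tan_a = math.tan(math.radians(angle_deg))
    p = [layer_probability(x_entry + i * LAYER_PITCH * tan_a, angle_deg, s)
         for i, s in enumerate(shifts)]
    three_of_four = sum(p[(i + 1) % 4] * p[(i + 2) % 4] * p[(i + 3) % 4] * (1.0 - p[i])
                        for i in range(4))
    return three_of_four + p[0] * p[1] * p[2] * p[3]

# One cell of the (entry point, angle) map, e.g. entry at x = 300 mm, 30 degrees
print(module_probability(300.0, 30.0))
```

Scanning this function over the entry point and the angle with the steps quoted above would produce a toy version of the 2D maps shown in Figures 11 and 12.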
| Shift pattern | Overall efficiency |
| --- | --- |
| 20-20-20 mm | 0.99902 |
| 10-10-10 mm | 0.99976 |
| 8-8-8 mm | 0.99988 |
| 8-10-6 mm | 0.99983 |
| 8-10-8 mm | 0.99985 |
| 8-10-10 mm | 0.99981 |
| 8-10-12 mm | 0.99976 |
| 8-10-14 mm | 0.99968 |
| 8-10-16 mm | 0.99963 |
| 8-10-18 mm | 0.99958 |

Table 2: Best shift patterns for CRV modules based on 7x40 \(mm^{2}\) strips. The average light yield from the strip is set to 21 ph.e. and the threshold to 5 ph.e.

Figure 12: The registration probability maps for CRV modules with 7-mm-thick (a) and 10-mm-thick (b) strips. The strip width and the shift pattern are the same in both cases: 40 mm and "8-8-8", respectively.

One can see that the 10-mm-thick strip gives an overall registration probability of almost 99.999%, compared to 99.988% for the 7-mm-thick strip. Two major factors produce this difference. On the one hand, the 10-mm-thick strip has a better strip-thickness to strip-gap ratio under the same conditions: 20 (10/0.5) against 14 (7/0.5) for the 7-mm-thick strip. The other, also very important, factor is that the 10-mm-thick strip yields about 30% more light than the 7-mm-thick strip: 30 ph.e. vs 21 ph.e.

### Influence of natural aging on the charged particle registration probability of the CRV module

As is well known, a plastic scintillation counter suffers a deterioration of its light yield with time even if it is never used in an experiment, the so-called natural aging. The rate of this deterioration (the loss in percent per year) is very important and must be taken into account when such a system is to be used for an extended period, or when the detector will be used only after a notable delay (for instance, years) after it was built. Our 12-year-long study [12] of scintillation counters of various geometries (some of them equipped with WLS fibers) used in the CDF experiment shows that the light yield of polystyrene (PS) based scintillation counters decreases at a rate of 6...9% per year. A major result of that study is that the natural deterioration of PS alone, without other factors, is about 6.6% per year. We can therefore expect the deterioration of PS-based strips to be at the same level of 6...7% per year in the best case.

We should note that the registration probability results presented above are based on light yields of 21 ph.e. and 30 ph.e. for the 7-mm and 10-mm strips, while the strips undergo natural aging, i.e. their light yield deteriorates with time. It is therefore necessary to take the natural aging of the scintillation strips into account in order to predict the useful lifetime of the strips under the required registration-efficiency conditions, or to determine their long-term stability. If we assume a strip aging rate of just 6% per year, the light yield of the 7- and 10-mm-thick strips drops to 17.6 ph.e. and 25.2 ph.e., respectively, after 3 years, and to 9.8 ph.e. and 14.1 ph.e. after 13 years. The registration probability maps for the 3-year and 13-year terms are shown in Figure 13, and the overall module efficiencies are presented in Table 3.
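To illustrate the aging estimate, the short sketch below propagates a constant 6%-per-year light-yield loss and re-evaluates the strip-level registration probability of equation (7). It assumes a simple compound (multiplicative) loss model, which reproduces the photoelectron values quoted above only approximately, and it does not replace the full module-level maps of Figure 13.

```python
import math

def p_strip(mu: float, thr: float = 5.0) -> float:
    """Equation (7): Gaussian-approximated probability to exceed the threshold."""
    return 0.5 + 0.5 * math.erf((mu - thr) / math.sqrt(2.0 * mu))

def aged(mu0: float, years: int, loss_per_year: float = 0.06) -> float:
    """Light yield after `years` of natural aging at a constant relative loss."""
    return mu0 * (1.0 - loss_per_year) ** years

for mu0 in (21.0, 30.0):                       # 7-mm and 10-mm strips
    for years in (0, 3, 13):
        mu = aged(mu0, years)
        print(f"mu0 = {mu0:4.1f} ph.e., {years:2d} y: "
              f"{mu:5.1f} ph.e., P_strip = {p_strip(mu):.4f}")
```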
## 5 Experimental study of the 4x4 CRV module prototype with cosmic muons

For this study we prepared 16 scintillation strips, 3.2 m long, with a 7x40 \(mm^{2}\) cross-section and a 1.2-mm-diameter Kuraray Y11(200) WLS fiber glued into the central groove. The strips were also produced by Uniplast.

As a next step, we built from scratch a 4x4 CRV module prototype using these strips: 4 layers of 4 strips each, separated by 2-mm-thick steel sheets (Fig. 14). We then compared the efficiency of this 4x4 CRV module obtained with cosmic rays to the simulation result obtained with Geant4.

### Testing the geometry of the strips

Before building the 4x4 CRV module prototype, it is necessary to check the geometrical parameters of the strips. These real values were needed as the model geometry of the CRV module in the simulation. The width of each strip was measured at 7 positions: at distances of 100, 500, 1000, 1500, 2000, 2500 and 2900 mm from one edge of the strip (Fig. 15). The resulting distribution was approximated by a Gaussian, and the average strip width was found to be 39.78 mm with a sigma of 0.08 mm (Fig. 16a).

Figure 14: Layout of the 4x4 CRV module prototype (a) and a photograph of it (b).

The next important parameter is the real gap between the strips. It was determined as follows: we measured the width of a layer formed by 15 strips at each of the 7 positions (Fig. 15), subtracted the sum of the already measured strip widths at the corresponding position, and divided the result by 14, the number of gaps. The average gap between the strips was found to be about 0.32 mm \(\pm\) 0.12 mm (Fig. 16(b)).

Figure 15: Illustration of the positions used to measure the widths and gaps for the 4x4 CRV module prototype.

Figure 16: Variation of the width (a) and of the gaps (b) for the 4x4 CRV module prototype.

### Efficiency calculation for the 4x4 CRV module with cosmic muons

We studied the efficiency of the 4x4 CRV module with cosmic muons. Two 126x60x20 \(mm^{3}\) scintillation counters form the muon telescope; these detectors were placed above and below the module as shown in Figure 17: one telescope counter was located directly on top of the CRV module, close to one side edge of the module; the other was located under the plate holding the CRV module, at a 30 cm distance from the bottom of the module and close to the opposite side edge. The telescope was positioned at a 2500 mm distance from the strip readout, at the far end. Such an arrangement of the telescope counters ensures the passage of the cosmic muon through all 4 working layers. The light was collected by SiPMs (Hamamatsu S13360-1350CS; similar SiPMs were used for the light collection from the strips and from the trigger counters). Kuraray 1.4-mm-diameter optical-clear fibers were attached to the far-from-SiPM ends of the strips. Through these fibers, flashing UV light was delivered to the strips for calibration purposes by single-photon counting. Data from the module and from the telescope counters were collected using the CAEN DT5702 32-channel MPPC/SiPM readout front-end. The UV light flashes were produced by an HVSys calibrated LED source [13]. Cosmic data for the CRV module were taken continuously for one week, which allowed us to collect about 200 000 events. Using an absolute calibration method, the gains of all 16 SiPMs were tuned to around 45 ADC channels per photoelectron. The light yield distributions for each channel are shown in Figure 18. After pedestal subtraction and approximation by a Landau distribution, the average light yield at the 250 cm distance was found to be 21\(\pm\)3 ph.e. The efficiency of this CRV module is calculated as the ratio of the number of CRV module events selected by the coincidence of any 3 layers out of 4 to the total number of events registered by the cosmic-muon telescope. The data were processed offline, and the threshold was set at 5 ph.e. for all channels in this analysis.
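The efficiency definition above can be written down in a few lines; the sketch below assumes a per-event list of the maximum signal (in ph.e.) seen in each layer, which is a simplified stand-in for the actual offline data format used in this analysis.

```python
def module_efficiency(events, threshold=5.0, min_layers=3):
    """Fraction of telescope-triggered events in which at least `min_layers`
    of the 4 CRV layers have a signal above `threshold` (in ph.e.)."""
    passed = 0
    for layer_signals in events:                     # one entry per telescope trigger
        fired = sum(1 for s in layer_signals if s > threshold)
        if fired >= min_layers:
            passed += 1
    return passed / len(events)

# Hypothetical toy input: each event carries the largest signal per layer
events = [(18.3, 22.1, 20.4, 19.7), (17.0, 2.1, 21.5, 23.0), (1.2, 3.4, 2.0, 0.8)]
print(module_efficiency(events))                     # 2/3 for this toy sample
```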
The overall efficiency of the CRV module for registering cosmic muons was found to be at the 99.69% level.

Figure 17: Layout of the setup for the 4x4 CRV module with the cosmic telescope.

### Efficiency simulation for the 4x4 CRV module

As noted in the previous chapter, all real geometrical parameters of the 4x4 CRV module were measured in order to set up the Geant4 geometry properly. The light yield was set to 21 ph.e. according to the data obtained from the real CRV module. These parameters allowed us to perform a proper Geant4 simulation. The registration probability map for the 4x4 CRV module was calculated using the SLYD method as described in chapter 3. First, the registration probability map was created (Fig. 19). Then, using the real angular distribution of the cosmic muons, the overall efficiency of the module at this spot was calculated.

Figure 19: Registration probability map obtained by simulation: with a scale from 60% to 100% (a); the same plot "magnified", with a scale from 99% to 100% (b).

## 6 Conclusions

A Simplified Light Yield Distribution (SLYD) method, which uses the transverse distribution of the light yield to simulate the CRV module efficiency, has been developed. A transverse scan using a \({}^{90}Sr/^{90}Y\) \(\beta\)-source to mimic cosmic muons has been introduced. Strips with 40x7 \(mm^{2}\) and 40x10 \(mm^{2}\) cross-sections were scanned to provide the data required by this method. Both strips were equipped with one WLS fiber glued into the central groove.

Using Geant4 and the SLYD method, registration probability maps were created for 15x4 (4 layers of 15 strips each) CRV modules with different shift patterns (arrays of layer shifts relative to each other) in order to find the best pattern. The simulation shows that the overall registration efficiency depends on the pattern, on the strip light yield, and on geometrical parameters such as the gap between neighbouring strips. The simulation was done for strips with cross-sections of 40x7 and 40x10 \(mm^{2}\). The efficiency of the CRV module based on strips with a 40x7 \(mm^{2}\) cross-section, with the average light yield set to 21 ph.e. based on real data for such strips and a threshold of 5 ph.e., was found to be less than 99.93% for the best pattern, and it drops to 99.67% after 3 years and to 83.17% after 13 years, assuming an aging rate of 6% per year. The efficiency of a similar CRV module based on strips with a 40x10 \(mm^{2}\) cross-section and an average light yield of 30 ph.e. was also simulated. Under the same conditions it is almost 100% initially, 99.999% after 3 years and 99.98% after 13 years. The better "strip thickness vs gap" ratio under otherwise identical conditions, 20 (10/0.5) for the 10-mm-thick strip against 14 (7/0.5) for the 7-mm-thick strip, together with the roughly 30% larger light yield of the 10-mm-thick strip, are the two factors behind these results.

A 4x4 CRV module prototype using 7x40x3000 \(mm^{3}\) strips was built. Its efficiency at a 2500 mm distance from the SiPMs was measured with cosmic muons; the average light yield at this position was 21 ph.e. The efficiency of this module was also simulated with the SLYD method in Geant4 for comparison with the experimental result. The overall efficiency of the module at this position was found to be at the 99.69% level, while the simulation gives 99.74%, in good agreement with the experimental data.
It is important to note that for systems requiring a high registration efficiency, at the 99.99% level or above, it is essential to make the light yield as large as possible and the gap between neighbouring scintillation volumes as small as possible.

## 7 Acknowledgments

The authors would like to express their deepest appreciation to V. Kolomoec, V. Rogozin and I. Prokhorov, who provided a high level of technical support at every step of this research.
2309.12164
Stratified Type Theory
A hierarchy of type universes is a rudimentary ingredient in the type theories of many proof assistants to prevent the logical inconsistency resulting from combining dependent functions and the type-in-type rule. In this work, we argue that a universe hierarchy is not the only option for a type theory with a type universe. Taking inspiration from Leivant's Stratified System F, we introduce Stratified Type Theory (StraTT), where rather than stratifying universes by levels, we stratify typing judgements and restrict the domain of dependent functions to strictly lower levels. Even with type-in-type, this restriction suffices to enforce consistency. In StraTT, we consider a number of extensions beyond just stratified dependent functions. First, the subsystem subStraTT employs McBride's crude-but-effective stratification (also known as displacement) as a simple form of level polymorphism where global definitions with concrete levels can be displaced uniformly to any higher level. Second, to recover some expressivity lost due to the restriction on dependent function domains, the full StraTT includes a separate nondependent function type with a "floating" domain whose level matches that of the overall function type. Finally, we have implemented a prototype type checker for StraTT extended with datatypes and inference for level and displacement annotations, along with a small core library. We have proven subStraTT to be consistent and StraTT to be type safe, but consistency of the full StraTT remains an open problem, largely due to the interaction between floating functions and cumulativity of judgements. Nevertheless, we believe StraTT to be consistent, and as evidence have verified the failure of some well-known type-theoretic paradoxes using our implementation.
Jonathan Chan, Stephanie Weirich
2023-09-21T15:22:04Z
http://arxiv.org/abs/2309.12164v3
# Stratified Type Theory ###### Abstract. To exploit the expressivity of being able to refer to the type of types, such as for large elimination, dependent type systems will either employ a universe hierarchy or else contend with an inconsistent type-in-type rule. However, these are not be the only possible options. Taking inspiration from Stratified System F, we introduce _Stratified Type Theory_ (StraTT), where rather than stratifying universes by levels, we stratify typing judgements and restrict the domain of dependent function types to some fixed level strictly lower than that of the overall type. Even in the presence of type-in-type, this restriction suffices to enforce consistency. We explore the expressivity of several extensions atop this design. First, the subsystem _subStraTT_ employs McBride's crude-but-effective stratification (also known as displacement) as a simple form of level polymorphism where top-level definitions can be displaced uniformly to any higher level as needed, which is valid due to cumulativity and plays well with stratified judgements. Second, to recover some expressivity lost due to the restriction on dependent function domains, the full StraTT system includes a separate nondependent function type with _floating_ domains, whose level instead matches that of the overall type. Finally, we have implemented a prototype type checker for StraTT extended with datatypes along with a small type checked core library. While the subsystem can be shown to be consistent, showing consistency for the full system with floating nondependent functions remains an open problem. Nevertheless, we believe that the full system is also consistent and have mechanized a syntactic proof of subject reduction. Furthermore, we use our implementation to investigate various well-known type-theoretic type-in-type paradoxes. These examples all fail to type check in expected ways as evidence towards consistency. ## 1. Introduction Ever since their introduction in Martin-Lof's intuitionistic type theory (MLTT) [22], dependent type theories have included hierarchies of type universes in order to rectify the inconsistency of the type-in-type axiom. That is, rather than the universe \(\star\) being its own type, these type theories have universes \(\star_{k}\) indexed by a sequence of levels \(k\) such that the type of a universe is the universe at the next higher level. Such a universe hierarchy is a rudimentary ingredient in many contemporary proof assistants, such as Coq [6], Agda [25], Lean [8], F\({}^{*}\)[30], Arend [5], and soon Idris 2 [3]. For greater expressiveness, all of these (except for Idris 2) also implement some sort of level polymorphism. Supporting such generality means that the proof assistant must handle level variable constraints, level expressions, or both. However, programming with and especially debugging errors involving universe levels is a common pain point among proof assistant users. So we ask: do all roads necessarily lead to level polymorphism and more generally a universe hierarchy, or are there other avenues to be taken? To begin our exploration, let us take a look back at a different mechanism for universe levels and revisit type polymorphism in System F [12, 26]. Recall the formation rule for polymorphic type quantification in System F, given below on the left. This rule is part of the judgement \(\Gamma\vdash A\)type, which asserts that the type \(A\) is well formed in context \(\Gamma\). 
The quantification in this rule is impredicative because the type \(\forall x.B\) itself can be substituted for \(x\) in \(B\), and it quantifies over all types including itself. Impredicativity has long been a troublemaker in the metatheory of System F, in particular the lack of a classical set-theoretic model [27]. To sidestep impredicativity, Leivant [18] introduced _Stratified System F_, which stratifies types into different levels by disallowing quantifying over types at the same level as the quantification itself. The formation rule for polymorphic types in this system is shown in the above rule on the right. This rule is part of the stratified type formation judgement, written \(\Gamma\vdash A\)type\(k\), where \(k\) is a stratification level. To extend stratified polymorphism to dependent types, there are two ways to read this judgement. We could interpret \(\Gamma\vdash A\)type\(k\) as a type \(A\) living in some stratified type universe \(\star_{k}\); the generalization would then correspond to a usual predicative type theory with a universe hierarchy where \(\star_{j}:\star_{k}\) when \(j<k\). Alternatively, we could interpret the level \(k\) as a property of the _judgement_ rather than part of a type universe, and reexpress the judgement as \(\Gamma\vdash A\).\({}^{k}\star\). Since dependent types can depend on terms, we might generalize the stratified type formation judgement to a stratified typing judgement \(\Gamma\vdash a\cdot^{k}A\), where variables \(x:^{k}A\) are also annotated with a level within the context \(\Gamma\), but using a type universe that doesn't have a level annotation. Guided by these principles, we introduce stratified dependent function types \(\Pi x\colon^{k}A.\,B\) which similarly quantify over types at strictly lower levels. To enable code reuse, rather than level polymorphism, we employ _crude but effective stratification_ by McBride (McBride, 1998). Following Hou (Favonia) et al. (Favonia, 2013), we refer to this as _displacement_ to prevent confusion. Given some signature \(\Delta\) of global definitions, we are permitted to use any definition with its levels displaced upwards. In the context of StraTT, displacement enables functions with level-annotated types to be used with arguments at any higher level. However, even in the presence of displacement, we find that stratification is sometimes _too_ restrictive and can rule out terms that are otherwise typeable in an unstratified system. Therefore, StraTT includes a separate unstratified non-dependent function type with a _floating_ domain. StraTT is cumulative, so all expressions inhabit the level at which they type check and at all higher levels. However, in a dependent function type, the level of the domain type is fixed even when the overall level of the type has been raised. In a floating, nondependent type, level of the domain type floats to have the same level as the overall type. In the absence of floating nondependent functions, with only stratified dependent functions, logical consistency holds even with type-in-type, because the restriction on the domains of dependent functions prevents the kind of self-referential trickery that enables the usual paradoxes. However, we have not yet proven logical consistency with the addition of floating nondependent functions. The covariant behaviour of the floating domain with respect to levels is unusual for function types, and is the primary barrier to semantic modelling. 
Even so, we have not found proof of inconsistency either, and our attempts lead us to believe that consistency _does_ hold, making the system suitable as a foundation for theorem proving. These features form the basis of our **Stratified Type Theory** (StraTT). Our contributions are as follows: * We first define subStraTT, a subsystem of StraTT, which features only stratified dependent function types and displacement. We briefly sketch a proof of consistency, modelling type universes with an inductive-recursive definition in Agda. \(\hookrightarrow\) Section2 * We then extend this subsystem to the full StraTT by adding nondependent function types with floating domains, motivated through examples. \(\hookrightarrow\) Section3 * We have used the Coq proof assistant to prove important syntactic metatheorems for StraTT, including subject reduction, which is nontrivial due to the level-annotated context. \(\hookrightarrow\) Section4 * We have developed a prototype implementation of a type checker, extending the language to include datatypes. We use this implementation to demonstrate the effectiveness of stratification and displacement in practical dependently-typed programming, as well as its shortcomings when compared to prenex universe polymorphism. \(\hookrightarrow\) Section5 * As evidence towards logical consistency, we discuss how common type-theoretic paradoxes, namely Hurkens' paradox (Hurkens, 1998) and variants of Russell's paradox (Russell, 1999) and Burali-Forti's paradox (Burali-Forti, 2000), fail to type check. We briefly highlight the challenges in a consistency proof attempt. \(\hookrightarrow\) Section6 Section7 discusses related work and we conclude in Section8. Our Agda model, Coq metatheory, and prototype implementation are available online at [https://github.com/plclub/StraTT](https://github.com/plclub/StraTT). Where lemmas and theorems are first introduced, we include a footnote indicating the corresponding source file and lemma name in the development. ## 2. A subsystem of Stratified Type Theory In this section, we introduce subStraTT, a fragment of StraTT that does not include the separate nondependent function types. As it's a subsystem, the main theorems of subject reduction and other lemmas in Section4 proven for the full StraTT still hold. The subsystem subStraTT is a cumulative, extrinsic type theory with types a la Russell, a single type universe, level-annotated dependent function types, an empty type, and definitions with level displacement. The most significant difference between subStraTT and other type theories with these features is the annotation of the typing judgement with a level in place of universes in a hierarchy. We use the naturals and their usual strict order and addition operation for our levels, but they should be generalizable to any displacement algebra (Favonia, 2013). The typing judgement has the form \(\boxed{\Delta;\Gamma+a.^{k}A}\) its typing rules are given in Figure1. The judgement states that term \(a\) is well typed at level \(k\) with type \(A\) under the context \(\Gamma\) and signature \(\Delta\). A signature consists of global definitions \(x\colon^{k}A\coloneqq a\), where each constant \(x\) is definitionally equal to its definition \(a\). A context consists of declarations \(x\colon^{k}A\) of variables \(x\). The type of the type universe \(\star\) is itself at any level; in the next section, we show how even with this rule, subStraTT can be proven consistent. 
Stratification occurs at dependent function types in rule DT-Pi: one can only quantify over types at strictly smaller levels, and the domain type must be well typed at the same strictly smaller level. Similarly, in rule DT-AbsTy, the body of a dependent function is well typed when its argument and its type are well typed at a strictly smaller level, and by rule DT-AppTY, a dependent function can only be applied to an argument of the strictly smaller level indicated by the function's type. Rules DT-Bottom and DT-Absurd are the uninhabited type and its eliminator, respectively. Although it should be consistent to eliminate a falsehood into any level, including lower levels, we restrict it so that the premises have the same level as the eliminator so that we can prove Regularity. In rules DT-Var and DT-Const, variables and constants at level \(j\) can be used at any larger level \(k\). This permits the following admissible cumulativity rule,1 analogous to having a cumulative universe hierarchy, allowing instead an entire derivation to be used at a higher level. Footnote 1: coq/restrict.v:DTyping_cumul Constants are also annotated with a superscript indicating how much they're displaced by. If a constant \(x\) is defined with a type \(A\), we're permitted to use \(x^{i}\) as an element of type \(A\) but with all of its levels incremented by \(i\). The metafunction \(a^{\star i}\) performs this increment in the term \(a\), defined recursively with \((\Pi x^{j}A.\,B)^{\star i}=\Pi x^{\star i\star j}A^{\star i}.\,B^{\star i}\) and \((x^{j})^{\star i}=x^{\star i\star j}\). The key formation rules for signatures \(\llbracket\vdash\Delta\rrbracket\) and contexts \(\llbracket\Delta+\Gamma\rrbracket\) are given below. \(\vdash\Delta\)\(\Delta;\varnothing\vdash A\cdot^{k}\)\(\star\)\(\Delta;\varnothing\vdash a\cdot^{k}\)\(\star\)\(x\notin\,\text{dom}\,\Delta\)\(x\notin\,\text{dom}\,\Delta\)\(\vdash\Delta,x\cdot^{k}A\)\(\vdash\Delta+\Gamma,x\cdot^{k}A\) In rule DT-Conv, we use an untyped definition equality \(\llbracket\Delta+a\equiv b\rrbracket\) that is reflexive, symmetric, transitive, and congruent, and includes \(\beta\eta\)-equivalence for functions and \(\delta\)-equivalence of constants \(x\) with their definitions. When a constant is displaced as \(x^{i}\), we must also increment the level annotations in their definitions by \(i\). Below are the rules for \(\beta\)-, \(\eta\)-, and \(\delta\)-equivalence; the remaining rules can be found in Appendix A. \(\Delta+(\lambda x.\,b)\)\(a\equiv b\{a/x\}\)\(\Delta\vdash\lambda x.\,b\)\(x\equiv b\)\(\Delta+x^{i}\equiv a^{\star i}\) Given a well-typed, locally-closed term \(\Delta;\varnothing\vdash a\cdot^{k}A\), the entire derivation itself can be displaced upwards by some level increment \(i\). This lemma differs from cumulativity, since the level annotations in the term and its type are raised as well, not just the level of the judgement. **Lemma 2.1** (Displaceability (empty context)).: 2If \(\Delta;\varnothing\vdash a\cdot^{k}A\) then \(\Delta;\varnothing\vdash a^{\star i}:^{\star k}A^{\star i}\). Footnote 2: coq/incr.v:DTyping_incr With \(x\cdot^{k}A\coloneqq a\) in the signature, \(x^{i}\) is definitionally equal to \(a^{\star i}\). Thus, this lemma justifies rule DT-Const, which gives such displaced constants \(x^{i}\) the type \(A^{\star i}\). ### Consistency proof sketch We can show that subStraTT is a consistent type theory, _i.e._ that not all types are inhabited. 
In this section, we sketch a model for subStraTT in Agda through the framework of _categories with families_ [(10)], focussing on how types are modelled. Inspired by Kovacs [(16)], we use induction-recursion to model universes at each level, relying on the well-foundedness of levels to ensure their well-definedness. The elements of the inductive definition represent codes of the types of subStraTT, while the recursive function interprets these codes as types in Agda. Consistency follows from the interpretation of the empty type in subStraTT as an empty type in Agda, and so is relative to the consistency of Agda. Because the interesting part of subStraTT when considering consistency is the presence of type-in-type, here we only include the model for universes, and omit constants and displacement, as well as the interpretation from subStraTT into the model. We have made the Agda files containing the whole model available at [https://github.com/plclub/StraTT](https://github.com/plclub/StraTT) under the agda/ directory.

Figure 1. Syntax and typing rules (subStraTT)

First, assume a type of Levels along with a well-founded order _<_ on them. The proof of well-foundedness wf has type ∀ (k : Level) → Acc k, where Acc k is the usual accessibility predicate and acc< its constructor.3

Footnote 3: agda/Acc.agda

Now, the most direct way to model universes is as follows.4

```
data U (k : Level) : Set
el : ∀ k → U k → Set

data U k where
  U' : U k
  ⊥' : U k
  Π' : ∀ j → j < k → (A : U j) → (B : el j A → U k) → U k

el k U'             = U k
el k ⊥'             = ⊥
el k (Π' j j<k A B) = (x : el j A) → el k (B x)
```

The universe U k contains the code for itself U', the code for the empty type ⊥', and the code for dependent functions Π', containing a strictly smaller level j, the code for its domain at the smaller level, and a function that produces a code for its codomain at the same level given an element of the interpretation of the code of the domain. The interpretation el k interprets each code as expected, though in contrast to usual inductive-recursive models, the interpretation of U' in U k isn't some smaller U j, but rather U k itself.

Agda will reject this inductive-recursive definition for not being strictly positive, because in the type of the B argument of Π', Agda thinks U could appear in a negative position as the result of el. However, we know that only a strictly smaller U j will be returned by virtue of well-foundedness of the levels, so this definition is morally valid. To convince Agda of this, we adapt the technique from Kovacs (Kovacs, 2000) and parametrize U by U< and el<. These parameters represent universes at strictly smaller levels and interpretation functions that can only be used on these strictly smaller universes.

```
data U' k (U<  : ∀ {j} → j < k → Set)
          (el< : ∀ {j} (j<k : j < k) → U< j<k → Set) : Set
el' : ∀ k (U<  : ∀ {j} → j < k → Set)
          (el< : ∀ {j} (j<k : j < k) → U< j<k → Set) → U' k U< el< → Set
```

With this change, the A argument of Π' has type U< j<k, while the B argument has type el< j<k A → U' k U< el<, no longer violating strict positivity. We tie the knot by defining the top-level U< and el< by induction over accessibility predicates on levels, then finally instantiate the predicates by well-foundedness in U and el.
``` U<:\(\vee\)(k)\(\rightarrow\)Acck\(\rightarrow\)\(\vee\){j}\(\rightarrow\)j<k\(\rightarrow\)Set el<:\(\vee\)(k)(p : Acc k) (j) (j<k : j < k) - U< p j<k - Set U< (acc< f) {j} j<k = U' j (U< (f j<k)) (el< (f j<k)) el< (acc< f) {j} j<k = el' j (U< (f j<k)) (el< (f j<k)) U : v k = U' k (U< (wf k)) (el< (wf k)) el : v k - U k - Set el k = el' k (U< (wf k)) (el< (wf k)) ``` To correctly model the cumulativity of subStrATT, we also need to show that universes and codes are cumulative as well. More precisely, we prove that given a code in U j, it can be lifted to a larger universe U k, and given an element in the interpretation of the smaller code, we can produce an element in the interpretation of the lifted code. ``` lift:\(\vee\){jk}\(\rightarrow\)j<k-Uj-Uk el-:\(\vee\){jk}-(j<k:j<k)-\(\vee\) u\(\rightarrow\)elju\(\rightarrow\)elk(liftj<k u) ``` The proofs of these cumulativity lemmas are slightly involved due to having to deal with accessibility proofs, which further requires assuming that Acc k is a mere proposition. These proofs and the full definitions of U' and el' can also be found in the Agda files:5 Footnote 5: agda/model.agda ## 3. StraTT and floating functions We have found that subStrATT alone is insufficiently expressive, with some examples being unexpectedly untypeable and others being simply clunky to work with. The full StraTT system therefore extends the subsystem with a separate non-dependent function type, written \(A\to B\), that does not have the same level restriction on the domain as the dependent function type. The typing rules for nondependent function types, functions, and application are given in Figure 2. The domain, codomain, and entire nondependent function type are all typed at the same level. Functions take arguments of the same level as their bodies, and are applied to arguments of the same level. This distinction between stratified dependent and unstratified nondependent functions corresponds closely to Stratified System F: type polymorphism is syntactically distinct from ordinary function types, and the former forces the codomain Figure 2. Typing rules (nondependent functions) to be a higher level while the latter doesn't. From the perspective of Stratified System F, StraTT merely generalizes stratified type polymorphism over types to include term polymorphism. We say that the domain of these nondependent function types _floats_ because unlike dependent function types, it isn't fixed to some particular level. The interaction between nondependent functions and cumulativity is where this becomes interesting. Given a function \(f\) of type \(A\to B\) at level \(j\), by cumulativity, it remains well typed with the same type at any level \(k\geq j\). The level of the domain floats up from \(j\) to match the function at \(k\), in the sense that \(f\) can be applied to an argument of type \(A\) at any greater level \(k\). This is unusual because the domain isn't contravariant with respect to the ordering on the levels as we might expect. This behaviour is why the consistency model from Section 2.1 can't straightforwardly be extended to accommodate nondependent function types. We examine the issue in detail in Section 6.4. ### Examples _The identity function._ Here's one way we could assign a type to the type- polymorphic identity function. For concision, we use a pattern syntax when defining global functions and place function arguments to the left of the definition. (The subscript is part of constant name.) 
\[\mathsf{id}_{0}:^{1}\Pi X:^{0}\star.\Pi x:^{0}X.X\] \[\mathsf{id}_{0}Xx\coloneqq x\] Stratification enforces that the codomain of the function type and the function body have a higher level than that of the domain and the argument, so the overall identity function is well typed at level 1. While \(x\) and \(X\) have level 0 in the context of the body, by subsumption, we can use \(x\) at level 1 in the body as required. Although the level of the domain of \(\mathsf{id}_{0}\) is fixed at 0, we can displace the constant by 1. \[\mathsf{id}_{1}:^{2}\Pi X:^{1}\star.\Pi x:^{1}X.X\] \[\mathsf{id}_{1}\coloneqq\mathsf{id}_{0}{}^{1}\] Since we have cumulativity, we would expect to be able to apply \(\mathsf{id}_{1}\) to itself. This is possible with a typical cumulative universe hierarchy, such as in Coq. In the below definition, since (forall (X : Type@(u0)), X -> X) can be assigned type Type@(u1), it can be used as the first argument to \(\mathsf{id}_{1}\). The second argument must then have type (forall (X : Type@(u0)), X -> X). While \(\mathsf{id}_{1}\) itself doesn't have this type, we can \(\eta\)-expand it to a function that does, since Type@(u0) is a subtype of Type@(u1) and thus X of the former type can be passed into a function that takes the latter. ``` Universeu0u1. Constraintu0<=u1. Definitionid1(X:Type@(u1))(X:X:=x. Definitionid1:forallX,X:=x. id1(forall(X:Type@(u0)),X -> X) (funXx=>id1X). ``` However, the analogous definition applying \(\mathsf{id}_{1}\) to itself doesn't type check! The problematic subterm is bolded in red below. ``` idid1:^{2}\Pi X:^{0}\star.\Pi x:^{0}X.X\] idid1:^{2}\(\mathsf{id}_{1}\coloneqq\mathsf{id}_{1}\)(\(\Pi X:^{0}\star.\Pi x:^{0}X.X\))(\(\lambda X.\lambda x.\mathsf{id}_{1}Xx\)) ``` The type \(\Pi X:^{0}\star.\Pi x:^{0}X.X\) is well typed at level 1, but the term \(\lambda X.\lambda x.\mathsf{id}_{1}Xx\) is only well typed with that type (again via subsumption) at level 2, so the latter can't be applied as the second argument to \(\mathsf{id}_{1}\), which is fixed at level 1. Here is where floating nondependent function type comes to use. Since the second argument isn't depended upon in the type, we can assign the identity function as follows. \[\mathsf{id}:^{1}\Pi X:^{0}\star.X\to X\] id\(Xx\coloneqq x\) Now the argument \(x\) and the function body are both at level 1 without requiring subsumption. The argument and body of a nondependent function having the same level is key to typing the self-application. ``` idid:^{2}\Pi X:^{0}\star.X\to X\] idid:^{1}(\Pi X:^{0}\star.X\to X)(\(\lambda X.\lambda x.\mathsf{id}^{1}Xx\)) ``` Displacing \(\mathsf{id}\) by 1, we can then pass in the type \(\Pi X:^{0}\star.X\to X\), which has level 1, followed by \(\lambda X.\lambda x.\mathsf{id}^{1}Xx\), which has level 2, yielding a final term at level 2. _Decidable types._ Floating nondependent function types are similarly crucial for type constructors. Later in Section 5 we'll consider datatypes with parameters, but for now, consider the following Church encoding [(2)] of decidable types, which additionally uses negation defined as implication into the empty type. 
``` neg:^{0}\(\star\to\star\) neg\(X\coloneqq X\to\bot\) Dec:^{1}\(\star\to\star\) Dec:^{1}\(\mathsf{Dec}\ X\coloneqq\Pi Z:^{0}\star.(X\to Z)\to(\mathsf{neg}\ X\to Z)\to Z\) yes:^{1}\(\Pi X:^{0}\star.X\to\mathsf{Dec}\ X\) yesXx:^{1}\(\Pi X:^{0}\star.\)\(X\to\mathsf{Dec}\ X\) yesXx:^{1}\(\Pi X:^{0}\star.\)\(\exists g.\)\(\exists g.\)\(\exists\)no:^{1}\(\Pi X:^{0}\star.\)neg\(X\to\mathsf{Dec}\ X\) noXnx:^{1}\(\exists Z.\)\(\exists g.\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\)\(\exists\exists\)\(\exists\)\(\exists\)\(\exists\exists\)\(\ show that deciding a given type is irrefutable.6 Footnote 6: Note this differs from irrefutability of the law of excluded middle, neg (neg (\(\Pi\!X\):\({}^{0}\star\). Dec \(X\))), which cannot be proven constructively. \[\text{irrDec}:\Pi X\text{:}^{0}\star.\text{neg}\ (\text{ neg}\ (\text{Dec}\ X))\] \[\text{irrDec}\ X\ ndec \coloneqq\ ndec (\text{no}\ X\ (\lambda x.\ ndec (\text{yes}\ X\ x)))\] Without the nondependent function type, neg and Dec would be forced produce types at higher levels. The corresponding constructors yes\({}^{\prime}\) and no\({}^{\prime}\), omitted below, would also have higher levels. 
\[\text{neg}^{\prime}:^{1}\Pi X\text{:}^{0}\star.\star\] \[\text{neg}^{\prime}\ X\coloneqq\Pi x\text{:}^{0}X.\perp\] \[\text{Dec}^{\prime}:^{3}\Pi X\text{:}^{0}\star.\star\] \[\text{Dec}^{\prime}\ X\coloneqq\Pi Z\text{:}^{0}\star.\ \Pi y\text{:}^{2}(\Pi x\text{:}^{0}X.Z).\ \Pi nz\text{:}^{2}(\Pi nx\text{:}^{1}\text{neg}^{\prime}\ X.Z).\ Z\] Every dependent quantification in the domain increases the overall level, so the smallest level that can be assigned to Dec\({}^{\prime}\ X\) is 3, since it takes a function eliminating a no\({}^{\prime}\), which is a function taking the negation of \(X\), which itself is a function from \(X\). We can continue on to write the corresponding type of irrDec\({}^{\prime}\), displacing neg\({}^{\prime}\) as needed, but the body will no longer type check against it. \[\text{irrDec}^{\prime}:^{5}\Pi X\text{:}^{0}\star.\text{neg}^{\prime 4}\ (\text{neg}^{\prime 3}\ (\text{Dec}^{\prime}\ X))\] \[\text{irrDec}^{\prime}\ X\ ndec\coloneqq ndec\ (\text{no}^{\prime}\ X\ (\lambda x.\ ndec\ (\text{yes}^{\prime}\ X\ x)))\] The level of the function _ndec_ of type neg\({}^{\prime 3}\ (\text{Dec}^{\prime}\ X)\) is now 4, which is too high to be used in the argument of no\({}^{\prime}\); if we displace yes\({}^{\prime}\) and no\({}^{\prime}\), then the level of the argument of _ndec_ will in turn be too high to fit. _Leibniz equality._ Although nondependent functions can often benefit from a floating domain, sometimes we don't want the domain to float. In some examples, the level of the domain needs to be fixed to something strictly smaller than that of the codomain even when the codomain doesn't depend on the function argument. Here, we turn to a simple application of dependent types with Leibniz equality (Lewis, 2017; Gershon et al., 2017) to demonstrate such a situation. \[\text{eq}:^{1}\Pi X\text{:}^{0}\star.\ X\to X\to\star\] \[\text{eq}\ X\ x\ y\coloneqq\Pi P\text{:}^{0}X\to\star.\ P\ x\to P\ y\] An equality eq \(A\) \(a\) \(b\) states that two terms are equal if given any predicate \(P\), a proof of \(P\ a\) yields a proof of \(P\ b\); in other words, \(a\) and \(b\) are indiscernible. The proof of reflexivity of Leibniz equality should be unsurprising. \[\text{refl}:^{1}\Pi X\text{:}^{0}\star.\Pi x\text{:}^{0}X.\ \text{eq}\ X\ x\ x\] \[\text{refl}\ X\ x\ P\ px\coloneqq px\] We might try to define a predicate stating that a given type \(X\) is a mere proposition, _i.e._ that all of its inhabitants are equal, and give it a nondependent function type. \[\text{isProp}:^{0}\star\to\star\] \[\text{isProp}\ X\coloneqq\Pi x\text{:}^{0}X.\ \Pi y\text{:}^{0}X.\ \text{eq}\ X\ x\ y\] But this doesn't type check, since the body contains an equality over elements of \(X\), which necessarily has level 1 rather than the expected level 0. We must assign isProp a stratified function type; informally, stratification propagates dependency information not only from the codomain, but also from the function body. \[\text{isProp}:^{1}\Pi X\text{:}^{0}\star.\star\] \[\text{isProp}\ X\coloneqq\Pi x\text{:}^{0}X.\ \Pi y\text{:}^{0}X.\ \text{eq}\ X\ x\ y\] Going one further, we can define a predicate _isSet_ stating that \(X\) is an h-set (Shen et al., 2017), or that its equalities are mere propositions, by using a displaced isProp, which also raises the overall level. Once again, despite the type of isSet not being an actual dependent function type, here we need to fix the level of the domain.
\[\text{isSet}\ :^{2}\Pi X\text{:}^{0}\star.\star\] \[\text{isSet}\ X\coloneqq\Pi x\text{:}^{0}X.\ \Pi y\text{:}^{0}X.\ \text{isProp}^{1}\ (\text{eq}\ X\ x\ y)\] ## 4. Syntactic metatheory We use Coq to mechanize the syntactic metatheory of the typing, context formation, and signature formation judgements of StraTT, recalling that this covers all of stratified dependent functions, floating nondependent functions, and displaced constants. The proof scripts are available at [https://github.com/plclub/StraTT](https://github.com/plclub/StraTT) under the coq/ directory. _Strengthening_. The key idea of this type system design is that stratification levels delineate judgements. A judgement at level \(k\) is only allowed to depend on judgements at the same or lower levels. One way to observe this property is through a form of strengthening result, which states that variables from higher levels can always be removed from the context and that contexts can be truncated at any level. Formally, we define the _restriction_ operation, written \(\lceil\Gamma\rceil^{k}\), that filters out all assumptions from the context with level greater than \(k\). **Lemma 4.1** (Restriction).: 7 If \(\Delta\vdash\Gamma\) then \(\Delta\vdash\lceil\Gamma\rceil^{k}\) for any \(k\), and if \(\Delta;\Gamma\vdash a\,\colon^{k}A\) then \(\Delta;\lceil\Gamma\rceil^{k}\vdash a\,\colon^{k}A\). Footnote 7: coq/ctx.v::D5iq_Dctx_Dtyping_restriction _Weakening and Narrowing_. We can extend the ordering between levels, \(j\leq k\), to an ordering between contexts, \(\Gamma_{1}\leq\Gamma_{2}\). At the same time, we also incorporate the idea of weakening into this relation. Stronger contexts have higher levels and fewer assumptions.
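To make the restriction operation concrete, the following is a minimal Python sketch of \(\lceil\Gamma\rceil^{k}\) over level-annotated assumptions; the class and function names are illustrative only and are not part of the Coq development.

```
# Toy model of level-annotated contexts and the restriction operation
# (illustrative only; the mechanized definitions live in the coq/ directory).
from dataclasses import dataclass

@dataclass(frozen=True)
class Assumption:
    name: str    # variable name
    level: int   # stratification level j in "x :^j A"
    type_: str   # the type A, kept abstract as a string here

Context = tuple[Assumption, ...]

def restrict(ctx: Context, k: int) -> Context:
    """Return the restriction of ctx at level k: drop every assumption whose
    level is greater than k, keeping the remaining assumptions in order."""
    return tuple(a for a in ctx if a.level <= k)

# Restricting a three-assumption context at level 1 removes only the
# level-2 assumption.
gamma = (Assumption("X", 0, "*"),
         Assumption("x", 1, "X"),
         Assumption("p", 2, "eq X x x"))
assert restrict(gamma, 1) == gamma[:2]
```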
**Lemma 4.9**.: Say \(i<j\) and \(i\leq k_{2}\); then for any \(k_{1}\), \(\uptau^{k_{1}}_{j}(\lceil\Gamma\rceil^{\prime})\leq\uptau^{k_{2}}_{i}(\lceil\Gamma\rceil^{\prime})\).
_Datatypes._ The implementation additionally features stratified datatypes, case expressions, and recursion, used to demonstrate the practicality of programming in StraTT. Restricting the datatypes to inductive types by checking strict positivity and termination of recursive functions to ensure consistency is possible but orthogonal to stratification and thus out of scope for this work. In this section and the next, the examples we provide will always satisfy strict positivity and structural termination. Revisiting an example from Section 3.1, we can define Dec as a datatype.
\[\textbf{data Dec }(X:\star):^{0}\star\ \textbf{where}\] \[\text{Yes}:^{0}X\rightarrow\text{Dec }X\] \[\text{No}:^{0}\text{neg }X\rightarrow\text{Dec }X\] The lack of annotation on the parameter indicates that it's a floating domain, so that \(\lambda X.\) Dec \(X\) can be assigned type \(\star\rightarrow\star\) at level 0. Datatypes and their constructors, like variables and constants, are cumulative, so the aforementioned type assignment is valid at any level above 0 as well. When destructing a datatype, the constructor arguments of each branch are typed such that the constructor would have the same level as the level of the scrutinee. Consider the following proof that decidability of a type implies its double negation elimination, which requires inspecting the decision. \[\text{decDNE}:^{1}\Pi X:^{0}\star.\ \text{Dec }X\rightarrow\text{neg }(\text{neg }X)\to X\] \[\text{decDNE }X\ dec\ nnx\coloneqq\textbf{case }dec\textbf{ of}\] \[\text{Yes }x\Rightarrow x\] \[\text{No }nx\Rightarrow\text{absurd }(nnx\ nx)\] By the level annotation on the function, we know that \(dec\) and \(nnx\) both have level 1. Then in the branches, the patterns Yes \(x\) and No \(nx\) must also be typed at level 1, so that \(x\) has type \(X\) and \(nx\) has type \(\text{neg }X\) both at level 1. _Datatype displacement._ Datatypes and their constructors, like constants, can be displaced as well, uniformly raising the levels of their types. Consider now a Box type which stores in it a term of a fixed lower level. \[\textbf{data Box }(X:^{0}\star):^{1}\star\ \textbf{where}\] \[\text{MkBox}:^{1}\Pi x:^{0}X.\ \text{Box }X\] This time, the presence of the level 0 annotation on the parameter indicates that it's fixed, which also allows the constructor argument \(x\) to be fixed at level 0. Displacing a Box by 1 then raises the fixed parameter level to 1, and its sole constructor would be a MkBox displaced by 1 which destructs to yield a term at level 1. The following map over a box displaced by 1 with a function at level 1 demonstrates this behaviour. \[\textbf{map}:^{2}\Pi X:^{1}\star.\ \Pi Y:^{1}\star.\ \Pi f:^{1}X\to Y.\ \text{Box}^{1}X\rightarrow\text{Box}^{1}Y\] \[\textbf{map }X\ Y\ f\ box\coloneqq\textbf{case }box\textbf{ of}\] \[\text{MkBox}^{1}\ x\Rightarrow\text{MkBox}^{1}\ (f\ x)\] ### Extended example: dependent pairs Along with Dec and Box, we include with our implementation a variety of common datatypes and associated functions, such as dependent pairs, logical connectives, booleans, optionals, naturals, lists, finite sets, and vectors.23 Here, we take a closer look at dependent pairs to further explore the behaviour and limitations of StraTT. Footnote 23: impl/pi/README.pi Because there are two different function types, there are also two different ways to define dependent pairs. Using a floating function type for the second component's type results in pairs whose first and second projections can be defined as usual, while using the stratified dependent function type results in pairs whose second projection can't be defined in terms of the first. We first take a look at the former. \[\textbf{data NPair }(X:^{0}\star)\ (P:X\rightarrow\star):^{1}\star\ \textbf{where}\] \[\text{MkPair}:^{1}\Pi x:^{0}X.\ P\ x\rightarrow\text{NPair }X\ P\]
\[\text{nfst}:^{2}\Pi X:^{0}\star.\ \Pi P:^{1}X\rightarrow\star.\ \text{NPair }X\ P\to X\] \[\text{nfst }X\ P\ p\coloneqq\textbf{case }p\textbf{ of MkPair }x\ y\Rightarrow x\] \[\text{nsnd}:^{2}\Pi X:^{0}\star.\ \Pi P:^{1}X\rightarrow\star.\ \Pi p:^{1}\text{NPair }X\ P.\ P\ (\text{nfst }X\ P\ p)\] \[\text{nsnd }X\ P\ p\coloneqq\textbf{case }p\textbf{ of MkPair }x\ y\Rightarrow y\] Due to stratification, the projections unfortunately need to be defined at level 2 to accommodate dependently quantifying over the \(P\) at level 1. Even so, the second projection is well typed, since \(P\) can be used at level 2 by subsumption to be applied to the first projection. As the two function types are distinct, we do need both varieties of dependent pairs. In particular, with the above pairs alone, we aren't able to define a universe of propositions \(\text{NPair}\ \star\ \text{isProp}\), as you'll recall that the predicate has type \(\Pi X:^{0}\star.\star\) at level 1. \[\textbf{data DPair }(X:^{0}\star)\ (P:\Pi x:^{0}X.\star):^{1}\star\ \textbf{where}\] \[\text{MkPair}:^{1}\Pi x:^{0}X.\ P\ x\rightarrow\text{DPair }X\ P\] \[\text{dfst}:^{2}\Pi X:^{0}\star.\ \Pi P:^{1}(\Pi x:^{0}X.\star).\ \text{DPair }X\ P\to X\] \[\text{dfst }X\ P\ p\coloneqq\textbf{case }p\textbf{ of MkPair }x\ y\Rightarrow x\] \[\text{dsnd}:^{2}\Pi X:^{0}\star.\ \Pi P:^{1}(\Pi x:^{0}X.\star).\ \Pi p:^{1}\text{DPair }X\ P.\ \textbf{case }p\textbf{ of MkPair }x\ y\Rightarrow P\ x\] \[\text{dsnd }X\ P\ p\coloneqq\textbf{case }p\textbf{ of MkPair }x\ y\Rightarrow y\] In the second variant of dependent pairs where \(P\) is a stratified dependent function type, the domain of \(P\) is fixed to level 0, so in the type of dsnd, it can't be applied to the first projection, but it can still be applied to the first component by matching on the pair. Now we're able to define \(\text{DPair}\ \star\ \text{isProp}\). In both cases, the first component has a fixed level, while the second component is floating, so using a predicate at a higher level results in a pair type at a higher level by subsumption. Consider the predicate isSet, which has type \(\Pi X:^{0}\star.\star\) at level 2: the universe of sets \(\text{DPair}\ \star\ \text{isSet}\) is also well typed at level 2. Unfortunately, the first projection dfst can no longer be used on an element of this pair, since the predicate is now at level 2, nor can its displacement \(\text{dfst}^{1}\), since that would displace the level of the first component as well. Without proper level polymorphism, which would allow keeping the first argument's level fixed while setting the second argument's level to 2, we're forced to write a whole new first projection function. In general, this limitation occurs whenever a datatype contains both dependent and nondependent parameters.
Nevertheless, in the case of the pair type, the flexibility of a nondependent second component type is still preferable to a dependent one that fixes its level, since there would need to be entirely separate datatype definitions for different combinations of first and second component levels, _i.e._ one with levels 0 and 1 (as in the case of isProp), one with levels 0 and 2 (as in the case of isSet), and so on. ## 6. On consistency In this section, we delve into the design of StraTT and the implementation as they relate to logical consistency, _i.e._ the absence of a closed inhabitant of \(\bot\). ### Level annotations The lack of level annotations on unstratified nondependent function types lends to them their flexibility with respect to cumulativity. A declared function \(f\ :^{0}A\to B\) taking and returning a term at level 0 can, by subsumption, be used as a function taking and returning a term at _any_ higher level so long as the input and output levels match. It may be tempting to remove the level annotation on dependent function types as well, so that they enjoy the same flexibility as long as the output level is strictly greater than the input level, but this recovers impredicativity, thus defeating the purpose of stratification. Supposing the level annotations are removed and that we have some well-typed function type \(\Pi x{:}\star.B\) at level 1, the following derivation is valid. \[f:^{1}\Pi x{:}\star.B\vdash f:^{2}\Pi x{:}\star.B\qquad f:^{1}\Pi x{:}\star.B\vdash\Pi x{:}\star.B:^{1}\star\] \[f:^{1}\Pi x{:}\star.B\vdash f\ (\Pi x{:}\star.B):^{2}B\{\Pi x{:}\star.B/x\}\] Without the level annotation, the application rule for dependent functions now applies merely whenever the argument is typed at a level strictly lower than the function, allowing our function type to be substituted into its own codomain. With both impredicativity and type-in-type, this system would be no different from an unstratified system with type-in-type, allowing us to derive an inconsistency. ### Constructor levels In the implementation, a datatype definition is valid if, among the other rules discussed, the level of the constructors is no higher than that of the datatype itself. If the level were allowed to be higher, while regularity wouldn't be violated, it would yet again be possible to derive an inconsistency. We demonstrate this with a variant of Burali-Forti's paradox [(4)] concerning the simultaneous well-foundedness and non-well-foundedness of a particular datatype \(\mathsf{U}\).24; 25 Footnote 24: This example was provided by Stephen Dolan in private correspondence. \[\textbf{data U}:^{0}\star\ \textbf{where}\] \[\text{MkU}:^{1}\Pi X:^{0}\star.\ (X\to\text{U})\to\text{U}\] While the constructor MkU is assigned level 1, we consider the possibility of assigning level 0 to its type \(\text{U}\). Note that this definition is strictly positive, so we aren't using any tricks relying on negative datatypes. Next, we define a well-foundedness predicate for \(\text{U}\).
\[\textbf{data WF}:^{2}\Pi u:^{1}\text{U}.\ \star\ \textbf{where}\] \[\text{MkWF}:^{2}\Pi X:^{0}\star.\ \Pi f:^{1}(X\to\text{U}).\ (\Pi x:^{0}X.\ \text{WF}\ (f\ x))\to\text{WF}\ (\text{MkU}\ X\ f)\]
The paradox then follows a Russell-style construction using the datatype above (with its level as 1).26; 27 First, a U is said to be regular if it's provably inequal to its subarguments; this represents a set which doesn't contain itself. Footnote 26: This formulation is due to Paolo Capriotti (2017), and the Agda implementation can be found at [https://github.com/agda/agda/blob/master/test/Succeed/Russell.agda](https://github.com/agda/agda/blob/master/test/Succeed/Russell.agda). Footnote 27: impl/pi/Russell.pi
Here, we have codes \(\hat{\Pi}\), \(\hat{\rightarrow}_{k}\), and \(\hat{\bot}_{k}\) in \(\mathsf{U}_{k}\). Our fundamental lemma then states the following, disregarding any issues regarding weakening of contexts and \(\eta\) mappings. **Lemma 6.1** (Fundamental lemma).: Let \(\eta\) be a mapping such that for every \(x\colon^{j}\!A\in\Gamma\), \(\eta(x)\in\mathsf{el}_{j}([\![A]\!]^{j}\eta)\). If \(\Gamma\vdash a\colon^{k}A\), then \([\![a]\!]^{k}\eta\in\mathsf{el}_{k}([\![A]\!]^{k}\eta)\). Consistency arises from the fundamental lemma combined with the appropriate definition of \(\mathsf{el}_{k}(\bot_{k})\), which in the Agda model would be Agda's empty type. The fundamental lemma is proven by induction on the typing derivation. In the rule DT-Var case, we have that \([\![x]\!]^{k}\eta=\eta(x)\in\mathsf{el}_{j}([\![A]\!]^{j})\), while we need to show that \([\![x]\!]^{k}\eta\in\mathsf{el}_{k}([\![A]\!]^{k})\). We thus need a lemma stating that cumulativity is preserved. **Lemma 6.2** (Preservation of cumulativity).: Suppose \(j\leq k\). If \(a\in\mathsf{el}_{j}([\![A]\!]^{j}\eta)\) then \(a\in\mathsf{el}_{k}([\![A]\!]^{k}\eta)\). We proceed by induction on the structure of \(A\). With some unfolding, we can see that the case of \(\Pi x\colon^{j}A.\ B\) poses no issue, since the level of its domain is fixed. In the case of \(A\to B\), we need to show that if \(f\in\mathsf{el}_{j}(\hat{\rightarrow}_{j}([\![A]\!]^{j},[\![B]\!]^{j}))\), then \(f\in\mathsf{el}_{k}(\hat{\rightarrow}_{k}([\![A]\!]^{k},[\![B]\!]^{k}))\), with the induction hypotheses stating that if \(a\in\mathsf{el}_{j}([\![A]\!]^{j})\) then \(a\in\mathsf{el}_{k}([\![A]\!]^{k})\), and similarly for \(B\). Since terms of type \(A\to B\) behave like functions, we expect that the interpretation of \(\hat{\rightarrow}\) codes behaves like a function space. However, to prove our goal, we require that they not be contravariant in the domain with respect to cumulativity, as expected from a function space, but _covariant_!
Concretely, in the Agda model, supposing we have an interpretation function \([\![A]\!]\)...
2309.10015
SYNDICOM: Improving Conversational Commonsense with Error-Injection and Natural Language Feedback
Commonsense reasoning is a critical aspect of human communication. Despite recent advances in conversational AI driven by large language models, commonsense reasoning remains a challenging task. In this work, we introduce SYNDICOM - a method for improving commonsense in dialogue response generation. SYNDICOM consists of two components. The first component is a dataset composed of commonsense dialogues created from a knowledge graph and synthesized into natural language. This dataset includes both valid and invalid responses to dialogue contexts, along with natural language feedback (NLF) for the invalid responses. The second contribution is a two-step procedure: training a model to predict natural language feedback (NLF) for invalid responses, and then training a response generation model conditioned on the predicted NLF, the invalid response, and the dialogue. SYNDICOM is scalable and does not require reinforcement learning. Empirical results on three tasks are evaluated using a broad range of metrics. SYNDICOM achieves a relative improvement of 53% over ChatGPT on ROUGE1, and human evaluators prefer SYNDICOM over ChatGPT 57% of the time. We will publicly release the code and the full dataset.
Christopher Richardson, Anirudh Sundar, Larry Heck
2023-09-18T15:08:48Z
http://arxiv.org/abs/2309.10015v1
# Syndicom: Improving Conversational Commonsense ###### Abstract Commonsense reasoning is a critical aspect of human communication. Despite recent advances in conversational AI driven by large language models, commonsense reasoning remains a challenging task. In this work, we introduce Syndicom - a method for improving commonsense in dialogue response generation. Syndicom consists of two components. The first component is a dataset composed of commonsense dialogues created from a knowledge graph and synthesized into natural language. This dataset includes both valid and invalid responses to dialogue contexts, along with natural language feedback (NLF) for the invalid responses. The second contribution is a two-step procedure: training a model to predict natural language feedback (NLF) for invalid responses, and then training a response generation model conditioned on the predicted NLF, the invalid response, and the dialogue. Syndicom is scalable and does not require reinforcement learning. Empirical results on three tasks are evaluated using a broad range of metrics. Syndicom achieves a relative improvement of 53% over ChatGPT on ROUGE-1, and human evaluators prefer Syndicom over ChatGPT 57% of the time. We will publicly release the code and the full dataset. ## 1 Introduction Conversational AI has witnessed rapid advancements in recent years, largely due to the success of large language models (LLMs) such as GPT-3 Brown et al. (2020). These advancements have been driven by the notable achievements of models like ChatGPT, which is built upon InstructGPT Ouyang et al. (2022). InstructGPT was trained on an extensive dataset of instructions for various language tasks and was further enhanced using human feedback and reinforcement learning (RL). Consequently, research in conversational AI has shifted towards leveraging large models trained on extensive datasets, supplemented by human feedback. While these models have consistently demonstrated significant improvements in reasoning and problem-solving capabilities, they still exhibit flaws and issues. In many critical applications of LLMs, the tolerance for errors in dialogue responses is exceedingly low. Addressing these problems remains challenging, primarily due to the scarcity of data and the high cost associated with human feedback. Recent research has started exploring alternative techniques beyond human feedback and RL, such as natural language feedback (NLF) and self-correction Saunders et al. (2022); Scheurer et al. (2022); Welleck et al. (2022); Bai et al. (2022). Furthermore, even with the progress made, large models often generate hallucinations, underscoring the ongoing importance of knowledge grounding. One of the most demanding aspects of knowledge grounding is commonsense knowledge. Recent advancements in incorporating commonsense into LLMs have utilized resources such as ConceptNet Speer et al. (2017) or Atomic Sap et al. (2019). This paper presents a method for improving commonsense dialogue responses by (1) replacing human feedback and RL with natural language responses and (2) leveraging recent knowledge graph techniques to ground responses in commonsense knowledge derived from Atomic. To address the scarcity of data and the high cost of human feedback, the natural language feedback is elicited in a manner that specifically targets the chosen error types determined by the designer. This approach significantly enhances the speed and quality of model learning and refinement. 
The contributions of this paper are as follows: * Development of a scalable method for synthesizing knowledge-grounded data with error injection and feedback. * Release of a dataset rich in dialogues featuring commonsense inferences, annotated with commonsense errors, and accompanied by human-written feedback, which we refer to as Syndicom. * Description of a method for training both a feedback generation model and a response improvement model using natural language feedback (NLF), and demonstration of the superiority of this information-rich approach over state-of-the-art RL methods using Syndicom. ## 2 Recent Work The field of conversational AI has experienced a surge of interest in commonsense reasoning in recent years, with a significant focus on curating datasets Richardson and Heck (2023). ConceptNet Speer et al. (2017) and ATOMIC Sap et al. (2019) have emerged as widely used resources for dataset curation, establishing a de facto standard. Several datasets serve as sources for the dialogues, including DailyDialogue Li et al. (2017), MuTual Cui et al. (2020), DREAM Sun et al. (2019), and the Ubuntu Dialogue Corpus Lowe et al. (2015). Our research lies at the intersection of two critical areas in conversational AI: the synthesis of commonsense datasets and the training of models using natural language feedback. These areas have recently garnered significant research attention due to their potential to enhance the ability of conversational agents to understand and respond to complex human interactions with greater accuracy and consistency. By leveraging the synergies between these domains, our work aims to address the existing limitations in conversational agents and pave the way for more robust and effective conversational systems. ### Commonsense Dataset Curation In recent years, various datasets have been curated specifically for commonsense reasoning. Ghosal et al. (2021) introduced CIDER, a dialogue dataset annotated with commonsense inferences, which was later expanded with the more open-ended CI-CERO Ghosal et al. (2022). Some researchers have focused on specific types of commonsense, such as temporal commonsense Qin et al. (2021) and ethical commonsense Ziems et al. (2022); Kim et al. (2022); Sun et al. (2022). Others have concentrated on grounding dialogues in knowledge graphs Figure 1: Syndicom Process. Left: dataset generation, Right: Improving commonsense in dialogue response generation. (Zhou et al., 2021; Moon et al., 2019). These approaches rely on existing dialogue datasets and often employ filtering strategies to reduce dataset size. However, this reliance on existing datasets can limit the generalizability of methods to future problems. One potential solution to the scarcity of large-scale annotated commonsense knowledge datasets is the synthesis approach. Recently, Kim et al. (2022) proposed Soda, a method for procedurally generating social dialogues based on a commonsense knowledge graph. They utilized ATOMIC (Sap et al., 2019), which consists of atomic facts in natural language form, to generate synthetic dialogues rich in commonsense inferences. Their entirely procedural and highly scalable approach generates dialogue data suitable for training models that reason over commonsense knowledge. Building upon this work, we present Syndicom, a synthesis procedure and dataset that expands on the ideas of Soda and incorporates novel features crucial for our dialogue modeling approach. More details about Syndicom are provided in Section 3. 
### Feedback and Response Improvement The use of feedback to improve language models has recently garnered increased interest, with most efforts focused on the application of reinforcement learning (Stiennon et al., 2020; Zhou et al., 2021; Bai et al., 2022, 2). Reinforcement learning with human feedback (RLHF) is particularly notable as it serves as the foundation for Instruct-GPT (Ouyang et al., 2022), which paved the way for ChatGPT. RLHF offers a flexible approach to improving LLMs; however, it faces challenges in terms of stability and efficiency inherent to RL. Moreover, the low dimensionality of the reward signal in RL (typically a scalar) severely limits the learning rate. A more information-rich approach than RL is the use of natural language feedback (NLF). NLF has been explored in several recent works. Scheurer et al. (2022) investigated the use of human-written NLF to train a dialogue response refinement model. Saunders et al. (2022) demonstrated that LLMs themselves can generate this feedback. Welleck et al. (2022) developed a method to improve sequence generation of LLMs by first generating a baseline using an imperfect base generator and then correcting the output using a second correction model. The correction model incorporates feedback as part of its input. However, the authors only demonstrated the use of feedback provided by various tools and APIs tailored to the specific tasks they explored. ## 3 The Syndicom Method Taking inspiration from recent NLF methods, this paper presents a new approach called Syndicom. This new approach combines the synthesis of commonsense dialogue data from a grounded knowledge graph (ATOMIC) with an NLF response improvement approach to improve dialogue responses. Figure 1 illustrates the two phase process. ### Syndicom Dataset The Syndicom dataset is created in a four step process: (1) Auto-generate commonsense dialogue templates, (2) Translate templates into natural language dialogues, (3) Generate invalid responses with error injection, and (4) Collect human-written explanations for the invalid responses. Examples from the Syndicom dataset are shown in Table 1. The GPT model we used for the steps in this section was text\(-\)davinci\(-\)003. Statistics for the dataset are shown in Table 2. #### 3.1.1 Generating Templates Our approach generates commonsense-focused dialogue templates from a commonsense knowledge base. For this study, we utilize ATOMIC (Hwang et al., 2021). ATOMIC consists of inferences in the form of Head \(\xrightarrow{\text{relation}}\) Tail. Each head and tail is a natural language description of a generic event, emotional state, action, description, etc. Dialogue templates are constructed by crawling through inferences rooted at each head of ATOMIC and chaining these inferences together to form multiple dialogue turns. The number of dialogue template turns is uniformly and randomly chosen between 3 and 8. #### 3.1.2 Converting to Natural Language Given the dialogue templates, the second step in creating Syndicom converts the templates to natural language conversations. We explored several methods, including crowdsourcing, but found LLMs to be the most consistent and effective. We used the GPT LLM (text-davinci-003) to generate the natural language dialogues from the templates. This was followed by in-context learning with 15 hand-written examples. The exact prompting used is shown in detail in Appendix A. 
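To make the first two steps concrete, here is a simplified Python sketch of template generation (Section 3.1.1) and naturalization (Section 3.1.2); the tiny in-memory graph, the prompt wording, and the `complete` stand-in for an LLM API call are illustrative assumptions rather than the released code.

```
# Sketch of ATOMIC-style template generation and LLM naturalization
# (illustrative assumptions only; not the released SYNDICOM code).
import random

# ATOMIC-style inferences: head event -> list of (relation, tail) pairs.
GRAPH = {
    "PersonX makes music": [
        ("xWant", "to impress people"),
        ("xAttr", "talented"),
    ],
    "to impress people": [("xNeed", "to practice their instrument")],
    "talented": [("xEffect", "gets asked to play something")],
}

def build_template(head: str, min_turns: int = 3, max_turns: int = 8) -> list[str]:
    """Chain inferences rooted at `head` into a dialogue template of 3-8 turns."""
    turns, node = [head], head
    target = random.randint(min_turns, max_turns)
    while len(turns) < target and GRAPH.get(node):
        relation, tail = random.choice(GRAPH[node])
        turns.append(f"{relation}: {tail}")
        node = tail
    return turns

def naturalize(template: list[str], complete=lambda prompt: prompt) -> str:
    """Turn a template into a natural-language dialogue via a completion
    function (stubbed here); the real prompt carries 15 in-context examples."""
    prompt = "Rewrite this chain of commonsense inferences as a casual dialogue:\n"
    prompt += "\n".join(template)
    return complete(prompt)

print(naturalize(build_template("PersonX makes music")))
```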
#### 3.1.3 Error Injection To elicit feedback on commonsense from crowd workers, the Syndicom process starts by corrupting the valid dialogue responses so that they violate commonsense reasoning. This provides crowd workers with an easy target for their feedback. To corrupt the dialogue responses, Syndicom takes advantage of the commonsense dialogue inference structure provided by Atomic. Given a commonsense knowledge base \(\mathcal{K}\), a dialogue context \(\mathcal{C}\), and response \(r\) from Syndicom, the response is implied by commonsense from the context, or \(\mathcal{C}\xrightarrow{\mathcal{K}}r\). The response \(r\) is corrupted by replacing it with the semantic opposite, \(\overline{r}\). We prompted GPT as shown in Appendix A to acquire these semantic opposites. The result is dialogues annotated with commonsense contradictions of the form \(\{\mathcal{C},r,\overline{r}\}\). #### 3.1.4 Natural Language Feedback Acquisition The dialogues with commonsense contradictions are presented to crowd workers on the Amazon's Mechanical Turk platform. Each dialogue is shown in the form of context and invalid responses, informing them that the dialogues were generated by an AI attempting to sound human. The crowd workers were given instructions to review AI-generated casual text message conversations and provide 1-2 sentences of natural language feedback on the dialogue, and the final turn in particular (the invalid response). They were asked to be as specific as possible in their feedback. The full instructions and web interface given to the crowd workers can be found in Appendix A. To ensure the quality of the feedback, we used only masters-level crowd workers from English-speaking countries. This decision aimed to maximize the clarity and accuracy of the feedback provided. Each dialogue was evaluated by two crowd workers independently, allowing for a more comprehensive understanding of the AI's mistakes and ensuring a diverse range of feedback. With the addition of the feedback \(f\), this completes the dataset synthesis part, resulting in annotated dialogues of the form \(\{\mathcal{C},r,f,\overline{r}\}\). ### Syndicom Dialogue Improvement This section details the process of using natural language feedback to correct latent errors in the baseline conversational response. To begin, the dialogue response improvement problem is defined as follows: given a dialogue context \(\mathcal{D}\) and a response \(r_{b}\), generated by some dialogue system or model, produce an improved response \(r^{*}\). \[r^{*}=\operatorname*{argmax}_{r}p(r|\mathcal{D},r_{b}) \tag{1}\] Dialogue response generation and improvement has recently received considerable attention Shah et al. (2016); Nayak et al. (2017); Liu et al. (2017, 2018); Weston et al. (2018). This problem is especially relevant today with large language models (LLMs). While LLMs have recently reached a high degree of fluency in dialogue, in some domains they can be factually inaccurate. While these cases are relatively infrequent, the tolerance for factual errors for a number of important applications is very low. In addition, these errors are difficult to predict and/or automatically detect. This leads to a problem of data sparsity that is difficult to overcome for response improvement methods that rely on training models. 
Table 1: Example dialogues from Syndicom. Each dialogue context includes both valid and invalid responses, as well as crowd worker-written explanations for the invalid response. Each row lists the ATOMIC-derived template, the synthesized dialogue with its valid and invalid responses, and the explanations written by the two crowd workers. A method to partially mitigate the sparsity of dialogue response errors is to _artificially create invalid responses_ \(\overline{r}\) via error injection (as described in Section 3.1.3). This method will be called Syndicom-Direct. Given the invalid response \(\overline{r}\) and the dialogue history \(\mathcal{D}\), a model is trained to learn the optimal response \(r^{*}\): \[r^{*}=\operatorname*{argmax}_{r}p(r|\mathcal{D},\overline{r}). \tag{2}\] A second approach called Syndicom-NLHF includes natural language human feedback (NLHF) to explain the rationale for why the response \(\overline{r}\) is invalid and then conditions on this rationale as side information: \[r^{*}=\operatorname*{argmax}_{r}p(r|\mathcal{D},\overline{r},f^{*}). \tag{3}\] As a comparison, we also implemented an approach called Syndicom-Multistep. This approach breaks the inclusion of NLHF into two steps: (1) train a feedback model on NLHF that _predicts_ the feedback critical of response \(\overline{r}\), \[\hat{f}=\operatorname*{argmax}_{f}p(f|\mathcal{D},\overline{r}), \tag{4}\] and (2) train a second model to produce an improved dialogue response from the invalid response, given the _predicted_ feedback: \[r^{*}=\operatorname*{argmax}_{r}p(r|\mathcal{D},\overline{r},\hat{f}). \tag{5}\] Both models used in this work are based on OpenAI's GPT-3.5, specifically text-davinci-003. The models were fine-tuned through the OpenAI API for GPT-based models. The hyperparameters used are listed in Table 3.
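As a concrete illustration of the two-step procedure in Equations 4 and 5, the following Python sketch strings the two models together at inference time; the prompt formats, model identifiers, and the `generate` helper are placeholders standing in for the fine-tuned GPT-3.5 models and their completion API, not the exact implementation.

```
# Hedged sketch of the SYNDICOM-Multistep inference pipeline (Eqs. 4-5).
# Model names and prompt formats are illustrative placeholders.
def generate(model: str, prompt: str) -> str:
    # Placeholder: in practice this would call the fine-tuned model's API.
    return f"[{model} output for prompt of {len(prompt)} chars]"

def predict_feedback(dialogue: list[str], invalid_response: str,
                     feedback_model: str = "feedback-model") -> str:
    """Step 1 (Eq. 4): predict natural language feedback for the invalid response."""
    prompt = "\n".join(dialogue) + f"\nResponse: {invalid_response}\nFeedback:"
    return generate(feedback_model, prompt)

def improve_response(dialogue: list[str], invalid_response: str, feedback: str,
                     improvement_model: str = "improvement-model") -> str:
    """Step 2 (Eq. 5): regenerate the response conditioned on predicted feedback."""
    prompt = ("\n".join(dialogue)
              + f"\nInvalid response: {invalid_response}"
              + f"\nFeedback: {feedback}"
              + "\nImproved response:")
    return generate(improvement_model, prompt)

def multistep(dialogue: list[str], invalid_response: str) -> str:
    feedback = predict_feedback(dialogue, invalid_response)
    return improve_response(dialogue, invalid_response, feedback)
```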
## 4 Experiments In this section, we provide a detailed description of the experiments conducted to evaluate our proposed method, Syndicom. The experiments aim to compare the direct prediction of the improved response in Equation 2 (Syndicom-Direct) with the response prediction when conditioned on natural language human feedback (NLHF) that explains why the initial response is invalid (Syndicom-NLHF). Additionally, we explore a multistep implementation of NLHF (Syndicom-Multistep). We compare the performance of our method against a ChatGPT baseline (gpt\(-\)3.5\(-\)turbo) using various text generation metrics, such as ROUGE, BLEU, SacreBLEU, BERTScore, and METEOR. ### Syndicom-Direct Our first experiment focused on the direct dialogue improvement task, where the objective is to enhance a dialogue response based solely on the context and an invalid response. No feedback, whether human or generated, was involved in this task. This optimization problem is described in Equation 2. In order to prevent the model from simply learning to undo the error injection, we introduced noise by rephrasing the invalid dialogues using an independent ChatGPT instance. This rephrasing was only performed at inference time and not during training. The rephrasing prompt is available in Appendix A. ### Syndicom-Multistep Next, we explored the Syndicom-Multistep approach. As shown in Equations 4 and 5, we first predicted feedback using the feedback model and then improved the dialogue response using the response improvement model. For the feedback predictor, we trained a GPT-based model to generate feedback given a dialogue context and an invalid response, as shown in Equation 4, using the typical causal language modeling objective. We evaluated the feedback generation model portion of Syndicom-Multistep separately and compared it to ChatGPT. The prompt used for the baseline can be found in Appendix A. Table 4 presents the results, demonstrating that our method outperformed the baseline on all metrics. Subsequently, we utilized the predicted feedback along with the dialogue context and invalid response to produce an improved dialogue response, as shown in Equation 5. Similar to the Syndicom-Direct experiments, we applied rephrasing to the invalid responses at inference time. The baseline model was explicitly instructed to first generate feedback for the invalid response and then use that feedback to guide its response improvement. Table 5 displays the results. ### Syndicom-NLHF The next experiment focused on enhancing dialogue responses using human feedback (Equation 3). Given a dialogue context, an invalid response, and human feedback, the goal was to generate an improved (valid) dialogue response. For this experiment, we utilized the raw human-written feedback from Syndicom and trained a separate GPT improvement model to generate valid responses. As before, we applied inference-time rephrasing to the invalid responses. Results are presented in Table 5 under Syndicon-NLHF. This version of our method outperformed the others on all metrics. ### Human Evaluation In addition to our automated metric evaluations, we conducted a human evaluation to assess the effectiveness of response improvements through generated feedback. This evaluation process mirrored the dialogue enhancement steps employed in the experiment described in Section 3.2. 
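Before turning to the human judgments, note that the automatic scores reported in Tables 4 and 5 (ROUGE, BLEU, SacreBLEU, METEOR, BERTScore) can be computed with standard packages; the paper does not name a specific toolkit, so the Hugging Face `evaluate` library used in this sketch is only one reasonable choice.

```python
import evaluate  # Hugging Face evaluate library (assumed toolkit)

def score_generations(predictions, references):
    """Compute the text-generation metrics reported in Tables 4 and 5."""
    rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)
    bleu = evaluate.load("bleu").compute(predictions=predictions, references=references)
    sacrebleu = evaluate.load("sacrebleu").compute(
        predictions=predictions, references=[[r] for r in references])
    meteor = evaluate.load("meteor").compute(predictions=predictions, references=references)
    bertscore = evaluate.load("bertscore").compute(
        predictions=predictions, references=references, lang="en")
    return {
        "ROUGE1": rouge["rouge1"], "ROUGE2": rouge["rouge2"], "ROUGEL": rouge["rougeL"],
        "BLEU": bleu["bleu"], "SacreBLEU": sacrebleu["score"], "METEOR": meteor["meteor"],
        "BERTScore": sum(bertscore["f1"]) / len(bertscore["f1"]),  # average F1
    }

# Example usage with one generated/reference pair:
scores = score_generations(["the improved response"], ["the reference valid response"])
```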
It is important to note that task assignments for crowdworkers require explicit and precise definitions, which often pose challenges in evaluating the commonsense aspect through human intervention. Existing human evaluations primarily focus on assessing the accuracy of information or determining the most preferred output from a set of alternatives. With the emergence of advanced language models like ChatGPT, human evaluation has become increasingly complex. This complexity arises from the remarkably high-quality and naturally articulated outputs generated by state-of-the-art models such as ChatGPT. In our study, we instructed crowdworkers that an AI system was attempting to emulate human conversation and generate dialogue responses that align with commonsense understanding and fit the given context. The workers were presented with two distinct responses: a standard ChatGPT response and our Syndicon response. Their task was to select the response that appeared more human-like and natural. The order of the responses chosen was randomized. Despite the impressive contextual relevance exhibited by ChatGPT responses, our method generated the more favored response **56.5%** of the time, compared to ChatGPT's 43.5% preference rate. For further details on the interface provided to the crowdworkers, please refer to Appendix A. ## 5 Discussion In the Discussion section, we analyze the performance of our proposed Syndicon method in conversational AI compared to the baseline model ChatGPT. The results are summarized in Tables 4 and 5, where we observe that Syndicon outperforms ChatGPT on all automatic metrics for the feedback and dialogue response improvement tasks. Specifically, Table 5 provides a comparison between our direct and multi-step approaches to the response improvement problem. Our multi-step method outperforms the direct method on various metrics such as ROUGE-1, BLEU, SacreBLEU, and BERTScore, despite the simplicity of the error typology used in the error injection during these experiments. This indicates that the multi-step approach has the potential to achieve even better performance when faced with more diverse error typologies, which we leave as an avenue for future research. One contributing factor to the superior performance of the multi-step method is the additional information encoded in the feedback model. The feedback model is trained on human feedback, providing it with more contextual information compared to the direct model, which is solely trained on valid and invalid responses. Even in cases where the direct model achieves slightly higher scores in certain metrics, the differences are negligible. Notably, BERTScore, which represents the most comprehensive model-based metric utilized in our \begin{table} \begin{tabular}{l c c c} \hline \hline **Description** & **Train** & **Val** & **Test** \\ \hline \# Samples & 16221 & 1709 & 1787 \\ \# Turns per template & 5.21\(\pm\)1.42 & 5.26\(\pm\)1.42 & 5.23\(\pm\)1.42 \\ \# Turns per dialogue & 5.18\(\pm\)1.36 & 5.21\(\pm\)1.36 & 5.18\(\pm\)1.32 \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics of our Syndicon dataset. # Dialogue turns includes the valid response (\(\pm\) indicates 1 std deviation.). The splits were inherited from Atomic, the source of the templates. 
\begin{table} \begin{tabular}{l|c} \hline \hline **Hyperparameter** & **Value** \\ \hline Temperature & 0.7 \\ Max tokens & 50 \\ Top p & 1.0 \\ Frequency penalty & 0 \\ Presence penalty & 0 \\ \hline \hline \end{tabular} \end{table} Table 3: Hyperparameters used for GPT-3.5. The same parameters were used for training and inference. evaluation, further supports the argument in favor of the multi-step approach with feedback generation. When examining the NLHF columns in Table 5, we observe that Syndicom demonstrates significant improvement over ChatGPT for the response improvement task when provided with human feedback for the invalid response. This scenario aligns with use cases where feedback can be collected for a dialogue system and subsequently used to fine-tune and enhance the dialogue model. These findings underscore the value of the Syndicom method in continuous learning scenarios, particularly those where feedback from end users is actively being collected. Overall, Syndicom exhibits strong performance compared to the state-of-the-art large language model ChatGPT, despite both models being based on the same underlying architecture (GPT-3.5). It is worth noting that ChatGPT underwent substantial reinforcement learning through human feedback during its refinement process, making the success of Syndicom even more noteworthy. ## 6 Conclusion In this paper, we introduced Syndicom, a novel method for enhancing commonsense reasoning in dialogue response generation. By integrating a commonsense dialogue synthesis approach with targeted error injection, we tackled the challenge of incorporating commonsense knowledge into conversational AI systems. Our method comprised two key components: (1) a dataset consisting of valid and invalid responses to dialogue contexts, along with natural language feedback (NLF) for the invalid responses, and (2) a two-step procedure in \begin{table} \begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**ChatGPT**} & \multicolumn{3}{c}{**Syndicom**} \\ \cline{2-7} **Metric** & **Max** & **Min** & **Avg** & **Max** & **Min** & **Avg** \\ \hline **ROUGE1** & 0.204 & 0.123 & 0.163 & 0.315 & 0.185 & 0.250 \\ **ROUGE2** & 0.034 & 0.0078 & 0.0209 & 0.112 & 0.035 & 0.073 \\ **ROUGEL** & 0.150 & 0.093 & 0.122 & 0.248 & 0.144 & 0.196 \\ **BERTSCORE** & 0.863 & 0.853 & 0.858 & 0.883 & 0.866 & 0.874 \\ **SacreBLEU** & 2.546 & 1.533 & 2.039 & 6.697 & 2.907 & 4.802 \\ **BLEU** & 0.004 & 0.0001 & 0.0021 & 0.030 & 0.0041 & 0.0171 \\ **METEOR** & 0.197 & 0.129 & 0.163 & 0.279 & 0.158 & 0.219 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance in Feedback Generation performance of our method vs. baseline. Syndicom outperforms the baseline on all metrics. Each dialogue was accompanied by two feedback responses, and scores were computed for both independently. We show the max/min/avg over the two for each score and model. 
\begin{table} \begin{tabular}{l c c|c c|c} \hline \hline & \multicolumn{2}{c|}{**ChatGPT**} & \multicolumn{3}{c}{**Syndicom**} \\ **Metric** & Direct & NLHF & Direct & Multistep & NLHF \\ \hline **ROUGE1** & 0.132 & 0.231 & 0.386 & **0.388** & _0.474_ \\ **ROUGE2** & 0.029 & 0.081 & **0.174** & 0.172 & _0.246_ \\ **ROUGEL** & 0.112 & 0.201 & **0.324** & 0.322 & _0.396_ \\ **BLEU** & 0.008 & 0.031 & 0.117 & **0.125** & _0.168_ \\ **METEOR** & 0.209 & 0.290 & **0.390** & 0.387 & _0.445_ \\ **SacreBLEU** & 0.885 & 3.107 & 11.716 & **12.547** & _16.831_ \\ **BERTScore** & 0.859 & 0.880 & 0.909 & **0.910** & _0.919_ \\ \hline \hline \end{tabular} \end{table} Table 5: Response Improvement comparing ChatGPT with our new Syndicom methods. ChatGPT-Direct is fine-tuned to produce a valid response given only the invalid response, with no intermediate steps or feedback. ChatGPT-NLHF is additionally conditioned on natural language human feedback (NLHF). Syndicom-Direct is the model that optimizes Equation 2, Syndicom-Multistep optimizes Equation 5, and Syndicom-NLHF conditions on the same NLHF as used by the ChatGPT models. Bold text illustrates the highest score between all methods that are not give NLHF, and italics indicate the highest scores among NLHF tasks. Syndicom outperforms the baseline on all metrics for both tasks. volving training a model to predict NLF for invalid responses, followed by training a response generation model conditioned on the predicted NLF, the invalid response, and the dialogue. A notable advantage of Syndicom is its scalability and independence from reinforcement learning techniques, which are commonly employed in previous methods utilizing human feedback. Through comprehensive empirical evaluations across three tasks, we demonstrated the effectiveness of our approach using a diverse range of metrics. Notably, Syndicom outperformed Chat-GPT on all metrics for both the dialogue improvement tasks, with and without human feedback. To facilitate further research and practical adoption, we plan to release the code implementation of Syndicom as well as the complete dataset utilized in this work. By making these resources openly accessible, we aim to encourage collaboration and promote advancements in commonsense reasoning for dialogue systems. ## Limitations and Future Work There are a few areas of limitation in this work. First, all the dialogues generated were based on templates synthesized from ATOMIC triplets. The domain is thus limited to the material contained in ATOMIC. Second, the procedural generation technique, while scaleable, inevitably introduces structure within the data that can be exploited by statistical models (including deep neural nets and language models). This is why the feedback generation task is particularly crucial, because the explanations are human-written and thus avoid such a limitation. Our experiments demonstrate our method of improving baseline dialogue responses that have been corrupted with error injection. This has the advantage of scale and targeting specific error modes that may be observed with LLMs, but the invalid responses in Syndicom do not themselves represent errors actually made by LLMs. A larger scale study could involve a data collection of errors and mistakes made by an LLM to demonstrate our method in improving baseline dialogue responses, but this approach would not lend itself to scale as any particular type of error made by state-of-the-art LLMs will likely be very rare. 
A more scalable approach might be to develop a more comprehensive error typology and injection scheme, which we leave to future work. Future work could also include a more substantial human evaluation to probe the generalizability of the proposed method. This work focused on commonsense errors, but other error modes observed in large language models, such as mathematical reasoning, humor, and sarcasm, could be explored in further analysis.
2309.09544
Opacities of dense gas tracers in galactic massive star-forming regions
Optical depths of dense molecular gas are commonly used in Galactic and extragalactic studies to constrain the dense gas mass of the clouds or galaxies. The optical depths are often obtained based on spatially unresolved data, especially in galaxies, which may affect the reliability of such measurements. We examine such effects in spatially resolved Galactic massive star-forming regions. Using the 10-m SMT telescope, we mapped HCN and H13CN 3-2, HCO+, and H13CO+ 3-2 towards 51 Galactic massive star-forming regions, 30 of which resulted in robust determination of spatially resolved optical depths. Conspicuous spatial variations of optical depths have been detected within each source. We first obtained opacities for each position and calculated an optical-thick line intensity-weighted average, then averaged all the spectra and derived a single opacity for each region. The two were found to agree extremely well, with a linear least square correlation coefficient of 0.997 for the whole sample.
Shu Liu, Junzhi Wang, Fei Li, Jingwen Wu, Zhi-Yu Zhang, Di Li, Ningyu Tang, Pei Zuo
2023-09-18T07:40:06Z
http://arxiv.org/abs/2309.09544v1
# Opacities of Dense Gas Tracers In Galactic Massive Star Forming Regions ###### Abstract Optical depths of dense molecular gas are commonly used in Galactic and extragalactic studies to constrain the dense gas mass of the clouds or galaxies. The optical depths are often obtained based on spatially unresolved data, especially in galaxies, which may affect the reliability of such measurements. We examine such effects in spatially resolved Galactic massive star forming regions. Using the 10-m SMT telescope, we mapped HCN and H\({}^{13}\)CN 3-2, HCO\({}^{+}\) and H\({}^{13}\)CO\({}^{+}\) 3-2 toward 51 Galactic massive star forming regions, 30 of which resulted in robust determination of spatially-resolved optical depths. Conspicuous spatial variations of optical depths have been detected within each source. We first obtained opacities for each position and calculated an optical-thick line intensity-weighted average, then averaged all the spectra and derived a single opacity for each region. The two were found to agree extremely well, with a linear least square correlation coefficient of 0.997 for the whole sample. keywords: galaxies: ISM - ISM: clouds - ISM: molecules - opacity ## 1 Introduction Dense molecular gas is a key to understand star formation in galaxies. Observations have demonstrated that stars, especially massive stars, are essentially and exclusively formed in the dense cores of giant molecular cores (GMCs, Evans, 2008). Low-\(J\) CO lines trace the total amount of molecular gas content, not sensitive to the dense cores with volume densities higher than 10\({}^{4}\) cm\({}^{-3}\). The transitions of molecules with large dipole moment, such as HCN, HCO\({}^{+}\), HNC, and CS, which have high critical density \(n_{\rm crit}>10^{4}\) cm\({}^{-3}\), are tracers of dense gas. With observations of HCN 1-0 toward 65 galaxies, Gao & Solomon (2004) found a strong linear correlation between the luminosities of HCN 1-0 and Infrared emission. This correlation was found to extend to Galactic dense cores (e.g. Wu, et al., 2005), and possibly to high-\(z\) galaxies and QSOs as well (e.g. Gao, et al., 2007). Further observations of CS \(J\)=5-4 in 24 IR-bright galaxies show such linear correlation still valid for the gas as dense as \(n_{\rm H_{2}}\sim 10^{6}\) cm\({}^{-3}\)(Wang, Zhang & Shi, 2011), which was supported by HCN 4-3 and CS 7-6 survey toward 20 nearby star-forming galaxies (Zhang, et al., 2014). Multiple line single pointing observations of HCN 1-0, HCO\({}^{+}\) 1-0, HNC 1-0, and CS 3-2 toward a sample of 70 galaxies also showed similar linear relationships (Li et al., 2021). Spatially resolved observations for local galaxies were also performed to study dense gas fraction and related star formation, such as, HCN 1-0, HCO\({}^{+}\) 1-0, CS 2-1, \({}^{13}\)CO 1-0, and C\({}^{18}\)O toward inner region of four local galaxies with ALMA and IRAM 30 m (Gallagher et al., 2018), HCN 1-0, HCO\({}^{+}\) 1-0, HNC 1-0, and CO isotopologues toward a number of nearby galaxies in the IRAM large program EMPIRE (Cormier et al., 2018; Jimenez-Donaire et al., 2017, 2019), HCN 1-0 and CO 1-0 toward M51 with the 50 meter Large millimeter telescope (LMT) (Heyer et al., 2022), and CO isotopologues investigation within the CLAWS programme (den Brok et al., 2022). However, because of the large dipole moments and high column density, such dense gas tracers are normally optically thick both in Galactic GMC cores and in galaxies. 
It is of any case hard to convert from luminosity of these tracers to dense gas mass, which is similar to the issue of the standard conversion factor of CO with several times or even more than 10 times uncertainty in different galaxies (e.g., Narayanan, et al. 2012; Papadopoulos, et al. 2012). Multiple transitions of molecular lines are powerful to derive the physical properties (volume density, temperature, etc.) of dense gas in galaxies, which had been made for nearby starbursts and ULIRGs (e.g., M 82, NGC 253 in Nguyen, et al. 1992, ARP 220, NGC 6240 in Greve, et al. 2009), with large uncertainties. Optically thin dense gas tracers, such as isotopic lines, are necessary for better understanding dense gas properties and chemical evolution in the Galaxy (Langer & Penzias 1990; Wilson & Matteucci 1992; Wilson & Rood 1994; Henkel et al. 1994; Milam et al. 2005). In recent years, increasing studies of CO isotopologues have been carried out in other galaxies (Martin et al. 2010; Henkel et al. 2014; Meier et al. 2015; Jimenez-Donaire et al. 2017a,b, 2019; Cormier et al. 2018; den Brok et al. 2021). Assuming a reasonable isotopic abundance ratio, one can determine the optical depths of dense gas tracers, such as HCN and CS lines, with the intensity ratio of dense gas tracers and their isotopic lines (Wang et al. 2014, 2016; Li, et al. 2020). However, despite the uncertainty of isotopic abundance in different galaxies, there are still other problems for determining the optical depths of dense gas tracers. One important effect is that we can only assume one value of optical depth in one galaxy if we do not have spatial resolution, while optical depths should vary at different regions, which had been seen in M 82 along major axis (Li et al. 2022). Thus, the derived optical depth of one dense gas tracer in each galaxy with one pair of dense gas tracer and its isotopic line, is only an averaged value within the observed region. Since it is a non-linear relation between optical depth and line ratio, the typical optical depth in one region with internal spatial distribution of different optical depth, may not be well constrained by line ratio of spatially integrated fluxes for both dense gas tracer and its isotopic line. The best way to study this effect is mapping a sample of massive star forming regions in the Milky Way with both dense gas tracers and their isotopic lines. The detailed description for deriving optical depths with two ways will be presented in Section 3. In this paper, the observations and data reduction are described in Section 2, while the methods of calculating optical depths in each sources are presented in Section 3. Then, the main results and discussions are given in Section4 and Section 5, and a brief summary is presented in Section 6. ## 2 Observations and data reduction The sample presented in this study is a subset of massive star forming regions with parallax distances from Reid, et al. (2014) with strong (\(>\)0.5 K) H\({}^{13}\)CN 2-1 emissions, which was detected by the Institut de Radioastronomie Millimetrique (IRAM) 30-m telescope in June and October 2016 (Wang et al. in preparation), for the guarantee of strong H\({}^{13}\)CN 3-2 emission. The observations were carried out using the Arizona Radio Observatory (ARO) 10-m Submillimeter Telescope (SMT) on Mt. Graham, Arizona, during several observing runs in 2017 March to May, 2017 December, 2018 January, 2018 March, 2018 May and 2018 October to November. 
Four molecular lines were observed with 1.3 mm ALMA band 6 receiver. HCN 3-2 with rest frequency of 265.886431 GHz and HCO\({}^{+}\) 3-2 with rest frequency of 267.55763 GHz were tuned in the upper sideband (USB) simultaneously. The isotopologues, H\({}^{13}\)CN 3-2 with rest frequency of 259.011787 GHz and H\({}^{13}\)CO\({}^{+}\) 3-2 with rest frequency of 260.255342 GHz were observed simultaneously also in the upper sideband (USB). The Forbes Filter Banks (FFB) backend was setup with 512 -MHz bandwidth for each line and 1 MHz channel spacing, which corresponds to \(\sim\) 1.15 km s\({}^{-1}\) at 260 GHz, with the spatial resolution of \(\sim\)27.8\({}^{\prime\prime}\). For each source, the on-the-fly (OTF) mode was used to cover 2\({}^{\prime}\times 2^{\prime}\) regions for HCN and HCO\({}^{+}\) 3-2, and 1.5\({}^{\prime}\times 1.5^{\prime}\) for H\({}^{13}\)CN and H\({}^{13}\)CO\({}^{+}\) 3-2, respectively (Table 1). The telescope time on each source for the tuning of HCN and HCO\({}^{+}\) 3-2 is around 40 minutes for most of the samples except for 9 sources as 80 minutes. For the isotopologues, the time on the strong ones are about 1.5 hours or 3 hours while on the other moderately weak cores are about 4.5 hours, even up to 6 hours for two cores due to the weather conditions. The off points were chosen as azimuth off 30\({}^{\prime}\) away from the mapping centers. The observation information as on-source time, system temperatures (\(T_{\rm sys}\)) and rms of HCN and H\({}^{13}\)CN 3-2 for each source are shown in Table 2. The information of HCO\({}^{+}\) 3-2 and H\({}^{13}\)CO\({}^{+}\) 3-2 are not exhibited, since what are comparable to those of HCN and H\({}^{13}\)CN 3-2, respectively. The OTF data of HCN and HCO\({}^{+}\) 3-2 for each source were re-gridded to the final maps with the step of 15\({}^{\prime\prime}\). The map of H\({}^{13}\)CN and H\({}^{13}\)CO\({}^{+}\) 3-2 are gridded to match the center and each position of the spatially resolved HCN and HCO\({}^{+}\) 3-2 data, respectively. The antenna temperature \(T_{mb}\) using \(T_{mb}=T_{A}^{*}/\eta_{b}\). The main beam efficiency \(\eta_{b}\) is 0.77. Typical noise levels were 0.09 K for HCN and HCO\({}^{+}\) 3-2, as well as 0.03 K for H\({}^{13}\)CO\({}^{+}\) 3-2 at the frequency spacing of 1 MHz in the unit of \(T_{A}^{*}\), respectively. A total of 51 sources were mapped. However, only sources with enough spatially resolved data points to obtained reliable H\({}^{13}\)CN and H\({}^{13}\)CO\({}^{+}\) 3-2 signals, are selected for final analysis. The selected 30 sources with basic parameters are listed in Table 1. All of the data were reduced with the CLASS software package in GILDAS1. For each line of each source, we first took a quick look at main regions with emission and obtained an averaged spectrum within this region to determine the line velocity range. Such velocity ranges were used as "mask" with "set window" in CLASS when baseline subtractions were done with first order polynomial. Then we used "print area" in CLASS to obtain the velocity integrated fluxes for each pixel, with the same value used for baseline subtraction, which was fixed the spectra within the map for each line of each source. 
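As a schematic illustration of these reduction products (the actual reduction was done entirely in CLASS/GILDAS), the main-beam scaling and the velocity-integrated intensity of a single spectrum can be written as follows; the uncertainty expression anticipates the one given in the next section.

```python
import numpy as np

ETA_B = 0.77   # SMT main-beam efficiency adopted here
DV = 1.15      # km/s per channel (1 MHz spacing at ~260 GHz)

def integrated_intensity(t_a_star, sigma_rms, window):
    """Velocity-integrated main-beam intensity over a line window.

    t_a_star  : antenna temperatures T_A* (K) per channel
    sigma_rms : baseline rms (K), assumed on the same temperature scale
    window    : boolean mask selecting channels inside the line window
    """
    t_mb = np.asarray(t_a_star) / ETA_B          # T_mb = T_A* / eta_b
    flux = np.sum(t_mb[window]) * DV             # K km/s
    delta_v = window.sum() * DV                  # total window width (km/s)
    err = (sigma_rms / ETA_B) * np.sqrt(DV * delta_v)
    return flux, err
```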
Footnote 1: [http://www.iram.fr/IRAMFR/GILDAS](http://www.iram.fr/IRAMFR/GILDAS) ## 3 The method ### General description For the pair lines of HCN and H\({}^{13}\)CN, as well as HCO\({}^{+}\) and H\({}^{13}\)CO\({}^{+}\), with the assumption of same filling factors and excitation temperatures, the optical depth of each isotopologue can be estimated using \[\frac{\int I^{13}}{\int I^{12}}=\frac{1-e^{-\tau^{13}}}{1-e^{-\tau^{12}}}. \tag{1}\] where \(I^{13}\) and \(I^{12}\) are measured velocity integrated fluxes, while r\({}^{13}\) and \(\tau^{12}\) are the optical depths for each line. With the spatially resolved maps of HCN and HCO\({}^{+}\) 3-2, and their isotopologues H\({}^{13}\)CN and H\({}^{13}\)CO\({}^{+}\) 3-2, we use two methods to derive optical depths of dense gas tracers and their isotopologues to study whether they are consistent with each other. For both methods, the same assumption of abundance ratio as 40 is adopted for HCN and H\({}^{13}\)CN, as well as for HCO\({}^{+}\) and H\({}^{13}\)CO\({}^{+}\) lines. Even \({}^{12}\)C/\({}^{13}\)C varies in different regions of the Milky Way (Wilson & Rood 1994), taking the reasonable value of 40 will not affect our main results and conclusions, since we are comparing the relative optical depths with different methods instead of the absolute ones. The first method is deriving the spatially resolved optical depths for each position with H\({}^{13}\)CN 3-2 (or H\({}^{13}\)CO\({}^{+}\) 3-2) above 3\(\sigma\) level, and averaging the optical depths weighted by velocity integrated HCN 3-2 (or HCO\({}^{+}\) 3-2) fluxes. The second way is to average the data of HCN 3-2 (or HCO\({}^{+}\) 3-2) and H\({}^{13}\)CN 3-2 (or H\({}^{13}\)CO\({}^{+}\) 3-2) in the positions used in the first method, to obtain a pair of HCN/H\({}^{13}\)CN or HCO\({}^{+}\)/H\({}^{13}\)CO\({}^{+}\) 3-2 line ratio, which will be used to derive optical depth for each source. The detailed description of both methods is presented as following. **Method 1: Average of spatially resolved \(\tau\)** After data reduction, we get the grid-map and derive the velocity integrated fluxes of HCN and HCO\({}^{+}\) 3-2, as well as H\({}^{13}\)CN and H\({}^{13}\)CO\({}^{+}\) 3-2 of one source respectively, from which we can get the line ratio of H\({}^{13}\)CN/HCN 3-2 and H\({}^{13}\)CO\({}^{+}\)/HCO\({}^{+}\) respectively for each position. The uncertainties of the velocity integrated intensities for one pair of lines are estimated with \(\sigma_{rms}\times\sqrt{\delta v\Delta V}\), where the \(\sigma_{rms}\) is from the baseline fitting for the center point of the spectra for each map. Since the system temperature and weather conditions nearly do not vary during each OTF mapping, and the effective on source time at each position within the final grid-map is almost the same, the expected noise level at each position should be almost the same. We also checked the distribution of noise level from the baseline fitting for several sources, which provided promising results as expected. We select the positions with velocity integrated intensity of H\({}^{13}\)CN 3-2 and H\({}^{13}\)CO\({}^{+}\) 3-2 greater than 3\(\sigma\), respectively, from the grid-map as reliable signal, count the positions and mark them as "spatially resolved", which contain 80% to 95% of the total isotopologue flux for the observed core regions. Then we can obtain the spatially resolved \(\tau\)(H\({}^{13}\)CN) and \(\tau\)(H\({}^{13}\)CO\({}^{+}\)) for the certain source. 
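Since Equation 1 is transcendental in the optical depth, the conversion from a measured line ratio to \(\tau\) has to be done numerically once the abundance ratio (here \(\tau^{12}=40\,\tau^{13}\)) is fixed. A minimal sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import brentq

ABUNDANCE_RATIO = 40.0  # adopted abundance ratio, so tau12 = 40 * tau13

def tau13_from_ratio(line_ratio):
    """Solve Eq. (1), R = (1 - exp(-tau13)) / (1 - exp(-40 tau13)), for tau13."""
    def f(tau13):
        return (1.0 - np.exp(-tau13)) / (1.0 - np.exp(-ABUNDANCE_RATIO * tau13)) - line_ratio
    # A root exists for ratios between 1/40 (optically thin) and 1 (both lines saturated).
    return brentq(f, 1e-8, 50.0)

# Example: the averaged H13CN/HCN 3-2 ratio of 0.1102 for G029.95-00.01
tau13 = tau13_from_ratio(0.1102)    # ~0.116
tau12 = ABUNDANCE_RATIO * tau13     # ~4.6, consistent with Table 3
```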
Finally we derive the averaged \(\tau\)(H\({}^{13}\)CN) and \(\tau\)(H\({}^{13}\)CO\({}^{+}\)) weighted by velocity integrated intensities of HCN and HCO\({}^{+}\) 3-2 respectively, and take them as the typical \(\tau\)(H\({}^{13}\)CN) and \(\tau\)(H\({}^{13}\)CO\({}^{+}\)) of this clump. **Method 2: \(\tau\) from the averaged line ratios** By adopting the same selected positions for each source as described in Method 1, we obtain the spatially averaged spectrum for each pair of lines. Then, the obtained H\({}^{13}\)CN/HCN 3-2 and H\({}^{13}\)CO\({}^{+}\)/HCO\({}^{+}\) 3-2 line ratios for the region can be used to derive the optical depths of H\({}^{13}\)CN 3-2 and H\({}^{13}\)CO\({}^{+}\) 3-2 respectively, which is similar to that in galaxies without spatial information. The derived \(\tau\)(H\({}^{13}\)CN) and \(\tau\)(H\({}^{13}\)CO\({}^{+}\)), as well as \(\tau\)(HCN) and \(\tau\)(HCO\({}^{+}\)) by Method 2 are marked with "Averaged (center)". To each source, in order to guarantee that the exact same data were used in Method 1 and 2, same velocity range and masking, which were derived by "set mode x" and "set window" in CLASS respectively, were adopted for all spectra of each source, including \begin{table} \begin{tabular}{l c c c c c} \hline \hline source name & Alias & RA(J2000) & DEC(J2000) & D & D\({}_{\rm GC}\) & \(v_{\rm LSR}\) \\ & & & kpc & kpc & km s\({}^{-1}\) \\ \hline G005.88–00.39 & & 18:00:30.31 & -24:04:04.50 & 3.0 & 5.3 & 9.3 \\ G009.62–00.19 & & 18:06:14.66 & -20:31:31.70 & 5.2 & 3.3 & 4.8 \\ G010.47+00.02 & & 18:08:38.23 & -19:51:50.30 & 8.5 & 1.6 & 65.7 \\ G010.62–00.38 & W 31 & 18:10:28.55 & -19:55:48.60 & 5.0 & 3.6 & 4.3 \\ G011.49–01.48 & & 18:16:22.13 & -19:41:27.20 & 1.2 & 7.1 & 10.5 \\ G011.91–00.61 & & 18:13:58.12 & -18:54:20.30 & 3.4 & 5.1 & 36.1 \\ G012.80–00.20 & & 18:14:14.23 & -17:55:40.50 & 2.9 & 5.5 & 36.3 \\ G014.33-00.64 & & 18:18:54.67 & -16:47:50.30 & 1.1 & 7.2 & 22.6 \\ G015.03–00.67 & M 17 & 18:20:24.81 & -16:11:35.30 & 2.0 & 6.4 & 19.7 \\ G016.58–00.05 & & 18:21:09.08 & -14:31:48.80 & 3.6 & 5.0 & 60.0 \\ G023.00–00.41 & & 18:34:00.20 & -09:00:37.00 & 4.6 & 4.5 & 78.4 \\ G027.36-00.16 & & 18:41:51.06 & -05:01:43.40 & 8.0 & 3.9 & 93.1 \\ G029.95–00.01 & W 43S & 18:46:03.74 & -02:39:22.30 & 5.3 & 4.6 & 98.2 \\ G035.02–00.34 & & 18:54:00.67 & -40:20:19.20 & 2.3 & 6.5 & 52.9 \\ G035.19–00.74 & & 18:58:13.05 & +01:04:35.70 & 2.2 & 6.6 & 34.2 \\ G037.44–01.51 & & 18:54:14.35 & +04:41:47.10 & 1.9 & 6.9 & 44.3 \\ G043.16+00.01 & W 49N & 19:10:13.41 & +09:06:12.80 & 11.1 & 7.6 & 5.4 \\ G043.79–01.01 & GH 43.8–0.8–1.0 & 19:11:51.39 & +09:35:50.30 & 6.0 & 5.7 & 44.6 \\ G049.48–00.36 & W 51 & 18:182 & 19:23:39.82 & +14:31:05.00 & 5.1 & 6.3 & 60.7 \\ G049.48–0.38 & W 51 & 19:24:33.87 & +14:30:29.50 & 5.4 & 6.3 & 55.8 \\ G069.54–00.97 & ON 1 & 20:10:09.07 & +31:31:36.00 & 2.5 & 7.8 & 12.2 \\ G075.76+00.33 & & 20:21:41.09 & +37:25:29.30 & 3.5 & 8.2 & -1.5 \\ G078.12-03.63 & IRAS 20126+4104 & 20:14:26.47 & +41:32:37.10 & 1.6 & 8.1 & -3.3 \\ G081.87+00.78 & W 75N & 20:28:36.43 & +42:37:34.80 & 1.3 & 8.2 & 9.8 \\ G109.87+02.11 & Cep A & 22:56:18.10 & +62:01:49.50 & 0.7 & 8.6 & -9.8 \\ G121.29+00.65 & L 1287 & 00:36:47.35 & +63:29:02.20 & 0.9 & 8.8 & -17.1 \\ G123.06–06.30 & NGC 281 & 00:52:24.70 & +56:33:50.50 & 2.8 & 10.1 & -30.4 \\ G133.94–01.06 & W 30H & 02:27:03.82 & +61:52:25.20 & 2.0 & 9.8 & -46.8 \\ G188.94+00.88 & 5 252 & 06:08:33.35 & +21:38.28.70 & 2.1 & 10.4 & 3.8 \\ G232.62+00.99 & & & & & & \\ \hline \end{tabular} those in different positions for 
Method 1 and the averaged one for Method 2. Since the relation between optical depth ratio and line ratio is non-linear (see Equation 1), while the spatially integrated fluxes for the line emissions are simply linear collection, optical depths calculated with the two methods using the same data do not guarantee to provide the same results. ### Using one source as an example The detailed calculation of the optical depth is described with one source -- G029.95-00.01 and one pair of lines -- H\({}^{13}\)CN and HCN 3-2 as an example. The velocity integrated intensity map of H\({}^{13}\)CN 3-2 as red contour overlaid on that of HCN 3-2 as grayscale and black contour, is presented in the _top left_ panel of Figure 1. Note that there is an offset of peak position between H\({}^{13}\)CN 3-2 and HCN 3-2. 22 positions with H\({}^{13}\)CN 3-2 emission above 3\(\sigma\) level were selected from this source, which were marked in light pink on the grid-maps for HCN and H\({}^{13}\)CN 3-2 as demonstrated in the _top right_ panel of Figure 1. All the selected spectra of H\({}^{13}\)CN 3-2 were also checked by eye to confirm data quality. The spatial distribution of H\({}^{13}\)CN 3-2 optical depth is presented in the _bottom left_ panel of Figure 1, with contour levels from 0.05 to 0.25. Based on examination of the spectra, the "Spatially resolved" and "Averaged" \(\tau\) of H\({}^{13}\)CN 3-2 obtained respectively by two methods were listed in Table 3. \(\tau\)(H\({}^{13}\)CN) derived from the spatially resolved information is 0.1158\(\pm\)0.0002, while the "Averaged" \(\tau\)(H\({}^{13}\)CN) is 0.1157\(\pm\)0.0002. For the 22 positions located at "center" of the core, the intensity flux of H\({}^{13}\)CN is 34.99\(\pm\)0.86 K km s\({}^{-1}\) and taking 86.4% of the total H\({}^{13}\)CN flux, while the "center" HCN only contains 64.8% flux within the 2\({}^{\prime}\times 2^{\prime}\) region. The uncertainty of "Averaged" \(\tau\)(H\({}^{13}\)CN) is calculated from the intensity flux error propagation, while the same value is adopted as uncertainty of the "Spatially resolved" \(\tau\)(H\({}^{13}\)CN). For the region with detectable emission above 3\(\sigma\) level of H\({}^{13}\)CN 3-2, \(\tau\) derived from the two methods for G029.95-00.01 are generally agreed well with each other. Meanwhile, we also obtained the averaged \(\tau\)(H\({}^{13}\)CN) for the positions with H\({}^{13}\)CN 3-2 data included in 1.5 \({}^{\prime}\times 1.5^{\prime}\) mapping area, but the intensities of which are less than 3\(\sigma\) level and not used for calculation of the spatially resolved \(\tau\)(H\({}^{13}\)CN). Since the re-grid step is 15\({}^{\prime\prime}\), there are 7 \(\times\) 7 = 49 positions in total within1.5 \({}^{\prime}\times 1.5^{\prime}\) isotopologue mapping area. Besides the selected 22 positions out of 49 worked for \(\tau\) examination by two methods, there are 27 positions left, which are marked in light blue with the same area on grid-maps of HCN 3-2 and H\({}^{13}\)CN 3-2 in Figure 1 and labeled as "middle" part. Even though no significant signal of individual H\({}^{13}\)CN 3-2 emission in these 27 positions, the averaged or stacked spectrum of H\({}^{13}\)CN 3-2 do have about 5\(\sigma\) detection. We obtained \(\tau\) by using Method 2 for the 27 positions without significant signal of H\({}^{13}\)CN 3-2 emission. 
The result is shown in Table 3 and marked as "Averaged" \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{HCN} & \multicolumn{2}{c}{H\({}^{13}\)CN} \\ \cline{2-7} Source name & On-source time & \((T_{\rm sys})^{a}\) & rms\({}^{b}\) & On-source time & \((T_{\rm sys})^{a}\) & rms\({}^{b}\) \\ & min & K & K & hr & K & K \\ \hline G005 88-00.39 & 40 & 297 & 0.096 & 1.5 & 368 & 0.066 \\ G009.62+00.19 & 40 & 312 & 0.069 & 1.5 & 339 & 0.061 \\ G010.47+00.02 & 40 & 336 & 0.069 & 3.0 & 370 & 0.047 \\ G010.62-00.38 & 40 & 540 & 0.117 & 4.5 & 308 & 0.025 \\ G011.49-01.48 & 40 & 340 & 0.076 & 3.0 & 239 & 0.023 \\ G011.91-00.61 & 40 & 453 & 0.103 & 3.0 & 313 & 0.033 \\ G012.80-00.20 & 40 & 305 & 0.057 & 1.5 & 277 & 0.049 \\ G014.33-00.64 & 80 & 289 & 0.058 & 1.5 & 305 & 0.047 \\ G015.03-00.67 & 80 & 355 & 0.052 & 1.5 & 221 & 0.026 \\ G016.58-00.05 & 80 & 309 & 0.052 & 6.0 & 272 & 0.023 \\ G023.00-00.41 & 80 & 358 & 0.057 & 3.0 & 258 & 0.031 \\ G027.36-00.16 & 40 & 351 & 0.088 & 4.5 & 266 & 0.020 \\ G029.95-00.01 & 40 & 405 & 0.091 & 1.5 & 223 & 0.026 \\ G035.02-00.34 & 40 & 348 & 0.077 & 3.0 & 238 & 0.027 \\ G035.19-00.74 & 40 & 278 & 0.062 & 3.0 & 306 & 0.036 \\ G037.43+01.51 & 80 & 413 & 0.060 & 1.5 & 236 & 0.030 \\ G043.16+00.01 & 40 & 250 & 0.059 & 1.5 & 238 & 0.031 \\ G043.79-00.12 & 40 & 283 & 0.070 & 4.5 & 243 & 0.030 \\ G049.48-00.36 & 40 & 262 & 0.077 & 3.0 & 240 & 0.035 \\ G049.48-00.38 & 40 & 303 & 0.073 & 3.0 & 270 & 0.040 \\ G069.54-00.97 & 80 & 293 & 0.054 & 3.0 & 201 & 0.031 \\ G075.76+00.33 & 40 & 293 & 0.074 & 3.0 & 234 & 0.048 \\ G078.12+03.63 & 40 & 330 & 0.070 & 4.5 & 200 & 0.022 \\ G081.87+00.78 & 40 & 313 & 0.076 & 3.0 & 206 & 0.033 \\ G109.87+02.11 & 40 & 324 & 0.077 & 3.0 & 213 & 0.025 \\ G121.29+00.65 & 40 & 307 & 0.091 & 4.5 & 209 & 0.023 \\ G123.06+06.30 & 80 & 293 & 0.039 & 4.5 & 238 & 0.021 \\ G133.94+01.06 & 80 & 346 & 0.072 & 6.0 & 204 & 0.024 \\ G188.94+00.88 & 40 & 367 & 0.087 & 3.0 & 203 & 0.027 \\ G232.62+00.99 & 80 & 307 & 0.064 & 4.5 & 238 & 0.031 \\ \hline \end{tabular} **Notes:**\(a\). Averaged system temperature. \(b\). rms for all lines were obtained under the frequency resolution of 1 MHz. \end{table} Table 2: Observation details. (middle). The \(\tau\)(H\({}^{13}\)CN) of the 27 positions is 0.0512\(\pm\)0.0005, which is about half of \(\tau\)(H\({}^{13}\)CN) derived from the area of "center part". The intensity flux of H\({}^{13}\)CN and HCN 3-2 for the 27 "middle" positions are 5.5\(\pm\)1.1 K km s\({}^{-1}\) and 95.6\(\pm\)3.8 K km s\({}^{-1}\) respectively, taking about 13.6% and 19.5% of the total flux in each case. Since the HCN 3-2 mapping size is \(2^{\prime}\times 2^{\prime}\), there are 9\(\times\)9 = 81 positions with re-grid step of 15\({}^{\prime\prime}\). In fact, there are 32 positions have HCN 3-2 data and without H\({}^{13}\)CN 3-2 data, which are marked as "outside" positions. It is impossible to calculate even for the averaged optical depths of H\({}^{13}\)CN and HCN 3-2 there. However, since H\({}^{13}\)CN 3-2 emission is quickly decreasing from center to the outside and even the "middle" 27 positions only contain \(\sim\)13.6% flux within the 1.5 \({}^{\prime}\times 1.5^{\prime}\) region, we can neglect the contribution of H\({}^{13}\)CN 3-2 emission in "outside" positions when counting total H\({}^{13}\)CN 3-2 flux in this molecular core. But the HCN 3-2 emission can still be detected in such region. 
The stacked intensity flux of the "outside" for HCN emission is 67.2\(\pm\)3.9 K km s\({}^{-1}\) and contains 13.7% of the total HCN 3-2 flux. The contribution of the outside HCN 3-2 emission is listed in Table 3 and marked as "Averaged (outside")". In addition, we also calculated the averaged optical depths of H\({}^{13}\)CN and HCN 3-2 of this molecular core, which derived from the flux ratio of total H\({}^{13}\)CN emission for \(1.5^{\prime}\times 1.5^{\prime}\) and HCN emission for \(2^{\prime}\times 2^{\prime}\). The total flux of HCN, obtained by averaged spectrum in the \(2^{\prime}\times 2^{\prime}\) regions, is 490.1\(\pm\)6.1 K km s\({}^{-1}\), approximately equals to the sum of flux from "center", "middle" and "outside" parts. Also, the total flux of H\({}^{13}\)CN as 40.5\(\pm\)1.6 K km s\({}^{-1}\) is similar to the sum Figure 1: The data reduction results of G029.95-00.01 as an example. _Top left_: The velocity integrated intensity maps of HCN and H\({}^{13}\)CN 3-2, with the data from OTF observation in May 2018 with 10-m SMT. The mapping size of HCN 3-2 is \(2^{\prime}\times 2^{\prime}\), while it is \(1.5^{\prime}\times 1.5^{\prime}\) for H\({}^{13}\)CN 3-2, with a beam size of \(\sim\) 27.8\({}^{\prime\prime}\). The grey scale and the black contour with levels starting from 6 K km s\({}^{-1}\) in step of 5 K km s\({}^{-1}\) show the observed HCN 3-2. The red contour with levels starting from 0.8 K km s\({}^{-1}\) in step of 0.8 K km s\({}^{-1}\) represents H\({}^{13}\)CN 3-2. Top _right_: Grid-map of HCN 3-2 (black) with a size of \(2^{\prime}\times 2^{\prime}\), overlapped by gridded spectra of H\({}^{13}\)CN 3-2 (red) with a map size of \(1.5^{\prime}\times 1.5^{\prime}\). The flux intensities of H\({}^{13}\)CN 3-2 are multiplied by 3.2 \(\log\) positions marked in light pink located near the center part are selected with 3 \(\sigma\) level of H\({}^{13}\)CN 3-2, and spectra within where are adopted for calculating the spatially resolved \(\tau\). 27 positions marked in light blue are selected on a criteria such as H\({}^{13}\)CN 3-2 spectra with emission signals but not up to 3 \(\sigma\), spectra with where are used for calculating \(\tau\) of the ”middle” part of a source. The 32 most out part of spectra are data points of HCN 3-2 for “Averaged (outside)” \(\tau\) estimation. _Bottom left_: The spatially resolved \(\tau\)(H\({}^{13}\)CN) of G029.95-00.01 is demonstrated by black contour with levels starting from 0.05 in step of 0.05. _Bottom right_: The spectra of HCN (black) and H\({}^{13}\)CN 3-2 (red) at center position of G029.95-00.01. of fluxes from "center" and "middle". By taking the H\({}^{13}\)CN/HCN 3-2 ratio as 0.0826\(\pm\)0.0023 and using Method 2, we obtained the \(\tau\) (H\({}^{13}\)CN) as 0.0832\(\pm\)0.0003, which was moderately lower than that from the "center" part. The results are listed in Table 3 and marked as "Averaged (whole)". For a short summary of the optical depths obtained in different regions, including "center", "middle" and "outside" parts, the derived values are decreasing from center to the outside. The results of "middle" and "outside" parts are just for showing the optical depth distribution in individual sources itself, which will not be used for the discussions in next section. For another pair of lines -- HCO+ and H\({}^{13}\)CO\({}^{+}\), the same procedures are adopted for the optical depths calculation in different conditions, respectively. The results are listed in Table 4. 
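The two averaging schemes differ only in where the non-linearity of Equation 1 enters. A schematic comparison for one source, reusing `tau13_from_ratio` from the sketch above and assuming per-position velocity-integrated intensities are already in hand:

```python
import numpy as np

def compare_methods(I12, I13, mask):
    """I12, I13 : per-position integrated intensities of the main and 13C lines (K km/s)
    mask       : positions where the 13C line is detected above the 3 sigma level"""
    # Method 1: tau at each selected position, then a main-line-intensity-weighted average.
    ratios = I13[mask] / I12[mask]
    tau13_map = np.array([tau13_from_ratio(r) for r in ratios])
    tau13_method1 = np.sum(tau13_map * I12[mask]) / np.sum(I12[mask])

    # Method 2: average the spectra over the same positions first (equivalent to
    # summing the fluxes), then convert the single line ratio to an optical depth.
    tau13_method2 = tau13_from_ratio(I13[mask].sum() / I12[mask].sum())
    return tau13_method1, tau13_method2
```

The near-agreement of the two outputs corresponds to the "Spatially resolved" and "Averaged (center)" entries of Tables 3 and 4.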
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Range & \(I\)(HCO\({}^{+}\)) & \(I\)(H\({}^{13}\)CO\({}^{+}\)) & \(I\)(H\({}^{13}\)CO\({}^{+}\))/\(I\)(HCO\({}^{+}\)) & \(\tau\)(H\({}^{13}\)CO\({}^{+}\)) & \(\tau\)(HCO\({}^{+}\)) \\ & K km s\({}^{-1}\) & K km s\({}^{-1}\) & & & \\ \hline Spatially resolved (19, HCO\({}^{+}\) weighted) & & & & 0.1069\(\pm\)0.0001 & 4.276\(\pm\)0.004 \\ \hline Averaged (center, 19) & 240.3\(\pm\)2.0 & 24.74\(\pm\)0.43 & 0.1030\(\pm\)0.0009 & 0.1073\(\pm\)0.0001 & 4.292\(\pm\)0.004 \\ Averaged (middle, 30) & 137.4\(\pm\)2.8 & 6.36\(\pm\)0.61 & 0.0463\(\pm\)0.0035 & 0.0367\(\pm\)0.0001 & 1.468\(\pm\)0.004 \\ Averaged (outside, 32) & 98.6\(\pm\)2.8 & — & — & — & — \\ \hline Averaged (whole) & 476.3\(\pm\)5.2 & 31.12\(\pm\)0.68 & 0.0653\(\pm\)0.0007 & 0.0619\(\pm\)0.0001 & 2.476\(\pm\)0.004 \\ \hline \end{tabular} \end{table} Table 4: Optical depth derived from HCO\({}^{+}\) 3-2 of G029.95-00.01. Figure 3: \(\tau\)(H\({}^{13}\)CN) vs. \(\tau\)(H\({}^{13}\)CO\({}^{+}\)) for the same sources. The dashed red line is the function “y=x”. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Range & \(I\)(HCN) & \(I\)(H\({}^{13}\)CN) & \(I\)(H\({}^{13}\)CN)/\(I\)(HCN) & \(\tau\)(H\({}^{13}\)CN) & \(\tau\)(HCN) \\ & K km s\({}^{-1}\) & K km s\({}^{-1}\) & & & \\ \hline Spatially resolved (22, HCN weighted) & & & & 0.1158\(\pm\)0.0002 & 4.633\(\pm\)0.008 \\ \hline Averaged (center, 22) & 317.4\(\pm\)2.8 & 34.99\(\pm\)0.86 & 0.1102\(\pm\)0.0015 & 0.1157\(\pm\)0.0002 & 4.628\(\pm\)0.008 \\ Averaged (middle, 27) & 95.6\(\pm\)3.8 & 5.5\(\pm\)1.1 & 0.0571\(\pm\)0.0094 & 0.0512\(\pm\)0.0005 & 2.048\(\pm\)0.020 \\ Averaged (outside, 32) & 67.2\(\pm\)3.9 & — & — & — & — \\ \hline Averaged (whole) & 490.1\(\pm\)6.1 & 40.5\(\pm\)1.6 & 0.0826\(\pm\)0.0023 & 0.0832\(\pm\)0.0003 & 3.328\(\pm\)0.012 \\ \hline \end{tabular} \end{table} Table 3: Optical depth derived from HCN 3-2 of G029.95-00.01. Figure 2: The relation of “Spatially resolved” and “Averaged” optical depths for 30 sources. The blue filled circle and red diamond represent H\({}^{13}\)CN 3-2 and H\({}^{13}\)CO\({}^{+}\) 3-2, respectively. The black solid line shows the fitting result, while the dashed red line is the function “y=x”.
2309.06677
SHARM: Segmented Head Anatomical Reference Models
Reliable segmentation of anatomical tissues of human head is a major step in several clinical applications such as brain mapping, surgery planning and associated computational simulation studies. Segmentation is based on identifying different anatomical structures through labeling different tissues through medical imaging modalities. The segmentation of brain structures is commonly feasible with several remarkable contributions mainly for medical perspective; however, non-brain tissues are of less interest due to anatomical complexity and difficulties to be observed using standard medical imaging protocols. The lack of whole head segmentation methods and unavailability of large human head segmented datasets limiting the variability studies, especially in the computational evaluation of electrical brain stimulation (neuromodulation), human protection from electromagnetic field, and electroencephalography where non-brain tissues are of great importance. To fill this gap, this study provides an open-access Segmented Head Anatomical Reference Models (SHARM) that consists of 196 subjects. These models are segmented into 15 different tissues; skin, fat, muscle, skull cancellous bone, skull cortical bone, brain white matter, brain gray matter, cerebellum white matter, cerebellum gray matter, cerebrospinal fluid, dura, vitreous humor, lens, mucous tissue and blood vessels. The segmented head models are generated using open-access IXI MRI dataset through convolutional neural network structure named ForkNet+. Results indicate a high consistency in statistical characteristics of different tissue distribution in age scale with real measurements. SHARM is expected to be a useful benchmark not only for electromagnetic dosimetry studies but also for different human head segmentation applications.
Essam A. Rashed, Mohammad al-Shatouri, Ilkka Laakso, Akimasa Hirata
2023-09-13T02:24:37Z
http://arxiv.org/abs/2309.06677v1
# SHARM: Segmented Head Anatomical Reference Models ###### Abstract Reliable segmentation of anatomical tissues of human head is a major step in several clinical applications such as brain mapping, surgery planning and associated computational simulation studies. Segmentation is based on identifying different anatomical structures through labeling different tissues through medical imaging modalities. The segmentation of brain structures is commonly feasible with several remarkable contributions mainly for medical perspective; however, non-brain tissues are of less interest due to anatomical complexity and difficulties to be observed using standard medical imaging protocols. The lack of whole head segmentation methods and unavailability of large human head segmented datasets limiting the variability studies, especially in the computational evaluation of electrical brain stimulation (neuromodulation), human protection from electromagnetic field, and electroencephalography where non-brain tissues are of great importance. To fill this gap, this study provides an open-access Segmented Head Anatomical Reference Models (SHARM) that consists of 196 subjects. These models are segmented into 15 different tissues; skin, fat, muscle, skull cancellous bone, skull cortical bone, brain white matter, brain gray matter, cerebellum white matter, cerebellum gray matter, cerebrospinal fluid, dura, vitreous humor, lens, mucous tissue and blood vessels. The segmented head models are generated using open-access IXI MRI dataset through convolutional neural network structure named ForkNet\({}^{+}\). Results indicate a high consistency in statistical characteristics of different tissue distribution in age scale with real measurements. SHARM is expected to be a useful benchmark not only for electromagnetic dosimetry studies but also for different human head segmentation applications. keywords: Human head models, brain segmentation, convolutional neural networks, MRI + Footnote †: journal: Journal of Medical Imaging ## 1 Introduction Anatomical reference models of human subjects are of great importance in several computer simulation studies such as medical imaging, dosimetric evaluation for diagnosis and therapy and human safety. In principle, digital models are generated from anatomical imaging of real subjects for better understanding real physical effects. Specifically, personalized electrode positions or coil location are explored in the non-invasive electrical and magnetic stimulation (Antonenko et al., 2019), in addition to group-level optimization(Gomez-Tames et al., 2018; Laakso et al., 2015). Variability analysis is needed to derive the limit in human protection from electromagnetic field (Hirata et al., 2021; ICNIRP et al., 2020). Several attempts provided different models that represent whole body models (Christ et al., 2009; Kim et al., 2008; Nagaoka et al., 2004; Segars et al., 2010; Yu et al., 2015). A useful review is in (Kainz et al., 2019). Segmentation of brain tissues is of high interest in several clinical applications such as diagnosis of abnormalities, assessment of neurophysiological performance, surgery planing and many others. Most of standard medical imaging applications can represent brain tissues in high contrast which enable accurate automatic annotation (Baur et al., 2021). However, segmentation of non-brain tissues is challenging as it represented in low contrast and/or allocated in limited regions. 
Moreover, in clinical medical applications, imaging protocols are usually adjusted such that brain tissues are presented in high quality as the main target of diagnostic applications (Kalavathi and Prasath, 2016). Whole head segmentation have been discussed mainly for the development of digital models for electromagnetic stimulation studies. SimNibs is an open source software for the simulation of non-invasive brain stimulation that include magnetic resonance (MR) image segmentation to generate head models (Suturnino et al., 2018; Thielscher et al., 2015). However, segmentation is limited to major head tissues such as white matter (WM), grey matter (GM), cerebrospinal fluid (CSF), skull and scalp. ROAST is another pipeline the include automatic MRI segmentation based on SPM12 (Ashburner and Friston, 2005) with variety of segmentation and electromagnetic modeling options (Huang et al., 2019). However, ROAST segmentation is also limited to a few number of tissues as in Table 1 in Ref. (Huang et al., 2019). Segmentation of fifteen head tissues using multi-modality images (MRI T1/T2, mDixon, venogram and CT) is proposed in (Puonti et al., 2020). Recently, the use of deep learning architectures demonstrate quality improvement of anatomical segmentation (Akkus et al., 2017). Several network architectures such as ForkNet (Rashed et al., 2019), SubForkNet (Rashed et al., 2020) and FastSurfer (Henschel et al., 2020) have been used to generate human head models with different scope and applications. Due to the complexity of full head segmentation and requirements of intensive efforts for manual parameters adjustment, there is a shortage of relatively large dataset of human head models. This problem becomes more feasible with the use of deep learning as robust segmentation tool with superior accuracy compared to conventional methods. The aim of this work is to generate an open-access Segmented Head Anatomical Reference Models (SHARM) that is large enough for subject variability studies. The developed dataset consists of 196 subjects segmented into 15 different tissues. The main contributions of this study can be summarized as follows: * An open-source deep learning pipeline for automatic segmentation of MRI head images. * An open-access large human head dataset segmented into brain and non-brain tissues. * Evaluation of the consistency of segmented models with realistic tissue characteristics. ## 2 Materials and methods ### Dataset and general pipeline The MRI dataset used in this study is the IXI Dataset1 which consists of around 600 MRI scans of healthy subjects. A set of 196 subjects are selected (123 females, 70 males, and 3 unknown), that are imaged at two hospitals (100 were imaged at the Guy's Hospital (London, UK) with Philips 1.5T system and 96 were images at the Hammersmith Hospital (London, UK) using a Philips 3T system). Excluded images criteria are based on quality of the image and availability of multi-modlity scans. The T1w/T2w image data are in Nifti formats and are used for generation of the head models. Footnote 1: [http://brain-development.org/ixi-dataset/](http://brain-development.org/ixi-dataset/) The raw T1 and T2-weighted MR images are registered using non-rigid registration such that T2w are adjusted to fit with T1w. The acoustic noise is reduced through contouring of head surface and relabeling external region as air voxels. The N4 bias field correction method (ITK2) is used for bias correction of both MRI modalities. 
Both T1w/T1w images are normalized with zero mean and unit variance, then scaled to vales [0.01, 0.99]. All the above pre-processing procedures are used to generate the network input volumes (a set of two \(256^{3}\) volumes representing T1w/T2w MRI with unified \(1^{3}\)mm resolution). A selected set of the network input is segmented into 15 different head tissues using the semi-automatic method detailed in (Laakso et al., 2015). The segmentation binary labels are used as network target (output) through training process. The remaining subjects are evaluated using trained network to automatically generate segmentation labels. Finally, an aggregation process is used to combine different segmentation labels into a head model. The pre-processed MR scans and segmented head models are available for each subject in SHARM dataset. The data processing pipeline is shown in Fig. 1. ### Semi-automatic segmentation The target dataset for the training process was generated using a semi-automatic segmentation pipeline that segments T1- and T2-weighted MR image data into 15 tissue types (Laakso et al., 2015). Briefly, after bias correction and normalization of the MR data, the pipeline first splits the head into three compartments: inner compartment, consisting of the volume inside the inner surface of the skull; middle compartment, consisting of the skull and nasal cavity; and the outer compartment, consisting of the volume between the outer surface of the skull and the outer surface of the skin. The quality of these Figure 1: Data flow used to generate SHARM from IXI dataset. compartments is verified by visual inspection, and whenever necessary, control parameters are manually altered until the compartments match the MR data. The inner compartment is segmented into brain using FreeSurfer image analysis software (Dale et al., 1999; Fischl and Dale, 2000). The brain segmentation consists of cerebral gray matter, cerebral white matter, cerebellar gray matter, cerebellar white matter, deep brain structures (brainstem, accumbens, amygdala, caudate, hippocampus, pallidum, putamen, thalamus), and ventricular CSF. The remaining non-brain volume in the inner compartment is segmented into CSF (bright T2-weighted image), blood (dark T2), and dura (non-brain non-CSF tissue close to the inner boundary of the skull). Anterior and middle cerebral arteries initially estimated from T2 are corrected using thresholding of registered MRA images (when available). Deep brain structures are treated as GM. The middle compartment consisting of the skull is segmented into cortical and cancellous bone by thresholding the T2-weighted MRI data. It is ensured Figure 2: ForkNet\({}^{+}\) with MRI T1w/T2w inputs and \(N\) segmented tissues outputs. that the inner and outer cortical bone layers are at least 1 mm and 1.5 mm thick, respectively. The nasal cavity also belongs to the middle compartment and is segmented as either mucous tissue or cortical bone based on T2-weighted images. The outer compartment is segmented into skin, fat, muscle, and eyes. The scalp (including subcutaneous fat) is segmented as the outer layer of the head, with thickness between 2 mm and 10 mm. Fat and muscle are segmented based on thresholding the T1-weighted image data. Finally, eyes and lens are segmented using both T1- and T2-weighted image data. The resulting segmentation has uniform voxel size of 0.5 mm\(\times\)0.5 mm\(\times\)0.5 mm, half of that of the input MR images. 
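The pre-processing described in Section 2.1 (N4 bias-field correction followed by zero-mean/unit-variance normalization and rescaling to [0.01, 0.99]) can be approximated with standard tools. A minimal sketch, assuming SimpleITK as a wrapper around the ITK N4 implementation referenced above; the Otsu foreground mask is an added assumption, not a stated part of the original pipeline.

```python
import numpy as np
import SimpleITK as sitk

def preprocess(volume_path):
    """Bias-correct and intensity-normalize one MR volume (T1w or T2w)."""
    img = sitk.ReadImage(volume_path, sitk.sitkFloat32)

    # N4 bias-field correction (ITK); a simple Otsu mask restricts it to the head.
    mask = sitk.OtsuThreshold(img, 0, 1, 200)
    corrected = sitk.N4BiasFieldCorrection(img, mask)

    # Zero mean, unit variance, then rescale to [0.01, 0.99] as network input.
    arr = sitk.GetArrayFromImage(corrected).astype(np.float32)
    arr = (arr - arr.mean()) / arr.std()
    arr = 0.01 + 0.98 * (arr - arr.min()) / (arr.max() - arr.min())
    return arr
```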
In this study, the segmented dataset was downscaled to the same resolution as the input images using a three-dimensional nearest-neighbor interpolation algorithm.

### Network architecture

The deep learning architecture used here is an extension of ForkNet (Rashed et al., 2019) that considers input data from both T1- and T2-weighted MRI scans. We refer to the new network as ForkNet\({}^{+}\). The network takes the two MRI scans as inputs through two encoders, and its outputs are \(N\) decoders, each assigned to a single anatomical structure (here \(N\)=15). The details of the layer structures and the data processing flow are shown in Fig. 2. The network outputs are binary masks that identify different anatomical tissues/liquids such as skin, muscle, fat, skull (cortical bone), skull (cancellous bone), CSF, blood vessels, dura, brain GM, brain WM, cerebellum GM, cerebellum WM, vitreous humor, eye lens, mucous tissue and whole head. The ForkNet\({}^{+}\) design is flexible and easy to adjust to segment a specific number of tissues for different applications, as each tissue is segmented using a separate decoder (Fig. 2).

### Head model generation

Once the network is well-trained, the head models are generated through a fast evaluation process. To reduce artifacts caused by 2D slice segmentation, a set of three networks is trained using slices along the axial, sagittal and coronal directions, as shown in Fig. 3. A rule-based segmentation merge approach using majority voting is used to generate the final segmentation from the different slicing directions. When no majority is found in a voxel, the neighborhood majority vote is used (Rashed et al., 2020).

Figure 3: Network evaluation through different directions to generate the head model.

## 3 Results

A set of 20 randomly selected head models and associated segmentation labels is used to train ForkNet\({}^{+}\). The network architecture is developed using Wolfram Mathematica (R) ver. 13.0, installed on an Ubuntu 20.04 workstation with a 12-core Intel (R) Core (TM) i9-10920X @3.50GHz, 64 GB memory, and an NVIDIA RTX A6000 GPU. Three networks (axial, sagittal and coronal) are trained with a cross-entropy loss function and the ADAM optimization algorithm. The training uses 50 epochs with batch size 4. To reduce the computation cost, the number of output tracks is set to \(N=4\) (i.e., a set of 4 tissues is trained simultaneously). A single training round requires about 24 mins. The remaining 176 head models are evaluated through the trained networks and the network outputs are aggregated to generate the head models. Examples of generated head models are shown in Figs. 4 and 5.

Figure 4: Volume rendering sample of generated head models.

Figure 5: Axial, sagittal and coronal slices (top to bottom) of the head models shown in Fig. 4, in order.

In a few cases, some manual editing is required, mainly to remove small regions of CSF-like tissue incorrectly segmented inside the mouth. Some other cases are excluded due to strong noise that is difficult to remove automatically and leads to incorrect segmentation of the external contour. Evaluation of segmentation accuracy is not conducted due to the lack of manual ground-truth annotation, and it is out of the scope of this work. However, we provide a quantitative assessment of the different tissues of the SHARM dataset. In particular, we study the variability of segmented volumes across the age scale to assess the validity of the SHARM models. Figure 6 shows regression curves of segmented brain volumes, where the brain is considered as a composition of GM, WM and CSF.
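As an illustration of how such volume-versus-age curves can be produced from the segmented label volumes, the following sketch computes per-tissue volumes and a least-squares age regression with its \(R^{2}\). The label indices, file handling, and units are assumptions for illustration and are not taken from the SHARM file format.

```python
# Minimal sketch: tissue volume from a labeled head model and a linear
# regression of volume against age. Label IDs and file names are hypothetical.
import numpy as np
import nibabel as nib

LABELS = {"GM": 9, "WM": 10, "CSF": 6}   # assumed label indices
VOXEL_ML = 1.0e-3                        # a 1 mm^3 voxel is 1e-3 mL

def tissue_volume_ml(label_path, label_id):
    seg = nib.load(label_path).get_fdata()
    return float(np.sum(seg == label_id)) * VOXEL_ML

def age_regression(ages, volumes):
    """Least-squares fit volume = a*age + b and its R^2."""
    ages, volumes = np.asarray(ages, float), np.asarray(volumes, float)
    a, b = np.polyfit(ages, volumes, deg=1)
    pred = a * ages + b
    ss_res = np.sum((volumes - pred) ** 2)
    ss_tot = np.sum((volumes - volumes.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Usage: gm = [tissue_volume_ml(p, LABELS["GM"]) for p in label_paths]
#        slope, intercept, r2 = age_regression(ages, gm)
```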
Figure 6 shows a decline in global GM volume with age (\(R^{2}\)=0.274), while there is no significant change in WM volume with age (\(R^{2}\)=0.001) and a remarkable increase in CSF volume with age (\(R^{2}\)=0.277). Differences between gender groups and the percentages of the different structures with respect to the total intracranial volume (TIV) are also shown in Fig. 6 and are consistent with results reported in the literature (e.g., (Ge et al., 2002; Good et al., 2001; Ruigrok et al., 2014)).

Figure 6: (a) Regression lines with scatter plot of total GM volume over age for all subjects (female in orange and male in green). (b) and (c) are regression lines and scatter plots for total WM and CSF, respectively. (d)-(f) are the regression lines and scatter plots for fractional volume (with respect to TIV) of GM, WM and CSF, respectively.

The change in TIV over age and the correlation of the GM/WM ratio with age are shown in Fig. 7. The change of skin, muscle, fat, skull, vitreous humor, and eye lens volumes is shown in Fig. 8. In general, the volumes of skin, muscle, and fat tissues increase with aging. The skull and eye lens do not change much, while the volume of the vitreous humor shrinks, as reported earlier (Sebag, 1987). Data shown in Fig. 9 demonstrate that the brain volume is highly correlated with the body mass index (BMI) and that the brain volume of male subjects is larger than that of female subjects (Eliot et al., 2021). The SHARM dataset can be downloaded3 as MATLAB (*.mat) files with the structure shown in Fig. 10.

Footnote 3: [https://figshare.com/s/a4d9ba6f18a6b7f7ba2c](https://figshare.com/s/a4d9ba6f18a6b7f7ba2c) (zip, 7.62 GB)

## 4 Discussion

This work demonstrates a new benchmark dataset for several computational neuroscience applications. The open-access SHARM consists of 196 head models segmented into 15 different tissues that cover a wide range of subject variability. A boxplot of the volume variations of the different structures in SHARM is shown in Fig. 11. It is clearly observed that the skull volume is 0.827 \(\pm\) 0.08 \(L\) (male) and 0.730 \(\pm\) 0.08 \(L\) (female). These values are highly consistent with those listed in the ICRP Reference Man (the averaged bone without marrow of the skull is 708 gm). The vitreous humor is 15.098 \(\pm\) 2.10 \(mL\) (male) and 14.124 \(\pm\) 1.91 \(mL\) (female), which is referenced as 15 \(\pm\) 6.5 gm in adults (ICRP Publication 23, 1975, Table 97, p. 220). Also, the eye lens is 0.246 \(\pm\) 0.07 mL (male) and 0.242 \(\pm\) 0.08 mL (female), which is calculated as 172 to 258.1 gm for (20 - 60 y) adults (ICRP Publication 23, 1975, Table 100, p. 225). The calculated volumes and weights of different SHARM tissues are compared with those reported in (ICRP Publication 23, 1975) in Table 1. These values are examples that indicate the segmentation accuracy of non-brain tissues. The segmented models, along with the normalized T1- and T2-weighted MR scans, are available for each subject in addition to other demographic information. Most of the models in SHARM are presented with full head and neck segmentation, which enables simulation studies that require full head models. Moreover, the trained deep learning model used to generate SHARM is shared, which can be used to generate additional models given the availability of consistent MR scans.
\begin{table} \begin{tabular}{c l|c|c|c|c|c|c|c} \hline \multirow{2}{*}{1} & \multirow{2}{*}{ICRP} & \multirow{2}{*}{Semony\({}^{2}\)} & \multirow{2}{*}{Gender} & Volume & \multirow{2}{*}{ICRP} & \multirow{2}{*}{ICRP} & \multirow{2}{*}{ICRP} & \multirow{2}{*}{ICRP} \\ & & & & Mean & & & & & & \\ \hline \multirow{4}{*}{1} & Brain & \multirow{4}{*}{1,041} & M & 1,437 & 105 & 1,503 & 110 & 1,355 gm & \multirow{4}{*}{_Adult (20-60 y), Table 93_} \\ & & F & 1,277 & 108 & 1,336 & 113 & 1200 gm & \multirow{4}{*}{_Adult (20-90 y), Eq \# p.212_} \\ \cline{6-8} & & & M & 124 & 14 & 130 & 14 & \\ \cline{6-8} & & & F & 115 & 120 & 16 & \\ \cline{6-8} & & & M & 15.1 & 2.11 & 15.172 & 2.1 & \multirow{4}{*}{_Table 97_} \\ \cline{6-8} & & & F & 14.1 & 1.91 & 14.19 & 1.9 & \\ \cline{6-8} & & & M & 0.246 & 0.07 & 0.264 & 0.075 & \\ \cline{6-8} & & & F & 0.242 & 0.08 & 0.26 & 0.086 & \\ \cline{6-8} & & & M & 827 & 79 & 1,578 & 151 & \\ \cline{6-8} & & & F & 730 & 82 & 1,393 & 156 & \\ \hline \end{tabular} * _Average density values are acquired from (IT'IS Foundation, 2023)_ * _References from (ICRP Publication 23, 1975)_ \end{table} Table 1: Comparison of volume/weight values of SHARM models with the ICRP Reference Man (ICRP Publication 23, 1975).

Figure 7: Regression curves and scatter plots of (a) total intracranial volume (TIV) in liters and (b) GM/WM ratio over age for all subjects.

The software ForkNet\({}^{+}\) generates individual tissue segmentation in terms of probability maps, which enables customized segmentation of a single subject through a weighting-based aggregation process (similar to those presented in (Rashed et al., 2021)). It is worth noting that evaluation of segmentation accuracy is out of the scope of this work, because the segmentation of an earlier version of ForkNet (Rashed et al., 2019) has already been evaluated. A limitation of this work is the lack of variability in MR data acquisition. Data are acquired from two scanners installed at two medical institutes, but both scanners were developed by the same manufacturer. A further extension with data from other manufacturers is planned for future versions. Moreover, we will include more information on the segmentation of deep brain structures and fiber orientations in future versions of SHARM. Also, there is a lack of bone structure accuracy in the neck region due to the lack of neck coverage in the T2 images. In the future, we will investigate potential approaches to properly improve the segmentation of the neck region.

Figure 8: Regression curves of segmented volume of (a) skin, (b) muscle, (c) fat, (d) skull, (e) vitreous humor, and (f) eye lens in SHARM models.

## 5 Conclusion

In this study, we present SHARM, a benchmark dataset of 196 segmented human head models. The models are segmented into 15 different tissues using a deep learning network named ForkNet\({}^{+}\). The freely available models, along with the normalized MR T1- and T2-weighted scans, would enable large-scale studies in different applications such as electromagnetic brain stimulation. Results demonstrate that the segmented models are highly consistent with real measurements. One feature of ForkNet\({}^{+}\) is that the segmentation of each tissue is generated as a probability map, which enables parametric segmentation for further customization of the generated head models. The trained networks as well as the source code are shared for potential use in head model generation.
With its large subject age range, SHARM would enable different electromagnetic dosimetry and human safety studies in a reliable manner. After publication, Mathematica notebooks demonstrating the implementation of the ForkNet\({}^{+}\) architecture and the trained networks will be available for download at: [https://github.com/erashed/ForkNetPlus](https://github.com/erashed/ForkNetPlus)

## Acknowledgment

This work was funded by the Japan Society for the Promotion of Science (JSPS), a Grant-in-Aid for Scientific Research, Grant number JSPS KAKENHI 22K12765.

Figure 9: Regression curves of TIV per BMI.
2308.16759
Constructing Indoor Region-based Radio Map without Location Labels
Radio map construction requires a large amount of radio measurement data with location labels, which imposes a high deployment cost. This paper develops a region-based radio map from received signal strength (RSS) measurements without location labels. The construction is based on a set of blindly collected RSS measurement data from a device that visits each region in an indoor area exactly once, where the footprints and timestamps are not recorded. The main challenge is to cluster the RSS data and match clusters with the physical regions. Classical clustering algorithms fail to work as the RSS data naturally appears as non-clustered due to multipaths and noise. In this paper, a signal subspace model with a sequential prior is constructed for the RSS data, and an integrated segmentation and clustering algorithm is developed, which is shown to find the globally optimal solution in a special case. Furthermore, the clustered data is matched with the physical regions using a graph-based approach. Based on real measurements from an office space, the proposed scheme reduces the region localization error by roughly 50% compared to a weighted centroid localization (WCL) baseline, and it even outperforms some supervised localization schemes, including k-nearest neighbor (KNN), support vector machine (SVM), and deep neural network (DNN), which require labeled data for training.
Zheng Xing, Junting Chen
2023-08-31T14:27:36Z
http://arxiv.org/abs/2308.16759v2
# Constructing Indoor Region-based Radio Map without Location Labels ###### Abstract Radio map construction requires a large amount of radio measurement data with location labels, which imposes a high deployment cost. This paper develops a region-based radio map from received signal strength (RSS) measurements without location labels. The construction is based on a set of blindly collected RSS measurement data from a device that visits each region in an indoor area exactly once, where the footprints and timestamps are not recorded. The main challenge is to cluster the RSS data and match clusters with the physical regions. Classical clustering algorithms fail to work as the RSS data naturally appears as non-clustered due to multipaths and noise. In this paper, a signal subspace model with a sequential prior is constructed for the RSS data, and an integrated segmentation and clustering algorithm is developed, which is shown to find the globally optimal solution in a special case. Furthermore, the clustered data is matched with the physical regions using a graph-based approach. Based on real measurements from an office space, the proposed scheme reduces the region localization error by roughly 50% compared to a weighted centroid localization (WCL) baseline, and it even outperforms some supervised localization schemes, including \(k\)-nearest neighbor (KNN), support vector machine (SVM), and deep neural network (DNN), which require labeled data for training. Localization, blind calibration, radio map, subspace clustering, segmentation. ## I Introduction Location-based services have gained significant attention in the industry and research community due to the proliferation of mobile devices [1, 2]. While several advanced triangulation-based approaches, such as time-of-arrival (TOA) [3], time difference of arrival (TDOA) [4], or angle of arrival (AoA) [5], can achieve sub-meter level localization performance under line-of-sight (LOS) conditions, they require specialized and complex hardware to enable. In many indoor applications, rough accuracy is acceptable, but hardware cost is a primary concern. For instance, monitoring the locations of numerous equipment in a factory necessitates a large number of low-cost, battery-powered devices, while, in this application, meter-level accuracy suffices, _e.g._, it suffices to determine whether the equipment is in room A or B. Consequently, received signal strength (RSS)-based localization may be found as the most cost-effective solution for indoor localization in these scenarios, as it does not require complicated hardware or a sophisticated localization protocol. Traditional RSS-based indoor localization algorithms can be roughly categorized into model-based, model-free, and data-driven approaches. Model-based approaches [6, 7] first estimate a path loss model to describe how the RSS varies with propagation distance, and then estimate the target location by measuring the propagation distance from sensors with known locations based on the measured RSS. However, these approaches require calibration for the path loss model and are also highly sensitive to signal blockage. Model-free approaches employ an empirical formula to estimate the target location. For example, weighted centroid localization (WCL) approaches [6, 8] estimate the target location as the weighted average of sensor locations using an empirical formula, where the RSS values can be used as weights. 
However, the choice of the empirical formula significantly affects the localization accuracy of WCL. Data-driven approaches mostly require an offline measurement campaign to collect _location-labeled_ RSS measurements at numerous spots in the target area to build a _fingerprint_ database [9, 10, 11]. The target is localized by comparing the RSS measurements with the fingerprints in the database. However, such fingerprinting approaches not only require extensive labor to collect a large number of RSS measurements tagged with location labels, but also require significant calibration effort after the system is deployed, because the RSS signature may vary due to changes in the environment, such as a change of furniture. An outdated fingerprint database may degrade the localization performance. Therefore, reducing the construction and calibration costs is a critical issue for RSS-based localization. There are some works on reducing the construction and calibration effort required for fingerprint localization. For example, the work [12] employed generative adversarial networks (GAN) to augment the fingerprint training dataset. Moreover, the work [13] proposed a meta-learning approach to train the fingerprint with a few labeled samples. In addition, some works utilize interpolation, such as Kriging spatial interpolation [14], to recover a significant amount of unlabeled data based on a small amount of labeled data. Despite these efforts, existing approaches still require a certain amount of location-labeled RSS measurement data, resulting in non-negligible construction and calibration costs. This paper proposes a _region-based radio map_ for coarse indoor localization, where the radio map is constructed via _unsupervised_ learning from RSS measurements _without_ location labels. For coarse localization, the indoor area is divided into several regions of interest, and the target is localized to one of these regions. It is important to note that such a localization problem arises in various application scenarios, such as tracking equipment in a factory and monitoring visitors in restricted areas, where coarse accuracy is sufficient, but hardware cost and calibration cost are the primary concerns. The region-based radio map consists of RSS features associated with each region. Therefore, the fundamental question is how to construct the radio map using unlabeled RSS data to reduce the calibration cost. To tackle this problem, we construct and exploit some structural characteristics of the RSS data. First, we assume that the data is mostly collected sequentially from each region. This corresponds to a scenario where a mobile device visits each region once, while the sensor network collects the RSS of the signal emitted from the mobile. It is important to note that the trajectory of the mobile and the times at which the mobile enters or leaves each region are not necessarily known by the network. Therefore, such an assumption induces almost no calibration cost. Second, we assume a subspace model for the RSS data, where the RSS vector lies in a low-dimensional affine subspace that varies across different regions. With these two assumptions, the construction of a region-based radio map can be formulated as a clustering problem using sequential data. However, classical subspace clustering algorithms [15, 16, 17] are not optimized for sequential data.
Additionally, although some previous works [18, 19] have addressed sequential data clustering for video segmentation applications by constructing a similarity graph with temporally consistent constraints followed by graph clustering approaches, these methods cannot be extended to clustering RSS data. This is because the RSS data naturally appears as non-clustered due to multipaths and noise, and even adjacent RSS data collected in the same room can be divided into two clusters, despite the presence of temporally consistent constraints. Moreover, it is challenging to associate clusters with physical regions using the irregular clustering results generated by these methods. Thus, we need to address the following two major challenges: * **How to cluster the unlabeled RSS measurement data?** In practice, it is observed that the measured RSS fluctuates significantly even within a small area. Therefore, clustering them into groups using classical clustering approaches poses a challenge. * **How to match the clustered, yet unlabeled data to the physical regions?** Since the sensor network cannot observe any location labels, extracting location information remains a challenge, even if the RSS data is perfectly clustered. In this paper, we formulate a maximum-likelihood estimation problem with a sequential prior to cluster the RSS data. Consequently, the clustering problem is transformed into a sequence segmentation problem. Our preliminary work [20] attempted to solve the segmentation problem using a gradient-type algorithm, but the solution is prone to getting trapped at a poor local optimum. Here, we introduce a merge-and-split algorithm that has been proven to converge to a globally optimal solution for a special case. Global convergence in a general case is also observed in our numerical experiments. To match the clusters with the physical regions, we construct a graph model for a set of possible routes that may form the RSS sequential data; such a model leads to a Viterbi algorithm for the region matching, which can achieve a matching error of less than 1%. To summarize, we make the following contributions: * We develop an unsupervised learning framework to construct a region-based radio map without location labels. The approach is based on solving a subspace clustering problem with sequential prior. * We transform the clustering problem to a segmentation problem which is solved by a novel merge-and-split algorithm. We establish optimality guarantees for the algorithm under a special case. * We conduct numerical experiments using real measurements from an office space. It is found that the proposed unsupervised scheme even achieves a better localization performance than several supervised learning schemes which use location labels during the training, including \(k\)-nearest neighbor (KNN), support vector machine (SVM), and deep neural network (DNN). The remaining part of the paper is organized as follows. Section II introduces a signal subspace model for the region-based radio map, a sequential data collection model, and a probability model with a sequential prior. Section III develops the subspace feature solution, and the solution for matching clusters to physical regions. Section IV focuses on the development of the clustering algorithm and the potential optimality guarantees. Experimental results are reported in Section V and the paper is concluded in Section VI. 
## II System Model

### _Signal Subspace Model for the Region-based Radio Map_

Suppose that there are \(D\) sensors with locations \(\mathbf{z}_{j}\in\mathbb{R}^{2}\), \(j=1,2,\ldots,D\), deployed in an indoor area. The sensors, such as WiFi sensors, are capable of measuring the RSS of the signal emitted by a wireless device, forming an RSS measurement vector \(\mathbf{x}\in\mathbb{R}^{D}\), although they may not be able to decode the message of the device. Consider partitioning the indoor area into \(K\) non-overlapping regions. It is assumed that signals emitted from the same region share a common feature due to the proximity of the transmission location and the similarity of the propagation environment. In practice, a room or a semi-closed space separated by large furniture or walls can be naturally considered as a region, where the intuition is that walls and furniture may shape a common feature for signals emitted from a neighborhood surrounded by these objects. Given the region partition, this paper focuses on extracting the large-scale _feature_ of each of the regions and building a region-based radio map from the RSS measurements \(\{\mathbf{x}\}\) without location labels. Assume that the RSS measurements \(\mathbf{x}_{i}\) in decibel scale taken in region \(k\) satisfy \[\mathbf{x}_{i}=\mathbf{U}_{k}\boldsymbol{\theta}_{i}+\boldsymbol{\mu}_{k}+\boldsymbol{\epsilon}_{i},\quad\forall i\in\mathcal{C}_{k} \tag{1}\] where \(\mathbf{U}_{k}\in\mathbb{R}^{D\times d_{k}}\) is a semi-unitary matrix with \(\mathbf{U}_{k}^{\mathrm{T}}\mathbf{U}_{k}=\mathbf{I}\), \(\boldsymbol{\theta}_{i}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{\Sigma}_{k})\) is an independent variable that models the uncertainty due to the actual measurement location when taking the measurement sample \(\mathbf{x}_{i}\) in region \(k\), \(\boldsymbol{\Sigma}_{k}\) is assumed to be a full-rank diagonal matrix with non-negative diagonal elements, \(\mathbf{\mu}_{k}\in\mathbb{R}^{D}\) captures the offset of the signal subspace, \(\mathbf{\epsilon}_{i}\sim\mathcal{N}(\mathbf{0},s_{k}^{2}\mathbf{I}_{D\times D})\) models the independent measurement noise, and \(\mathcal{C}_{k}\) is the index set of the measurements \(\mathbf{x}_{i}\) taken within the \(k\)th region. As such, the parameters \(\{\mathbf{U}_{k},\mathbf{\mu}_{k}\}\) specify an affine subspace with dimension \(d_{k}\) for the noisy measurement \(\mathbf{x}_{i}\), and they are the _feature_ to be extracted from the measurement data \(\{\mathbf{x}_{i}\}\). Thus, a region-based radio map is a database that maps the \(k\)th physical region to the signal subspace feature \(\{\mathbf{U}_{k},\mathbf{\mu}_{k}\}\). _Remark 1_.: (Interpretation of the Subspace Model): As the user has \(2\) spatial degrees of freedom to move around in the region, the RSS vector \(\mathbf{x}\) may be modeled as a point moving on a two-dimensional hyper-surface \(\mathcal{S}\) embedded in \(\mathbb{R}^{D}\). Therefore, for a sufficiently small area, an affine subspace with dimension \(d_{k}\!=\!2\) or \(3\) can locally be a good approximation of \(\mathcal{S}\).

### _Sequential Data Collection and the Graph Model_

When the measurement location label sets are _not_ available, it is very difficult to obtain the subspace feature \(\{\mathbf{U}_{k},\mathbf{\mu}_{k}\}\).
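To make the subspace model concrete, the following sketch draws a synthetic sequential RSS dataset from model (1), one segment per region. All dimensions, offsets, and noise levels are illustrative assumptions and are not the values used in the paper's experiments.

```python
# Minimal sketch: synthetic sequential RSS data from the subspace model (1),
# x_i = U_k * theta_i + mu_k + eps_i, visiting the K regions in order.
import numpy as np

rng = np.random.default_rng(0)
D, K, d = 21, 10, 2                 # sensors, regions, subspace dimension
n_per_region = 100                  # samples collected in each region

X, labels = [], []
for k in range(K):
    U, _ = np.linalg.qr(rng.standard_normal((D, d)))   # semi-unitary U_k
    mu = -60.0 + 10.0 * rng.standard_normal(D)         # subspace offset (dB)
    Sigma_sqrt = np.diag(rng.uniform(1.0, 3.0, d))     # sqrt of Sigma_k
    theta = rng.standard_normal((n_per_region, d)) @ Sigma_sqrt
    eps = 2.0 * rng.standard_normal((n_per_region, D)) # noise with s_k = 2 dB
    X.append(theta @ U.T + mu + eps)
    labels += [k] * n_per_region

X = np.vstack(X)           # N x D sequential measurements
labels = np.array(labels)  # ground-truth region index of each sample
```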
While conventional subspace clustering algorithms, such as the expectation-maximization (EM) approach, are designed for recovering both the location label sets and the subspace feature \(\{\mathbf{U}_{k},\mathbf{\mu}_{k}\}\), they may not work for the large noise case, which is a typical scenario here as the RSS data has a large fluctuation due to the multipath effect. To tackle this challenge, we consider a type of measurements that provide some implicit structural information without substantially increasing the effort on data collection. We assume that the sequence of measurements \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}\) is taken along an arbitrary route that visits all the \(K\) regions without repetition. Recall that \(\mathcal{C}_{k}\) is the collection of the measurements collected from the \(k\)th region along the route, and therefore, for any \(i\in\mathcal{C}_{k}\) and \(j\in\mathcal{C}_{k+1}\), we must have \(i<j\). Note that the exact route, the locations of the measurements, the sojourn time that the mobile device spends in each region, and the association between the sequential measurement set \(\mathcal{C}_{k}\) and the \(k\)th physical region are _unknown_ to the system. Nevertheless, we can model the eligibility of a specific route. A route \(\mathbf{\pi}\) is modeled as a permutation sequence of the first \(K\) natural numbers, where the \(k\)th element \(\pi(k)\) refers to the location label of the \(k\)th region along the route. Define a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) where each node in \(\mathcal{V}=\{1,2,\ldots,K\}\) represents one of the \(K\) regions, and each edge \((k,j)\in\mathcal{E}\) represents that it is possible to directly travel from region \(k\) to region \(j\) without entering any other region. Therefore, a route \(\mathbf{\pi}\) is eligible only if there is an edge between adjacent nodes along the route, _i.e._, \((\pi(j),\pi(j+1))\in\mathcal{E}\), for \(\forall 1\leq j\leq K-1\).

### _Probability Model with a Sequential Prior_

From the signal subspace model (1), the conditional distribution of \(\mathbf{x}_{i}\), given that it belongs to the \(k\)th region, is given by \[p_{k}(\mathbf{x};\mathbf{\Theta})=\frac{1}{(2\pi)^{D/2}|\mathbf{C}_{k}|^{1/2}}\exp\left(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu}_{k})^{\mathsf{T}}\mathbf{C}_{k}^{-1}(\mathbf{x}-\mathbf{\mu}_{k})\right) \tag{2}\] where \(\mathbf{C}_{k}=\mathbf{U}_{k}\mathbf{\Sigma}_{k}\mathbf{U}_{k}^{\mathsf{T}}+s_{k}^{2}\mathbf{I}\) is the conditional covariance matrix for the \(k\)th cluster, and \(\mathbf{\Theta}=\{\mathbf{U}_{k},\mathbf{\Sigma}_{k},\mathbf{\mu}_{k},s_{k}^{2}\}_{k=1}^{K}\) is a shorthand notation for the collection of parameters. Let \(t_{k}\) be the last index of \(\mathbf{x}_{i}\) before the device leaves the \(k\)th region and enters the \((k+1)\)th region. Consequently, we have \(t_{0}=0<t_{1}<t_{2}<\cdots<t_{K-1}<t_{K}=N\), and for each \(k=1,2,\ldots,K\), all elements \(i\in\mathcal{C}_{k}\) satisfy \(t_{k-1}<i\leq t_{k}\). For a pair of parameters \(a<b\), an indicator function is defined as \[z_{i}(a,b)=\left\{\begin{array}{ll}1,&\quad a<i\leq b\\ 0,&\quad\text{otherwise.}\end{array}\right. \tag{3}\] As a result, the probability density function of measurement \(\mathbf{x}_{i}\) can be given by \[p(\mathbf{x}_{i};\mathbf{\Theta},\mathbf{t})=\prod_{k=1}^{K}p_{k}(\mathbf{x}_{i};\mathbf{\Theta})^{z_{i}(t_{k-1},t_{k})} \tag{4}\] where \(\mathbf{t}=(t_{1},t_{2},\ldots,t_{K-1})\) is a collection of the time indices of the segment boundaries. Note that, for each \(i\), \(\sum_{k}z_{i}(t_{k-1},t_{k})=1\), and \(z_{i}(t_{k-1},t_{k})=1\) only under \(t_{k-1}<i\leq t_{k}\). Therefore, given \(i\in\mathcal{C}_{k}\), equation (4) reduces to \(p(\mathbf{x}_{i})=p_{k}(\mathbf{x}_{i})\). Consider a log-likelihood cost function \(\log\prod_{i=1}^{N}p(\mathbf{x}_{i};\mathbf{\Theta},\mathbf{\tau})\) which can be equivalently written as \[\mathcal{J}(\mathbf{\Theta},\mathbf{\tau})=\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}z_{i}(\tau_{k-1},\tau_{k})\log p_{k}(\mathbf{x}_{i};\mathbf{\Theta},\mathbf{\tau}) \tag{5}\] where \(\mathbf{\tau}=(\tau_{1},\tau_{2},\ldots,\tau_{K-1})\) denotes an estimator for \(\mathbf{t}\). Throughout the paper, we implicitly define \(\tau_{0}=0,\tau_{K}=N\) for mathematical convenience. It follows that \(z_{i}(\tau_{k-1},\tau_{k})\) represents a rectangle window that selects only the terms \(\log p_{k}(\mathbf{x}_{i};\mathbf{\Theta},\mathbf{\tau})\) for \(\tau_{k-1}<i\leq\tau_{k}\) and suppresses all the other terms. Thus, it acts as a _sequential prior_ that selects a subset of \(\{\mathbf{x}_{i}\}\) in a row for \(\mathcal{C}_{k}\). While we have made an assumption from model (1) that the measurements are statistically independent, in practice, the measurements \(\mathbf{x}_{i}\) taken in the transient phase from one region to the other may substantially deviate from both subspaces \(\{\mathbf{U}_{k-1},\mathbf{\mu}_{k-1}\}\) and \(\{\mathbf{U}_{k},\mathbf{\mu}_{k}\}\), leading to large modeling noise \(\mathbf{\epsilon}_{i}\). To down-weight the data possibly taken in the transient phase, we extend the rectangle window model \(z_{i}(a,b)\) in (3) for the sequential prior to a smooth window \[z_{i}(a,b)=\sigma_{\beta}\left(i-a\right)-\sigma_{\beta}\left(i-b\right) \tag{6}\] where \(\sigma_{\beta}(x)\) is a sigmoid function that maps \(\mathbb{R}\) to \([0,1]\) with the property that \(\sigma_{\beta}(x)\to 0\) as \(x\rightarrow-\infty\) and \(\sigma_{\beta}(x)\to 1\) as \(x\rightarrow+\infty\). The parameter \(\beta\) controls the slope of the transition from \(0\) to \(1\). A specific choice of sigmoid function is \(\sigma_{\beta}(x)=(1+e^{-(x-1/2)/\beta})^{-1}\). For such a choice of \(\sigma_{\beta}(x)\), the rectangle window (3) is a special case of (6) for \(\beta\to 0\), where one can easily verify that, for all \(k\), we have \(z_{i}(t_{k-1},t_{k})\to 1\) for all \(i\in\mathcal{C}_{k}\) and \(z_{i}(t_{k-1},t_{k})\to 0\) for \(i\notin\mathcal{C}_{k}\). The subspace clustering problem with a sequential prior is formulated as follows \[\underset{\mathbf{\Theta},\mathbf{\tau}}{\text{maximize}}\quad\mathcal{J}(\mathbf{\Theta},\mathbf{\tau}) \tag{7}\] subject to \[0<\tau_{1}<\tau_{2}<\cdots<\tau_{K-1}<N. \tag{8}\] Note that an EM-type algorithm cannot solve (7) due to the sequential structure imposed by \(z_{i}(\tau_{k-1},\tau_{k})\). Our prior work [20] investigated a gradient approach, but the iteration easily gets trapped at a bad local optimum even under the simplest form of the model at \(d_{k}=0\), as shown later in the experiment section.
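For reference, the smooth window (6) is straightforward to evaluate numerically. The sketch below is a minimal illustration with arbitrary boundary indices and \(\beta\) values; it is not tied to the parameters used later in the experiments.

```python
# Minimal sketch of the smooth window (6) built from the sigmoid
# sigma_beta(x) = 1 / (1 + exp(-(x - 1/2)/beta)); beta -> 0 recovers (3).
import numpy as np
from scipy.special import expit

def sigma_beta(x, beta):
    return expit((x - 0.5) / beta)

def window(i, a, b, beta):
    """z_i(a, b) = sigma_beta(i - a) - sigma_beta(i - b)."""
    return sigma_beta(i - a, beta) - sigma_beta(i - b, beta)

i = np.arange(1, 301)
z_hard = window(i, 100, 200, beta=1e-3)  # ~ indicator of 100 < i <= 200
z_soft = window(i, 100, 200, beta=5.0)   # down-weights samples near the boundaries
```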
In the rest of the paper, we establish a new solution framework to solve (7) and derive the conditions for achieving the optimality. ## III Subspace Feature Extraction and Region Matching In this section, we focus on the solution \(\mathbf{\Theta}\) when the partition variable \(\boldsymbol{\tau}\) in (7) is fixed, and develop a maximum likelihood estimator \(\hat{\mathbf{\Theta}}(\boldsymbol{\tau})\) as a function of \(\boldsymbol{\tau}\) and the data. Then, we develop a method to map the subspace feature \(\{\mathbf{U}_{k},\boldsymbol{\mu}_{k}\}\) to the physical region. ### _Subspace Feature via Maximum-Likelihood Principal Component Analysis (PCA)_ The solution \(\mathbf{\Theta}\) for a given \(\boldsymbol{\tau}\) can be derived as follows. First, from the conditional probability model (2), the maximizer of \(\boldsymbol{\mu}_{k}\) to \(\mathcal{J}\) in (7) can be obtained by setting the derivative of \(\mathcal{J}\) with respect to \(\boldsymbol{\mu}_{k}\) to zero, leading to the unique solution \[\hat{\boldsymbol{\mu}}_{k}=\frac{1}{\sum_{i=1}^{N}z_{i}(\tau_{k-1},\tau_{k})} \sum_{i=1}^{N}z_{i}(\tau_{k-1},\tau_{k})\mathbf{x}_{i}. \tag{9}\] Then, denote \(\mathbf{W}_{k}=\mathbf{U}_{k}\boldsymbol{\Sigma}_{k}^{1/2}\) for notational convenience. The critical point of \(\mathcal{J}\) with respect to the variable \(\mathbf{W}_{k}\) is obtained by setting the derivative \[\frac{\partial\mathcal{J}}{\partial\mathbf{W}_{k}}=\mathbf{C}_{k}^{-1} \mathbf{S}_{k}\mathbf{C}_{k}^{-1}\mathbf{W}_{k}-\mathbf{C}_{k}^{-1}\mathbf{W}_ {k} \tag{10}\] to zero, leading to the equation \[\mathbf{S}_{k}\mathbf{C}_{k}^{-1}\mathbf{W}_{k}=\mathbf{W}_{k} \tag{11}\] where \[\mathbf{S}_{k}=\frac{1}{\sum_{i=1}^{N}z_{i}(\tau_{k-1},\tau_{k})}\sum_{i=1}^{N }z_{i}(\tau_{k-1},\tau_{k})(\mathbf{x}_{i}-\hat{\boldsymbol{\mu}}_{k})( \mathbf{x}_{i}-\hat{\boldsymbol{\mu}}_{k})^{\text{T}} \tag{12}\] is the sample covariance of the data \(\{\mathbf{x}_{i}\}\) that are weighted by the \(k\)th sequential prior \(z_{i}(\tau_{k-1},\tau_{k})\) for the \(k\)th subspace. Recall that \(\mathbf{C}_{k}=\mathbf{W}_{k}\mathbf{W}_{k}^{\text{T}}+s_{k}^{2}\mathbf{I}\), and \(\mathbf{W}_{k}^{\text{T}}\mathbf{W}_{k}=\boldsymbol{\Sigma}_{k}^{1/2}\mathbf{ U}_{k}^{\text{T}}\mathbf{U}_{k}\boldsymbol{\Sigma}_{k}^{1/2}=\boldsymbol{\Sigma}_{k}\), since \(\mathbf{U}_{k}\) is semi-orthogonal and \(\boldsymbol{\Sigma}_{k}\) is diagonal. We have the identity \(\mathbf{C}_{k}^{-1}\mathbf{W}_{k}=\mathbf{W}_{k}(\boldsymbol{\Sigma}_{k}+s_{k }^{2}\mathbf{I})^{-1}\), which can be easily verified by the relation \(\mathbf{W}_{k}=\mathbf{U}_{k}\boldsymbol{\Sigma}_{k}^{1/2}\). Hence, \[\mathbf{S}_{k}\mathbf{C}_{k}^{-1}\mathbf{W}_{k}=\mathbf{S}_{k}\mathbf{W}_{k}( \boldsymbol{\Sigma}_{k}+s_{k}^{2}\mathbf{I})^{-1}. \tag{13}\] Using (13) and \(\mathbf{W}_{k}=\mathbf{U}_{k}\boldsymbol{\Sigma}_{k}^{1/2}\), equation (11) becomes \(\mathbf{S}_{k}\mathbf{U}_{k}\boldsymbol{\Sigma}_{k}^{1/2}(\boldsymbol{\Sigma}_ {k}+s_{k}^{2}\mathbf{I})^{-1}=\mathbf{U}_{k}\boldsymbol{\Sigma}_{k}^{1/2}\), which can be simplified to \[\mathbf{S}_{k}\mathbf{U}_{k}=\mathbf{U}_{k}(\boldsymbol{\Sigma}_{k}+s_{k}^{2} \mathbf{I}). \tag{14}\] It follows that (14) is an eigenvalue problem, which can be solved by constructing \(\mathbf{U}_{k}\) as a collection of \(d_{k}\) eigenvectors of \(\mathbf{S}_{k}\). In addition, let \(\lambda_{k,j}^{2}\) be the corresponding eigenvalue of the \(j\)th selected eigenvector of \(\mathbf{S}_{k}\) for the construction of \(\mathbf{U}_{k}\). 
Then, the \(j\)th diagonal element of the diagonal matrix \(\boldsymbol{\Sigma}_{k}\) can be set as \(\sigma_{k,j}^{2}=\lambda_{k,j}^{2}-s_{k}^{2}\). It will become clear, as explained below, that the best choice for constructing \(\mathbf{U}_{k}\) is to select the eigenvectors of \(\mathbf{S}_{k}\) corresponding to the \(d_{k}\)-largest eigenvalues, and the best estimate of \(s_{k}^{2}\) is given by \[s_{k}^{2}=\frac{\sum_{j=d_{k}+1}^{D}\lambda_{k,j}^{2}}{D-d_{k}}. \tag{15}\] To see this, substituting \(\hat{\boldsymbol{\mu}}_{k}\) from (9) and the solution \(\mathbf{U}_{k}\) and \(\boldsymbol{\Sigma}_{k}\) obtained from (14) to the log-likelihood function \(\mathcal{J}(\mathbf{\Theta},\boldsymbol{\tau})\) in (5), we obtain \[\mathcal{J}=-\frac{1}{2N}\sum_{k=1}^{K}\sum_{i=1}^{N}z_{i}(\tau_{k-1},\tau_{k})\Big{\{}D\log(2\pi)+\log\prod_{j=1}^{d_{k}}\lambda_{k,j}^{2}+\log(s_{k}^{2(D-d_{k})})+\frac{1}{s_{k}^{2}}\sum_{j=d_{k}+1}^{D}\lambda_{k,j}^{2}+d_{k}\Big{\}}. \tag{16}\] It has been shown in [21] that the maximizer to (16) is obtained as the solution (15) with \(\lambda_{k,j}\), \(j=1,2,\ldots,d_{k}\), being chosen as the \(d_{k}\)-largest eigenvalues of \(\mathbf{S}_{k}\). As such, we have obtained the solution \(\mathbf{\Theta}\) from (9), (14)-(15).

### _Matching Clusters to Physical Regions_

Denote \(\mathcal{D}_{k}\subseteq\mathbb{R}^{2}\) as the area of the \(k\)th physical region. Our goal here is to map the subspace feature \(\{\mathbf{U}_{k},\boldsymbol{\mu}_{k}\}\) or the segment \((\tau_{k-1},\tau_{k})\) for the \(k\)th cluster of \(\{\mathbf{x}_{i}\}\) to the physical region \(\mathcal{D}_{k}\). Mathematically, recall that the route \(\boldsymbol{\pi}=(\pi(1),\pi(2),\ldots,\pi(K))\) is modeled as a permutation sequence of the first \(K\) natural numbers. As a result, \(\boldsymbol{\pi}\) is a function that matches the \(k\)th subspace \(\{\mathbf{U}_{k},\boldsymbol{\mu}_{k}\}\) to the physical region \(\mathcal{D}_{\pi(k)}\), for \(1\leq k\leq K\). Define \(\mathbf{o}_{k}\) as the reference center location of the \(k\)th region \(\mathcal{D}_{k}\), as indicated by the red dots in Fig. 1.

Figure 1: A \(30\times 16\) m\({}^{2}\) indoor area deployed with \(21\) sensors (blue icons) partitioned into \(10\) non-overlapping regions (green dashed rectangles), where the red circles refer to the reference region centers.

Thus, one essential idea is to link the clustered RSS measurements \(\mathbf{x}_{i}=(x_{i,1},x_{i,2},\ldots,x_{i,D})\) with the location topology of the sensors. Specifically, we employ the WCL approach to compute a reference location \(\hat{\mathbf{o}}_{k}\) for the measurements in the \(k\)th cluster \(\mathcal{C}_{k}\): \[\hat{\mathbf{o}}_{k}=\frac{1}{|\mathcal{C}_{k}|}\sum_{i\in\mathcal{C}_{k}}\frac{\sum_{j=1}^{D}w_{i,j}\mathbf{z}_{j}}{\sum_{j=1}^{D}w_{i,j}} \tag{17}\] where \(w_{i,j}=(10^{x_{i,j}/10})^{\alpha}\) is the weight on the location \(\mathbf{z}_{j}\) of the \(j\)th sensor, \(x_{i,j}\) is the RSS of the \(j\)th sensor in the \(i\)th measurement \(\mathbf{x}_{i}\), and \(\alpha\) is an empirical parameter typically chosen as in [6, 8, 22, 23]. Then, we exploit the _consistency property_ that adjacent features \(\{\mathbf{U}_{k},\mathbf{\mu}_{k}\}\) and \(\{\mathbf{U}_{k+1},\mathbf{\mu}_{k+1}\}\) should be mapped to physically adjacent regions \(\mathcal{D}_{\pi(k)}\) and \(\mathcal{D}_{\pi(k+1)}\), where \((\pi(k),\pi(k+1))\in\mathcal{E}\).
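The per-cluster computations above reduce to a few lines of linear algebra. The following sketch implements (9), (14)-(15) with a hard (rectangular) window, i.e., uniform weights within a cluster, together with the WCL reference location (17); variable names and the choice of \(\alpha\) are illustrative.

```python
# Minimal sketch of the per-cluster summaries in Section III: the
# maximum-likelihood subspace feature (9), (14)-(15) and the WCL
# reference location (17), both with uniform (hard-window) weights.
import numpy as np

def subspace_feature(X_k, d):
    """X_k: (n x D) RSS samples of one cluster; returns (U_k, mu_k, s2_k)."""
    mu = X_k.mean(axis=0)                        # (9) with a hard window
    S = np.cov(X_k, rowvar=False, bias=True)     # sample covariance as in (12)
    evals, evecs = np.linalg.eigh(S)             # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]   # sort descending
    U = evecs[:, :d]                             # d principal directions (14)
    s2 = evals[d:].mean()                        # residual noise power (15)
    return U, mu, s2

def wcl_reference(X_k, sensor_xy, alpha=1.0):
    """Weighted centroid (17); X_k holds RSS in dB, sensor_xy is (D x 2)."""
    w = (10.0 ** (X_k / 10.0)) ** alpha          # per-sample sensor weights
    per_sample = (w @ sensor_xy) / w.sum(axis=1, keepdims=True)
    return per_sample.mean(axis=0)
```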
In Section II-B, we have modeled the adjacency of any two physical regions using the graph model \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) based on the layout of the target area. With such a graph model, a graph-constrained matching problem can be formulated as \[\underset{\mathbf{\pi}}{\text{minimize}}\quad\sum_{k=1}^{K}c(\hat{\mathbf{o}}_{\pi(k)},\mathbf{o}_{k}) \tag{18}\] subject to \[(\pi(j),\pi(j+1))\in\mathcal{E},\,\forall j=1,2,\ldots,K-1 \tag{19}\] where \(c(\mathbf{p}_{1},\mathbf{p}_{2})\) is a cost function that quantifies the difference between the two locations \(\mathbf{p}_{1}\) and \(\mathbf{p}_{2}\), for example, \(c(\mathbf{p}_{1},\mathbf{p}_{2})=\|\mathbf{p}_{1}-\mathbf{p}_{2}\|_{2}\). The constraint is to ensure that an eligible route only travels along the edges of the graph \(\mathcal{G}\) and that the chosen regions are non-repetitive. Problem (18) can be solved by a dynamic programming-based path-searching strategy. For example, the Viterbi algorithm can be applied to optimally solve (18) with a complexity of \(\mathcal{O}(K^{3}N)\). To see this, any feasible route \(\mathbf{\pi}\) consists of \(K\) moves. At each region, there are at most \(K-1\) candidate regions for the next move, and therefore, there are at most \(K(K-1)\) moves to evaluate in the Viterbi algorithm. To evaluate the utility of each move, one computes (18) with a complexity of \(\mathcal{O}(KN)\). This leads to the overall complexity of \(\mathcal{O}(K^{3}N)\) to solve (18) using the Viterbi algorithm.

## IV Clustering via Segmentation for Sequential Data

Based on the solution \(\hat{\mathbf{\Theta}}(\mathbf{\tau})\) developed in Section III-A, it remains to maximize \(\mathcal{J}(\hat{\mathbf{\Theta}}(\mathbf{\tau}),\mathbf{\tau})\) over \(\mathbf{\tau}\) subject to (8), and the subspace clustering problem (7) becomes a segmentation problem. The main challenge is the existence of multiple local maxima of \(\mathcal{J}(\hat{\mathbf{\Theta}}(\mathbf{\tau}),\mathbf{\tau})\). In this section, we first establish the optimality for a special case of \(d_{k}=0\) and \(s_{k}^{2}=s^{2}\) for some \(s>0\), which corresponds to \(K\)-means clustering with a sequential prior. Based on this theoretical insight, we then develop a robust algorithm to solve for the general case of \(d_{k}\geq 0\). For the special case of \(d_{k}=0\) and \(s_{k}^{2}=s^{2}\), the subspace model (1) degenerates to \(\mathbf{x}_{i}=\mathbf{\mu}_{k}+\mathbf{\epsilon}_{i}\), and the conditional probability (2) becomes \(p_{k}(\mathbf{x}_{i};\mathbf{\Theta})=\frac{1}{(2\pi)^{D/2}s^{D}}\exp(-\frac{1}{2s^{2}}\|\mathbf{x}_{i}-\mathbf{\mu}_{k}\|_{2}^{2})\). Then, problem (7) is equivalent to the following minimization problem \[\underset{\mathbf{\Theta},\mathbf{\tau}}{\text{minimize}}\quad\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}z_{i}(\tau_{k-1},\tau_{k})\|\mathbf{x}_{i}-\mathbf{\mu}_{k}\|_{2}^{2} \tag{20}\] subject to \[0<\tau_{1}<\tau_{2}<\cdots<\tau_{K-1}<N\] where the variable \(\mathbf{\Theta}\) degenerates to a matrix \(\mathbf{\Theta}=[\mathbf{\mu}_{1}\ \mathbf{\mu}_{2}\ \cdots\mathbf{\mu}_{K}]\) that captures the centers of the clusters.
Substituting the solution \(\hat{\mathbf{\mu}}_{k}(\tau_{k-1},\tau_{k})\) in (9) to (20), the cost function (20) simplifies to \[\tilde{f}(\mathbf{\tau})\triangleq\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K}z_{i}(\tau_{k-1},\tau_{k})\big{\|}\mathbf{x}_{i}-\hat{\mathbf{\mu}}_{k}(\tau_{k-1},\tau_{k})\big{\|}_{2}^{2}.\]

### _Asymptotic Property for Small \(\beta\)_

For each \(\mathbf{\tau}\), it is observed from the solution \(\hat{\mathbf{\mu}}_{k}(\tau_{k-1},\tau_{k})\) in (9) and the asymptotic property of the window function \(z_{i}(\cdot)\) in (6) that, as \(\beta\to 0\), \[\hat{\mathbf{\mu}}_{k}(\tau_{k-1},\tau_{k})\rightarrow\tilde{\mathbf{\mu}}(\tau_{k-1},\tau_{k})\triangleq\frac{1}{\tau_{k}-\tau_{k-1}}\sum_{j=\tau_{k-1}+1}^{\tau_{k}}\mathbf{x}_{j} \tag{21}\] uniformly for each \(\tau_{k-1}\) and \(\tau_{k}\) that satisfy \(\tau_{k-1}+1\leq\tau_{k}\). Based on \(\tilde{\mathbf{\mu}}(\tau_{k-1},\tau_{k})\) in (21), define \[f_{k}(\tau_{k};\mathbf{\tau}_{-k})\triangleq\frac{1}{N}\sum_{i=\tau_{k-1}+1}^{\tau_{k+1}}\Big{[}\big{(}1-\sigma_{\beta}(i-\tau_{k})\big{)}\big{\|}\mathbf{x}_{i}-\tilde{\mathbf{\mu}}(\tau_{k-1},\tau_{k})\big{\|}_{2}^{2}+\sigma_{\beta}(i-\tau_{k})\big{\|}\mathbf{x}_{i}-\tilde{\mathbf{\mu}}(\tau_{k},\tau_{k+1})\big{\|}_{2}^{2}\Big{]} \tag{22}\] for \(k=1,2,\ldots,K-1\), where \(\mathbf{\tau}_{-k}\triangleq(\tau_{k-1},\tau_{k+1})\). In addition, define \[f_{0}(\tau_{0})=\frac{1}{N}\sum_{i=1}^{\tau_{1}}\sigma_{\beta}(i-\tau_{0})\big{\|}\mathbf{x}_{i}-\tilde{\mathbf{\mu}}(\tau_{0},\tau_{1})\big{\|}_{2}^{2} \tag{23}\] \[f_{K}(\tau_{K})=\frac{1}{N}\sum_{i=\tau_{K-1}+1}^{N}\big{(}1-\sigma_{\beta}(i-\tau_{K})\big{)}\big{\|}\mathbf{x}_{i}-\tilde{\mathbf{\mu}}(\tau_{K-1},\tau_{K})\big{\|}_{2}^{2} \tag{24}\] for mathematical convenience. The cost function \(\tilde{f}(\mathbf{\tau})\) can be asymptotically approximated as \(\frac{1}{2}\sum_{k=0}^{K}f_{k}(\tau_{k};\mathbf{\tau}_{-k})\), as formally stated in the following result. **Proposition 1** (Asymptotic Hardening).: _As \(\beta\to 0\), we have_ \[\tilde{f}(\mathbf{\tau})\rightarrow\frac{1}{2}\sum_{k=0}^{K}f_{k}(\tau_{k};\mathbf{\tau}_{-k})\] _uniformly for every sequence \(\mathbf{\tau}\) satisfying the constraint (8)._ Proof.: See Appendix A. Proposition 1 decomposes \(\tilde{f}(\mathbf{\tau})\) into sub-functions \(f_{k}(\tau_{k};\mathbf{\tau}_{-k})\) that only depend on a subset of data \(\mathbf{x}_{i}\) from \(i=\tau_{k-1}+1\) to \(i=\tau_{k+1}\).

### _Asymptotic Consistency for Large \(N\)_

While the function \(f_{k}(\cdot)\) in (22) has been much simplified from \(\tilde{f}(\mathbf{\tau})\), it is a stochastic function affected by the measurement noise in (1). Consider a scenario of large \(N\) for asymptotically dense measurement in each region \(k\). Specifically, the total number of sequential measurements grows in such a way that the segment boundary indices \(t_{1},t_{2},\ldots,t_{K-1}\) grow with a constant ratio \(t_{k}/N=\bar{\gamma}_{k}\) with respect to \(N\) as \(N\) grows, and the measurements in each region are independent. In practice, this may correspond to a random walk in each of the regions for a fixed, but _unknown_, portion of time \(\bar{\gamma}_{k}\) as \(N\) grows. We consider a deterministic proxy for \(f_{k}(\cdot)\) under large \(N\) defined as \[F_{k}(\tau_{k};\mathbf{\tau}_{-k})=\mathbb{E}\{f_{k}(\tau_{k};\mathbf{\tau}_{-k})\} \tag{25}\] where the expectation is over the randomness of the measurement noise \(\mathbf{\epsilon}_{i}\) in (1).
As a result, \(F_{k}(\cdot)\) represents the cost in the noiseless case. In the remaining part of the paper, we may omit the argument \(\mathbf{\tau}_{-k}\) and write \(f_{k}(\tau_{k})\) and \(F_{k}(\tau_{k})\) for simplicity, as long as it is clear from the context. To investigate the asymptotic property for large \(N\), we define \(\bar{f}_{k}(\gamma_{k})=\lim\limits_{\beta\to 0}\ f_{k}(\gamma_{k}N)\), and \(\bar{F}_{k}(\gamma_{k})=\lim\limits_{\beta\to 0}\ F_{k}(\gamma_{k}N)\) for \(\gamma_{k}\in\Gamma=\{i/N:i=1,2,...,N\}\). Denote \(\gamma_{k}^{*}\) as the minimizer of \(\bar{F}_{k}(\gamma_{k})\), and \(\hat{\gamma}_{k}\) as the minimizer of \(\bar{f}_{k}(\gamma_{k})\). Then, we have the following result. **Proposition 2** (Asymptotic Consistency).: _Suppose that, for some \(k\), there exists only one index \(t_{j}\in\{t_{1},t_{2},\ldots,t_{K-1}\}\) within the interval \((\tau_{k-1},\tau_{k+1})\). Then, it holds that \(\hat{\gamma}_{k}\to\gamma_{k}^{*}\) in probability as \(N\to\infty\)._ Proof.: See Appendix B. Proposition 2 implies that if \(\hat{\tau}_{k}\) minimizes \(f_{k}(\tau_{k})\), and \(\tau_{k}^{*}\) minimizes \(F_{k}(\tau_{k})\) as \(\beta\to 0\), then we have \(\hat{\tau}_{k}/N\to\tau_{k}^{*}/N\) as \(N\to\infty\) and \(\beta\to 0\). Thus, the estimator \(\hat{\tau}_{k}\) obtained from \(f_{k}(\cdot)\) and the solution \(\tau_{k}^{*}\) obtained from \(F_{k}(\cdot)\) are asymptotically consistent in a wide sense. Therefore, we use \(F_{k}(\tau_{k})\) as a deterministic proxy for the stochastic function \(f_{k}(\tau_{k};\mathbf{\tau}_{-k})\) and we thus study the property of \(F_{k}(\tau_{k})\).

### _The Property of the Deterministic Proxy \(F_{k}(\tau_{k})\)_

We find the following properties for \(F_{k}(\tau_{k})\). **Proposition 3** (Unimodality).: _Suppose that, for some \(k\), there exists only one index \(t_{j}\in\{t_{1},t_{2},\ldots,t_{K-1}\}\), within the interval \((\tau_{k-1},\tau_{k+1})\). Then, for any \(\varepsilon>0\), there exists a small enough \(\beta\), and some finite constants \(C_{1},C_{2},C_{1}^{\prime},C_{2}^{\prime}>0\) independent of \(\varepsilon\), such that \(F_{k}(\tau)-F_{k}(\tau-1)<\varepsilon C_{1}-C_{2}<0\), for \(\tau_{k-1}<\tau\leq t_{j}\), and \(F_{k}(\tau)-F_{k}(\tau-1)>C_{2}^{\prime}-\varepsilon C_{1}^{\prime}>0\), for \(t_{j}<\tau<\tau_{k+1}\). In addition, \(t_{j}\) minimizes \(F_{k}(\tau)\) in \((\tau_{k-1},\tau_{k+1})\)._ Proof.: See Appendix C-1. This result implies that, once the condition is satisfied, there exists a unique local minimum \(t_{j}\) of \(F_{k}(\tau)\) over \((\tau_{k-1},\tau_{k+1})\). **Proposition 4** (Flatness).: _Suppose that, for some \(k\), there is no index \(t_{j}\in\{t_{1},t_{2},\ldots,t_{K-1}\}\) in the interval \((\tau_{k-1},\tau_{k+1})\). Then, for any \(\varepsilon>0\), there exists a small enough \(\beta\) and a finite constant \(C_{0}>0\) independent of \(\varepsilon\), such that \(|F_{k}(\tau)-F_{k}(\tau-1)|<\varepsilon s^{2}C_{0}\) for all \(\tau\in(\tau_{k-1},\tau_{k+1})\)._ Proof.: See Appendix C-2. It follows that, when the interval \((\tau_{k-1},\tau_{k+1})\) is completely contained in \((t_{j},t_{j+1})\) for some \(j\), the function \(F_{k}(\tau)\) appears as an almost flat function for \(\tau\in(\tau_{k-1},\tau_{k+1})\) with only small fluctuation according to the noise variance \(s^{2}\).
**Proposition 5** (Monotonicity near Boundary).: _Suppose that, for some \(k\), there are multiple partition indices \(t_{j},t_{j+1},\ldots,t_{j+J}\in\{t_{1},t_{2},\ldots,t_{K-1}\}\) within the interval \((\tau_{k-1},\tau_{k+1})\). In addition, assume that the vectors \(\{\mathbf{\mu}_{k}\}\) are linearly independent. Then, for any \(\varepsilon>0\), there exists a small enough \(\beta\), and some finite constants \(C_{3},C_{4},C_{3}^{\prime},C_{4}^{\prime}>0\), such that \(F_{k}(\tau)-F_{k}(\tau-1)<\varepsilon C_{3}-C_{4}<0\) for \(\tau_{k-1}<\tau\leq t_{j}\), and \(F_{k}(\tau)-F_{k}(\tau-1)>C_{4}^{\prime}-\varepsilon C_{3}^{\prime}>0\) for \(t_{j+J}<\tau<\tau_{k+1}\)._ Proof.: See Appendix C-3. This result implies that the proxy cost function \(F_{k}(\tau_{k})\) monotonically decreases in \((\tau_{k-1},t_{j}]\) and increases in \([t_{j+J},\tau_{k+1})\). The properties in Propositions 3-5 lead to a useful design intuition for the algorithm. First, if \((\tau_{k-1},\tau_{k+1})\) contains only one index \(t_{j}\in\{t_{1},t_{2},\ldots,t_{K-1}\}\), then \(t_{j}\) can be found by minimizing \(F_{k}(\tau)\) in \((\tau_{k-1},\tau_{k+1})\). Second, if \((\tau_{k-1},\tau_{k+1})\) contains no index \(t_{j}\), then there must be another interval \((\tau_{k^{\prime}-1},\tau_{k^{\prime}+1})\) containing more than one index \(t_{j},t_{j+1},\ldots,t_{j+J}\), and as a result, by minimizing \(F_{k^{\prime}}(\tau)\) over \((\tau_{k^{\prime}-1},\tau_{k^{\prime}+1})\), the solution satisfies \(\tau_{k^{\prime}}\in[t_{j},t_{j+J}]\). This intuition leads to a successive merge-and-split algorithm derived as follows.

### _A Merge-and-Split Algorithm under the Proxy Cost_

Denote \(\mathbf{\tau}^{(m)}\) as the segmentation variable from the \(m\)th iteration, and \(\mathcal{C}_{k}^{(m)}=\{\tau_{k-1}^{(m)}+1,\tau_{k-1}^{(m)}+2,\ldots,\tau_{k}^{(m)}\}\) as the corresponding index set of the \(k\)th cluster based on the segmentation variable \(\mathbf{\tau}^{(m)}\). For the \((m+1)\)th iteration, the **merge** step picks a cluster \(\mathcal{C}_{k}^{(m)}\), \(k\in\{1,2,...,K-1\}\), and merges it with the adjacent cluster \(\mathcal{C}_{k+1}^{(m)}\), forming a new set of \(K-1\) clusters \(\mathcal{\tilde{C}}_{1}^{(m,k)},\mathcal{\tilde{C}}_{2}^{(m,k)},\ldots,\mathcal{\tilde{C}}_{K-1}^{(m,k)}\); algebraically, it is equivalent to removing the \(k\)th variable \(\tau_{k}^{(m)}\), resulting in a set of \(K-2\) segmentation variables, denoted as a \((K-2)\)-tuple \(\mathbf{\tau}^{(m,k)}=(\tau_{1}^{(m,k)},\tau_{2}^{(m,k)},\ldots,\tau_{K-2}^{(m,k)})\). In the **split** step, a cluster \(\mathcal{\tilde{C}}_{j}^{(m,k)}\) is selected and split into two, resulting in a new set of \(K\) clusters \(\mathcal{\tilde{C}}_{1}^{(m,k,j)}\), \(\mathcal{\tilde{C}}_{2}^{(m,k,j)}\), \(\ldots\), \(\mathcal{\tilde{C}}_{K}^{(m,k,j)}\), described by a segmentation variable \(\hat{\mathbf{\tau}}^{(m,k,j)}\), where the split position is chosen as the minimizer of the proxy cost \(F_{j}(\tau;\mathbf{\tau}_{-j}^{(m,k,j)})\) over the selected cluster, and the corresponding cost reduction relative to \(\mathbf{\tau}^{(m)}\) is denoted by \(\triangle F_{\star}^{(m,k,j)}\). As a result, \(\triangle F_{\star}^{(m,k,j)}\) corresponds to the cost reduction of merging clusters \(\mathcal{C}_{k}^{(m)}\) and \(\mathcal{C}_{k+1}^{(m)}\) followed by an optimal split of the \(j\)th cluster after the merge. Then, by evaluating the cost reduction for all \((K-1)^{2}\) possible combinations of the merge-and-split, one can find the best segmentation variable \(\mathbf{\tau}^{(m+1)}\) that yields the least cost for the \((m+1)\)th iteration. The overall procedure is summarized in Algorithm 1. The complexity can be analyzed as follows. For each iteration, there are \((K-1)^{2}\) merge-and-split operations.
An exhaustive approach to compute Step 2b) in Algorithm 1 requires \(\mathcal{O}(N/K)\) steps to enumerate all possible integer values \(\tau\) in the interval \((\tau_{j-1}^{(m,k,j)},\tau_{j+1}^{(m,k,j)})\), while for each step, the function \(F_{j}(\tau;\mathbf{\tau}_{-j}^{(m,k,j)})\) can be approximately computed by its stochastic approximation \(f_{j}(\tau;\mathbf{\tau}_{-j}^{(m,k,j)})\) in (22), which requires a complexity of \(\mathcal{O}(DN/K)\). As a result, for each iteration, it requires a complexity of \(\mathcal{O}(DN^{2})\). For a benchmark, an exhaustive approach to solve (20) requires \(\mathcal{O}(N^{(K-1)})\) iterations and each iteration requires a complexity of \(\mathcal{O}(NK)\) to evaluate (20). Thus, the proposed merge-and-split is efficient. ### _Convergence and Optimality_ Algorithm 1 must converge due to the following two properties. First, the cost function is lower bounded by \(0\) since it is the sum of squares. Second, define \(\tilde{F}(\mathbf{\tau})\triangleq\sum_{k=1}^{K-1}F_{k}(\tau_{k};\mathbf{\tau}_{-k})\), if \(\tilde{F}(\mathbf{\tau}^{(m+1)})\neq\tilde{F}(\mathbf{\tau}^{(m)})\), then the \(m\)th iteration must _strictly_ decrease the cost. Specifically, for each \(k\), Step 3) in Algorithm 1 guarantees \(\triangle F_{\star}^{(m,k,j^{*}(k))}\geq\triangle F_{\star}^{(m,k,k)}\), implying that \(\tilde{F}(\mathbf{\hat{\tau}}^{(m,k,j^{*}(k))})\leq\tilde{F}(\mathbf{\tau}^{(m)})\) for all \(k\); in addition, the output of the outer loop implies that \(\tilde{F}(\mathbf{\tau}^{(m+1)})\leq\tilde{F}(\mathbf{\hat{\tau}}^{(m,k,j^{*}(k))})\) for all \(k\), and there must be at least one strict inequality for \(\tilde{F}(\mathbf{\tau}^{(m+1)})<\tilde{F}(\mathbf{\tau}^{(m)})\) if \(\mathbf{\tau}^{(m)}\neq(t_{1},t_{2},...,t_{K-1})\), which is proved in Appendix E. To investigate the optimality of the converged solution from Algorithm 1, consider the clusters \(\widetilde{\mathcal{C}}_{j}^{(m,k)}\) constructed in Step 1) of Algorithm 1. Recall that the data \(\{\mathbf{x}_{i}\}\) is clustered sequentially with segment boundaries \(t_{1},t_{2},\ldots,t_{K-1}\). We have the following property on the cost reduction \(\triangle F_{\star}^{(m,k,j)}\) in (26). **Lemma 1** (Cost Reduction).: _Consider two distinct clusters \(\widetilde{\mathcal{C}}_{j}^{(m,k)}\) and \(\widetilde{\mathcal{C}}_{j^{\prime}}^{(m,k)}\) constructed from the \(m\)th iteration and the \(k\)th loop of Step 1) in Algorithm 1. Suppose that there exists at least one index \(t_{k^{\prime}}\in\{t_{1},t_{2},\ldots,t_{K-1}\}\) in \(\widetilde{\mathcal{C}}_{j}^{(m,k)}\), and no such \(t_{k^{\prime}}\) in \(\widetilde{\mathcal{C}}_{j^{\prime}}^{(m,k)}\). Then, for a sufficiently small \(\beta\), \(\triangle F_{\star}^{(m,k,j)}>\triangle F_{\star}^{(m,k,j^{\prime})}\)._ Proof.: See Appendix D. Lemma 1 can be intuitively understood from Propositions 3 and 4, which suggest that \(F_{j}(\tau;\mathbf{\tau}_{-j}^{(m,k,j)})\) is unimodal in \((\tau_{j-1}^{(m,k,j)},\tau_{j+1}^{(m,k,j)})\), but \(F_{j^{\prime}}(\tau;\mathbf{\tau}_{-j^{\prime}}^{(m,k,j^{\prime})})\) is almost flat in \((\mathbf{\tau}_{j-1}^{(m,k,j^{\prime})},\mathbf{\tau}_{j+1}^{(m,k,j^{\prime})})\), and hence, the former one has a larger potential to reduce the total cost \(\tilde{F}(\mathbf{\tau})=\sum_{k=1}^{K-1}F_{k}(\tau_{k};\mathbf{\tau}_{-k})\). 
**Theorem 1** (Optimality).: _Algorithm 1 terminates at \(\mathbf{\tau}^{*}=(\tau_{1}^{*},\tau_{2}^{*},\ldots,\tau_{K-1}^{*})\), with \(\tau_{k}^{*}=t_{k}\), \(k=1,2,\ldots,K-1\)._ Proof.: See Appendix E. As a result, Algorithm 1 can converge to the globally optimal solution \(t_{k}\) under the proxy cost \(F_{k}(\cdot)\) for \(d_{k}=0\) and \(s_{k}^{2}=s^{2}\) despite the problem being non-convex.

### _Merge-and-Split Clustering for Sequential Data_

We now extend Algorithm 1 to the general case for \(d_{k}\geq 0\). Recall from Proposition 2 that the segment boundary estimator \(\hat{\tau}_{k}^{(N)}\) obtained from minimizing \(f_{k}(\tau_{k};\mathbf{\tau}_{-k})\) is asymptotically consistent with the minimizer \(\tau_{k}^{*}\) of the proxy cost \(F_{k}(\tau_{k};\mathbf{\tau}_{-k})\) for asymptotically small \(\beta\) and large \(N\). This motivates extending Algorithm 1 to minimize the actual cost \(f_{k}(\tau;\mathbf{\tau}_{-k})\). It is clear that \(f_{k}\), which can be computed directly from the data \(\{\mathbf{x}_{i}\}\), is a stochastic approximation of the proxy function \(F_{k}(\tau_{k};\mathbf{\tau}_{-k})\). Specifically, based on the probability model (2), for \(d_{k}\geq 0\), we define \[\mathcal{F}_{k}(\mathbf{\Theta},\mathbf{\tau})\triangleq\frac{1}{N}\sum_{i=\tau_{k-1}+1}^{\tau_{k+1}}\Big{[}\big{(}1-\sigma_{\beta}(i-\tau_{k})\big{)}\Big{(}\ln|\mathbf{C}_{k}|+(\mathbf{x}_{i}-\mathbf{\mu}_{k})^{\mathsf{T}}\mathbf{C}_{k}^{-1}(\mathbf{x}_{i}-\mathbf{\mu}_{k})\Big{)}+\sigma_{\beta}(i-\tau_{k})\Big{(}\ln|\mathbf{C}_{k+1}|+(\mathbf{x}_{i}-\mathbf{\mu}_{k+1})^{\mathsf{T}}\mathbf{C}_{k+1}^{-1}(\mathbf{x}_{i}-\mathbf{\mu}_{k+1})\Big{)}\Big{]} \tag{27}\] for \(k=1,2,\ldots,K-1\), and \[\mathcal{F}_{0}(\mathbf{\Theta},\mathbf{\tau})=\frac{1}{N}\sum_{i=1}^{\tau_{1}}\sigma_{\beta}(i-\tau_{0})\Big{(}\ln|\mathbf{C}_{1}|+(\mathbf{x}_{i}-\mathbf{\mu}_{1})^{\mathsf{T}}\mathbf{C}_{1}^{-1}(\mathbf{x}_{i}-\mathbf{\mu}_{1})\Big{)}\] \[\mathcal{F}_{K}(\mathbf{\Theta},\mathbf{\tau})=\frac{1}{N}\sum_{i=\tau_{K-1}+1}^{N}\big{(}1-\sigma_{\beta}(i-\tau_{K})\big{)}\Big{(}\ln|\mathbf{C}_{K}|+(\mathbf{x}_{i}-\mathbf{\mu}_{K})^{\mathsf{T}}\mathbf{C}_{K}^{-1}(\mathbf{x}_{i}-\mathbf{\mu}_{K})\Big{)}\] in the same way as (22)-(24). Following the same argument as in Proposition 1, it is observed that maximizing \(\mathcal{J}(\mathbf{\Theta},\mathbf{\tau})\) in (5) is asymptotically equivalent to minimizing \(\frac{1}{2}\sum_{k=0}^{K}\mathcal{F}_{k}(\mathbf{\Theta},\mathbf{\tau})\) for \(\beta\to 0\). In Section III-A, it has been shown that, for a given segmentation variable \(\mathbf{\tau}\), the solution \(\hat{\mathbf{\Theta}}(\mathbf{\tau})\) can be constructed from (9), (14)-(15). Therefore, the \(f_{k}\) function (22) studied in Section IV-A to Section IV-E can be obtained in a similar way as \[f_{k}(\tau_{k};\mathbf{\tau}_{-k})=\mathcal{F}_{k}(\hat{\mathbf{\Theta}}(\mathbf{\tau}),\mathbf{\tau}). \tag{28}\] As a result, a direct extension of Algorithm 1 to solve for the segmentation \(\mathbf{\tau}\) is to replace the function \(F_{k}(\tau_{k};\mathbf{\tau}_{-k})\) in Algorithm 1 with \(f_{k}(\tau_{k};\mathbf{\tau}_{-k})\) defined in (28). In the following, we call the extended version Algorithm 1 for convenience.
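To make the merge-and-split idea concrete, the sketch below implements it for the \(K\)-means special case (\(d_{k}=0\)) with a hard window, where the cost of a segment is its within-segment sum of squared deviations. It is a simplified illustration under these assumptions, not the paper's Algorithm 1 or 2, which operate on the smooth-window subspace costs (27)-(28).

```python
# Minimal sketch of merge-and-split segmentation (d_k = 0, hard window).
# Boundaries tau include 0 and N; segments use half-open indexing X[a:b].
import numpy as np

def seg_cost(X, a, b):
    """Sum of squared deviations of X[a:b] from its mean."""
    seg = X[a:b]
    return float(((seg - seg.mean(axis=0)) ** 2).sum()) if b > a else 0.0

def best_split(X, a, b):
    """Optimal single split of X[a:b]; returns (cost, split_index)."""
    best = (np.inf, None)
    for t in range(a + 1, b):
        c = seg_cost(X, a, t) + seg_cost(X, t, b)
        if c < best[0]:
            best = (c, t)
    return best

def merge_and_split(X, K, max_iter=50):
    N = len(X)
    tau = list(np.linspace(0, N, K + 1, dtype=int))       # initial boundaries
    for _ in range(max_iter):
        base = sum(seg_cost(X, tau[i], tau[i + 1]) for i in range(K))
        best = (base, None)
        for k in range(1, K):                              # merge: drop tau[k]
            merged = tau[:k] + tau[k + 1:]
            for j in range(K - 1):                         # split: segment j
                a, b = merged[j], merged[j + 1]
                if b - a < 2:
                    continue
                split_cost, t = best_split(X, a, b)
                total = split_cost + sum(
                    seg_cost(X, merged[i], merged[i + 1])
                    for i in range(K - 1) if i != j)
                if total < best[0] - 1e-9:
                    best = (total, sorted(merged[:j + 1] + [t] + merged[j + 1:]))
        if best[1] is None:                                # no strict improvement
            break
        tau = best[1]
    return tau[1:-1]                                       # interior boundaries
```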
Note that the complexity of evaluating (28) is \(\mathcal{O}(D^{2}N+D^{3})\) because computing (27) requires \(\mathcal{O}(D^{2}N/K)\) and computing \(\hat{\mathbf{\Theta}}(\mathbf{\tau})\) requires \(\mathcal{O}(D^{2}N+D^{3})\), leading to a total complexity of \(\mathcal{O}(D^{2}N^{2}K)\) per iteration in Algorithm 1. To reduce the complexity, one may consider _alternately_ updating \(\mathbf{\Theta}\) and \(\mathbf{\tau}\) in \(\mathcal{F}_{k}(\mathbf{\Theta},\mathbf{\tau})\). Such an alternating optimization approach can be developed from Algorithm 1 in a straightforward way and is summarized in Algorithm 2. As such, the per-iteration complexity is reduced to \(\mathcal{O}(D^{2}N+D^{3}+N^{2})\). ### _Convergence and Verification of the Theoretical Results_ In practice, the mobile device has a non-negligible transition from one region to another, and therefore, the exact timing at the region boundary is not well-defined. We thus define the \(\varepsilon\)-tolerance error \[E_{\varepsilon}=\frac{1}{N}\sum_{k=1}^{K-1}\max\{|\tau_{k}-t_{k}|-\varepsilon N,0\}\] to evaluate the convergence of the algorithms. Here, global optimality is claimed when \(E_{\varepsilon}=0\). The proposed Algorithms 1 and 2 and their variants, marked as "R", are evaluated. The "R" versions of the proposed algorithms are randomly initialized following a uniform distribution for \(\tau_{k}^{(0)}\) in the first line of Algorithm 1. Algorithm 2 is initialized by running Algorithm 1 for 15 iterations, which have been counted in the total iterations of Algorithm 2 in Fig. 2. In Fig. 2 (a), the cost value reduction of Algorithm 2 tends to saturate at the 15th iteration (initial phase of Algorithm 2), but the cost value continues to decrease quickly starting from the \(16\)th iteration (main loop of Algorithm 2). The convergence is benchmarked with a gradient-based subspace clustering (GSC) method [20], which employs stochastic gradient descent to search for \(\boldsymbol{\tau}\). Fig. 2 shows the objective function value \(\sum_{k=1}^{K-1}\mathcal{F}_{k}(\widehat{\boldsymbol{\Theta}}(\boldsymbol{\tau}),\boldsymbol{\tau})\) and the \(\varepsilon\)-tolerance error \(E_{\varepsilon}\) against the iteration number, where \(\varepsilon\) is chosen as \(\varepsilon=0.3\)%. Both Algorithms 1 and 2, as well as their variants with random initialization, converge to the globally optimal solution. By contrast, the GSC baseline is not guaranteed to converge to \(E_{\varepsilon}=0\), as it is easily trapped at a poor local optimum. In addition, Algorithm 1 requires fewer iterations than Algorithm 2, but it has a higher computational complexity per iteration. Specifically, for the dataset we used, the total computational time of Algorithm 1 is 1033 seconds, whereas that of Algorithm 2 is 245 seconds, about four times faster than Algorithm 1. Nevertheless, Algorithm 1 is guaranteed to converge globally in a special case, as stated in Theorem 1. Next, we verify the theoretical properties in Propositions 3-5 through two numerical examples. In Fig. 3 (a), a simulated dataset is constructed based on the subspace model (1) with \(D=40\), \(d_{k}=0\), and \(K\) clusters. The ratio \(\|\boldsymbol{\mu}_{i}-\boldsymbol{\mu}_{j}\|_{2}^{2}/s^{2}\) of the squared distance between the cluster centers over the noise variance \(s^{2}\) is set to 2.5. The data is segmented into two parts by \(\tau_{1}\), and the cost function \(f_{1}(\tau_{1})\) is plotted.
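For reference, the \(\varepsilon\)-tolerance error used in this convergence study can be computed as in the following short sketch; the array-based interface is an illustrative assumption.

```python
import numpy as np

def eps_tolerance_error(tau_est, t_true, N, eps=0.003):
    """epsilon-tolerance error E_eps: boundary deviations below eps*N are tolerated.
    Global optimality is declared when the returned value is 0."""
    tau_est = np.asarray(tau_est, dtype=float)
    t_true = np.asarray(t_true, dtype=float)
    return float(np.sum(np.maximum(np.abs(tau_est - t_true) - eps * N, 0.0)) / N)
```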
First, as the number of samples \(N\) increases, the cost function \(f_{1}(\tau_{1})\) converges to the deterministic proxy \(F_{1}(\tau_{1})=\mathbb{E}\{f_{1}(\tau_{1})\}\), as shown by the group of curves for \(K=2\) clusters. Second, \(f_{1}(\tau_{1})\) appears unimodal for \(K=2\) clusters under large \(N\), which agrees with the results in Proposition 3. Third, for \(K=1\) cluster, \(f_{1}(\tau_{1})\) is an almost flat function under large \(N\), as discussed in Proposition 4. For \(K=3\) clusters under large \(N\), \(f_{1}(\tau_{1})\) appears monotonic near the left and right boundaries, as discussed in Proposition 5. In Fig. 3 (b), the experiment is extended to real data. We extract a subset of data samples belonging to \(K=1,2,3\) consecutive clusters from the measurement dataset, and plot the cost function \(f_{1}(\tau_{1})\) under parameter \(d_{k}=2\) and different \(\beta\) values. It is observed that the cost function appears unimodal, disturbed by noise, for \(K=2\) clusters, which is consistent with the results in Proposition 3. This also justifies that the proposed subspace model is essentially accurate for real data. In addition, while a small \(\beta\) may help amplify the unimodality of the cost function as implied by Proposition 3, the cost function is prone to be disturbed by the modeling noise, resulting in multiple local minimizers. On the contrary, a medium to large \(\beta\) may help suppress the noise and yield a unique local minimizer. ### _Cluster to Physical Region Matching_ The proposed cluster-to-region matching is based on a graph \(\mathcal{G}\), which may be generated from the floor plan in practice. For evaluation purposes here, we generate a set of graphs by randomly assigning edges between regions. The probability of an edge joining two regions \(j\) and \(k\) is \(q_{jk}=C_{\text{e}}\cdot\exp(-\|\mathbf{o}_{j}-\mathbf{o}_{k}\|_{2}^{2})\), _i.e._, the smaller the distance, the higher the probability, where \(C_{\text{e}}\) is a normalizing factor for the expected number of edges in the graph, and \(\mathbf{o}_{j}\) and \(\mathbf{o}_{k}\) are the reference locations of the \(j\)th and \(k\)th region, respectively, as shown in Fig. 1. Figure 2: Convergence of the proposed algorithms, where the curves marked with "R" represent the mean trajectories for 20 independent random initializations. Figure 3: Verification of the unimodality, the flatness, and the monotonicity near the boundaries of the cost function. The matching error is computed as \(E_{\text{m}}=\frac{1}{K}\sum_{k}\mathbb{I}\{\pi^{*}(k)\neq\pi(k)\}\), where \(\mathbf{\pi}^{*}\) is the desired matching. Table II summarizes the matching error for different graphs. It is observed that the fewer the edges, the lower the matching error. This is because the constraint set in (19) is smaller for fewer edges. For the graphs with 18 edges, the average number of eligible routes satisfying constraint (19) is 287 in our region topology; for the graphs with 27 edges, that number is above 13,000. Nevertheless, the overall matching error is mostly below 3% under parameter \(\alpha=1\) and below 1% for \(|\mathcal{E}|\leq 20\) under all parameter values \(\alpha=1,2,4\) when computing the reference centroid in (17). In addition, we compare the performance under different parameters \(\alpha\). It is shown that the performance is not sensitive to \(\alpha\), and the overall matching error is mostly below 4%.
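The random region-adjacency graphs and the matching error used in this evaluation can be reproduced with a short sketch. The edge probability follows the stated rule \(q_{jk}=C_{\text{e}}\exp(-\|\mathbf{o}_{j}-\mathbf{o}_{k}\|_{2}^{2})\); the function names and the way \(C_{\text{e}}\) is tied to a target expected edge count are illustrative assumptions.

```python
import numpy as np

def random_region_graph(centers, expected_edges, rng=None):
    """Draw an undirected region graph in which edge (j, k) appears with
    probability q_jk = C_e * exp(-||o_j - o_k||^2); C_e is chosen so that the
    expected number of edges equals expected_edges."""
    rng = np.random.default_rng(rng)
    centers = np.asarray(centers, dtype=float)
    K = centers.shape[0]
    j_idx, k_idx = np.triu_indices(K, k=1)
    d2 = np.sum((centers[j_idx] - centers[k_idx]) ** 2, axis=1)
    w = np.exp(-d2)
    C_e = expected_edges / np.sum(w)                  # normalizing factor
    probs = np.clip(C_e * w, 0.0, 1.0)
    keep = rng.random(probs.shape) < probs
    return list(zip(j_idx[keep].tolist(), k_idx[keep].tolist()))

def matching_error(pi_est, pi_true):
    """E_m: fraction of clusters assigned to the wrong physical region."""
    pi_est, pi_true = np.asarray(pi_est), np.asarray(pi_true)
    return float(np.mean(pi_est != pi_true))
```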
### _Localization Performance_ We evaluate the localization performance of the region-based radio map using test datasets I and II, which have not been used for the radio map construction. A maximum-likelihood approach is used based on the conditional probability function (2), and the estimated region \(\hat{k}\) given the RSS measurement vector \(\mathbf{x}\) is given by \[\hat{k}=\operatorname*{argmax}_{k\in\{1,2,\ldots,K\}}p_{k}(\mathbf{x}; \mathbf{\Theta}).\] We compare the localization performance with two unsupervised schemes, max-RSS (MR), which picks the location of the sensor that observes the largest RSS as the target location, and WCL [8], which estimates the location as (17). For performance benchmarking, we also evaluate three supervised localization approaches KNN [25, 26], SVM [27], and DNN [28, 29], which are trained using the training set with region labels that were not available to the proposed scheme. The parameters of baseline methods are determined and tuned using a ten-fold cross validation. For KNN, the optimal number of neighbors was found to be 8. A Gaussian kernel was used for SVM. For DNN, we adopt a three layer multilayer perceptron (MLP) neural network with 30 nodes in each layer to train the localization classifier. The parameter \(\beta\) in (7) is set to be 1. The subspace dimension \(d_{k}\) is chosen from 1 to 3 according to the number of sensors located in the region as shown in Fig. 1. The performance is evaluated using the mean _region localization error_ defined as \(\mathbb{E}\{\|\mathbf{o}_{k}^{\ast}-\mathbf{o}_{k}\|\}\), where \(\mathbf{o}_{k}\) is the reference location of the \(k\)th region. Table III summarizes the region localization error. Somewhat surprisingly, the proposed scheme, trained without labels, performs even better then the supervised methods; in fact, it performs the best among all the schemes tested. There could be two reasons. First, the proposed clustering algorithm already recovers the correct region labels as indicated in Fig. 2. Second, the signal subspace model in (1) is, perhaps, more accurate for region-based localization than the other models. In addition, it is observed that the performance of all schemes degrades over time. However, the proposed scheme in Day 3 still performs better than all the other methods in Day 2. Thus, the proposed scheme based on signal subspace model is less sensitive to the change of the radio environment. Recall that the proposed scheme does not require location labels for constructing the region-based radio map, and hence, it is easier for re-calibration compared with the supervised schemes. ## VI Conclusion In this paper, a subspace clustering method with a sequential prior is proposed to construct a region-based radio map from sequentially collected RSS measurements. A maximum-likelihood estimation problem with a sequential prior is formulated, and solved by a proposed merge-and-split algorithm that is proven to converge to a globally optimal solution for a special case. Furthermore, a graph model for a set of possible routes is constructed which leads to a Viterbi algorithm for the region matching and achieves less than 1% matching error. The numerical results demonstrated that the proposed unsupervised scheme achieves an even better localization performance than several supervised learning schemes, including KNN, SVM, and DNN which use location labels during the training. 
## Appendix A Proof of Proposition 1 We have \(\tilde{f}(\tau)=\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1,j\neq k,k+1}^{K}z_{\beta}( i,\tau_{j-1},\tau_{j})\big{\|}\mathbf{x}_{i}-\hat{\mathbf{\mu}}_{k}(\tau_{k-1},\tau_{k}) \big{\|}_{2}^{2}+\tilde{f}_{k}(\tau_{k})\), where \[\tilde{f}_{k}(\tau_{k}) =\frac{1}{N}\sum_{i=1}^{N}\Big{[}z_{\beta}(i,\tau_{k-1},\tau_{k}) \big{\|}\mathbf{x}_{i}-\hat{\mathbf{\mu}}_{k}(\tau_{k-1},\tau_{k})\big{\|}_{2}^{2}\] \[\quad+z_{\beta}(i,\tau_{k},\tau_{k+1})\big{\|}\mathbf{x}_{i}-\hat {\mathbf{\mu}}_{k+1}(\tau_{k},\tau_{k+1})\big{\|}_{2}^{2}\Big{]}.\] Recall the uniform convergence for \(\hat{\mathbf{\mu}}_{k}(\tau_{k-1},\tau_{k})\) as \(\beta\to 0\) in (21), i.e., \(\hat{\mathbf{\mu}}_{k}(\tau_{k-1},\tau_{k})\rightarrow\hat{\mathbf{\mu}}(\tau_{k-1},\tau _{k})\). We have \(\tilde{f}_{k}(\tau_{k})\rightarrow\tilde{f}_{k}(\tau_{k})\triangleq\frac{1}{N} \sum_{i=1}^{N}(z_{\beta}(i,\tau_{k-1},\tau_{k})\big{\|}\mathbf{x}_{i}-\tilde{ \mathbf{\mu}}(\tau_{k-1},\tau_{k})\big{\|}_{2}^{2}+z_{\beta}(i,\tau_{k},\tau_{k+1}) \big{\|}\mathbf{x}_{i}-\tilde{\mathbf{\mu}}(\tau_{k},\tau_{k+1})\big{\|}_{2}^{2})\). Since \(z_{\beta}(i,t_{k-1},t_{k})\rightarrow\mathbb{I}\{t_{k-1}<i\leq t_{k}\}\) as \(\beta\to 0\), where the indicator function \(\mathbb{I}\{t_{k-1}<i\leq t_{k}\}=1\) if \(t_{k-1}<i\leq t_{k}\), and \(0\) otherwise, we have \(\sum_{i=1}^{N}z_{\beta}(i,\tau_{k-1},\tau_{k})\big{\|}\mathbf{x}_{i}-\tilde{\bm {\mu}}(\tau_{k-1},\tau_{k})\big{\|}_{2}^{2}\rightarrow\sum_{i=\tau_{k-1}+1}^{ \tau_{k+1}}(1-\sigma_{\beta}(i-\tau_{k})\big{\|}\mathbf{x}_{i}-\tilde{\mathbf{\mu}}( \tau_{k-1},\tau_{k})\big{\|}_{2}^{2}\) and \(\sum_{i=1}^{N}z_{\beta}(i,\tau_{k},\tau_{k+1})\big{\|}\mathbf{x}_{i}-\tilde{\bm {\mu}}(\tau_{k},\tau_{k+1})\big{\|}_{2}^{2}\rightarrow\sum_{i=\tau_{k-1}+1}^{ \tau_{k+1}}\sigma_{\beta}(i-\tau_{k})\big{\|}\mathbf{x}_{i}-\tilde{\mathbf{\mu}}( \tau_{k},\tau_{k+1})\big{\|}_{2}^{2}\) as \(\beta\to 0\). Thus, as \(\beta\to 0\), we have \(\tilde{f}_{k}(\tau_{k})\)\(\rightarrow\)\(f_{k}(\tau_{k})\). So, we have \(\tilde{f}(\mathbf{\tau})\rightarrow\frac{1}{2}\sum_{k=0}^{K}f_{k}(\tau_{k})\) as \(\beta\to 0\). ## Appendix B Proof of Proposition 2 Without loss of generality, we study the case \(K=2\) and \(k=1\). From (22) and (25), \(\bar{F}_{k}(\gamma_{k})\) and \(\bar{f}_{k}(\gamma_{k})\) can be written as \(\bar{F}_{1}(\gamma_{1})=\mathbb{E}\{\bar{f}_{1}(\gamma_{1})\}\) and \[\bar{f}_{1}(\gamma_{1})=\lim_{\beta\to 0} \frac{1}{N}\sum_{i=1}^{N}\Big{[}\Big{(}1-\sigma_{\beta}(i-\gamma_{ 1}N)\Big{)}\big{\|}\mathbf{x}_{i}-\tilde{\mathbf{\mu}}(0,\gamma_{1}N)\big{\|}_{2}^ {2} \tag{29}\] \[+\sigma_{\beta}(i-\gamma_{1}N)\big{\|}\mathbf{x}_{i}-\tilde{\mathbf{ \mu}}(\gamma_{1}N,N)\big{\|}_{2}^{2}\Big{]}.\] Firstly, we prove that \(\bar{f}_{1}(\gamma_{1})\overset{\mathrm{P}}{\to}\bar{F}_{1}(\gamma_{1})\) uniformly for all \(\gamma_{1}\), i.e., \(\sup_{\gamma_{1}\in\Gamma}\ |\bar{f}_{1}(\gamma_{1})-\bar{F}_{1}(\gamma_{1})| \overset{\mathrm{P}}{\to}0\) as \(N\to\infty\), where \(\overset{\mathrm{P}}{\to}\) denotes convergence in probability. 
For any \(\gamma_{1}\leq\bar{\gamma}_{1}=t_{1}/N\), one can easily derive from (21) and (29) that, with \(s_{0}^{2}\overset{\triangle}{=}Ds^{2}\), \[\bar{f}_{1}(\gamma_{1})-\bar{F}_{1}(\gamma_{1}) \tag{30}\] \[=\lim_{\beta\to 0}\ f_{1}(\gamma_{1}N)-\lim_{\beta\to 0}\ F_{1}( \gamma_{1}N)\] \[=2(\mathbf{\mu}_{1}-\mathbf{\mu}_{2})^{\mathrm{T}}\frac{1}{N}\Big{[} \frac{\gamma_{1}-\bar{\gamma}_{1}}{1-\gamma_{1}}\sum_{i=\gamma_{1}N+1}^{N}\bm {\epsilon}_{i}\] \[\qquad+\frac{1-\bar{\gamma}_{1}}{1-\gamma_{1}}\sum_{i=\gamma_{1} N+1}^{\bar{\gamma}_{1}N}\mathbf{\epsilon}_{i}\Big{]}+\frac{1}{N}\sum_{i=1}^{N}\mathbf{ \epsilon}_{i}^{\mathrm{T}}\mathbf{\epsilon}_{i}-s_{0}^{2}.\] **Lemma 2**.: _It holds that \(\sup_{\gamma_{1}\in\Gamma_{1}}\ |\bar{f}_{1}(\gamma_{1})-\bar{F}_{1}(\gamma_{1})| \overset{\mathrm{P}}{\to}0\) with \(N\to\infty\), where \(\bar{\Gamma}_{1}\overset{\triangle}{=}\{1/N,2/N,...,\bar{\gamma}_{1}\}\)._ Proof.: The absolute value of \(\bar{f}_{1}(\gamma_{1})-\bar{F}_{1}(\gamma_{1})\) in (30) can be upper bounded by \[|\bar{f}_{1}(\gamma_{1})-\bar{F}_{1}(\gamma_{1})| \tag{31}\] \[\leq\Big{|}\frac{2(\gamma_{1}-\bar{\gamma}_{1})}{1-\gamma_{1}} \Big{|}\cdot|\mathbf{\mu}_{1}-\mathbf{\mu}_{2}|^{\mathrm{T}}\Big{|}\frac{1}{N}\sum_{i= \bar{\gamma}_{1}N+1}^{N}\mathbf{\epsilon}_{i}\Big{|}+\Big{|}\frac{2(1-\bar{\gamma} _{1})}{1-\gamma_{1}}\Big{|}\] \[\qquad\times|\mathbf{\mu}_{1}-\mathbf{\mu}_{2}|^{\mathrm{T}}\Big{[}\frac{ 1}{N}\sum_{i=\gamma_{1}N+1}^{\bar{\gamma}_{1}N}\mathbf{\epsilon}_{i}\Big{]}+\Big{|} \frac{1}{N}\sum_{i=1}^{N}\mathbf{\epsilon}_{i}^{\mathrm{T}}\mathbf{\epsilon}_{i}-s_{0} ^{2}\Big{|}.\] where \(|\mathbf{x}|\) means \((|x_{1}|,|x_{2}|,...,|x_{n}|)^{\mathrm{T}}\), i.e., taking absolute values for each element of the vector \(\mathbf{x}\). For the first term on the right hand side (R.H.S.) of (31), we have \[\sup_{\gamma_{1}\in\bar{\Gamma}_{1}}\Big{|}\frac{2(\bar{\gamma}_ {1}-\gamma_{1})}{1-\gamma_{1}}\Big{|}\cdot|\mathbf{\mu}_{1}-\mathbf{\mu}_{2}|^{ \mathrm{T}}\Big{|}\frac{1}{N}\sum_{i=\bar{\gamma}_{1}N+1}^{N}\mathbf{\epsilon}_{i} \Big{|}\] \[\leq\frac{2(\bar{\gamma}_{1}N-1)}{N-1}\Big{(}\underset{j}{\max}| \mu_{1,j}-\mu_{2,j}|\Big{)}\Big{\|}\frac{1}{N}\sum_{i=\bar{\gamma}_{1}N+1}^{N} \mathbf{\epsilon}_{i}\Big{\|}_{1}\to 0\] as \(N\to\infty\). This is because \(\frac{2(\bar{\gamma}_{1}N-1)}{N-1}(\max_{j}|\mu_{1,j}-\mu_{2,j}|)\) is bounded and \(\frac{1}{N}\sum_{i=\bar{\gamma}_{1}N+1}^{N}\mathbf{\epsilon}_{i}=\frac{(1-\bar{ \gamma}_{1})N}{N}\frac{1}{(1-\bar{\gamma}_{1})N}\sum_{i=\bar{\gamma}_{1}N+1}^{N }\mathbf{\epsilon}_{i}\to\mathbf{0}\) due to the strong law of large numbers, where we recall that \(\mathbf{\epsilon}_{i}\) are i.i.d. with zero mean and bounded variance. For the second term on the R.H.S. 
of (31), we have \[\sup_{\gamma_{1}\in\bar{\Gamma}_{1}}\Big{|}\frac{2(1-\bar{\gamma} _{1})}{1-\gamma_{1}}\Big{|}\cdot|\mathbf{\mu}_{1}-\mathbf{\mu}_{2}|^{\mathrm{T}}\Big{|} \frac{1}{N}\sum_{i=\gamma_{1}N+1}^{\bar{\gamma}_{1}N}\mathbf{\epsilon}_{i}\Big{|}\] \[\leq 2\Big{(}\underset{j}{\max}|\mu_{1,j}-\mu_{2,j}|\Big{)}\sup_{ \gamma_{1}\in\bar{\Gamma}_{1}}\Big{\|}\frac{1}{N}\sum_{i=\gamma_{1}N+1}^{\bar{ \gamma}_{1}N}\mathbf{\epsilon}_{i}\Big{\|}_{1}\to 0.\] To see this, we compute \[\sup_{\gamma_{1}\in\bar{\Gamma}_{1}}\Big{\|}\frac{1}{N}\sum_{i= \gamma_{1}N+1}^{\bar{\gamma}_{1}N}\mathbf{\epsilon}_{i}\Big{\|}_{1} =\] \[\leq \sum_{j=1}^{D}\sup_{\gamma_{1}\in\bar{\Gamma}_{1}}\Big{|}\frac{1}{N} \sum_{i=\gamma_{1}N+1}^{\bar{\gamma}_{1}N}\mathbf{\epsilon}_{i,j}\Big{|}.\] Note that, \(\forall\,\lambda>0\), \[\mathbb{P}\Bigg{\{}\!\!\sup_{\gamma_{1}\in\bar{\Gamma}_{1}}\!\Big{|} \frac{1}{N}\sum_{i=\gamma_{1}N+1}^{\bar{\gamma}_{1}N}\!\epsilon_{i,j}\Big{|}>\lambda \Bigg{\}}\] \[=\mathbb{P}\Bigg{\{}\!\!\max_{2\leq k\leq\bar{\gamma}_{1}N}\!\! \Big{|}\sum_{i=k}^{\bar{\gamma}_{1}N}\!\frac{\epsilon_{i,j}}{N}\Big{|}>\lambda \Bigg{\}}\] \[\leq\frac{1}{\lambda^{2}}\sum_{i=2}^{\bar{\gamma}_{1}N}\!\!\sqrt{ \Big{\{}\frac{\epsilon_{i,j}}{N}\Big{\}}}=\frac{1}{\lambda^{2}}\frac{1}{N^{2}} (\bar{\gamma}_{1}N-1)s^{2}\to 0\] as \(N\to\infty\), where the inequality above is due to the Kolmogorov's inequality. This confirms that the second term of the R.H.S. of (31) indeed converges to 0 in probability as \(N\to\infty\). For the last term on the R.H.S. of (31) that is independent of \(\gamma_{1}\), it is easy to verify \(\frac{1}{N}\sum_{i=1}^{N}\mathbf{\epsilon}_{i}^{\mathrm{T}}\mathbf{\epsilon}_{i}-s_{0} ^{2}\to 0\) as \(N\to\infty\). Since \(\sup_{\gamma_{1}\in\bar{\Gamma}_{1}}|\bar{f}_{1}(\gamma_{1})-\bar{F}_{1}(\gamma_{1})|\) is less than or equal to the supreme of the R.H.S. of (31), and the R.H.S. of (31) converges to 0 in probability as \(N\to\infty\), we have the conclusion that \(\sup_{\gamma_{1}\in\bar{\Gamma}_{1}}|\ f_{1}(\gamma_{1})-\bar{F}_{1}(\gamma_{1})| \overset{\mathrm{P}}{\to}0\ ## Appendix C Proof of Proposition 3, 4, and 5 ### _1. Proof of Proposition 3_ Following the signal model (1), under \(d_{j}=d_{j+1}=0\), \(s_{j}=s_{j+1}=s\), we have \[\mathbf{x}_{i}=\begin{cases}\boldsymbol{\mu}_{j}+\boldsymbol{\epsilon}_{i},& \tau_{k-1}<i\leq t_{j}\\ \boldsymbol{\mu}_{j+1}+\boldsymbol{\epsilon}_{i},&t_{j}<i\leq\tau_{k+1}. \end{cases} \tag{32}\] Case 1: Consider \(\tau_{k-1}<i\leq t_{j}\). 
Using (32), one can compute the expectation of \(f_{k}(\tau)\) and then take the difference, i.e., \(\triangle F_{k}(\tau)=F_{k}(\tau)-F_{k}(\tau-1)\) can be computed as \[\triangle F_{k}(\tau)=\frac{1}{N}\|\boldsymbol{\mu}_{j}-\boldsymbol{\mu}_{j+ 1}\|_{2}^{2}u(\tau,t_{j},\beta)+\frac{1}{N}s_{0}^{2}\gamma(\tau,\beta) \tag{33}\] where \[u(\tau,t_{j},\beta)=\left(\frac{\tau_{k+1}-t_{j}}{\tau_{k+1}- \tau}\right)^{2}\sum_{i=\tau_{k-1}+1}^{t_{j}}\sigma_{\beta}\left(i-\tau\right)\] \[+\left(1-\frac{\tau_{k+1}-t_{j}}{\tau_{k+1}-\tau}\right)^{2}\sum _{i=t_{j}+1}^{\tau_{k+1}}\sigma_{\beta}\left(i-\tau\right)-\left(\frac{\tau_{ k+1}-t_{j}}{\tau_{k+1}-\tau+1}\right)^{2}\] \[\times\sum_{i=t_{k-1}+2}^{t_{j+1}+1}\sigma_{\beta}\left(i-\tau \right)-\left(1-\frac{\tau_{k+1}-t_{j}}{\tau_{k+1}-\tau+1}\right)^{2}\] \[\times\sum_{i=t_{j}+2}^{\tau_{k+1}+1}\sigma_{\beta}\left(i-\tau \right)+\sum_{i=t_{j}+2}^{\tau_{k+1}+1}\sigma_{\beta}\left(i-\tau\right)-\sum_ {i=t_{j}+1}^{\tau_{k+1}}\sigma_{\beta}\left(i-\tau\right)\] and \[\gamma(\tau,\beta) =\left(\tau_{k+1}-\tau_{k-1}\right)\left(\frac{1}{\tau-\tau_{k-1 }}-\frac{1}{\tau-1-\tau_{k-1}}\right)\] \[+\frac{\sum_{i=\tau_{k-1}+1}^{\tau}\sigma_{\beta}\left(i-\tau \right)}{\tau_{k+1}-\tau}-\frac{\sum_{i=\tau_{k-1}+2}^{\tau}\sigma_{\beta} \left(i-\tau\right)}{\tau_{k+1}-\tau+1}\] \[\quad+\frac{\sum_{i=\tau+1}^{\tau_{k+1}+1}\sigma_{\beta}\left(i- \tau\right)}{\tau-1-\tau_{k-1}}-\frac{\sum_{i=\tau+1}^{\tau_{k+1}}\sigma_{ \beta}\left(i-\tau\right)}{\tau-\tau_{k-1}}\] \[\quad+\frac{\sum_{i=\tau_{k-1}+1}^{\tau}\sigma_{\beta}\left(i- \tau\right)}{\tau-\tau_{k-1}}-\frac{\sum_{i=\tau+1}^{\tau_{k+1}}\sigma_{\beta }\left(i-\tau\right)}{\tau_{k+1}-\tau}\] \[\quad+\frac{\sum_{i=\tau+1}^{\tau_{k+1}+1}\sigma_{\beta}\left(i- \tau\right)}{\tau_{k+1}-\tau+1}-\frac{\sum_{i=\tau_{k-1}+2}^{\tau}\sigma_{ \beta}\left(i-\tau\right)}{\tau-1-\tau_{k-1}}.\] To simplify the expression (33), consider the definition of \(\sigma_{\beta}\left(x\right)=\sigma\left((x-1/2)/\beta\right)\) based on the sigmoid function \(\sigma(x)=(1+\exp(-x))^{-1}\). It follows that \[\begin{cases}1-\varepsilon<\sigma_{\beta}\left(i-\tau\right)<1,&i\geq\tau+1\\ 0<\sigma_{\beta}\left(i-\tau\right)<\varepsilon,&i\leq\tau\end{cases} \tag{34}\] for \(\beta<(2\mathrm{ln}(1/\varepsilon-1))^{-1}\). Using the bounds in (34), there exist positive constants \(B_{1},B_{2}<\infty\), such that \(u(\tau,t_{j},\beta)\leq\varepsilon B_{1}-B_{2}\). Likewise, using (34), the term \(\gamma(\tau,\beta)\) can be upper bounded as \(\gamma(\tau,\beta)\leq\varepsilon(4+\frac{(\tau_{k+1}-\tau_{k-1})^{2}}{\tau_{ k+1}-\tau_{k-1}-1})\). As a result, there exist constants \(C_{1},C_{2}<\infty\), such that the difference in (33) \(\triangle F_{k}(\tau)<\varepsilon C_{1}-C_{2}<0\) for a small enough \(\beta\). Case 2: Now, consider \(t_{j}<i\leq\tau_{k+1}\). The derivation for a lower bound of \(\triangle F_{k}(\tau)\) in (33) is similar to the derivation of the upper bound of \(\triangle F_{k}(\tau)\) in Case 1 for \(\tau_{k-1}<i\leq t_{j}\). The lower bound of \(u(\tau,t_{j},\beta)\) is given by \(B_{4}-\varepsilon B_{3}\) for some positive and finite \(B_{3},B_{4}\). In addition, the lower bound of \(\gamma(\tau,\beta)\) is \(\gamma(\tau,\beta)\geq-\frac{1}{2}(\tau_{k+1}-\tau_{k-1})\varepsilon\). As a result, there exist constants \(C_{1}^{\prime},C_{2}^{\prime}<\infty\), such that \(\triangle F_{k}(\tau)>C_{2}^{\prime}-\varepsilon C_{1}^{\prime}>0\) for a small enough \(\beta\). 
Combining the above two cases yields the results of Proposition 3. ### _2. Proof of Proposition 4_ Under \(d_{k}=0\), \(s_{k}=s\), and no partition index in \((\tau_{k-1},\tau_{k+1}]\), it corresponds to the case where there exists a \(t_{j}\) in \(\{t_{1},t_{2},\ldots,t_{K-1}\}\) such that \(t_{j}\geq t_{k+1}\). Thus, the derivation of \(\triangle F_{k}(\tau)\) follows (31) by replacing \(t_{j}\) with \(\tau_{k+1}\). Then, following the same derivation as in Appendix C-1, one can easily arrive at \(\triangle F_{k}(\tau)<\varepsilon s^{2}\frac{D(\tau_{k+1}-\tau_{k-1})^{2}}{(\tau -\tau_{k-1})(\tau_{k+1}-\tau)}\), and \(\triangle F_{k}(\tau)>-\varepsilon s^{2}\frac{D(\tau_{k+1}-\tau_{k-1})^{2}}{( \tau_{k+1}-\tau+1)(\tau-\tau_{k-1}-1)}\), which are bounded since \(\tau\in(\tau_{k-1},\tau_{k+1}]\). As a result, \(|\triangle F_{k}(\tau)|<\varepsilon s^{2}C_{0}\), for some finite constant \(C_{0}>0\). ### _3. Proof of Proposition 5_ Following the signal model (1) under \(d_{k}=0\), \(s_{k}=s\) for any \(k\in\{1,2,...,K\}\), where \(K\geq 2\), we have \[\mathbf{x}_{i}=\boldsymbol{\mu}_{k}+\boldsymbol{\epsilon}_{i},t_{k-1}<i\leq t_{k} \tag{35}\] Case 1: Consider \(t_{k-1}<\tau\leq t_{j}\). Define \(\boldsymbol{\eta}(o,l)=\frac{t_{1}-t_{k-1}}{t_{k-1}-\sum_{a=0}^{l}}(t_{a}-t_{a -1})\boldsymbol{\mu}_{a}\) as the mean of the sample located in the \(k\)th subspace with \(s_{k}=s\), and \(d_{k}=0\), \(k\in[o,l]\). Using (35), one can compute the expectation of \(f_{k}(\tau)\) and then take the difference, i.e., \(\triangle F_{k}(\tau)=F_{k}(\tau)-F_{k}(\tau-1)\). Using the bounds in (34), there exist constants \(C_{3},C_{4}\), such that \(\triangle F_{k}(\tau)\) is upper bounded by \(\triangle F(\tau)<\varepsilon C_{3}-C_{4}\), which is smaller than zero if \(\beta\) is small enough, and \(\|\boldsymbol{\mu}_{j}-\boldsymbol{\eta}(j+1,j+J+1)\|_{2}\neq 0\), i.e., \(\boldsymbol{\mu}_{j},\boldsymbol{\mu}_{2},...,\boldsymbol{\mu}_{j+J+1}\) are linearly independent. Case 2: Now, consider \(t_{j+J}<\tau\leq\tau_{k+1}\). The derivation for a lower bound of \(\triangle F_{k}(\tau)\) is similar to the derivation of the upper bound of \(\triangle F_{k}(\tau)\) in Case 1 for \(t_{j+J}<\tau\leq\tau_{k+1}\). Using the bounds in (34), there exist constants \(C_{3}^{\prime},C_{4}^{\prime}\), such that \(\triangle F_{k}(\tau)\) is lower bounded by \(\triangle F(\tau)>-\varepsilon C_{3}^{\prime}+C_ small enough \(\beta\)). For the cluster \(\tilde{\mathcal{C}}_{j^{\prime}}^{(m,k)}\), since there is no partition such index \(t_{k^{\prime}}\) in the interval \((\tau_{j^{\prime}-1}^{(m,k,j^{\prime})},\tau_{j^{\prime}+1}^{(m,k,j^{\prime})}]\), Proposition 4 suggests that \(|F_{j^{\prime}}(\tau;\tau_{-j^{\prime}}^{(m,k,j^{\prime})})-F_{j^{\prime}}( \tau_{j^{\prime}}^{*};\tau_{-j^{\prime}}^{(m,k,j^{\prime})})|<\varepsilon B\) which leads to \(|\triangle F_{*}^{(m,k,j^{\prime})}|<\varepsilon B\). As a result, for a small enough \(\beta\) (hence, small enough \(\varepsilon\)), we must have \(\triangle F_{*}^{(m,k,j)}>\triangle F_{*}^{(m,k,j^{\prime})}\). ## Appendix E Proof of Theorem 1 If \(\exists k\in\{1,2,...K-1\}\) satisfying \(\tau_{k}^{(m)}\neq t_{k}\), then for any partition assignment \(\{\tau_{1},\tau_{2},...,\tau_{k-1},\tau_{k+1},...,\tau_{K-1}\}\), there always exists \(l\in\{1,2,...,K-1\}\) such that the interval \((\tau_{l-1},\tau_{l})\) contains at least one of \(\{t_{k}\}_{k=1}^{K-1}\). 
We first prove that the following two cases cannot occur if \(\tilde{F}(\mathbf{\tau}^{(m+1)})=\tilde{F}(\mathbf{\tau}^{(m)})\). 1) Suppose there exists \(l\in\{1,2,...,K-1\}\) such that the interval \((\tau_{l-1},\tau_{l+1})\) contains none of \(\{t_{k}\}_{k=1}^{K-1}\). Then, there must exist an interval \((\tau_{a-1},\tau_{a})\), \(a\neq l,l+1\), containing at least one of \(\{t_{k}\}_{k=1}^{K-1}\). According to Lemma 1, we obtain _a lower total cost_ \(\tilde{F}(\cdot)\) if we replace the partition \(\tau_{l}\) with one of the partitions in \((\tau_{a-1},\tau_{a})\). Thus, the algorithm does not stop if there exists \(l\in\{1,2,...,K-1\}\) such that the interval \((\tau_{l-1},\tau_{l+1})\) contains none of \(\{t_{k}\}_{k=1}^{K-1}\). 2) Suppose there exists \(l\in\{1,2,...,K-1\}\) such that the interval \((\tau_{l-1},\tau_{l+1})\) contains at least two of \(\{t_{k}\}_{k=1}^{K-1}\). Then, there must exist an interval among \((\tau_{i-1},\tau_{i})\), \(i\in\{1,2,...l-1,l+2,...,K\}\), containing none of \(\{t_{k}\}_{k=1}^{K-1}\). This contradicts the conclusion of case 1) above, i.e., that the intervals \((\tau_{l-1},\tau_{l}]\) and \([\tau_{l},\tau_{l+1})\), for any \(l\in\{1,2,...K-1\}\), always contain at least one of \(\{t_{k}\}_{k=1}^{K-1}\) if the algorithm has converged. So, no interval contains two or more of \(\{t_{k}\}_{k=1}^{K-1}\) if \(\tilde{F}(\mathbf{\tau}^{(m+1)})=\tilde{F}(\mathbf{\tau}^{(m)})\). Based on the above analysis, we conclude that the interval \((\tau_{l-1},\tau_{l+1})\), for any \(l\in\{1,2,...K-1\}\), contains exactly one of \(\{t_{k}\}_{k=1}^{K-1}\) if \(\tilde{F}(\mathbf{\tau}^{(m+1)})=\tilde{F}(\mathbf{\tau}^{(m)})\). Moreover, when Algorithm 1 converges, Proposition 3 implies that \(\tau_{l}=t_{l}\) if \(t_{l}\) is the only element of \(\{t_{k}\}_{k=1}^{K-1}\) in \((\tau_{l-1},\tau_{l+1})\). We thus conclude that \(\tau_{k}^{(m+1)}=\tau_{k}^{(m)}=t_{k}\) for all \(k\) when Algorithm 1 has converged.
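As a quick numerical sanity check of the sigmoid bounds in (34), which underpin the sign arguments in Appendices C and D, the following sketch verifies that \(\sigma_{\beta}\) behaves as a near-step function once \(\beta\) is below the stated threshold; the specific values of \(\varepsilon\) and \(\tau\) are arbitrary illustrations.

```python
import numpy as np

def sigma_beta(x, beta):
    """sigma_beta(x) = sigmoid((x - 1/2) / beta)."""
    return 1.0 / (1.0 + np.exp(-(x - 0.5) / beta))

eps = 0.01
beta = 0.9 / (2.0 * np.log(1.0 / eps - 1.0))   # strictly below (2 ln(1/eps - 1))^{-1}
tau = 50
i = np.arange(40, 61)
vals = sigma_beta(i - tau, beta)
# Bounds in (34): close to 1 for i >= tau + 1, close to 0 for i <= tau.
assert np.all(vals[i >= tau + 1] > 1.0 - eps)
assert np.all(vals[i <= tau] < eps)
```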
2309.16470
Machine-learning-inspired quantum optimal control of nonadiabatic geometric quantum computation via reverse engineering
Quantum control plays an irreplaceable role in practical use of quantum computers. However, some challenges have to be overcome to find more suitable and diverse control parameters. We propose a promising and generalizable average-fidelity-based machine-learning-inspired method to optimize the control parameters, in which a neural network with periodic feature enhancement is used as an ansatz. In the implementation of a single-qubit gate by cat-state nonadiabatic geometric quantum computation via reverse engineering, compared with the control parameters in the simple form of a trigonometric function, our approach can yield significantly higher-fidelity ($>99.99\%$) phase gates, such as the $\pi / 8$ gate (T gate). Single-qubit gates are robust against systematic noise, additive white Gaussian noise and decoherence. We numerically demonstrate that the neural network possesses the ability to expand the model space. With the help of our optimization, we provide a feasible way to implement cascaded multi-qubit gates with high quality in a bosonic system. Therefore, the machine-learning-inspired method may be feasible in quantum optimal control of nonadiabatic geometric quantum computation.
Meng-Yun Mao, Zheng Cheng, Yan Xia, Andrzej M. Oleś, Wen-Long You
2023-09-28T14:36:26Z
http://arxiv.org/abs/2309.16470v1
# Machine-learning-inspired quantum optimal control ###### Abstract Quantum control plays an irreplaceable role in practical use of quantum computers. However, some challenges have to be overcome to find more suitable and diverse control parameters. We propose a promising and generalizable average-fidelity-based machine-learning-inspired method to optimize the control parameters, in which a neural network with periodic feature enhancement is used as an ansatz. In the implementation of a single-qubit gate by cat-state nonadiabatic geometric quantum computation via reverse engineering, compared with the control parameters in the simple form of a trigonometric function, our approach can yield significantly higher-fidelity (\(>99.99\%\)) phase gates, such as the \(\pi/8\) gate (T gate). Single-qubit gates are robust against systematic noise, additive white Gaussian noise and decoherence. We numerically demonstrate that the neural network possesses the ability to expand the model space. With the help of our optimization, we provide a feasible way to implement cascaded multi-qubit gates with high quality in a bosonic system. Therefore, the machine-learning-inspired method may be feasible in quantum optimal control of nonadiabatic geometric quantum computation. ## I Introduction Multi-qubit gates are widely used in quantum circuits [1; 2; 3], quantum error correction [4; 5; 6], and other fields [7; 8; 9; 10]. Single-shot multi-qubit gates [11; 12; 13] specify quantum circuits that evolute in a well-controlled, uninterrupted, and continuous-time way to implement the quantum computation [14]. Compared to the cascaded gates, single-shot multi-qubit gates can greatly reduce the circuit depth and shorten the implementation time, thus suppressing decoherence [15; 16]. However, the single-shot method is difficult to realize in experiments owing to the restricted conditions of simultaneously manipulating multiple physical systems and building complex couplings. One of the ways to mitigate the above difficulty is the application of single- and two-qubit gates to equivalently implement the function of multi-qubit gates [17; 18; 19], and the decomposition is guaranteed by the Solovay-Kitaev theorem [20]. Although the decomposition method loses the upper hand in terms of the circuit depth and the implementation time, it has a wider scope of application due to direct execution on the quantum processing unit, namely, the brain of a quantum computer [21]. The realization of the synthetic gates depends on the universal single-qubit gates and a two-qubit entangling gate with high fidelity [16]. However, statistical imprecision in the experimental controls, interactions between the system and the environment, and random driving forces from the environment will cause a reduction in fidelity [22]. Universal single-qubit gates based on the geometric phase in quantum systems have recently shown robustness against control-parameter fluctuations [23]. An adiabatically evolving system driven by a nondegenerate Hamiltonian exhibits geometric phase under cyclic evolution [24; 25; 26]. The geometric phase arising from cyclic evolution of quantum systems is uniquely determined by the geometric structure of the enclosed path in the parameter space [27], which is the well-known analog of the rotational effect in differential geometry when a vector is parallel transported [28; 29]. 
However, the strict condition of the adiabatic limit requires the evolution time to be infinitely long, which inevitably gives rise to decoherence of the system [30]. The nonadiabatic geometric phase gets rid of the bondage of the adiabatic condition, making it possible to shorten the evolution time of the system to a great extent. The nonadiabatic geometric phase lays a solid foundation for nonadiabatic geometric quantum computation (NGQC). Recently, NGQC has been executed theoretically [31; 32; 33; 34] and experimentally [35; 36; 37; 38; 39] in multiple quantum systems. Later, NGQC has been further promoted to NGQC+ [40]. The NGQC+ scheme loosens the conditions for the realization of NGQC to a certain extent, which becomes more compatible with optimal control methods. Several schemes have been developed, including counter-adiabatic driving [41; 42], dynamical decoupling [43; 44], and machine-learning-based optimization techniques [45; 46]. Recent researches have shown that logical qubit encoding is promising to protect quantum computation from errors [47; 48; 49]. However, in standard logical qubit sys tems based on multiple physical qubits [50; 51; 52; 53], quantum error correction and logical operations are difficult to achieve because the number of error channels rapidly increases with the number of qubits [54]. For the realization of logical qubits, bosonic systems are promising candidates, because the number of error channels can dramatically drop [54; 55] with taking advantage of the infinite-dimensional Hilbert space of the harmonic oscillator. The cat states of bosons have been widely used in quantum computation and quantum error correction [56; 57; 58; 59]. Encoding in cat-state subspace via reverse engineering provides a feasible scheme for the realization of NGQC in a bosonic system [60]. The application of reverse engineering, which constructs the Hamiltonian based on the corresponding invariant, makes it easier to find more free parameters to control the evolution path [61; 62]. Numerous studies have demonstrated that the tentative form of control parameters shapes the time evolution of quantum systems in a potentially useful way [63; 64; 65]. Typical forms of control parameters include polynomials of trigonometric functions [66; 67], as well as the product form of the trigonometric and complex exponential functions [68; 69]. In the system to be elucidated subsequently, the control parameters are limited to simple trigonometric functions. The adjustment of the evolution form of control parameters is of great importance in quantum computation. Adopting the machine-learning technology and optimization theory has been proved to be applicable to optimizing the control parameters of variational states in a variety of interacting quantum many-body systems [70; 71; 72; 73; 74]. Although designing control parameters to acquire high-fidelity quantum gates by neural network has been extensively studied for a long time [14; 75], it is still a flourishing and attractive research topic. Researchers designed dispersed and aperiodic control parameters by gradient ascent pulse engineering (GRAPE) under the guidance of state fidelity in nuclear magnetic resonance [76]. Here we introduce this method into the bosonic system, where the aperiodic discontinuous function is generalized to the periodic continuous function. 
We find that the incorporation of GRAPE enables the neural network to possess a powerful representation ability, which can expand the model space through the nonlinear activation function to fit any smooth periodic function and aperiodic function [77; 78]. As a result, we optimize continuous and periodic control parameters, which are easier to physically implement, through the neural network with the enhancement of periodic characteristics. The rest of the paper is organized as follows. In Sec. II we revisit the NGQC+ with cat states via reverse engineering. Section III is devoted to the construction of the neural network guided by the average fidelity with periodic feature enhancement to improve the performance of single-qubit gates. In Sec. IV, we benchmark the optimization on the T gate (\(\pi/8\) gate) and demonstrate that the neural network can effectively expand the model space. Furthermore, we assess the performance of the protocol under systematic noise, random noise and decoherence effect via numerical simulations. Finally, the conclusions and outlook are given in Sec. VI. ## II NGQC+ with cat states based on reverse engineering Applying reverse engineering to quantum computation not only permits the Hamiltonian to be more physically realizable, but also makes the implementation of quantum gates more flexible [61; 79; 80]. Consider a time-dependent Hamiltonian \(H(t)\) and the corresponding dynamic invariant \(I(t)\), which satisfies the following equation [81] (\(\hbar=1\)): \[i\frac{\partial}{\partial t}I(t)-[H(t),I(t)]=0. \tag{1}\] To realize NGQC+, we select a set of time-dependent eigenstates \(|\phi_{l}(t)\rangle\) (\(l=1,2,\cdots,d\)) of \(I(t)\) to span a \(d\)-dimensional computational subspace \(\mathcal{S}\), which are supposed to satisfy the three conditions below [40]. First, the computational basis should satisfy the boundary conditions at times \(t=0\) and \(L\), i.e., \(|\phi_{l}(0)\rangle=|\phi_{l}(L)\rangle\), to ensure that the evolution is cyclic. Here, \(L\) is the evolution period. Secondly, we can rewrite Eq. (1) based on eigenvectors of \(I(t)\) as \[\dot{\Xi}_{l}(t)=-i[H(t),\Xi_{l}(t)], \tag{2}\] where \(\Xi_{l}(t)=|\phi_{l}(t)\rangle\langle\phi_{l}(t)|\) is the projective operator of \(|\phi_{l}(t)\rangle\). Finally, the cumulative dynamic phase of one cycle needs to vanish, \[\Phi_{l}(L)=-\int_{0}^{L}dt\langle\phi_{l}(t)|H(t)|\phi_{l}(t)\rangle=0. \tag{3}\] This condition is the relaxation of parallel transportation \(\langle\phi_{l}(t)|H(t)|\phi_{k}(t)\rangle=0\) in NGQC. When the conditions of NGQC+ are all satisfied, the time evolution operator at the final time \(t=L\) in subspace \(\mathcal{S}\) can be described as \[U(L,0)=\sum_{l}\exp[i\Theta_{l}(L)]\Xi_{l}(0), \tag{4}\] where \(\Theta_{l}(L)\) is the geometric phase, given by \[\Theta_{l}(L)=\int_{0}^{L}dt\langle\phi_{l}(t)|i\frac{\partial}{\partial t}| \phi_{l}(t)\rangle. \tag{5}\] Suppose a Hamiltonian can be represented as follows, \[H(t)=\sum_{j=1}^{g}\lambda_{j}(t)G_{j}, \tag{6}\] where \(g\) is the rank of the group and \(\{G_{j}\}\) is a group of Hermitian generators of Lie algebra [61; 62; 82], obeying the following relations: \[[G_{i},G_{j}]=i\sum_{k}\mu_{ij}^{k}G_{k},\quad(i,j,k\in\{1,2,\cdots,g\}), \tag{7}\] where \(\mu_{ij}^{k}\) is the corresponding structure constant. If an invariant can be written as \[I(t)=\sum_{j=1}^{g}\xi_{j}(t)G_{j}. \tag{8}\] According to Eq. (1), it yields \[\dot{\xi}_{k}(t)=\sum_{i,j=1}^{g}\lambda_{i}(t)\xi_{j}(t)\mu_{ij}^{k}. 
\tag{9}\] Once \(\{\xi_{j}(t)\}\) are known, we can thus obtain \(\{\lambda_{j}(t)\}\) according to Eq. (9). We consider a system in which a resonant single-mode two-photon drive is applied to a Kerr nonlinear resonator. In the rotating frame, the system Hamiltonian [79; 83] can be written by \[H_{\rm cat}=-Ka^{\dagger 2}a^{2}+\epsilon_{2}(e^{2i\xi}a^{\dagger 2}+e^{-2i\xi}a ^{2}), \tag{10}\] where \(K\) is the Kerr nonlinearity, \(a^{\dagger}\) (\(a\)) is the creation (annihilation) operator of the cavity mode, \(\epsilon_{2}\) is the strength of the two-photon driving, and \(\xi\) is the phase of the driving. The coherent states \(|\pm\alpha\rangle\) with \(\alpha=\sqrt{\epsilon_{2}/K}\exp(i\xi)\) are the degenerate eigenstates of \(H_{\rm cat}\), whose superpositions \[|{\cal C}_{\pm}\rangle=\frac{1}{\sqrt{\mathcal{N}_{\pm}}}(|\alpha\rangle\pm| \text{-}\alpha\rangle), \tag{11}\] are referred to as even (odd) cat states with the normalization constants \(\mathcal{N}_{\pm}=2\pm 2\exp\bigl{(}-2|\alpha|^{2}\bigr{)}\). We apply an external single-photon drive [83]: \[H_{c}(t)=\chi(t)a^{\dagger}a+\epsilon(t)a^{\dagger}+\epsilon^{*}(t)a, \tag{12}\] where \(\chi(t)\) and \(\epsilon(t)\) are the detuning and strength of the driving, respectively. The total Hamiltonian is described by \(H_{tot}(t)=H_{\rm cat}+H_{c}(t)\). If the constraint that the energy gaps between cat states and other eigenstates are much larger than \(\chi(t)\) and \(\epsilon(t)\) is satisfied, the Hamiltonian can be reduced to two-dimensional subspace spanned by cat states \(|{\cal C}_{\pm}\rangle\). The Pauli matrices defined by cat states can be chosen as the Hermitian generators of the Lie group. The driving Hamiltonian thus can be simplified as \(H_{c}=\vec{\Omega}(t)\cdot\vec{\sigma}\), where \(\vec{\Omega}(t)=[\Omega_{x}(t),\Omega_{y}(t),\Omega_{z}(t)]\) is a three-dimensional unit vector, and \(\vec{\sigma}=[\sigma_{x},\sigma_{y},\sigma_{z}]\). Consider a dynamic invariant \(I(t)=\vec{\zeta}(t)\cdot\vec{\sigma}\), where \(\vec{\zeta}(t)=[\zeta_{x}(t),\zeta_{y}(t),\zeta_{z}(t)]\). Based on Eq. (9), we can get that \(\dot{\vec{\zeta}}(t)=2\vec{\Omega}(t)\times\vec{\zeta}(t)\) and \(|\zeta(t)|\) is constant. For convenience, we can let \(\vec{\zeta}(t)=(\sin\eta\sin\mu,\cos\eta\sin\mu,\cos\mu)\), where \(\mu\) and \(\eta\) are time-dependent control parameters. The eigenstates of \(I(t)\) in the cat-state representation are \[|\phi_{+}(t)\rangle = \cos\frac{\mu}{2}|{\cal C}_{+}\rangle+i\exp(-i\eta)\sin\frac{\mu} {2}|{\cal C}_{-}\rangle,\] \[|\phi_{-}(t)\rangle = i\exp(i\eta)\sin\frac{\mu}{2}|{\cal C}_{+}\rangle+\cos\frac{\mu} {2}|{\cal C}_{-}\rangle. \tag{13}\] According to Eqs. (3) - (5), we can calculate the geometric phases \[\Theta_{\pm}(L)=\pm\int_{0}^{L}dt\dot{\eta}\sin^{2}\frac{\mu}{2}, \tag{14}\] and the dynamic phases \[\Phi_{\pm}(L)=\mp\int_{0}^{L}dt\biggl{(}\frac{1}{2}\dot{\eta}\sin^{2}\mu+ \Omega_{z}\biggr{)}\sec\mu. \tag{15}\] In order to satisfy the conditions \(\Phi_{\pm}(L)=0\) and \(\dot{\vec{\zeta}}(t)=2\vec{\Omega}(t)\times\vec{\zeta}(t)\), we design \(\vec{\Omega}(t)\) as \[\Omega_{x}(t) = \frac{1}{4}[\dot{\eta}\sin\eta\sin(2\mu)-2\dot{\mu}\cos\eta],\] \[\Omega_{y}(t) = \frac{1}{4}[\dot{\eta}\cos\eta\sin(2\mu)+2\dot{\mu}\sin\eta],\] \[\Omega_{z}(t) = -\frac{1}{2}\dot{\eta}\sin^{2}\mu. 
\tag{16}\] Therefore, we set the parameters \(\chi(t)\) and \(\epsilon(t)\) as \[\chi(t) = \frac{\dot{\eta}\sin^{2}\mu\mathcal{N}_{+}\mathcal{N}_{-}}{|\alpha |^{2}(\mathcal{N}_{+}^{2}-\mathcal{N}_{-}^{2})},\] \[{\rm Re}[\epsilon(t)] = \frac{\sqrt{\mathcal{N}_{+}\mathcal{N}_{-}}}{4|\alpha|}(\Omega_{x} \cos\xi-e^{2|\alpha|^{2}}\Omega_{y}\sin\xi),\] \[{\rm Im}[\epsilon(t)] = \frac{\sqrt{\mathcal{N}_{+}\mathcal{N}_{-}}}{4|\alpha|}(\Omega_{x} \sin\xi+e^{2|\alpha|^{2}}\Omega_{y}\cos\xi), \tag{17}\] which are scarcely different from the forms presented in Ref. [60]. Based on Eq. (4), the time evolution operator can be represented as \[U(L,0)=\left[\begin{array}{cc}\cos\theta+i\cos\mu_{0}\sin \theta&\exp(i\eta_{0})\sin\mu_{0}\sin\theta\\ -\exp(-i\eta_{0})\sin\mu_{0}\sin\theta&\cos\theta-i\cos\mu_{0}\sin\theta\\ \end{array}\right],\] where \(\mu_{0}\) and \(\eta_{0}\) are the initial values of \(\mu\) and \(\eta\), respectively: \[\theta=\int_{0}^{L}dt\dot{\eta}\sin^{2}\frac{\mu}{2}. \tag{19}\] If we choose different \(\mu\), \(\eta\), and \(\theta\), we can implement an arbitrary unitary single-qubit gate [60]. ## III Construction of neural-network ansatz based on the average fidelity Recently a tentative scheme for the parameters is the usage of trigonometric functions as [60] \[\mu = \mu_{0}+\Lambda\sin^{2}\bigg{(}\frac{\pi t}{L}\bigg{)},\] \[\eta = \eta_{0}+\pi\bigg{[}1-\cos\bigg{(}\frac{\pi t}{L}\bigg{)}\bigg{]}, \tag{20}\] where \(\Lambda\) is an auxiliary parameter depending on the concrete form of the desired gate. To facilitate the subsequent discussion, the parameter selection scheme of Eq. (20) is referred to as the trigonometric-function-based protocol. We can numerically calculate the integral in Eq. (19) as \[\theta = \pi\bigg{[}1-\sqrt{\frac{\pi}{2\Lambda}}\Big{(}\cos(\mu_{0}+ \Lambda)C(\sqrt{\frac{2\Lambda}{\pi}}) \tag{21}\] \[+\sin(\mu_{0}+\Lambda)S(\sqrt{\frac{2\Lambda}{\pi}})\Big{)} \bigg{]},\] where \[S(x) = \int_{0}^{x}dt\sin\!\left(t^{2}\right)\!,\quad C(x)=\int_{0}^{x}dt \cos\!\left(t^{2}\right)\] are Fresnel integrals. It is obvious that \(\theta\) is only dependent on \(\mu_{0}\) and \(\Lambda\). We show the variation of \(\theta\) with respect to \(\Lambda\) for a few typical values of \(\mu_{0}\) in Fig. 1. One observes that \(\theta\) exhibits a decaying oscillation with respect to \(\Lambda\) and approaches \(\pi\) when \(\Lambda\) becomes sufficiently large. We find that \(\theta\) cannot take the entire parameter range between 0 and \(2\pi\), which implies that the trigonometric-function-based protocol cannot implement arbitrary single-qubit gates. Especially it is difficult to accurately obtain \(\Lambda\) by solving complex nonlinear Eq. (19). Therefore, we improve the method of designing the variational parameters \(\mu\) and \(\eta\) by machine-learning-inspired optimization based on GRAPE. Subsequently, we employ the neural network under unsupervised machine learning as an ansatz. The neural network is composed of the input, hidden, and output layers. Two adjacent layers are connected by the weights, biases, and activation function. We choose one hidden layer and \(\tanh(x)\) as the activation function. Because it is assumed that \(\mu\) and \(\eta\) have nothing to do with each other, the neural network is not fully connected. If the control parameters are not independent of each other, the fully connected neural network will be adopted. 
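As an aside, the dependence of \(\theta\) on \(\mu_{0}\) and \(\Lambda\) under the trigonometric protocol (20) can be checked by direct numerical quadrature of Eq. (19), without invoking the closed form (21). The sketch below (plain NumPy, with an illustrative grid size) reproduces the decaying oscillation of \(\theta\) toward \(\pi\) for \(\mu_{0}=0\).

```python
import numpy as np

def theta_trig_protocol(mu0, Lam, L=1.0, n=20001):
    """Numerically evaluate theta = int_0^L eta_dot * sin^2(mu/2) dt  (Eq. 19)
    for the trigonometric-function-based protocol (20)."""
    t = np.linspace(0.0, L, n)
    mu = mu0 + Lam * np.sin(np.pi * t / L) ** 2
    eta_dot = (np.pi ** 2 / L) * np.sin(np.pi * t / L)   # derivative of eta in (20)
    integrand = eta_dot * np.sin(mu / 2.0) ** 2
    dt = t[1] - t[0]
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dt)  # trapezoid rule

# theta depends only on mu0 and Lambda; it oscillates and approaches pi for large Lambda.
for Lam in (0.5, 2.0, 8.0, 32.0):
    print(Lam, theta_trig_protocol(mu0=0.0, Lam=Lam))
```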
The final outputs are the specific function expressions \[\mu = \sum_{i=1}^{N}W_{i}^{(2)}\tanh\left(W_{i}^{(1)}\tau^{(1)}+B_{i}^{( 1)}\right)+B^{(2)},\] \[\eta = \sum_{i=1}^{N}W_{i}^{(4)}\tanh\left(W_{i}^{(3)}\tau^{(2)}+B_{i}^{( 3)}\right)+B^{(4)},\] where \(N\) is the number of neurons in the hidden layer. Since the constructions of \(\mu\) and \(\eta\) are similar, we take \(\mu\) as an example. \(\tau^{(1)}\) is the input of the neural network. The output of the neuron in the hidden layer is \(\tanh\left(W_{i}^{(1)}\tau^{(1)}+B_{i}^{(1)}\right)\) with the weights \(W_{i}^{(1)}\) and the biases \(B_{i}^{(1)}\). Similarly, the output of the neural network is the specific function expression of \(\mu\) with the weights \(W_{i}^{(2)}\) and the bias \(B_{i}^{(2)}\). _Feature enhancement._ In parallel, we impose some restrictions on the variational parameters. To meet the cycle evolution condition \(|\phi_{\pm}(0)\rangle=|\phi_{\pm}(L)\rangle\), the control parameters \(\mu\) and \(\eta\) should be periodic and \(L\) is an integer multiple of the corresponding periods of \(\mu\) and \(\eta\). Considering the period of \(\mu\) is \(T_{\mu}\) and the period of \(\eta\) is \(T_{\eta}\), it is supposed that \(T_{\eta}=mT_{\mu}\) with \(m\) being any real number. To be noticed, the periodicity of \(\mu\) and \(\eta\) is aimed at the real time \(t\). For simplicity, we set \(T_{\eta}=2T_{\mu}=L\). The initial values of the control parameters \(\mu_{0}\) and \(\eta_{0}\) can be determined by Eq. (18) for a target single-qubit quantum gate. To summarize, the control parameters should meet three requirements below: (1) \(\mu\) and \(\eta\) are periodic functions; (2) \(\mu\) and \(\eta\) have initial value \(\mu_{0}\) and \(\eta_{0}\), respectively; (3) \(T_{\eta}=2T_{\mu}=L\). The second condition can be satisfied easily. In particular, \(B^{(2)}\) and \(B^{(4)}\) can be set depending on \(\mu(0)=\mu_{0}\) and \(\eta(0)=\eta_{0}\). To achieve the goal that \(\mu\) and \(\eta\) are periodic functions, we ought to make periodic feature enhancement. Without loss of generality, we take the construction of a multi-layer neural network as an example. Considering the lemma that if \(\iota(x)\) is a given smooth periodic function with period \(L\) and \(\Upsilon(\cdot)\) is a smooth function, then \(\Upsilon(\iota(x))\) is still a periodic function with period \(L\)[77]. To proceed, we apply the sinusoidal functions \[\beta(x)=A\cos(\omega x+\phi)+c \tag{22}\] in the first hidden layer with \(\omega=2\pi/L\). We choose a nonlinear activation function for the sake of guaranteeing the periodicity of the output and generating higher-frequency terms to expand the model space in training the neural network. For other hidden layers, the normal linear superposition of neurons in the former layer and nonlinear activation can be used. In this paper, we find that utilizing a small-scale neural-network ansatz with a single hidden layer is sufficient in optimizing the performance of target gates, which shows the superiority of our method. However, it is worth noting that increasing the number of hidden units or incorporating additional hidden layers may yield improved behavior at the cost of increased computational time and more difficult physical realization. 
In this respect, the final representations of \(\mu\) and \(\eta\) of the neural network with the sole hidden layer are given by \[\mu = \sum_{i=1}^{N}W_{i}^{(2)}\tanh\Big{[}W_{i}^{(1)}\cos\big{(}\omega^ {(1)}\tau^{(1)}+\phi_{i}^{(1)}\big{)}+B_{i}^{(1)}\Big{]} \tag{23}\] \[+B^{(2)},\] \[\eta = \sum_{i=1}^{N}W_{i}^{(4)}\tanh\Big{[}W_{i}^{(3)}\cos\big{(} \omega^{(2)}\tau^{(2)}+\phi_{i}^{(2)}\big{)}+B_{i}^{(3)}\Big{]} \tag{24}\] \[+B^{(4)}.\] Here, \(\tau^{(1)}=2\pi t/L\), \(\tau^{(2)}=\pi t/L\), and \(\omega^{(1)}=\omega^{(2)}=1\). \(\phi_{i}^{(1)}\) and \(\phi_{i}^{(2)}\) are learnable parameters of the neural network, which will effectively expand the model space and satisfy the periodic relationship between \(\mu\) and \(\eta\). _Backpropagation guided by the average fidelity._ The average fidelity is the benchmark to assess the performance of the quantum gates in the closed system and proves to be more effective than assessing the fidelity of specific states, especially in improving the performance of synthetic multi-qubit gates, given the uncertainties introduced by the preceding gate in the circuit. We thus choose the average fidelity as the objective function, which is give by [84] \[F(t)=\frac{1}{\mathcal{D}(\mathcal{D}+1)}\big{[}\operatorname{Tr}\!\left(M(t )M(t)^{\dagger}\right)+|\operatorname{Tr}\!\left(M(t)\right)|^{2}\big{]}, \tag{25}\] where \(\mathcal{D}\) is the dimension of the computational subspace, \(M(t)=\mathcal{P}_{c}U_{G}^{\dagger}U_{1}(t)\mathcal{P}_{c}\), \(\mathcal{P}_{c}\) is the projective operators of the subspace, and \(U_{G}\) and \(U_{1}(t)\) are the matrix representations of the ideal and actual gates, respectively. The application of Eq.(25) as the objective function instead of Eq.(19) offers two distinct advantages. First, Eq.(25) takes into consideration the leakage to unwanted levels, making it a more realistic measure of the performance of the scheme compared to \(\theta\), which is defined in the two-dimensional cat-state subspace. Secondly, while both the average fidelity and \(\theta\) are non-convex functions [85] due to the complex interactions among multiple parameters and the utilization of a nonlinear activation function, it is crucial to emphasize that there exists a clear global maximum for the average fidelity, which is unity. This allows for straightforward determination of when to finalize the neural network's learning process. On the other hand, \(\theta\) encompasses infinite ideal values, which may potentially confuse the network. Therefore, considering the aforementioned factors, employing the average fidelity as the objective function for neural network training is a more suitable choice compared to utilizing \(\theta\) and the fidelity of specific states. The workflow of the machine-learning-inspired optimization is illustrated in Fig. 2. In the neural-network ansatz, there are three layers with two input units, \(N\) hidden units, and two output units. The final-state average fidelity \(\mathcal{F}\equiv F(L)\) measured at the final moment depends crucially on the specific evolution details of each previous moment. Considering the nonlinear relationship between the external single-photon drive in Eq.(17) and the control parameters, it is challenging to directly derive the variation of \(\mathcal{F}\) with respect to the neural-network parameters. Alternatively, we use the greedy algorithm, in which the temporal period \(L\) can be divided into \(n\) discrete time slices during an evolution cycle of realizing a single-qubit gate. 
Optimizing the average fidelity at each time slice can lead to a substantial reduction in complexity, ultimately resulting in a higher overall average fidelity \(\mathcal{F}\) for single-qubit gates. It is obvious that the evolution between two contiguous moments is described by the Schrodinger equation [76]. To this end, we calculate all the gradients of the average fidelity \(F(t)\) with respect to the parameters by the chain rule at each time slice. In order to maximize the average fidelity \(F(t)\), we adopt the gradient ascent algorithm to update all the parameters \[W^{(a)} \leftarrow W^{(a)}+l_{W}^{(a)}\frac{\partial F(t)}{\partial W^{(a)}},a=1,2,3,4,\] \[B^{(b)} \leftarrow B^{(b)}+l_{B}^{(b)}\frac{\partial F(t)}{\partial B^{(b)}},b=1,3,\] \[\phi^{(c)} \leftarrow \phi^{(c)}+l_{\phi}^{(c)}\frac{\partial F(t)}{\partial\phi^{(c)}},c=1,2, \tag{26}\] for the next variation. Here, the learning rates \(l_{W}^{(a)}\), \(l_{B}^{(b)}\), and \(l_{\phi}^{(c)}\) are adjustable parameters, chosen according to the impact of the corresponding parameters on the average fidelity \(F(t)\). We calculate the final-state average fidelity \(\mathcal{F}\) corresponding to the current parameters to judge whether the neural network has been well trained. The process of the above operations is defined as one variational process. After performing the variational process \(N_{\mathrm{VP}}\) times, we consider the training to be complete when \(\mathcal{F}\) approaches \(1\) with high precision. Note that the neural-network ansatz for the machine-learning-inspired method allows us to avoid solving Eq. (19), which is assumed to be automatically met when the final-state average fidelity tends to unity. In this case, it is inevitable that we should verify whether Eq. (19) is valid according to the specific form of \(\mu\) and \(\eta\). Figure 2: The workflow of the machine-learning-inspired optimization based on the average fidelity. For a temporal cycle between \(t_{1}=0\) and \(t_{n}=L\) (red dots), which is divided into \(n-2\) slices \(\{t_{i}\}_{i=2}^{n-1}\) (blue dots), we should perform the variation and update all parameters at each time slice to ensure that the neural network captures the information at each moment effectively. Two adjacent dots are connected by the Schrödinger equation, denoted as \(U(t,0)\). We choose the neural network with one hidden layer. \(\tau^{(a)}\) (\(a=1,2\)) is the linear transformation of \(t_{i}\) as the input, and the outputs \(\mu\) and \(\eta\) are functions of \(\tau^{(a)}\). Each neuron in the hidden and output layers has a corresponding weight and bias. The trigonometric functions \(\cos\!\left(\omega\tau^{(a)}+\phi\right)\) with different phases can be used as the inputs of the neurons in the hidden layer to ensure that the output of the neural network is a periodic function, and \(\omega\) is determined by the period of \(\mu\) and \(\eta\). Adjusting the bias of the output is useful to make \(\mu\) and \(\eta\) possess the fixed initial value. At the \(t=L\) moment, the average fidelity is calculated according to the existing parameters to judge whether it is good enough to end the training. ## IV Numerical results and discussion of single-qubit gates The Gottesman-Knill theorem Gottesman and Knill (1993) tells us that a circuit using only Clifford gates and Pauli measurements Gottesman and Knill (1994) is insufficient for universal quantum computation.
T gate \[\left[\begin{array}{cc}1&0\\ 0&e^{i\pi/4}\end{array}\right] \tag{27}\] is the most natural and easiest single-qubit non-Clifford gate, which supplements the set of Clifford gates to achieve universal quantum computation Gottesman and Knill (1993); Gottesman and Knill (1994). The implementation of the T gate in the trigonometric-function-based protocol is not perfect. To realize the T gate, we should make the off-diagonal elements of the evolution \(U(T,0)\) in Eq. (18) vanish. Thus, we set \(\mu_{0}=0\). In this case, \(U(T,0)\) has nothing to do with \(\eta_{0}\). We choose \(\eta_{0}=0\) for simplicity. For the diagonal elements, it is readily yielded that \(2k\pi-2\theta=\pi/4\), namely, \(\theta=k\pi-\pi/8\), where \(k\) is an arbitrary integer. In the neural network, there are six hidden units. \(\mu\) and \(\eta\) use half of the hidden units, respectively, as shown in Fig. 2. We pre-train the neural network according to the trigonometric-function-based protocol to obtain the initial parameters. Take a time series with \(n=1000\) data points evenly spaced between the time duration \(t=0\) and \(L\). Then, we set \(l_{W}^{(2)}=l_{W}^{(4)}=10^{-4}\), \(l_{W}^{(a)}=l_{B}^{(b)}=l_{\phi}^{(c)}=10^{-5}\), \(a=1,3\), \(b=1,3\), \(c=1,2\) for the first \(1240\) iterations of variational processes, and \(l_{W}^{(2)}=l_{W}^{(4)}=10^{-5}\), \(l_{W}^{(a)}=l_{B}^{(b)}=l_{\phi}^{(c)}=10^{-6}\), \(a=1,3\), \(b=1,3\), \(c=1,2\) for the last \(1890\) iterations of variational processes. The learning rates of \(W^{(2)}\) and \(W^{(4)}\) are ten times more than those of other parameters, because the impact of \(W^{(2)}\) and \(W^{(4)}\) on the average fidelity \(F(t)\) is much larger than other parameters. In the NGQC+, the amplitude of coherent states is \(|\alpha|=0.5\), the Kerr nonlinearity is \(K=2\pi\times 12.5\)MHz, the energy gap is \(E_{gap}=4K\alpha^{2}=78.5\)MHz 1, and the total interaction time is \(T=1\mu s\). For simplicity, we refer to our scheme as the machine-learning-inspired protocol. Footnote 1: The \(\alpha\)-function is defined as \(\alpha=\frac{1}{2}\left(\frac{1}{2}\right)\), where \(\alpha=\frac{1}{2}\left(\frac{1}{2}\right)\), where \(\alpha=\frac{1}{2}\left(\frac{1}{ correct direction for learning. One can infer that the average fidelity is superior to the imposed constraint of \(\theta\) in Eq. (19). Thus, it is wise to choose the average fidelity instead of \(\theta\) as the objective function. As such, we show the comparison between the initial and final points of \(\mu(t)\) and \(\eta(t)\) in Fig. 4 (a) and (c). The initial forms take trigonometric functions, which are the outcomes of pre-training the neural network. The final forms are obtained by the outputs of the neural network. One finds that the final form has a clear deviation in the amplitude and the structure symmetry from the initial form after the entire training of the neural network. The final forms are no longer simple trigonometric functions, which can be clearly revealed by the derivatives of \(\mu(t)\) with respect to \(\tau^{(1)}\) and \(\eta(t)\) with respect to \(\tau^{(2)}\) shown in Fig. 4 (b) and (d). The introduction of the neural network can broaden the model space, in which the control parameters can take more extensive and feasible trial forms. To get more insights into the behaviors of \(\mu(t)\) and \(\eta(t)\) at the initial and final points of the training, we plot the trajectories of the eigenstates \(|\phi_{\pm}(t)\rangle\) on the Bloch sphere in Fig. 
5: \[\vec{r}_{\pm}(t)=\sum_{k=x,y,z}\mathrm{Tr}\big{[}|\phi_{\pm}\rangle\langle\phi _{\pm}|\sigma_{k}\big{]}\vec{e}_{k}, \tag{28}\] where \(\vec{e}_{k}\) is the unit vector along the \(k\) axis. The differences between the initial and final \(\mu(t)\) and \(\eta(t)\) are magnified on the Bloch sphere. It can be seen that the evolution path varies a lot during the entire training. Thus, the neural-network ansatz shows unique advantages in quantum optimal control, which can obtain a more complex ansatz for possible control parameters. _Noise robustness._ Next, we evaluate the performance of our scheme under different noisy circumstances. First, we consider the systematic noise effect, such as instrument defects and imperfection operations. Systematic errors can cause the average value of measured data to deviate significantly from the ideal value. The influence of systematic errors may be present in the parameters of the control Hamiltonian that can be written as \(\Omega_{k}^{c}=(1+\delta_{k})\Omega_{k}\), \(k=x,y,z\), where \(\delta_{k}\) is the error coefficient. We plot the final-state average fidelity \(\mathcal{F}\) of the T gate with respect to the error coefficient \(\delta_{k}\) in Fig. 6. We can find that when \(\delta_{x}\in[-0.1,0.1]\) (\(\delta_{y}\in[-0.1,0.1]\)), the final-state average fidelity \(\mathcal{F}\) remains higher than \(0.9986\) (\(0.9984\)), while we can only obtain \(\mathcal{F}\geq 0.9611\) when \(\delta_{z}\in[-0.1,0.1]\). It is obvious that the noise in the \(z\) axis direction will cause more catastrophic decline in the final-state average fidelity than that in \(x\) and \(y\) axes. This effect can be understood because according to Eq. (3) the fluctuation in \(\Omega_{z}\) will cause persistent adverse effects on the dynamic phase. The unidirectional offset of \(\Omega_{z}\) will make the dynamic phase not vanish after a cycle, and then spoils the conditions of NGQC+. We also consider the random noise effect, in which the amplitude, waveform, and phase are random at any time. Each random noise is still subject to certain statistical distribution. If the amplitude distribution of a noise follows Gaussian distribution and its power spectral density is uniformly distributed, this noise is called additive white Gaussian noise (AWGN). AWGN is one of the typical random noise models. Therefore, we take AWGN as an example to analyze the robustness of our method to random processes and compare the robustness of the machine-learning-inspired and trigonometric-function based protocols. We add the AWGN to control Figure 4: The comparison between the initial and the final forms of (a) \(\mu\) and (c) \(\eta\). The initial forms take trigonometric functions, and the final forms are the outputs of the neural network. The first derivatives with respect to the inputs of the neural network (b) \(d\mu/d\tau^{(1)}\) and (d) \(d\eta/d\tau^{(2)}\) are also compared. Figure 5: Map of the changes of \(\mu\) and \(\eta\) into the changes of the evolution path of the eigenstates \(|\phi_{\pm}(t)\rangle\) on the Bloch sphere. The blue line is the evolution paths of \(|\phi_{+}(t)\rangle\). The red line is the evolution path of \(|\phi_{-}(t)\rangle\). \(|\mathcal{C}_{\pm}\rangle\) are the initial states of the evolution of \(|\phi_{\pm}(t)\rangle\). (a) The evolution path corresponds to the initial \(\mu\) and \(\eta\). (b) The evolution path corresponds to the final \(\mu\) and \(\eta\). 
parameters as \[\Omega_{k}^{q}(t)=\Omega_{k}(t)+\mathcal{A}_{G}[\Omega_{k}(t),\text{SNR}], \tag{29}\] where \(q\) represents each random generator of AWGN, \(\mathcal{A}_{G}[\Omega_{k}(t),\text{SNR}]\) is a function that generates AWGN for the original signal \(\Omega_{k}(t)\) with signal-to-noise ratio \(\text{SNR}=10\log_{10}(\text{P}_{\text{signal}}/\text{P}_{\text{noise}})\), and \(\text{P}_{\text{signal}}\) and \(\text{P}_{\text{noise}}\) are the power of signal and noise, respectively. Due to the random generation of AWGN, we perform a large amount of numerical simulations to estimate the random noise effect. The logarithms of the deviations \(\delta\mathcal{F}\) of the mean values of final-state average fidelities of the T gate from the ideal value of \(50p\) (\(p=1,2,3,\cdots\)) iterations of numerical simulations are plotted in Fig. 7 with \(\text{SNR}=10\). When \(p\) tends to infinity, the simulation consequence is pretty close to the actual impact of the random noise. The ideal value of the final-state average fidelity in the machine-learning-inspired protocol is \(0.999\), while that in the trigonometric-function-based protocol is \(0.8894\). It is observed that our scheme performs significantly better in the presence of random noise. Compared to the trigonometric-function-based protocol, the mean value of the final-state average fidelities under random noise in the machine-learning-inspired scheme exhibits fewer fluctuations with respect to \(p\) and approaches \(1-7.84\times 10^{-4}\), with a smaller deviation from the ideal value as \(p\) becomes sufficiently large. It is thus acknowledged that our scheme is robust against the random noise, and enhancing the performance of the gate can improve the robustness to a certain degree. As the system cannot be completely isolated from the environment, the inevitable interactions between the system and the environment will also lead to the decoherence. We mainly consider two dissipation factors, such as a single-photon loss and dephasing [47]. The evolution of the system can be described by the Lindblad master equation [47; 90]: \[\dot{\rho}(t) = -i[H_{\text{cat}}+H_{c}(t),\rho(t)] \tag{30}\] \[+\Gamma\mathcal{L}[a]\rho(t)+\Gamma_{\phi}\mathcal{L}[a^{\dagger }a]\rho(t).\] Here, \(\Gamma\) and \(\Gamma_{\phi}\) are the dissipation coefficients of a single-photon loss and dephasing, respectively, and the Lindblad superoperator \(\mathcal{L}\) acting on arbitrary operator \(o\) produces \(\mathcal{L}[o]\rho(t)=o\rho(t)o^{\dagger}-o^{\dagger}o\rho(t)/2-\rho(t)o^{ \dagger}o/2\). In the presence of decoherence, the evolution is no more unitary. We can no longer use Eq. (25) to measure the performance of the quantum gates. Therefore, we take the evolution with initial state \(|\mathcal{C}_{+}\rangle\) as an example and evaluate the fidelity of the T gate as \[F_{T}=\left\langle\mathcal{C}_{+}\Big{|}U_{T}^{\dagger}\rho(T)U_{T}\Big{|} \mathcal{C}_{+}\right\rangle. \tag{31}\] In our numerical simulation, we set \(\Gamma=\Gamma_{\phi}=0.05\)MHz, and we can obtain the fidelity of the T gate as Figure 6: The variation of final-state average fidelity \(\mathcal{F}\) of the T gate with respect to the systematic error coefficient \(\delta_{k}\), \(k=x,y,z\). 
Figure 7: The logarithms of the deviations \(\delta\mathcal{F}\) of the mean values of final-state average fidelities of the T gate from the ideal value with respect to simulation times \(\mathcal{R}\) under the random noise effect with \(\text{SNR}=10\) in the machine-learning-inspired (purple solid line) and trigonometric-function based (blue dash-dot line) protocols. The dotted lines represent the convergence values of the two protocols. Here, \(\mathcal{R}=50p,(p=1,2,3,\cdots)\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Gate & \(\mu_{0}\) & \(\eta_{0}\) & Fidelity & \(\theta_{\text{actual}}\) & \(\theta_{\text{ideal}}\) & Error \\ \hline T & 0 & 0 & 0.9999 & 2.7332 & \(7\pi/8\) & 0.0157 \\ X & \(3\pi/2\) & \(\pi/2\) & 0.9999 & 1.5459 & \(\pi/2\) & 0.0249 \\ H & \(\pi/4\) & \(\pi/2\) & 0.9997 & 1.5738 & \(\pi/2\) & 0.0030 \\ T\({}^{\dagger}\) & 0 & 0 & 0.9999 & 0.3569 & \(\pi/8\) & 0.0338 \\ R\({}_{\text{x}}(\pi/4)\) & \(\pi/2\) & \(-\pi/2\) & 0.9992 & 3.6019 & \(9\pi/8\) & 0.0676 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters and corresponding final-state average fidelities for the implementation of single-qubit gates. The rightmost three columns are to verify whether Eq. (19) is satisfied. We calculate \(\theta_{\text{actual}}\) with the output of the neural network, and the relative error between \(\theta_{\text{actual}}\) and \(\theta_{\text{ideal}}\). 0.9803. This means the leakage to unwanted levels outside the subspace is still very small, and our scheme is insensitive to decoherence. The implementations of the NOT gate (X gate), Hadamard gate (H gate), \(\mathrm{T}^{\dagger}\) gate, and \(\mathrm{R}_{\mathrm{x}}(\pi/4)\) gate are listed in Tab. 1. Here, \(\mathrm{R}_{\mathrm{x}}(\phi)=\exp\bigl{(}-\frac{\mathrm{i}}{2}\phi\sigma_{ \mathrm{x}}\bigr{)}\) is a rotation gate around the \(x\)-axis [21]. It can be seen that the machine-learning-inspired protocol excels for almost all kinds of single-qubit gates. Especially, our scheme shows superiority in phase gates, whose average fidelities can reach 0.9999, much higher than those in the trigonometric-function-based protocol. The performance of the X gate in the two protocols is equally remarkable. Through the neural network, we can implement the rotation gates which are unrealizable in the trigonometric-function-based protocol. For the H gate, the obtained results are not very accurate, and more sophisticated neural networks are awaited. Furthermore, we realize the modified controlled-NOT (CNOT) gate with the final-state average fidelity 0.9996. Here, \(\hat{U}_{\mathrm{CNOT}}=|\mathcal{C}_{+}\rangle\langle\mathcal{C}_{+}|\otimes \mathbb{I}+|\mathcal{C}_{-}\rangle\langle\mathcal{C}_{-}|\otimes(-i\sigma_{ x})\) and \(\mathbb{I}\) is the unit matrix acting on the cat-state subspace. The execution of a two-qubit controlled gate is shown in Appendix A. It is clear that for each gate, the higher the final-state average fidelity is, the smaller the error is. To conclude, through the introduction of the neural network, we can lift the restrictions imposed on \(\theta\) to a certain extent. We can realize arbitrary \(\theta\) by adjusting the initial parameters and the structure of the neural network. ## V The realization of Toffoli gate In the trigonometric-function-based protocol, it is scarcely possible to execute the single-shot multi-qubit gates, and the final-state average fidelity of synthetic multi-qubit gates by combining high-fidelity single- and two-qubit gates will be rather low. 
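Before turning to the Toffoli construction below, it is useful to recall how a synthesized multi-qubit gate is scored against its ideal target. A minimal sketch is given here, using the standard average-fidelity expression for two unitaries of dimension \(d\); whether this coincides exactly with Eq. (25) of the paper is an assumption.

```python
import numpy as np

def average_gate_fidelity(U_ideal, U_actual):
    """F_avg = (|Tr(U_ideal^dag U_actual)|^2 + d) / (d^2 + d) for d x d unitaries."""
    d = U_ideal.shape[0]
    overlap = np.trace(U_ideal.conj().T @ U_actual)
    return (abs(overlap) ** 2 + d) / (d ** 2 + d)
```

For the synthetic three-qubit gate discussed next, `U_actual` would be the product of the embedded 8 x 8 matrices of the individually realized CNOT, H, T, and T† gates, and `U_ideal` the modified Toffoli matrix given below.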
The Toffoli gate [20] \[\left[\begin{array}{cccccccc}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&-1&0&0&0&0&0\\ 0&0&0&-1&0&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&-1\\ 0&0&0&0&0&0&-1&0\\ \end{array}\right]\] is composed of the CNOT gate, H gate, T gate, and \(\mathrm{T}^{\dagger}\) gate, as is shown in Fig. 8. The final-state average fidelity of Toffoli gate is 0.5169, when all gates in Fig. 8 are realized in the trigonometric-function-based protocol, and the main limitation is due to the bad performance of the T gate. In the machine-learning-inspired protocol, we can realize higher-fidelity multi-qubit gates in the cascaded mode. The finely modified Toffoli gate can be synthesized by H gate and CNOT gate implemented in the trigonometric-function-based protocol and T gate and \(\mathrm{T}^{\dagger}\) gate shown in Tab.1, and the final-state average fidelity of such a three-qubit entangling gate increases to 0.9976, with improved performance of the T and \(\mathrm{T}^{\dagger}\) gates. However, we find that, although the final-state average fidelities of T and \(\mathrm{T}^{\dagger}\) gates are up to 0.9999, it is still challenging to synthesize a high-fidelity multi-qubit gate, which is seriously hindered by the lowest-fidelity quantum gates. When all gates in Fig. 8 are realized in the machine-learning-inspired protocol, the final-state average fidelity can further increase to 0.9981. Thus, our scheme can provide a feasible routine to realize multi-qubit gates in bosonic systems. ## VI Conclusion and Outlook In this paper, we present a machine-learning-inspired method of optimizing the performance of the imperfect gates with cat-state NGQC+ via reverse engineering. By utilizing periodic feature enhancement and corresponding biases, we can obtain a periodic function as an output of a neural-network ansatz with fixed initial values. The machine-learning-inspired protocol allows us to not have to solve the difficult nonlinear equation [Eq. (19)], which can be automatically satisfied when the final-state average fidelity tends to be 1. Through analyzing the variational forms of the control parameters and comparing with the simple trigonometric functions, we prove that the neural network can greatly expand the model space and realize a more complex ansatz for possible control parameters. We find the final-state average fidelities of the phase gates and NOT gate can reach 0.9999, and those of the Hadamard gate and CNOT gate can be up to 0.9996. In order to improve the performance of the Hadamard gate, we can expand the scale of the neural network by increasing the number of hidden units and hidden layers. We can also adjust the periodic relationship between \(\mu\) and \(\eta\) and the initial parameters in the hope of obtaining better results. An alternative approach in the neural network is to use multi-objective optimization. Meanwhile, we show that we developed an approach for implementing high-fidelity rotation gates that are challenging to realize using trigonometric function-based protocol. Our scheme demonstrates robustness against various types of decoherence effects. Additionally, we observe that once the Figure 8: The Toffoli gate is composed of the CNOT gate, H gate, T gate (\(\pi/8\)), and \(\mathrm{T}^{\dagger}\) (\(-\pi/8\)) gate. average fidelities of single- and two-qubit gates surpass a certain threshold, the average fidelities of the synthetic gate may not be significantly compromised. 
Combining high-fidelity single- and two-qubit gates, we can implement the Toffoli gate with high fidelity, which can not be simply realized in trigonometric-function-based protocol. In order to further improve the performance of the synthetic gate, we can use the average fidelity of the synthetic gate to guide the variational learning of the neural network, instead of only optimizing the single and two-qubit gates, and the improved scheme is left for a future study. We thus provide an alternative method of designing the control parameters. The machine-learning-inspired scheme paves the way for the optimization of continuous and periodic parameters in the quantum control, and can be generalized to more intricate neural networks featuring a substantial number of optimizable parameters, targeting increasingly complex quantum systems [66; 67; 68; 69]. ###### Acknowledgements. The authors appreciate very insightful discussions with Yimin Wang, Ming Xue and Meng-Jiao Lyu. This work is supported by College students' innovation and entrepreneurship training program projects of Nanjing University of Aeronautics and Astronautics under Grant 202210287094Z. W.-L. Y. kindly acknowledges support by the National Natural Science Foundation of China (NSFC) under Grant No. 12174194 and a startup fund of Nanjing University of Aeronautics and Astronautics under Grant No. 1008-YAH20006. A.M.O. kindly acknowledges Narodowe Centrum Nauki (NCN, Poland) Project No. 2021/43/B/ST3/02166 and is grateful for support via the Alexander von Humboldt Foundation Fellowship (Humboldt-Forschungspreis). ## Appendix A The realization of two-qubit controlled gate The Hamiltonian of two cavity modes driven by two Kerr-nonlinear resonators can be described as \[H_{\rm cat,2}=\sum_{n=1,2}\big{(}-Ka_{n}^{\dagger 2}a_{n}^{2}+\epsilon_{2}(e^{ 2i\xi}a_{n}^{\dagger 2}+e^{-2i\xi}a_{n}^{2})\big{)}. \tag{21}\] Here, \(a_{n}\) (\(a_{n}^{\dagger}\)) is the annihilation (creation) operator of the \(n\)th mode.The product states of two-mode coherent states \(\{|\alpha\rangle_{1}\otimes|\alpha\rangle_{2}\}\), with \(\alpha=\pm\sqrt{\epsilon_{2}/K}\exp(i\xi)\), are four-fold degenerate eigenstates of \(H_{\rm cat2}\). \(\{|\mathcal{C}_{\pm}\rangle_{1}\otimes|\mathcal{C}_{\pm}\rangle_{2}\}\) can span the four-dimensional subspace \(\mathcal{S}_{2}\) to implement the two-qubit gates. The control Hamiltonian [90; 91; 92] is given by \[H_{c2}(t) = \chi_{12}(t)a_{1}^{\dagger}a_{1}a_{2}^{\dagger}a_{2}+a_{1}^{ \dagger}a_{1}\Big{[}\lambda^{*}(t)a_{2}+\lambda(t)a_{2}^{\dagger}\Big{]} \tag{22}\] \[+ \epsilon^{*}(t)a_{2}+\epsilon(t)a_{2}^{\dagger}+\sum_{n=1,2} \chi_{n}(t)a_{n}^{\dagger}a_{n}.\] Here, \(\chi_{12}(t)\) is the cross-Kerr parameter, \(\lambda(t)\) is the longitudinal interaction strength between modes 1 and 2, \(\epsilon(t)\) is the strength of the extra driving of mode 2, and \(\chi_{n}\) (\(n\)=1,2) is the detuning of the \(n\)th mode. Similarly, it is assumed that the parameters of \(H_{c2}\) should be much smaller than the energy gaps between cat states and other eigenstates of \(H_{\rm cat,2}\). 
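A compact numerical sketch of the two-mode Kerr-cat setting of Eq. (21), for example with QuTiP, could look as follows. The Fock-space cutoff, the drive phase, and the reuse of the single-qubit values \(K\) and \(\alpha=0.5\) are assumptions made only for illustration.

```python
import numpy as np
from qutip import destroy, qeye, tensor, coherent

N = 20                        # Fock cutoff per cavity mode (assumption)
K = 2 * np.pi * 12.5e6        # Kerr nonlinearity, as in the single-qubit case
alpha, xi = 0.5, 0.0
eps2 = K * abs(alpha) ** 2    # two-photon drive, from alpha = sqrt(eps2/K) e^{i xi}

a1 = tensor(destroy(N), qeye(N))
a2 = tensor(qeye(N), destroy(N))

def kerr_cat(a):
    """Single-mode part of H_cat,2 in Eq. (21)."""
    return (-K * a.dag() ** 2 * a ** 2
            + eps2 * (np.exp(2j * xi) * a.dag() ** 2 + np.exp(-2j * xi) * a ** 2))

H_cat2 = kerr_cat(a1) + kerr_cat(a2)

# Even/odd cat states spanning the computational subspace S_2 of each mode
c_plus = (coherent(N, alpha) + coherent(N, -alpha)).unit()
c_minus = (coherent(N, alpha) - coherent(N, -alpha)).unit()
```

The control Hamiltonian \(H_{c2}(t)\) of Eq. (22) can be assembled from the same mode operators once its time-dependent coefficients are specified.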
To realize the two-qubit controlled gate \(U_{2}(T,0)=|\mathcal{C}_{+}\rangle_{1}\langle\mathcal{C}_{+}|\otimes\mathbb{ I}_{2}+|\mathcal{C}_{-}\rangle_{1}\langle\mathcal{C}_{-}|\otimes U_{s}(T,0)\), the parameters of \(H_{c2}\) are set as follows: \[\chi_{12}(t) = \frac{-2\Omega_{z}\mathcal{N}_{+}^{2}\mathcal{N}_{-}^{2}}{|\ \alpha\ |^{4}\ ( \mathcal{N}_{+}^{2}-\mathcal{N}_{-}^{2})^{2}},\] \[\chi_{1}(t) = -\frac{|\ \alpha\ |^{2}\ (\mathcal{N}_{+}^{2}+\mathcal{N}_{-}^{2}) }{2\mathcal{N}_{+}\mathcal{N}_{-}}\chi_{12}(t),\] \[\chi_{2}(t) = -\ |\ \alpha\ |^{2}\ \frac{\mathcal{N}_{-}}{\mathcal{N}_{+}}\chi_{12}(t),\] \[{\rm Re}[\lambda(t)] = \frac{(\mathcal{N}_{+}\mathcal{N}_{-})^{\frac{3}{2}}}{4(\mathcal{ N}_{+}^{2}-\mathcal{N}_{-}^{2})\ |\ \alpha\ |^{3}}(\Omega_{x}\cos\xi-\Omega_{y}e^{2|\alpha|^{2}}\sin\xi),\] \[{\rm Im}[\lambda(t)] = \frac{(\mathcal{N}_{+}\mathcal{N}_{-})^{\frac{3}{2}}}{4(\mathcal{ N}_{+}^{2}-\mathcal{N}_{-}^{2})\ |\ \alpha\ |^{3}}(\Omega_{x}\sin\xi+\Omega_{y}e^{2|\alpha|^{2}}\cos\xi),\] \[{\rm Re}[\epsilon(t)] = -\ |\ \alpha\ |^{2}\ \frac{\mathcal{N}_{-}}{\mathcal{N}_{+}}{\rm Re}[ \lambda(t)],\] \[{\rm Im}[\epsilon(t)] = -\ |\ \alpha\ |^{2}\ \frac{\mathcal{N}_{-}}{\mathcal{N}_{+}}{\rm Im}[ \lambda(t)], \tag{23}\] which are slightly different from the parameters chosen in Ref. [60]. The optimization of two-qubit controlled gates in the neural networks is similar to that of single-qubit gates.
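For completeness, the parameter map of Eq. (23) can be transcribed directly into code. A sketch is given below; the cat-state normalizations \(\mathcal{N}_{\pm}=[2(1\pm e^{-2|\alpha|^{2}})]^{-1/2}\) follow the standard definition and are an assumption insofar as they are not restated here.

```python
import numpy as np

def control_params(omega_x, omega_y, omega_z, alpha, xi):
    """Direct transcription of Eq. (23): returns (chi12, chi1, chi2, lambda, epsilon)
    for given qubit-frame rates Omega_{x,y,z}(t) and cat amplitude alpha e^{i xi}."""
    a2 = abs(alpha) ** 2
    Np = 1.0 / np.sqrt(2 * (1 + np.exp(-2 * a2)))   # assumed normalization of |C_+>
    Nm = 1.0 / np.sqrt(2 * (1 - np.exp(-2 * a2)))   # assumed normalization of |C_->
    chi12 = -2 * omega_z * Np**2 * Nm**2 / (a2**2 * (Np**2 - Nm**2) ** 2)
    chi1 = -a2 * (Np**2 + Nm**2) / (2 * Np * Nm) * chi12
    chi2 = -a2 * Nm / Np * chi12
    pref = (Np * Nm) ** 1.5 / (4 * (Np**2 - Nm**2) * abs(alpha) ** 3)
    lam = pref * ((omega_x * np.cos(xi) - omega_y * np.exp(2 * a2) * np.sin(xi))
                  + 1j * (omega_x * np.sin(xi) + omega_y * np.exp(2 * a2) * np.cos(xi)))
    eps = -a2 * (Nm / Np) * lam
    return chi12, chi1, chi2, lam, eps
```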
2306.01789
Edit Distance based RL for RNNT decoding
RNN-T is currently considered the industry standard in ASR, owing to its exceptional WERs on various benchmarks and its ability to support seamless streaming and longform transcription. Its biggest drawback, however, lies in the significant discrepancy between its training and inference objectives. During training, RNN-T maximizes all alignment probabilities under teacher forcing, while during inference it uses beam search, which may not find the most probable alignment. Moreover, because teacher forcing never exposes RNN-T to its own mistakes during training, the model struggles to recover when an error occurs at inference. To address these issues, this paper proposes a Reinforcement Learning method that narrows the gap between training and inference. Our Edit Distance based RL (EDRL) approach computes rewards from the edit distance and trains the network at every action level. The proposed approach yields SoTA WERs on LibriSpeech for the 600M Conformer RNN-T model.
Dongseong Hwang, Changwan Ryu, Khe Chai Sim
2023-05-31T16:53:23Z
http://arxiv.org/abs/2306.01789v2
# Edit Distance based RL for RNNT decoding ###### Abstract RNN-T is currently considered the industry standard in ASR due to its exceptional WERs in various benchmark tests and its ability to support seamless streaming and longform transcription. However, its biggest drawback lies in the significant discrepancy between its training and inference objectives. During training, RNN-T maximizes all alignment probabilities by teacher forcing, while during inference, it uses beam search which may not necessarily find the maximum probable alignment. Additionally, RNN-T's inability to experience mistakes during teacher forcing training makes it more problematic when a mistake occurs in inference. To address this issue, this paper proposes a Reinforcement Learning method that minimizes the gap between training and inference time. Our Edit Distance based RL (EDRL) approach computes rewards based on the edit distance, and trains the network at every action level. The proposed approach yielded SoTA WERs on LibriSpeech for the 600M Conformer RNN-T model. Dongseong Hwang, Changwan Ryu, Khe Chai Sim Google, U.S.A {dongseong, changwan, khechai}@google.com **Index Terms**: speech recognition, RNN-T, reinforcement learning, Actor-critic, teacher-forcing, edit distance ## 1 Introduction The Recurrent Neural Network-Transducer (RNN-T) [1] has demonstrated significant success in both academic and industrial settings for automatic speech recognition (ASR) [2, 3, 4, 5]. This can be attributed to several factors, including the availability of multiple public benchmarks that have established the RNN-T model as state-of-the-art (SoTA) for ASR [6, 7, 8, 9], such as LibriSpeech [10], SpeechStew [11], and Multi-lingual LibriSpeech [12]. Additionally, the RNN-T model offers seamless support for streaming ASR [13] and longform utterances [14], which are crucial requirements for many real-world applications. In comparison, other popular models such as the Connectionist temporal classification (CTC) [15] and attention-based models [16] have limitations. Specifically, CTC models generally exhibit higher word error rates (WER) than RNN-T models [8], while attention-based models are challenging to support streaming requirements and require non-trivial modifications to support longform inputs [17]. Despite its advantages in streaming and longform support, the RNN-T model is not without its limitations. Unlike attention-based ASR models, the RNN-T model requires teacher-forcing during training, which means that only ground truth labels are used regardless of the predicted output. As a result, the RNN-T model lacks experience in recovering from errors during training. This is in contrast to the scheduled sampling approach [18, 16] used by attention-based models, where ground truth tokens are replaced by predicted output tokens randomly. However, the RNN-T model's training objective is to maximize the probabilities of all possible alignments, which is incompatible with scheduled sampling. The CTC model exhibits similar limitations to the RNN-T model. An additional limitation of the RNN-T model is the significant discrepancy between its training objective and inference method. While beam search is the standard approach for ASR inference, there is no guarantee that it will identify the highest log likelihood hypothesis. The RNN-T model's training objective is to maximize the probability of all possible alignments, which differs substantially from the beam search approach. 
In contrast, the cross-entropy (CE) loss utilized by attention-based ASR models maximizes the token-level probability, which aligns more closely with the beam search algorithm. Minimum Word Error Rate Training (MWER) training was proposed as a solution to the problems caused by exposure bias due to teacher-forcing and the discrepancies between the training objective and inference method [19]. MWER is commonly employed in industry as a crucial step in production models, typically following RNN-T pretraining. Nevertheless, this training approach is a sentence-level policy gradient [20], which results in poor sample efficiency. We introduce a novel reinforcement learning (RL) technique for RNN-T decoding. Our research offers two noteworthy contributions: 1. We present a functional RL algorithm for the RNN-T model that generates a training signal for each token rather than for the entire sentence. Our proposed approach leads to notable improvements in the SoTA performance on LibriSpeech WERs. 2. We propose a novel reward engineering technique that utilizes edit distance to proficiently train both emission and blank actions within RNN-T models, thereby achieving direct minimization of the Word Error Rate (WER) metric. ## 2 Related work The automatic speech recognition (ASR) problem is widely recognized to exhibit exposure bias as a result of teacher forcing and the objective gap between training and inference. Notable adaptations of reinforcement learning (RL) have been proposed for ASR, including the Minimum Word Error Rate (MWER) training [19] approach, which is an ASR version of the REINFORCE algorithm [20], the most basic RL algorithm introduced in 1992. MWER works by generating the top k hypotheses using beam search and then providing a reward to hypotheses with a better than average WER and a penalty to those with a worse than average WER. MWER has training signal at the sentence level, meaning that all tokens in the same hypothesis are rewarded or penalized together. As WER of SoTA RNN T models is already less than \(10\%\), many of the tokens in the hypotheses are correct but receive an unfair penalty, which can hinder model training. For this reason, both the REINFORCE algorithm and MWER are known to suffer from poor sample efficiency. In 1999, actor-critic algorithms [21] were introduced as a solution to the issue of poor sample efficiency. This algorithm estimates the value of every action sequence and compute the policy gradient in equation (4) for every action. This approach provides a training signal for each action, resulting in improved sample efficiency. However, a new challenge arises: how to estimate the value. This question has been a major focus of research in RL since the introduction of actor-critic algorithms. The Optimal Completion Distillation (OCD) method [22] provides an innovative solution to the value estimation problem. By leveraging the ability to compute the edit distance for every token, OCD is able to calculate the exact Q-value [23] instead of relying on Q-value estimation. Although OCD works well for character tokens, it is not feasible to apply this method to subword units due to the high computational cost of assigning a Q-value to each subword unit per action. Additionally, if the hypothesis predicts an incorrect subword unit, computing the Q-value for all subword units on the incorrect action becomes a non-trivial problem. There was a previous attempt to adapt a gradient policy with token-level reward to attention-based ASR in [24]. 
Our work builds upon this previous study but focuses on RNN-T [1] model. While the earlier paper assigned negative rewards to only the last token, we identified this approach as problematic. Instead, we propose assigning negative rewards to every incorrect action and also propose a value assignment method for blank tokens. ## 3 Methods ### Training Reinforcement learning (RL) generates data via a behavior policy and subsequently computes policy gradient through reward and target policy [20, 21]. In our study, the target policy is a Recurrent Neural Network-Transducer (RNN-T) [1] model that generates a softmax distribution of subword IDs (i.e., WordPiece [25]), while the behavior policy is a beam search algorithm based on the RNN-T model. We configure the behavior policy's beam search algorithm to emulate ASR inference time, as our training objective is to optimize the ASR inference process itself. The training procedure for our model consists of two stages. Firstly, we conduct regular training of RNN-T model. Once a converged checkpoint is obtained, we proceed to further fine-tune the model using the RL objective. In the context of our study, the process of beam search involves the use of labels, \(\mathbf{y}_{U}\), and audio features, \(\mathbf{x}_{T}\), to generate a hypothesis, \(\hat{\mathbf{y}}_{U}\), and an associated action sequence, \(\hat{\mathbf{y}}_{\hat{U}}\), where \((\mathbf{y},\phi)\in\hat{\mathbf{y}}\) and \(\hat{u}\) represents the index of the action sequence. This process is illustrated in Figure 2. Once the hypothesis, \(\hat{\mathbf{y}}_{U}\), is obtained, we can compute the reward, \(\mathbf{r}_{U}\), using the edit distance metric. Specifically, we define the error, \(e_{u}\), as the increase in edit distance resulting from a given action. If an action does not contribute to an increase in edit distance, we assign a positive value, \(r_{p}\), as the reward. The specific value of the hyper parameter \(r_{p}\) can vary, and we found that \(r_{p}=0.1\) performed effectively in the experimental results presented in Section 5.1. \[\mathbf{r}_{u}=\begin{cases}-e_{u},&\text{if }e_{u}>0\\ r_{p},&\text{otherwise}\end{cases} \tag{1}\] In Figure 2, we present a TPU-friendly approach to computing rewards, which involves a scatter operation to distribute subword IDs into their constituent characters, followed by computation of the edit distance and error at the character level. The errors are then gathered into token-level values, which are used to compute the reward. Once the reward, \(\mathbf{r}_{u}\), is obtained, we can compute the corresponding value, \(\mathbf{V}_{\hat{u}}\), using a discount factor, \(\gamma\). The reward, \(\mathbf{r}_{u}\), is assigned only to emission actions (i.e., \(y_{k}\in\mathbf{y}\)), whereas the value, \(\mathbf{V}_{\hat{u}}\), is assigned to both emission and blank actions (i.e., \(\hat{y}_{k}\in\hat{\mathbf{y}}=(\mathbf{y},\phi)\)), which together constitute the action space. It is noteworthy that the value computed based on edit distance offers training signals for both emission and blank actions. It naturally incentivizes or disincentivizes blank actions associated with a good or bad emission action, respectively. In our experiments presented in Section 5.1, we found that setting \(\gamma=0.95\) provided effective results. 
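A minimal sketch of the reward assignment of Eq. (1) and of the discounted value recursion displayed in Eq. (2) just below is given here. It assumes that the character-level edit-distance increases \(e_u\) of the emission actions have already been gathered to the token level as in Figure 2, and it treats blank actions as carrying zero immediate reward, which is one reading of the text; the TPU-friendly scatter/gather implementation in the paper differs in detail.

```python
def edit_distance(a, b):
    """Character-level Levenshtein distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def per_action_rewards(errors, is_blank, r_p=0.1):
    """Eq. (1) over the full action sequence: an emission gets -e_u if it
    increased the edit distance and +r_p otherwise; blanks get no immediate reward."""
    rewards, e_iter = [], iter(errors)
    for blank in is_blank:
        if blank:
            rewards.append(0.0)
        else:
            e_u = next(e_iter)
            rewards.append(-e_u if e_u > 0 else r_p)
    return rewards

def discounted_values(rewards, gamma=0.95):
    """Value recursion of Eq. (2): V_u = r_u + gamma * V_{u+1}, run backwards,
    so both emission and blank actions receive a training signal."""
    values, running = [0.0] * len(rewards), 0.0
    for k in reversed(range(len(rewards))):
        running = rewards[k] + gamma * running
        values[k] = running
    return values
```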
\[\mathbf{V}_{\hat{u}}=\mathbf{r}_{u}+\gamma\mathbf{V}_{\hat{u}+1} \tag{2}\] Using the action sequence, \(\hat{\mathbf{y}}_{\hat{U}}\), and the corresponding value, \(\mathbf{V}_{\hat{u}}\), we can compute the policy gradient [26]. In equation (4), \(P_{\theta}(\hat{\mathbf{y}}_{\hat{u}}|\hat{\mathbf{y}}_{\hat{u}-1},\mathbf{x}_{T})\) refers to the probability of subword ID by the RNN-T model. \[\nabla_{\theta}J(P_{\theta})=\mathbb{E}_{r\sim P_{\theta}}[\sum_{\hat{u}=0}^ {\hat{U}}\nabla_{\theta}\log P_{\theta}(\hat{\mathbf{y}}_{\hat{u}}|\hat{\mathbf{y}}_{ \hat{u}-1},\mathbf{x}_{T})V_{\hat{u}}] \tag{3}\] Figure 1: The schematic diagram illustrates how beam search generates hypotheses. Incorrect subwords are highlighted in red and assigned negative rewards, while correct ones receive positive rewards. Figure 2: This schematic diagram depicts the process of computing the value for the RL loss, based on the hypothesis ’help whydl’ generated by beam search decoding. An approximation for the expectation can be obtained by calculating the sample mean. We gather a collection of trajectories denoted as \(\mathcal{D}=\{\tau_{i}\}_{i=1,\ldots,N}\) where each trajectory is obtained by beam search using the policy \(P_{\theta}\). The estimation of the policy gradient can be achieved using equation (4), where the number of trajectories in \(\mathcal{D}\) (denoted as \(|\mathcal{D}|\) or \(N\)) corresponds to the number of top k hypotheses obtained through beam search. \[\nabla_{\theta}J(P_{\theta})\approx\frac{1}{|\mathcal{D}|}\sum_{\tau\in \mathcal{D}}\sum_{a=0}^{\hat{U}}\nabla_{\theta}\log P_{\theta}(\hat{\mathcal{ Y}}_{a}|\hat{\mathcal{Y}}_{a-1},\mathbf{x}_{T})V_{a} \tag{4}\] The high variance of the gradient estimator is a significant challenge for policy gradient methods [27]. This issue stems, in part, from the challenge of assigning value to the actions that influenced future rewards. This difficulty mainly arises due to the high level of noise in the estimated value, making it challenging to determine whether a given action is beneficial or detrimental. Notably, this problem has received significant attention in the field of RL [26, 28, 27]. In contrast, our proposed method does not have this challenge by computing the value based on edit distance, which enables us to determine precisely whether a given action is advantageous or disadvantageous. Our reward proposal assigns negative values to incorrect actions, which is a relatively uncommon in RL situations. Since \(\boldsymbol{V}_{a}\) is a value that is detached from the neural network via stop gradient in equation (4), we can compute the RL loss using the following formulation: \[\mathcal{L}_{\text{RL}}=-J(P_{\theta})=\frac{1}{N}\sum_{\tau\in \mathcal{D}}\sum_{a=0}^{\hat{U}}-\log P_{\theta}(\hat{\mathbf{y}}_{a}|\hat{ \mathbf{y}}_{a-1},\mathbf{x}_{T})V_{a} \tag{5}\] In order to promote stability during training, we incorporate the RL loss with the RNN-T loss. In our study, we found that setting the weighting factor \(\lambda=0.5\) provided effective results in Section 5.1. \[\mathcal{L}_{\text{total}}=\lambda\mathcal{L}_{\text{RL}}+\mathcal{L}_{\text{ RNN-T}} \tag{6}\] Algorithm 1 outlines all the necessary steps for computing the RL loss. ``` input audio features, \(\mathbf{x}_{T}\) and labels, \(\mathbf{y}_{U}\). 1. Perform beam search to decode the top k hypotheses, as shown in Figure 1. 2. Compute the reward per token based on the edit distance using equation (1). 3. Compute the value per action using equation (2) as shown in Figure 2. 4. 
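Given the per-action log-probabilities of the behavior-policy hypotheses and the values above, the sample-mean objective of Eqs. (4)-(5) shown below reduces to a short loop. The sketch uses plain Python lists for clarity; in practice these quantities are tensors, the values are stop-gradiented, and the weighting of Eq. (6) is applied on top of the RNN-T loss.

```python
def edrl_loss(logp_per_hyp, values_per_hyp):
    """L_RL of Eq. (5): -(1/N) sum over the N beam-search hypotheses and over
    their actions of log P_theta(action) * V, with V treated as a constant."""
    N = len(logp_per_hyp)
    total = 0.0
    for logps, values in zip(logp_per_hyp, values_per_hyp):
        total += sum(-lp * v for lp, v in zip(logps, values))
    return total / N

def total_loss(rl_loss, rnnt_loss, lam=0.5):
    """Eq. (6): the RL objective is used as an auxiliary term next to RNN-T."""
    return lam * rl_loss + rnnt_loss
```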
Calculate the RL loss, \(\mathcal{L}_{\text{RL}}\), using equation (5). ``` **Algorithm 1** Calculate the Edit distance RL loss ### Inference Beam search is a commonly used technique for Automatic Speech Recognition (ASR) models. In our approach, we directly optimize the behavior policy during inference, which necessitates the use of beam search. Consequently, we use beam search during the RNN-T decoding process. Our beam search configuration is set with a beam expansion size of 5 and a top-k value of 4. ## 4 Experiments ### Experiment Setup The Lingvo [29] open-source machine learning framework was employed for all experiments conducted in this study. #### 4.1.1 Model Architecture In this study, we employed the W2v-BERT Conformer XL model (\(0.6\)B) introduced in [6]. The audio input feature is defined as log mel, comprising 80 dimensions, with a stride of \(10ms\) and a window of \(25ms\). For the labels, \(1k\) WordPiece subword units [25] were utilized. The RNN-T model functions as a policy network, with its actions being the subword IDs. The audio encoder is composed of \(24\) Conformer blocks [30] with a model dimension of \(1024\). The self-attention layer is made up of \(8\) heads with \(128\) left and right context length, and the convolution kernel size is \(5\). The decoder consists of a 2-layer LSTM label encoder with \(2048\) units projected down to \(640\) output units, and a joint network with a single feed-forward layer with \(640\) units. The model has a total of \(0.6\)B weights. #### 4.1.2 Data The LibriSpeech dataset [10], which comprises 960 hours of audio and corresponding human transcriptions, is used for all of our experiments such as both RNN-T training and RL training. #### 4.1.3 Training procedure In this study, the baseline model underwent a pre-training phase as described in the W2v-BERT paper [6], consisting of a two-step process: firstly, W2v-BERT pretraining using the Libri-Light dataset [31] (60,000 hours), and secondly, RNN-T fine-tuning using the LibriSpeech dataset [10] (960 hours). The model was trained using a batch size of 256 and was found to converge at 15k steps. Once the model has converged, we then further finetuning it using the RL loss defined in equation (6). The RL finetune was found to converge at an additional 7k steps. The RNN-T pretraining was conducted on 16 TPU V3 cores [32] for a duration of half day, followed by RL finetuning on 32 TPU V3 cores for an additional half day. ### Experiment results We first train the RNN-T baseline model (B0), and then perform further finetuning using the RL objective (E1). For comparison, we also perform finetuning using the MWER [19] objective (E2). The EDRL method utilizes an action-level policy gradient, while MWER utilizes a sequence-level policy gradient. The results in Table 1 show that our RL method (E1) outperforms the RNN-T baseline (B0), whereas our RNN-T baseline already outperforms the WER reported for W2v-BERT [8] using only the LibriSpeech dataset. In the W2v-BERT paper [8], self-training was further conducted via Noisy Student Training [33] on the unlabeled Libri-Light dataset [31] of 60,000 hours, resulting in the achievement of a SoTA WER. Our RL model attains a WER that is similar to the SoTA WER, without leveraging self-training on extensive unlabeled data. In contrast, even after an extensive hyperparameter search, MWER [19] was not able to achieve better WERs than the RNN-T baseline in our setup. 
To the best of our knowledge, this is the first RL algorithm that achieves SoTA performance on the LibriSpeech dataset using an RNNT model. ## 5 Discussion ### Ablation study As we propose new algorithm, we conduct the extensive ablation study. #### 5.1.1 Negative reward We put forth a negative reward function denoted by \(-e_{u}\) in equation (1). The function implies that each subword ID receives a distinct negative reward ranging from 1 to the length of the subword. To address concerns surrounding varying rewards, we assigned a constant reward of -1 for incorrect subword IDs. However, this approach led to model divergence as it exploited the reward gap. Specifically, the model began to hallucinate at the end of the utterance since the maximum negative reward was constrained to -1. Thus, rather than utilizing ad-hoc reward engineering, we chose to enable the model to undergo training based on raw edit distance. #### 5.1.2 Positive reward We conducted a sweep of positive rewards, denoted by \(r_{p}\), across the values (1, 0.9, 0.7, 0.5, 0.3, 0.1, 0.05, 0.01). Our findings indicate that a positive reward value of \(r_{p}=0.1\) demonstrated optimal performance. #### 5.1.3 Discount factor We found that the discount factor, alongside the learning rate, is one of the most crucial hyper parameters. We conducted a sweep of discount factor values, denoted by \(\gamma\), ranging from 0 to 0.999, with the specific values (0, 0.1, 0.5, 0.9, 0.93, 0.94, 0.95, 0.96, 0.98, 0.99). Our findings suggest that a discount factor value of \(\gamma=0.95\) produced the best performance. #### 5.1.4 RL loss weight We performed a sweep of RL loss weight values, denoted by \(\lambda\) in equation 6, ranging from 0.003 to 1, with the specific values (0.003, 0.1, 0.5, 1). Our results suggest that an RL loss weight of \(\lambda=0.5\) produced the best performance. #### 5.1.5 RNN-T loss weight We performed a sweep of RNN-T loss weight values, while keeping the RL loss weight fixed at 0.5, across the range of (0.0, 0.1, 0.2, 0.5, 1.0). Our results suggest that an RNN-T loss weight of \(1.0\) produced the best performance. If the RNN-T loss weight is set to \(0.0\), the WERs increase to \(100\%\) due to deletion errors, indicating that the RNN-T model generates too many blank actions. Without the RNN-T loss as guidance, it is challenging for our RL proposal to differentiate between emission and blank actions. Consequently, we treat the RL loss as an auxiliary loss to the RNN-T loss, with the RL loss weight set to \(\lambda=0.5\) and the RNN-T loss weight set to \(1.0\). The approach employed by InstructGPT (i.e., Chat-GPT) [34] for utilizing RL loss as an auxiliary loss is analogous. The corresponding weights assigned to cross-entropy (CE) and reinforcement learning (RL) are 27.8 and 1, respectively. The self-contained RL objective for the sequence-to-sequence model is a topic of potential investigation for future research. ### Limitation The motivation behind this study is to narrow the gap between the WER and the Oracle WER, which represents the best WER among the top-k beam search hypotheses. As shown in Table 3, the Oracle WER is significantly better than the regular WER. Therefore, if we can select the oracle hypothesis from the top-k hypotheses, it would be a significant improvement over the SoTA WERs. The previous ranking approach [35] involves using audio features, joint features, labels, and hypotheses as input to rank the top-k hypotheses. 
However, this approach only yields marginal improvements in WERs and is not readily adaptable to streaming and long-form ASR. In this study, we attempt to address this challenge using RL, but our improved results are still far from the Oracle WERs. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**WER**} \\ & **dev** & **dev-other** & **test** & **test-other** \\ \hline B0 (RNN-T pretrain) & \(1.4\) & \(2.7\) & \(1.5\) & \(2.7\) \\ \hline E1 (EDRL, ours) & \(1.4\) & **2.6** & **1.4** & **2.6** \\ E2 (MWER [19]) & \(1.5\) & \(2.8\) & \(1.5\) & \(2.9\) \\ \hline W2v-BERT XL [8] & \(1.5\) & \(2.9\) & \(1.5\) & \(2.9\) \\ Self-training on 60k [8] & \(1.3\) & \(2.6\) & \(1.4\) & \(2.7\) \\ \hline \hline \end{tabular} \end{table} Table 1: The table displays LibriSpeech WERs without language model (LM) fusion, and demonstrates that our proposed EDRL method outperforms the RNN-T baseline. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**WER**} \\ & **dev** & **dev-other** & **test** & **test-other** \\ \hline RNN-T WER & \(1.4\) & \(2.7\) & \(1.5\) & \(2.7\) \\ Oracle WER & \(0.55\) & \(1.3\) & \(0.56\) & \(1.4\) \\ \hline \hline \end{tabular} \end{table} Table 2: The table presents a comparison of the average LibriSpeech WERs for different discount factors, denoted by \(\gamma\). \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**WER**} \\ & **dev** & **dev-other** & **test** & **test-other** \\ \hline RNN-T WER & \(1.4\) & \(2.7\) & \(1.5\) & \(2.7\) \\ Oracle WER & \(0.55\) & \(1.3\) & \(0.56\) & \(1.4\) \\ \hline \hline \end{tabular} \end{table} Table 3: The table displays LibriSpeech WERs and Oracle WER after RNN-T training ## 6 Conclusions In this paper, we proposed a novel approach for improving the performance of RNN-T models on speech recognition tasks using reinforcement learning (RL). Our approach involves directly optimizing beam search for inference time, which leads to better performance compared to the RNN-T baseline and other existing approaches. Our approach represents the first successful utilization of RL for the LibriSpeech dataset, and it has achieved SoTA WERs. Our findings suggest that RL has the potential to enhance the accuracy of RNN-T models in speech recognition tasks, paving the way for further research in this area.
2309.12616
Unlocking Model Insights: A Dataset for Automated Model Card Generation
Language models (LMs) are no longer restricted to the ML community, and instruction-tuned LMs have led to a rise in autonomous AI agents. As the accessibility of LMs grows, it is imperative that the understanding of their capabilities, intended usage, and development cycle also improves. Model cards are a popular practice for documenting detailed information about an ML model. To automate model card generation, we introduce a dataset of 500 question-answer pairs for 25 ML models that cover crucial aspects of each model, such as its training configuration, datasets, biases, architecture details, and training resources. We employ annotators to extract the answers from the original papers. Further, we explore the capabilities of LMs in generating model cards by answering these questions. Our initial experiments with ChatGPT-3.5, LLaMa, and Galactica reveal a significant gap both in these LMs' understanding of research papers and in their ability to generate factual textual responses. We posit that our dataset can be used to train models to automate the generation of model cards from paper text and reduce human effort in the model card curation process. The complete dataset is available on https://osf.io/hqt7p/?view_only=3b9114e3904c4443bcd9f5c270158d37
Shruti Singh, Hitesh Lodwal, Husain Malwat, Rakesh Thakur, Mayank Singh
2023-09-22T04:46:11Z
http://arxiv.org/abs/2309.12616v1
# Unlocking Model Insights: A Dataset for Automated Model Card Generation ###### Abstract Language models (LMs) are no longer restricted to ML community, and instruction-tuned LMs have led to a rise in autonomous AI agents. As the accessibility of LMs grows, it is imperative that an understanding of their capabilities, intended usage, and development cycle also improves. Model cards are a popular practice for documenting detailed information about an ML model. To automate model card generation, we introduce a dataset of 500 question-answer pairs for 25 ML models that cover crucial aspects of the model, such as its training configurations, datasets, biases, architecture details, and training resources. We employ annotators to extract the answers from the original paper. Further, we explore the capabilities of LMs in generating model cards by answering questions. Our initial experiments with ChatGPT-3.5, LLaMa, and Galactica showcase a significant gap in the understanding of research papers by these aforementioned LMs as well as generating factual textual responses. We posit that our dataset can be used to train models to automate the generation of model cards from paper text and reduce human effort in the model card curation process. The complete dataset is available on OSF. ## 1 Introduction Model cards (representative sample in Table 1) were proposed by Mitchell et al. (2019) to document the training details and intended usage of models and democratize the model development process. However, significant efforts are required to create the model card as several specific details about the model need to be extracted and organized. In recent years, the documentation of models and datasets has gained significant attention due to the rapid influx of newly proposed models and datasets. Recently, some conferences such as NeurIPS (Denton et al., 2023) mandate the submission of datasheets for datasets, and a majority of conferences (EMNLP (Bouamor et al., 2023), AAAI (Chen and Neville, 2023), ACL (Boyd-Graber et al., 2022)) mandate discussion of reproducibility checklist, limitations and ethical considerations of the work. This highlights the importance of documenting datasets and methodology for future consumers of these artifacts. However, submission of these artifacts is restricted to specific venues and not mandated by all venues. Recently, huggingface added model cards for popular models manually (Mitchell, 2023). However, manual construction of model cards is a time-consuming process, and it is difficult to manually update model cards of plethora of models submitted everyday. As a result, model cards for the majority of existing models do not exist or are incomplete. We propose a dataset in the format of question-answers that can be used to train models to generate model cards. We create a set of twenty general questions that seek relevant details about models (Table 2). We provide answers for 25 models extracted from the research papers. Our dataset can be used to train models that can extract information from research papers and automatically generate these model cards, saving time. Our dataset differs from existing QA datasets for academic papers (Saikh et al., 2022; Dasigi et al., 2021; Jin et al., 2019; Pappas et al., 2018; Tsatsaronis et al., 2015) as our question-answer pairs are specifically targeted at generating the model cards. 
We evaluate large language models (LLMs) such \begin{table} \begin{tabular}{|p{227.6pt}|} \hline **Model Name: BERT-BASE-CASED** \\ Train Data: BooksCorpus (8B words) \& English Wikipedia (2.5B words) \\ Infrastructure: 4 Cloud TPUs in Pod configuration (16 TPU chips) \\ Train Objective: MLM and NSP \\ \hline \end{tabular} \end{table} Table 1: A representative model card documenting information for the BERT-BASE-CASED model. as ChatGPT-3.5, LLaMa 7B, Galactica 125M, and Galactica 1.3B in generating the answers for model card questions in a zero-shot setting. Our experiments showcase that existing models perform poorly in generating answers. Additionally, their performance is worse if we consider factual details in the generated text. Inspected LLMs often generated memorized answers absent in the source papers and non-existent dataset names. Among the evaluated models, ChatGPT-3.5 performs the best; however, that can also be attributed to its huge size in comparison to the other probed models. ## 2 Dataset In this section, we discuss the dataset statistics, curation pipeline, and annotation strategy. To construct our dataset, we focused on gathering ML model cards in a question-answering format. Each instance within the dataset comprises a pair consisting of a question and its corresponding answer, specifically addressing various aspects of the ML model. The questions (Table 2) encompass topics such as model training, architecture, problem statement, datasets used, etc. The domain of our dataset is limited to computational linguistics, and we select language models (LMs) which are successors of the transformer model Vaswani et al. (2017). The dataset consists of 500 question-answer pairs for 25 models, with answers extracted from the research papers by the annotators. The curation pipeline (Figure 1) has three phases: (i) Question Formulation, (ii) Preliminary Annotation, and (iii) Expert Annotation. The **Question Formulation** stage designs a standardized set of twenty questions that cover important aspects of the model, such as training details, architecture, problem statement, and model bias. These questions offer valuable insights into the model. We utilized the same question set for all the models included in our final collection of model card QA pairs. The complete set of questions is provided in Table 2. In **Preliminary Annotation**, we curate a list of 30 popular LMs such as Longformer Beltagy et al. (2020), Transformer-XL Dai et al. (2019), BART Lewis et al. (2020), etc.; and employ 25 annotators to select a model of their preference from the model list. The 25 annotators extract the preliminary answers for different models from the re \begin{table} \begin{tabular}{p{22.8pt} p{22.8pt}} \hline **Id** & **Question** \\ \hline Q1 & What is the main problem statement being addressed in this paper? \\ Q2 & What gaps in previous literature does this paper tries to address? \\ Q3 & What are the main contributions of the paper? \\ Q4 & Is the model proposed only for a specific domain, like code, images, specific text domains like finance, biomedical, etc? If yes, is it possible to extend the model to other domains? \\ Q5 & What datasets and tasks is the model evaluated on? \\ Q6 & Does the model show any bias/prejudice that is mentioned in paper? \\ Q7 & List the limitations of the model discussed in the paper. \\ Q8 & List the datasets on which the model was trained alongwith a brief summary and the size of each dataset. 
\\ Q9 & List the tokenizer used alongwith the size of the vocabulary. \\ Q10 & List the preprocessing techniques used on the dataset. \\ Q11 & Describe the architecture details (whether it is encoder-decoder, encoder-only, or decoder-only framework, number of layers, number of heads, embedding dimension, total parameters). In case multiple models of varying sizes are trained, list details for all configurations. \\ Q12 & Describe the training setup (e.g., learning rate, steps, epochs, optimizer, etc.) \\ Q13 & Describe the computational resources used to train the model. \\ Q14 & Are all details necessary to reproduce the paper provided? \\ Q15 & What is the pretraining objective of the model? \\ Q16 & What is the loss function that is used to train the model? \\ Q17 & Consider the transformer model as the base architecture for encoder-decoder (ED) models. Similarly, consider BERT and GPT as the base for encoder-only (E) and decoder-only (D) architectures. How is the architecture of this paper different from the base architectures transformer or BERT, or GPT (depending on ED, E, or D respectively)? \\ Q18 & What experiments are conducted in the paper? Provide a brief summary of each experiment by commenting on the task description, input, expected output, and evaluation metric. \\ Q19 & Are ablation studies conducted in the paper? If yes, which parameters are included in the ablation study? \\ Q20 & List the future work mentioned in the paper. \\ \hline \end{tabular} \end{table} Table 2: Question set used for extracting model card details. Figure 1: Twenty questions are formulated to cover model details exhaustively. The annotation pipeline consists of three stages: (i) Question Formulation, (ii) Preliminary annotation, and (iii) Expert annotation. spective research papers. The annotators are undergraduate and graduate students who have in-depth knowledge of traditional ML and basic knowledge of DL, including transformer architectures such as BERT. In **Expert Evaluation**, a subject expert with expertise in the field of DL (masters student in CS with prior experience in DL architectures and frameworks) reviews the answers extracted in the preliminary annotation stage. They examine the papers and assess the answers for accuracy, completeness, and relevance. Their expertise allows them to identify any inaccuracies or inconsistencies in the preliminary annotation stage answers and provide accurate assessments. By incorporating both the preliminary and expert annotation stages, the curation pipeline adds an extra layer of annotation and combines different perspectives and expertise levels to establish a comprehensive and reliable ground truth dataset for model cards. All the annotators are provided with an annotated example for BERT (included here). The annotators were instructed to extract complete answers from the research paper. The answer can span multiple sentences and paragraphs. ## 3 Benchmarking LLMs for Model Card Generation Recent developments in instruction-following LMs (Ouyang et al., 2022; Chung et al., 2022; Kopf et al., 2023; Taori et al., 2023; Chiang et al., 2023) show improved performance for downstream tasks and has led to their increased usage in QA tasks. We evaluate the performance of LLMs in zero-shot QA for generating the model cards. ChatGPT-3.5 (Wu et al., 2023) by OpenAI uses RLHF, and its training data details are publicly unavailable. 
LLaMa (Touvron et al., 2023) by Meta AI is trained on trillisms of tokens from public data, including arXiv LaTeXfiles, Github, and Wikipedia. Galactica (Taylor et al., 2022) is a scientific LM by Meta AI, trained on a 106 billion tokens data including research papers, scientific KBs, and Github (detailed discussion in appendix). ### Prompting LLMs for QA We evaluate ChatGPT-3.5, LLaMa (7B parameters), Galactica (125M parameters), and Galactica (1.3B parameters) in a zero-shot setting to test their ability to generate answers for model card questions. The subject expert annotator prompts ChatGPT for a particular model in a single session, starting with a general prompt trying to elicit model details. Figure 2, showcases the prompting procedure employed to generate answers. Based on the ground truth, the subject expert also marks the ChatGPT-3.5 answer into correct and incorrect spans. For LLaMa evaluation, we download the original LLaMa 7B weights made available by Meta AI and then use the llama.cpp repository (Gerganov, 2023) to convert it to a 4-bit integer quantized model. We directly use the Galactica 125M and Galactica 1.3B models made available by Meta AI to generate the answers. For LLaMa and Galactica models, we restrict the output size to 1000 tokens. ### Qualitative and Quantitative Evaluation We perform a quantitative evaluation of the LLM-generated responses by computing the BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004), and BERT-Score (Zhang* et al., 2020). We represent the BERT-Score Precision, Recall, and F1 Score as BS-P, BS-R, and BS-F1, respectively. However, these automatic evaluation metrics do not take into account the factuality of the answers. For e.g., an LLM-generated answer which exactly matches the ground truth answer but differs in training dataset name receives a high score, disregarding the inaccurate dataset name, a crucial aspect in an answer describing the training dataset. For e.g., the mBART (Liu et al., 2020) model is evaluated on the WMT-16, WMT-17, WMT-18, and WMT-19 datasets, but LLaMa generated answer says'mBART achieves state-of-art accuracy on WMT-14 dataset', which is incorrect and not mentioned in the paper. To better understand the factuality of generated answers, we do a small-scale qualitative evaluation. An expert annotator (different from the annotator Figure 2: Prompting ChatGPT-3.5 for generating answers for model card questions. in data curation) evaluates the LLM-generated answers for five models in the dataset for each of the LLMs (ChatGPT-3.5, LLaMa, Galactica 125M, and Galactica 1.3B). We choose three different labels, Completely Correct (CC), Partially Correct (PC), and Incorrect (IC); to evaluate the answers. If the LLM answer exactly matches the ground truth answer in terms of relevance, factuality, and exhaustively covers all crucial facts, the response generated by the LLM is marked CC. If the generated response consists of some correct and some incorrect facts, it is marked PC. If none of the facts in the generated answer are correct, the answer is marked IC. Special attention is paid to dataset, model, and task names during evaluation, and any mismatch in the names is marked as incorrect. In the earlier provided example, where LLaMa listed WMT-14 dataset while the ground truth was WMT-16, WMT-17, WMT-18, and WMT-19 datasets, the subject expert annotator marked the answer as incorrect, even though the phrase 'WMT' will be matched by other automatic evaluation metrics. 
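As a concrete illustration of these surface-overlap metrics on the WMT example above, the scores for a candidate/ground-truth pair can be computed with off-the-shelf packages. The paper does not state which implementations were used, so the choice of sacrebleu, rouge-score, and bert-score below, as well as the exact sentence strings, are assumptions for illustration.

```python
import sacrebleu
from bert_score import score as bert_score
from rouge_score import rouge_scorer

candidate = "mBART achieves state-of-art accuracy on WMT-14 dataset."  # LLM output
reference = "The model is evaluated on the WMT-16, WMT-17, WMT-18, and WMT-19 datasets."

bleu = sacrebleu.sentence_bleu(candidate, [reference]).score
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(reference, candidate)["rougeL"].fmeasure
_, _, f1 = bert_score([candidate], [reference], lang="en")

# Overlap-based scores stay non-trivial even though the dataset name is factually wrong.
print(f"BLEU={bleu:.1f}  ROUGE-L={rouge_l:.3f}  BS-F1={float(f1[0]):.3f}")
```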
The BS-F1 score for this pair of sentences is high (0.795) even though the dataset name is incorrect. We present representative examples of CC, PC, and IC in Table 3. ### Results of zero-shot QA using LLMs We showcase the average BLEU, ROUGE-L, and BERT-Score P, R, and F1 in Table 5. We observe that ChatGPT-3.5 performs the best across all metrics, and LLaMa 7B performs comparably to ChatGPT-3.5. It should be noted that the models have different numbers of parameters, and hence the results should not be generalized to bigger LLaMa and Galactica models. Question-wise scores (Table 4) show that ChatGPT-3.5 achieves the highest ROUGE-L scores in answering Q6 and Q13, the questions about model biases and computational resources. Galactica 125M performed worse than all other models, often generating repetitive text, which could also be attributed to its size. A major limitation of all LLMs is that their text generation is not grounded in facts. To analyze the LLM-generated answers for factuality, we present the results of the qualitative evaluation in Table 6. The low scores indicate that most of the generated answers are incorrect and often memorized based on frequency in the training data. For example, a manual analysis reveals that ChatGPT-3.5 often reports the training infrastructure as 3 V100 GPUs, irrespective of the model for which we probe for the training details. While ChatGPT-3.5 generated mostly partially correct responses, the other models' responses were often incorrect, with Galactica 125M having almost 92% incorrect responses. ## 4 Potential Usage We posit that our dataset of QA pairs can be used to train models for generating model cards. Our dataset can be used for instruction tuning LLMs for generating ML model cards by prompting specific questions about different aspects of the model. Similarly, our dataset of answers generated from the LLaMa and Galactica models can be utilized, as it pairs initial LLM answers with ground truths; LLMs can be prompted with the whole conversation, where they are provided with a question and an incorrect answer and are then pointed to the correct answer. For ChatGPT-3.5-generated responses, we provide another layer where we label which spans of the answer are incorrect. ## 5 Related Work **Information Seeking Datasets for Research Papers:** Previous works curate datasets from research papers for fact verification and facilitating paper reading. ScienceQA (Saikh et al., 2022) comprises 100k synthetic question-answer-context triples, where the questions are generated from a filtered set of noun phrases extracted from 1825 IJCAI papers. QASPER (Dasigi et al., 2021) is a dataset of 5049 questions over 1585 NLP papers, with extractive, abstractive, and binary yes/no answers. Multiple works curate QA datasets in the biomedical domain, such as BIOASQ (Tsatsaronis et al., 2015), BioRead (Pappas et al., 2018), and PubMedQA (Jin et al., 2019). The SciFact (Wadden et al., 2020) dataset consists of 1.4k expert-written biomedical scientific claims and evidence abstracts with labels (support or refutes) and rationales. \begin{table} \begin{tabular}{|p{113.8pt} p{113.8pt} p{113.8pt}|} \hline **LLM-generated** & **Ground Truth** & **Label** \\ \hline The architecture is based on the BERT architecture, an encoder-only model. & DistilBERT has the same general architecture as BERT. & CC \\ The pretraining objective is same as BERT, i.e., predict some masked tokens and the next sentence prediction objective. & We follow BERT pre-training, i.e., MLM but without the next sentence prediction objective. & PC \\ The model achieves state-of-art accuracy on WMT-14 dataset. & Results indicate a new SOTA on the WMT-16 English-Romanian. & IC \\ \hline \end{tabular} \end{table} Table 3: Representative examples of LLM-generated text and ground truth, marked as CC, PC, and IC. Artifact names and concepts (underlined) are crucial in determining the label. In comparison to the existing datasets, our dataset is highly specific and curates model card information for an efficient understanding of ML models. **Model cards and allied concepts:** Similar to Model cards (Mitchell et al., 2019), datasheets for datasets (Gebru et al., 2021) are proposed to document dataset information and bridge the gap between dataset creators and consumers. Datasheets document the data collection process, sources, intended use cases, etc., to promote transparency. RiskCards (Derczynski et al., 2023) document the risks associated with LMs by constructing a detailed skeleton that records risks such as harm type (which group is at what type of harm), references, and sample prompts and LM output. AI Usage Cards (Wahle et al., 2023) are proposed to standardize the reporting of the usage of AI technologies such as ChatGPT-3.5. These works emphasize the need for documenting artifacts in this age of information overload. We posit that our dataset will be utilized by models to assist in automating model card generation. This has positive prospects, as instruction tuning has shown promise in improving the ability of LLMs to follow instructions and yields significant improvement in downstream tasks. Apart from our QA dataset, we also provide the QA pairs generated by various LLMs such as ChatGPT-3.5, LLaMa, and Galactica. Lastly, model cards for some models are available on Hugging Face; however, those are unstructured and available in free-form text. Our dataset provides structured model card details in a QA format that can be leveraged by LLMs to learn model card generation. There are no ethical concerns with our dataset, and it does not contain offensive content or personally identifiable information. ## 6 Conclusion To summarize, we curate a QA dataset of 500 pairs from the research papers of 25 machine learning models. A set of 20 general questions is curated for the purpose of seeking crucial details of model training, dataset, problem statement, ablation studies, etc. We employed a two-stage annotation pipeline consisting of preliminary and expert annotation stages to ensure the quality of the curated dataset. Next, we evaluate the capability of the LLMs ChatGPT-3.5, LLaMa, and Galactica in generating answers for the model card questions. All the evaluated models include factually incorrect details in their answers, highlighting the potential for developing better models for model card generation. In the future, we plan on expanding the question set and covering models from other domains, such as CV and robotics.
2310.00371
ConSOR: A Context-Aware Semantic Object Rearrangement Framework for Partially Arranged Scenes
Object rearrangement is the problem of enabling a robot to identify the correct object placement in a complex environment. Prior work on object rearrangement has explored a diverse set of techniques for following user instructions to achieve some desired goal state. Logical predicates, images of the goal scene, and natural language descriptions have all been used to instruct a robot in how to arrange objects. In this work, we argue that burdening the user with specifying goal scenes is not necessary in partially-arranged environments, such as common household settings. Instead, we show that contextual cues from partially arranged scenes (i.e., the placement of some number of pre-arranged objects in the environment) provide sufficient context to enable robots to perform object rearrangement \textit{without any explicit user goal specification}. We introduce ConSOR, a Context-aware Semantic Object Rearrangement framework that utilizes contextual cues from a partially arranged initial state of the environment to complete the arrangement of new objects, without explicit goal specification from the user. We demonstrate that ConSOR strongly outperforms two baselines in generalizing to novel object arrangements and unseen object categories. The code and data can be found at https://github.com/kartikvrama/consor.
Kartik Ramachandruni, Max Zuo, Sonia Chernova
2023-09-30T13:24:26Z
http://arxiv.org/abs/2310.00371v1
# ConSOR: A Context-Aware Semantic Object Rearrangement Framework for Partially Arranged Scenes ###### Abstract Object rearrangement is the problem of enabling a robot to identify the correct object placement in a complex environment. Prior work on object rearrangement has explored a diverse set of techniques for following user instructions to achieve some desired goal state. Logical predicates, images of the goal scene, and natural language descriptions have all been used to instruct a robot in how to arrange objects. In this work, we argue that burdening the user with specifying goal scenes is not necessary in partially-arranged environments, such as common household settings. Instead, we show that contextual cues from partially arranged scenes (i.e., the placement of some number of pre-arranged objects in the environment) provide sufficient context to enable robots to perform object rearrangement _without any explicit user goal specification_. We introduce ConSOR, a Context-aware Semantic Object Rearrangement framework that utilizes contextual cues from a partially arranged initial state of the environment to complete the arrangement of new objects, without explicit goal specification from the user. We demonstrate that ConSOR strongly outperforms two baselines in generalizing to novel object arrangements and unseen object categories. The code and data are available at [https://github.com/kartikvrama/consor](https://github.com/kartikvrama/consor). ## I Introduction Consider a service robot tasked with putting away newly delivered groceries, or cleaning a living room. In both tasks, the environment is most likely already partially arranged, and that arrangement provides valuable clues for where new items should be placed. For example, the pantry may already contain unfinished boxes of cereals and pasta on different shelves, while the left drawer of the refrigerator may contain half-finished vegetables. Thus, new items, such as a box of oatmeal, should be placed in accordance with the user's existing organization scheme (e.g., near the cereal). Similarly, a book may naturally be placed alongside other books on the shelf rather than next to households. The general problem of identifying the correct item placement in a complex environment is known as the _object rearrangement problem_[1]. Prior work on object rearrangement has explored a diverse set of techniques for following user instructions to achieve some desired goal state. Logical predicates [2, 3], images of the goal scene [4, 5, 6], and natural language descriptions [7, 8, 9] have all been used to instruct a robot in how to arrange objects. However, all of the above techniques place a burden on the user to explicitly describe the goal state, or else to explicitly demonstrate the rearrangement task so that the robot can learn from demonstrations [10, 11]. In this work, we posit that contextual cues from partially arranged scenes (i.e., the placement of some number of pre-arranged objects in the environment) provide sufficient context to enable robots to perform object rearrangement _without any explicit user goal specification_. Closely related to our work are those of Abdo et al. [12] and Wu et al. [13], which reason about object similarities by learning object relationships from demonstrations of arranged environments, which are then generalized to novel environments. 
However, these works require that the desired organizational style in the goal state be known _a priori_ (e.g., specified by the user) instead of inferring this style from scene context. We introduce ConSOR, a Context-aware Semantic Object Rearrangement framework that utilizes contextual cues from a partially arranged initial state of the environment to complete the arrangement of new objects, without explicit goal specification from the user. Figure 1 presents an overview of our framework. ConSOR reasons about the semantic properties of objects in the environment, and the context provided by the number of containers and existing placement of objects into containers, to infer the desired placement for new, unarranged objects. Additionally, ConSOR leverages prior commonsense knowledge from pre-trained ConceptNet embeddings to perform zero-shot generalization to scenes with objects unseen during training. Our work makes the following contributions: * We formalize the problem of object rearrangement in partially arranged environments. * We present ConSOR, a Context-aware Semantic Object Rearrangement framework that replaces human instruction with contextual cues from the initial state of the environment to infer the desired goal state of an object rearrangement task. * We contribute a dataset of \(8\)k rearranged goal states from a dataset of \(38\) household objects, with each goal state associated with one of four predefined organizational _schemas_. * We demonstrate that ConSOR is able to generalize both to novel arrangements and novel object classes, achieving high performance across all four organizational schemas we tested. We compare ConSOR with two baselines, a collaborative filtering-based approach to grouping objects based on learned pairwise similarity scores [12] and the GPT-3 large language model [14], on a withheld set of novel object arrangements and object types. Our results show that ConSOR strongly outperforms both baselines in every tested category, without assuming that the target organization scheme is known _a priori_. ## II Related Work Numerous works in the literature have proposed approaches to goal-conditioned object rearrangement. The means by which a user specifies the goal varies. In some works, the goal is represented by a set of logical predicates encoding relationships between objects [2, 3]. Alternately, in _visual object rearrangement_, the goal is specified as an image, and the robot must perform object matching between the initial and goal images to determine the required object placement [4, 5, 6]. A third form of goal specification is _natural language instruction_, and recent work in language-conditioned manipulation has contributed techniques that ground a language description of the desired goal to the observable environment while performing zero-shot generalization to novel language commands [7, 8, 9, 15]. Critically, all of the above methods are ineffective in the absence of an explicit goal specification. To perform rearrangement without goal specification, some recent works take the approach of learning user-specific preferences, often modeling these preferences from a single demonstration [10, 11]. These methods translate preferences encoded in the user demonstration, such as the order of moving objects or a preferred location of an object category, to a novel environment in a zero-shot manner, thereby eliminating the need to constantly provide task instructions. 
However, the above methods do not model object similarities, thereby limiting the scope of these approaches to template-like arrangement tasks (e.g., table setting, arranging an office desk). Additionally, these methods still require a demonstration for every new user or preference style. Closest to our work is the collaborative filtering technique by Abdo et al. that learns user-specific preferences of grouping objects in containers as pairwise similarities between object categories [12]. This technique is also extended to both rearrange novel object categories and optimize probing for the preferences of a new user. However, the agent in Abdo et al. assumes that the type of organization being sought is known _a priori_ (e.g., the robot knows to organize items by class, or by action affordances). Thus, for example, in putting away groceries or organizing a shelf, the user would always be required to specify the target organization type. Our work relaxes this assumption and does not assume the organization type is known _a priori_. Instead, our model infers the desired object similarities from contextual cues in the observed initial state. Additionally, the approach by Abdo et al. is limited to approaching rearrangement as modeling pairwise similarities between object instances while our proposed framework can model more general semantic similarities between objects. Finally, their method requires a mixture of experts in order to generalize to novel object categories while the proposed ConSOR framework only needs the ConceptNet labels of novel objects to perform zero-shot generalization. Another approach to object rearrangement is to learn user-agnostic object placement preferences from crowdsourced object arrangements. Toris et al. present a multi-hypothesis model to learn to pick and place task templates representing the preferred placement locations of objects from human demonstrations [16]. Sarch et al. propose an embodied AI rearrangement framework that learns from commonsense object-location preferences in tidy households to identify misplaced objects in a novel home environment and move the object to the best matching receptacle [17]. In a similar work, Kant et al. contribute a benchmark and baseline for tidying household environments by identifying and rearranging misplaced objects without instruction using commonsense knowledge derived from a large language model [18]. Though the above rearrangement approaches successfully avoid task goal specifications, the methods only focus on modeling object-receptacle preferences and do not reason about pairwise object similarities when placing objects. In a different work, Wu et al. propose an imitation learning framework to learn the target distribution of desired object arrangements from expert examples as a gradient field [13]. The learned target gradient field can then be used as a reward Fig. 1: Our semantic object rearrangement framework, ConSOR, takes partially arranged object scenes (left), uses a Transformer-based neural architecture to infer contextual cues about the likely goal arrangement, and generates the desired arrangement state (right). function to train a Reinforcement Learning (RL) agent to rearrange objects. However, the approach by Wu et al. requires separate models for different goal arrangement distributions and cannot identify the desired target distribution from the initial environment state, thereby restricting its usage to rearrangement tasks with only a single organizational style. 
## III Object Rearrangement of Partially Arranged Environments We formalize object rearrangement in partially arranged environments as an instance of the general class of object rearrangement problems [1] in which the goal arrangement state is not explicitly stated to the robot, but instead must be inferred from the context provided by already arranged items in the scene. Specifically, the robot is presented with: * a fixed set of receptacles \(\mathcal{R}\) in which objects can be placed (e.g., shelves, bins, containers), * a set of prearranged objects, \(\mathcal{X}_{P}\), arranged within the receptacles to match the user's intended organization schema (e.g., partially filled pantry where items are separated by meal type), and * a set of unarranged objects, \(\mathcal{X}_{U}\), for which the robot must find the correct receptacle in order to match the user's organizational schema. Note that the user's schema is not explicitly specified and must be inferred from \(\mathcal{X}_{P}\) and \(\mathcal{R}\). Multiple schemas can be applied to the same set of objects, and the robot's challenge is to infer the correct one and place \(\mathcal{X}_{U}\) accordingly. At any given time, we model the state \(\mathcal{S}\) of the rearrangement environment as a set of tuples \(\{x_{i},\ldots,x_{N_{x}}\}\), where \(N_{x}\) is the number of object instances in \(\mathcal{S}\). Each object instance is represented as \(x_{i}=(o_{i},r_{i},i)\), where \(o_{i}\) is the object category/class, \(r_{i}\in\mathcal{R}\) is the receptacle in which \(o_{i}\) is placed, and \(i\) is an identifier index to distinguish object instances of the same category (e.g., multiple bowls in the same scene are assigned different values of \(i\) in the state representation \(\mathcal{S}\)). \(\mathcal{R}\) contains a work surface \(T\) and the set of immovable containers \(\mathcal{C}=\{C_{i},\ldots C_{N_{C}}\}\), where \(N_{C}\) is the total number of containers in \(\mathcal{S}\). Note that the containers in \(\mathcal{C}\) can generalize to different cabinets, shelves, or drawers in a real household. We represent a receptacle \(r_{j}\) that does not contain any object in \(\mathcal{S}\) by artificially placing a 'null' object in the receptacle and adding it to the state. Given a partially arranged initial state \(\mathcal{S}^{initial}\), our goal is to reach a desired goal state \(\mathcal{S}^{goal}\) by moving objects on \(T\) (i.e., objects in \(\mathcal{X}_{U}\)) to containers \(\mathcal{C}\) such that the resulting arrangement matches the user's latent organizational schema. Note that our current problem does not consider visual semantics such as appearance similarities; however, this formulation can be extended to consider visual features by adding observation information to the state representation. ## IV ConSOR Transformer Model Architecture To address the above problem, we introduce the Context-aware Semantic Object Rearrangement framework (ConSOR). ConSOR uses a learned Transformer encoder to generate an object-centric latent embedding space from the partially arranged initial state that mimics the object grouping in the desired goal state. The object-centric embeddings are then clustered to determine object placements in the predicted goal state. Prior work in robot rearrangement has combined object-centric state representations with the attention capabilities of Transformer encoders to enhance the generalization capabilities of these models to novel objects, scenes and tasks [10, 15, 19]. 
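As a concrete (and purely illustrative) rendering of the formalization above, the sketch below encodes a state as a set of \((o_{i},r_{i},i)\) tuples; the category and container names are hypothetical, and this is our own illustration rather than the authors' implementation.

```python
# A minimal sketch (our illustration, not the authors' code) of the Section III
# representation: a state is a set of (category o_i, receptacle r_i, index i) tuples,
# the receptacles are the work surface T plus the containers C, and an empty
# receptacle is marked with an artificial 'null' object.
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectInstance:
    category: str    # o_i, e.g. "bowl"
    receptacle: str  # r_i, either the work surface or a container
    index: int       # i, distinguishes duplicate instances of the same category

def make_state(placements, containers, surface="table"):
    """Build a state S from a {receptacle: [categories]} description of the scene."""
    state, counts = set(), {}
    for receptacle, categories in placements.items():
        for category in categories:
            counts[category] = counts.get(category, 0) + 1
            state.add(ObjectInstance(category, receptacle, counts[category]))
    # Receptacles that hold nothing get a 'null' object, as described above.
    for receptacle in list(containers) + [surface]:
        if not placements.get(receptacle):
            state.add(ObjectInstance("null", receptacle, 0))
    return state

# Hypothetical initial scene: two pre-arranged objects and one unarranged object on T.
initial = make_state(
    {"container_1": ["apple", "tomato"], "table": ["bowl"]},
    containers=["container_1", "container_2"],
)
# X_U: the objects still on the work surface that the robot must place.
unarranged = {x for x in initial if x.receptacle == "table" and x.category != "null"}
```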
We adopt a similar approach in designing the encoder model of ConSOR and augment Fig. 2: Model architecture showing how ConSOR encodes a rearrangement scene and transforms it into a learned embedding space, which is then used to determine object placements in the predicted goal state. Each green bin represents a container receptacle \(c_{i}\in\mathcal{C}\). In the initial state, \(\mathcal{X}_{\mathcal{P}}\) is the set of objects already in a green bin, and \(\mathcal{X}_{U}\) is the set of objects on the blue table. (right) Predicted receptacle assignments for all objects in \(\mathcal{X}_{U}\) are shown using white arrows. object-centric representations with commonsense knowledge from ConceptNet to generalize to novel object categories. Figure 2 presents a detailed structure of the ConSOR framework. Given the initial state \(\mathcal{S}^{initial}\), we project each object instance \(x_{i}^{initial}=(o_{i},r_{i}^{initial},i)\) to a higher-dimensional space using a scene encoder \(h(x_{i}^{initial})\to e_{i}\), where \(e_{i}=[e_{o_{i}},e_{r_{initial}},e_{i}]\). Specifically, \(e_{o_{i}}\) is the pre-trained ConceptNet Numberbatch vector [20] corresponding to the category \(o_{i}\), \(e_{r_{initial}}\) is a positional encoding to indicate which receptacle the object lies in, and \(e_{i}\) is a positional encoding of the indicator index \(i\). The encoded scene \(\mathcal{V}^{initial}=\{h(x_{i}^{initial}),\ldots h(x_{N_{x}}^{initial})\}\) is passed through ConSOR's Transformer encoder to output normalized latent embeddings \(\mathcal{L}=\{l_{i},\ldots l_{N_{x}}\}\). The encoder is trained with a triplet margin loss [21] to group embeddings of objects sharing the same receptacle in the goal state together and move embeddings of objects in different receptacles away from each other. In this manner, the encoder learns to generate a latent embedding space from the partially arranged initial state that mimics the object grouping in the desired goal state. During evaluation, ConSOR chooses the container to place each unarranged object instance \(x_{i}^{U}\in\mathcal{X}_{U}\) by calculating the centroid of each container in the latent space and choosing the container whose centroid has the highest cosine similarity with the corresponding latent vector \(l_{i}^{U}\). Mathematically, the predicted placement \(\hat{r}_{i}^{U}\) of instance \(x_{i}^{U}\) is determined as \[\hat{r}_{i}^{U}=\operatorname*{arg\,max}_{c\in\mathcal{C}}l_{i}^{U}\cdot l_{c }^{centroid} \tag{1}\] where \(l_{c}^{centroid}\) is the latent centroid embedding of container \(c\) in \(\mathcal{S}_{initial}^{initial}\). The encoder model of ConSOR consists of three stacked Transformer encoder layers, followed by an MLP layer to reduce the dimension of the generated embeddings and a \(L_{2}\) normalization layer. We train the model for \(30\) epochs (learning rate=\(1e-3\), batch size=\(64\), dropout=\(0.5\)) and perform early stopping based on the success rate obtained from evaluating on the validation dataset. ## V Dataset of Organizational Schemas for Object Rearrangement To evaluate object rearrangement in partially arranged environments, we contribute a novel dataset of arranged scenes generated using household objects from the AI2Thor simulator [22]. To generate each scene, we defined four organizational schemas to determine how objects are grouped in the goal state: 1. Class schema (\(F_{class}\)): grouping objects based on the affinity of their semantic concepts in WordNet [23], 2. 
Utility schema (\(F_{utility}\)): grouping objects based on the affinity of their product categories mined from a popular retail store (Walmart [24]), 3. Affordance schema (\(F_{affordance}\)): grouping objects with similar action affordance labels (6 affordance labels in total) gathered from the Moving Objects dataset [25], and 4. One-of-everything (\(F_{OOE}\)): distributing objects in containers such that each container holds exactly one of each object type (referred to as OOE for brevity). We created a schema-balanced dataset by generating \(1980\) training, \(110\) validation, and \(110\) test goal scenes from each of the four schemas using a set of \(28\) object categories taken from the AI2Thor simulator [22] and grounded in WordNet. Example object categories include fruits, vegetables, office supplies, kitchen and dining accessories, cleaning supplies, bathroom accessories, and home decor. We also generated a secondary test dataset of \(120\) goal scenes using \(10\) novel object categories to test the generalization capability of ConSOR to object categories that were entirely unseen during training. Table I lists the objects present in our dataset. Partially arranged initial scenes are generated from goal scenes by sequentially removing randomly selected objects in \(\mathcal{X}^{\mathcal{P}}\) from containers and placing them on work surface \(T\). In this manner, we systematically vary the degree to which the presented organization is complete. Figure 3 shows example arranged scenes from each schema. In the Class Schema (A1 and B1), objects are grouped by class similarity, such that vegetables should be placed in one bin, and kitchen items in the other. Note that Fig. 3: Goal states for each schema from two different sets of objects. The top row of scenes are from the test dataset with seen object categories, while the bottom row of scenes are from the test dataset with unseen object categories. in B1, the model is asked to generalize to cleaning supplies, with the goal of grouping them with kitchen items rather than vegetables due to more closely aligned similarity, as only two containers are provided in this example. In the Utility Schema (A2, B2), the robot is provided with three containers. In A2, vegetables, a soap dispenser, and cooking supplies are organized into different bins. In B2, the model generalizes to previously unseen objects, placing cleaning supplies in their own bin. In the Affordance Schema (A3, B3), objects are grouped by their afforded functionality. As this example highlights, detecting the desired organizational structure from a partial scene can be quite challenging. The key clue is given by the spoon, which is placed separately from other kitchen items. This is due to the spoon's shape (long handle with shallow convex hull) differing from that of the other objects (round with deep convex hull), thereby resulting in different affordances. Thus, the robot must learn to appropriately group the remaining items. Furthermore, note that scenes B2 and B3 have different initial states but end up in the same goal state; this type of aliasing makes the partial rearrangement problem complex, causes the differences between schemas to be less obvious, and requires that the learned model pay close attention to contextual cues in the initial state. Finally, in the One-of-Everything Schema (A4, B4), the robot's objective is to place one of each object in the bins, akin to packing a lunch, or conference gift bags. 
We include this schema because, while appearing simple, it can pose quite a challenge to machine learning algorithms because similar objects must be separated rather than binned together. As we will show in the results, prior works struggle to find solutions to this schema. Note that, although we define four types of schemas, ConSOR is trained only on the initial and goal states without any schema labels. Instead, ConSOR learns to distinguish between scenes of different schemas by learning the differences between them from the training data. ## VI Baselines and Metrics We compare ConSOR performance against two baselines: _Abdo-CF_: _Abdo-CF_ is a collaborative filtering technique proposed by Abdo et al. that learns user-specific pairwise object similarities from multiple preference ranking matrices of different users [12]. The learned pairwise preferences per user are then used to identify the placements of query objects via spectral clustering. Critically, this approach requires that the organization or schema type be known _a priori_ and given as an input. By comparison, ConSOR implicitly infers the schema from contextual cues in the partially arranged initial scene. Additionally, _Abdo-CF_ generalizes to unseen objects by relying on a mixture of experts providing object similarities of the unseen object, while ConSOR only requires the ConceptNet labels of unseen objects. _GPT-3_: The Generative Pre-trained Transformer 3 (_GPT-3_) is an autoregressive language model that generates natural language in response to user input [14]. Prior works have demonstrated that large language models such as _GPT-3_ are able to reason about sequential tasks and physical spaces [26, 27, 28]. To utilize _GPT-3_ as a baseline, we prompt the model with a description of the partially arranged initial state along with one unlabeled demonstration from each schema to inform the language model about the desired output. Figure 4 shows a prompt taken from the test dataset and the corresponding response from GPT-3. In our problem formulation, we seek to transform an initial object arrangement, represented by state \(\mathcal{S}^{initial}\), to a goal object arrangement, represented by the goal state \(\mathcal{S}^{goal}\), where the goal is not know to the robot _a priori_ and must be inferred. We therefore evaluate object arrangement performance by measuring the similarity between the achieved object arrangement state and the goal state. To quantify this difference, we introduce a distance measure derived from _edit distance_, a widely used string similarity metric in computational linguistics [29]. Specifically, we define the Scene Edit Distance (\(SED\)) between states \(\mathcal{S}^{A}\) and \(\mathcal{S}^{B}\) as the minimum number of object displacements that must be made in \(\mathcal{S}^{A}\) to reach \(\mathcal{S}^{B}\). In our problem formulation, this is equivalent to the number of misplaced objects in \(\mathcal{S}^{A}\) compared to \(\mathcal{S}^{B}\) and vice-versa. Additionally, we derive two aggregate evaluation metrics from \(SED\) to measure rearrangement performance across an entire dataset. 
The first is the Success Rate, \(M^{SR}\), which corresponds to the fraction of goal states predicted correctly: \[M^{SR}=\frac{1}{D}\cdot\sum_{i=1}^{D}\mathds{1}(SED(\hat{\mathcal{S}}_{i}, \mathcal{S}^{goal}_{i})=0) \tag{2}\] where \(\hat{\mathcal{S}}_{i}\) is the predicted goal state, \(\mathcal{S}^{goal}_{i}\) the ground truth state for the initial state \(S^{initial}_{i}\), and \(D\) is the total number of examples in the test dataset. The second metric is the Average Non-zero SED \(M^{NSD}\) or the average \(SED\) between incorrectly predicted goal Fig. 4: Prompt given to GPT-3 and its response for a one-of-everything schema scene. The misplaced objects are marked in red underline. states (\(SED>0\)) and their ground truth states. This is defined as: \[M^{NSED}=\frac{\sum_{i=1}^{D}SED(\hat{\mathcal{S}}_{i},\mathcal{S}_{i}^{goal}) \cdot\mathds{1}(SED(\hat{\mathcal{S}}_{i},\mathcal{S}_{i}^{goal})>0)}{\sum_{i=1 }^{D}\mathds{1}(SED(\hat{\mathcal{S}}_{i},\mathcal{S}_{i}^{goal})>0)} \tag{3}\] Together, the above two metrics capture a model's performance, such that \(M^{SR}\) reports the percentage of arrangements that an algorithm gets completely right, and \(M^{NSED}\) reports the degree of dissimilarity for scenes that were not correct (non-zero SED). ## VII Evaluation Results In this section, we present results of two generalization experiments, first evaluating generalization to previously unseen arrangements with known objects, and second evaluating zero-shot generalization to novel object categories. Additionally, we present insights characterizing the differences in learned embedding spaces across schemas, and evaluate the effect of training data size on performance. ### _Generalizing to Unseen Object Arrangements_ Table II presents a summary of evaluation results from testing on scenes with unseen object arrangements. ConSOR yields a higher \(M^{SR}\) than both _Abdo-CF_ and _GPT-3_ across all four schemas. Notably, our framework is also the only method in our evaluation to perfectly rearrange \(F_{OOE}\) scenes. Additionally, ConSOR has the least \(M^{NSED}\) score across all four schemas, indicating that, in the rare cases that errors occur, ConSOR generates state predictions that are closer to the true goal state than the baseline approaches. _Abdo-CF_ has the second-best performance in three out of four of the schemas while failing to successfully rearrange a single \(F_{OOE}\) scene. We attribute the failure in \(F_{OOE}\) by _Abdo-CF_ to the inductive bias of collaborative filtering, as it is difficult to mimic the \(F_{OOE}\) schema using pairwise object similarities. _GPT-3_ performs the worst on three out of four schemas with a slightly higher \(M^{SR}\) in \(F_{OOE}\) than _Abdo-CF_. This shows that the general-purpose commonsense knowledge learned by _GPT-3_ is insufficient to model an organizational schema with a specific set of semantic constraints, thus necessitating the need for our proposed framework. ### _Zero-Shot Generalization to Novel Object Categories_ Table III shows our evaluation results from testing on scenes with novel object categories. We do not evaluate the baseline _Abdo-CF_ on novel object categories as this method requires external semantic knowledge to rearrange objects unseen during training, and the lack of this knowledge with other methods leads to an unfair comparison. ConSOR is able to successfully leverage the commonsense knowledge embedded in ConceptNet to perform zero-shot generalization to completely novel scenes and outperform _GPT-3_. 
Also, in comparison to our model's performance on unseen object arrangements, ConSOR retains performance on three out of four schemas, with \(F_{utility}\) showing the largest drop in performance. We believe this drop in performance is due to the Utility schema deviating the most from the commonsense knowledge embedded in ConceptNet. Fig. 5: Predicted goal states generated by ConSOR for scenes with novel object categories. Correct object placements are shown using a dash-dotted green bounding box and arrow, while incorrect object placements are shown using a dashed red bounding box and arrow. Figure 5 presents some of the correct and incorrect goal predictions made by ConSOR for scenes with novel object categories. We observe in Figure 5(a) that ConSOR occasionally places objects of the same category in separate containers even when the desired goal schema is not \(F_{OOE}\). We hypothesize that this may be attributed to the model lacking confidence about the desired schema, resulting in a goal state with a 'hybrid' schema. Figures 5(b) and 5(c) show two accurate goal predictions, belonging to \(F_{OOE}\) and \(F_{utility}\) respectively. In both, ConSOR leverages contextual cues about the desired schema from the initial state, such as the number of containers in the scene and the current object arrangement, to perfectly generate the goal state. ### _Visualizing the difference in scene organization across schemas in learned embedding space_ Evaluation results in previous sections show that ConSOR successfully learns a mapping from the initial partially arranged rearrangement state to a latent space of embeddings mimicking the desired object grouping. We visualize the embeddings generated by our method for four different initial state configurations from the test dataset in Figure 6 using a T-SNE projection [30]. We observe that, for the same set of object categories, the learned embedding space of ConSOR adapts itself to the structure of the scene (number of containers) as well as the organization of objects in the initial state. For example, the latent spaces of scenes A (Class schema) and D (One-of-everything Schema) highlight the differences in the two scenes, namely the different numbers of containers in the scene and whether the same object categories are grouped together or separately. On the other hand, scenes B (Affordance schema) and C (Utility schema) can only be differentiated by observing whether 'Pen' or 'Cloth' is grouped with 'Pot', and this is seen in the latent space as well. We also note that ConSOR is able to identify the differences between scenes of different Fig. 6: Visualizing the learned embedding space of ConSOR for scenes of different schemas. The right side of each cell shows the initial scene and ground truth object placements, and the left side is a T-SNE projection of the generated object-centric embeddings in two dimensions. Fig. 7: Measuring the change in performance of each rearrangement method based on the number of goal scenes in the training data. The plot on the left is a line diagram of success rate averaged across all schemas versus the size of training data, with the success rate of _GPT-3_ shown as a dotted line. The plot on the right is a series of box plots showing the varying range of non-zero \(SED\) scores for ConSOR and _Abdo-CF_ with an increase in training data. schemas without being explicitly trained with the actual schema labels, and instead learns to differentiate schemas from unlabelled training data. 
### _Effect of Size of Training Data on Performance_ Finally, we evaluate the effect of training data size on the performance of ConSOR and _Abdo-CF_. Figure 7 shows the change in average \(M^{SR}\) and \(M^{N{SED}}\) values for different numbers of goal scenes in the training data. All metrics were calculated using the same test dataset as the previous sections. We observe the average \(M^{SR}\) of ConSOR steadily rising and the variance of \(M^{N{SED}}\) scores decreasing with more training data, while _Abdo-CF_ performance saturates after \(496\) training goal scenes. Across all training data sizes, we find that ConSOR has a higher average \(M^{SR}\) and lower mean \(M^{N{SED}}\) score than _Abdo-CF_. ## VIII Conclusion and Discussion This work introduces ConSOR, a semantic reasoning framework for object rearrangement. ConSOR relies on contextual cues from a partially arranged environment to infer the desired goal state by generating a learned object-centric latent space that mimics the arrangement in the desired goal state. Additionally, ConSOR leverages external commonsense knowledge from ConceptNet to perform zero-shot generalization to rearrange scenes with novel object categories. We evaluated our proposed framework on a dataset of 8k arranged scenes, each belonging to one of four 'organizational schemas', and found that our approach strongly outperforms both the _Abdo-CF_ and _GPT-3_ baseline across all tested conditions. Note that ConSOR outperforms the next leading baseline, _Abdo-CF_, even though the baseline is explicitly given the target schema type as input (e.g., _class schema_) while ConSOR is required to infer this information automatically from context. In a real-world setting, ConSOR would therefore require significantly less effort from the user. One assumption of our approach is that the robot knows which items have been pre-arranged, and thus serve as useful context, and which still need to be put away. Depending on the real-world application, making this distinction may be simple or quite difficult (e.g., bags of new groceries are easy to detect, out-of-place living room objects pose a greater challenge). Recent work on visuo-semantic commonsense priors may help inform this decision in the future [17, 18].
2309.04074
Computationally Efficient Data-Driven Discovery and Linear Representation of Nonlinear Systems For Control
This work focuses on developing a data-driven framework using Koopman operator theory for system identification and linearization of nonlinear systems for control. Our proposed method presents a deep learning framework with recursive learning. The resulting linear system is controlled using a linear quadratic control. An illustrative example using a pendulum system is presented with simulations on noisy data. We show that our proposed method is trained more efficiently and is more accurate than an autoencoder baseline.
Madhur Tiwari, George Nehma, Bethany Lusch
2023-09-08T02:19:14Z
http://arxiv.org/abs/2309.04074v1
Computationally Efficient Data-Driven Discovery and Linear Representation of Nonlinear Systems For Control ###### Abstract This work focuses on developing a data-driven framework using Koopman operator theory for system identification and linearization of nonlinear systems for control. Our proposed method presents a deep learning framework with recursive learning. The resulting linear system is controlled using a linear quadratic control. An illustrative example using a pendulum system is presented with simulations on noisy data. We show that our proposed method is trained more efficiently and is more accurate than an autoencoder baseline. ## I Introduction Linear dynamics are desirable for control due to the applicability of a rigorous control system toolkit to guarantee controllability, observability, and stability for dynamical systems. However, nearly all dynamical systems are inherently nonlinear and thus require linearization techniques or complex nonlinear state estimation and control. Traditional linearization techniques linearize around an operating point, only approximate for small time horizons, and require real-time or recursive techniques, which increase the computational burden and unpredictability of the system. Additionally, nonlinear state estimation and control techniques often cannot guarantee stability without making several assumptions. Koopman theory, first proposed in 1931 [1], has gained traction over the last few years as a solution for linearizing nonlinear systems. In a nutshell, the theory states that the dynamics of a system can be described linearly by an infinite-dimensional Koopman operator; due to its infinite dimensions, for practical use, it is typically approximated using data-driven methods such as Extended Dynamic Mode Decomposition (EDMD) [2, 3, 4]. Constructing a feed-forward neural network (NN) with the EDMD makes it possible to find an approximate general Koopman operator linearization applicable to a region of state space. Another advantage of using a data-driven method is that it does not require system knowledge, and the nonlinear system can be completely unknown. Recently, several works have focused on developing data-driven frameworks for linearization and system identification of nonlinear systems. Notably, Lusch et al. [4] presented a data-driven method for discovering Koopman eigenfunctions. A modified autoencoder was implemented for identifying nonlinear coordinates on which the dynamics are globally linear. However, the framework handles the continuous spectra with a generalization of Koopman representations, which makes the application of linear control non-trivial. Additionally, even though the models can accurately predict longer intervals than traditional linearization schemes, error propagation still compounds. Junker et al. [5] implemented the prediction step from [2] that reduced the propagation of the error by reevaluating the observable function at every time step through the extraction of the state vector. The authors applied their implementation to a simplified golf robot and produced highly accurate representations that strongly align with the nonlinear dynamics of the robot. They showed that the technique could correctly represent system properties such as stability, controllability, and observability. In training any neural network, the amount of data that can be generated or gathered is an important factor in how successful the network will be in its prediction. 
[6] utilizes a deep neural network (DNN) to implement a real-time, online MPC in robotic simulations. Although proving successful in implementation, some shortcomings for some simulations were large training data requirements and the handling of non-Lipschitz terms. Gathering these large amounts of data, especially in real-world scenarios, could be lengthy and difficult, as seen in [7, 8, 9]. In [10], Xiao et al. investigated using a deep learning-based EDMD approach to constructing a set of observables able to linearize vehicle dynamics for an autonomous control approach. Compared to other neural network approaches, which often lack interpretability, this method proved to be significantly better at long-term prediction due to its novel, multi-step prediction loss function, similar to the loss function we implemented in this work. A similar example to the problem presented in this paper is the inverted pendulum case. In [11], the authors looked at inverting the pendulum on a cart and then comparing a traditional linearized model to that of a \(4\times 4\) and \(16\times 16\)-sized Koopman operator. Finding little improvement in the approximation with the larger, higher-dimensional operator was an interesting finding. The operators presented in [11] performed better than traditional linearization techniques for large-angle initial conditions for the pendulum. Zinage et al. [12] developed a learning-based controller using Lyapunov theory. The framework ensures stability. However, system knowledge is required for physics-inspired learning, and the autoencoder needs a large amount of training data.
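For readers unfamiliar with EDMD, the toy sketch below illustrates the basic data-driven step that such frameworks build on: lift state snapshots with a dictionary of observables and fit a linear operator by least squares. The dictionary, the toy dynamics, and the code itself are our own illustration and are not the deep-learning architecture proposed in this paper.

```python
# A generic EDMD sketch (our illustration, not this paper's method): lift snapshots
# with a fixed dictionary of observables and fit K so that psi(x_{k+1}) ≈ K psi(x_k).
import numpy as np

def dictionary(x):
    """Hypothetical observables for a 2-state system: [x1, x2, x1^2, x1*x2, x2^2]."""
    x1, x2 = x
    return np.array([x1, x2, x1**2, x1 * x2, x2**2])

def edmd(snapshots):
    """snapshots: array of shape (N+1, 2) taken from one trajectory."""
    Psi_x = np.stack([dictionary(x) for x in snapshots[:-1]])  # (N, n_obs)
    Psi_y = np.stack([dictionary(x) for x in snapshots[1:]])   # (N, n_obs)
    # Least-squares fit: Psi_y ≈ Psi_x K^T, so K = (pinv(Psi_x) @ Psi_y)^T.
    return (np.linalg.pinv(Psi_x) @ Psi_y).T

# Toy data: a damped pendulum-like trajectory generated by a nonlinear map.
rng = np.random.default_rng(0)
x = rng.normal(size=2)
traj = [x]
for _ in range(200):
    x = np.array([0.95 * x[0] + 0.1 * x[1], -0.1 * np.sin(x[0]) + 0.95 * x[1]])
    traj.append(x)

K = edmd(np.array(traj))
print(K.shape)  # (5, 5) approximate Koopman matrix on the chosen observables
```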
2309.14501
Dynamics of the Fibonacci Order of Appearance Map
The \textit{order of appearance} $ z(n) $ of a positive integer $ n $ in the Fibonacci sequence is defined as the smallest positive integer $ j $ such that $ n $ divides the $ j $-th Fibonacci number. A \textit{fixed point} arises when, for a positive integer $ n $, we have that the $ n^{\text{th}} $ Fibonacci number is the smallest Fibonacci that $ n $ divides. In other words, $ z(n) = n $. In 2012, Marques proved that fixed points occur only when $ n $ is of the form $ 5^{k} $ or $ 12\cdot5^{k} $ for all non-negative integers $ k $. It immediately follows that there are infinitely many fixed points in the Fibonacci sequence. We prove that there are infinitely many integers that iterate to a fixed point in exactly $ k $ steps. In addition, we construct infinite families of integers that go to each fixed point of the form $12 \cdot 5^{k}$. We conclude by providing an alternate proof that all positive integers $n$ reach a fixed point after a finite number of iterations.
Molly FitzGibbons, Steven J. Miller, Amanda Verga
2023-09-25T19:51:58Z
http://arxiv.org/abs/2309.14501v1
# Dynamics of the Fibonacci order of appearance map ###### Abstract. The _order of appearance_\(z(n)\) of a positive integer \(n\) in the Fibonacci sequence is defined as the smallest positive integer \(j\) such that \(n\) divides the \(j\)-th Fibonacci number. A _fixed point_ arises when, for a positive integer \(n\), we have that the \(n^{\rm th}\) Fibonacci number is the smallest Fibonacci that \(n\) divides. In other words, \(z(n)=n\). In 2012, Marques proved that fixed points occur only when \(n\) is of the form \(5^{k}\) or \(12\cdot 5^{k}\) for all non-negative integers \(k\). It immediately follows that there are infinitely many fixed points in the Fibonacci sequence. We prove that there are infinitely many integers that iterate to a fixed point in exactly \(k\) steps. In addition, we construct infinite families of integers that go to each fixed point of the form \(12\cdot 5^{k}\). We conclude by providing an alternate proof that all positive integers \(n\) reach a fixed point after a finite number of iterations. ## 1. Introduction In 1202, the Italian mathematician Leonardo Fibonacci introduced the Fibonacci sequence \(\{F_{n}\}_{n=0}^{\infty}\), defined recursively as \(F_{n}=F_{n-1}+F_{n-2}\) with initial conditions \(F_{0}=0\) and \(F_{1}=1\). By reducing \(\{F_{n}\}_{n=0}^{\infty}\) modulo \(m\), we obtain a periodic sequence \(\{F_{n}\mod m\}_{n=0}^{\infty}\). This new sequence and its divisibility properties have been extensively studied, see for example [M1, M5]. To see why the reduced sequence is periodic, note that by the pigeonhole principle if we look at \(n^{2}+1\) pairs \((F_{k},F_{k-1})\), at least two are identical (mod \(n\)) and the recurrence relation generates the same future terms. **Definition 1.1**.: The _order (or rank) of appearance_\(z(n)\) for a natural number \(n\) in the Fibonacci sequence is the smallest positive integer \(\ell\) such that \(n\mid F_{\ell}\). Observe that the function \(z(n)\) is well defined for all \(n\) since the Fibonacci sequences begins with \(0,1,\ldots\) and when reduced by modulo \(n\), a \(0\) will appear again in the periodic sequence. Thus, there will always be a Fibonacci number that is congruent to \(0\mod n\) for each choice of \(n\). The upper bound of \(n^{2}+1\) on \(z(n)\) is improved in [S], which states \(z(n)\leq 2n\) for all \(n\geq 1\). This is the sharpest upper bound on \(z(n)\). In [M2], sharper upper bounds for \(z(n)\) are provided for some positive integers \(n\). Additional results on \(z(n)\) include explicit formulae for the order of appearance of some \(n\) relating to sums containing Fibonacci numbers [M3] and products of Fibonacci numbers [M4]. We study repeated applications of \(z\) on \(n\) and denote the \(k^{\rm th}\) application of \(z\) on \(n\) as \(z^{k}(n)\). We are interested in the following quantity. **Definition 1.2**.: The _fixed point order_ for a natural number \(n\) is the smallest positive integer \(k\) such that \(z^{k}(n)\) is a fixed point. If \(n\) is a fixed point, then we say the _fixed point order_ of \(n\) is \(0\). Table 1 shows which values occur after repeated iterations of \(z\) on the first \(12\) positive integers. We further the study of repeated iterations of \(z\) on \(n\). In Section 2, we provide some useful properties of the order of appearance in Fibonacci numbers. In the remaining sections, we prove our main results, found below. 
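Before the statements below, a purely computational aside (ours, not part of the proofs): the map \(z\) and the fixed point order can be evaluated by brute force directly from Definitions 1.1 and 1.2, which reproduces the entries of Tables 1 and 2.

```python
# Brute-force illustration (not part of the argument) of z(n) and the fixed point order.
def z(n):
    """Smallest j >= 1 such that n divides F_j, found from the Fibonacci sequence mod n."""
    a, b, j = 0, 1 % n, 0   # a = F_j mod n, b = F_{j+1} mod n
    while True:
        a, b, j = b, (a + b) % n, j + 1
        if a == 0:
            return j

def fixed_point_order(n):
    """Iterate z until a fixed point z(m) = m is reached; return (fixed point, #steps)."""
    k = 0
    while z(n) != n:
        n, k = z(n), k + 1
    return n, k

print(fixed_point_order(2))   # (12, 4): 2 -> 3 -> 4 -> 6 -> 12, as in Table 1
print(fixed_point_order(11))  # (60, 5): 11 -> 10 -> 15 -> 20 -> 30 -> 60
```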
**Theorem 1.3**.: _For all positive integers \(k\), there exist infinitely many \(n\) with fixed point order \(k\)._ **Theorem 1.4**.: _Infinitely many integers \(n\) iterate to each fixed point of the form \(12\cdot 5^{k}\)._ **Theorem 1.5**.: _All positive integers \(n\) have finite fixed point order._ Theorem 1.5 was proved first in [LT] by showing that within finite \(k\), \(z^{k}(n)=2^{a}3^{b}5^{c}\) where \(a,b,c\in\mathbb{Z}_{\geq 0}\) and then proving that \(2^{a}3^{b}5^{c}\) iterates to a fixed point in a finite number of steps. It was later proved in [Ta1] using a relationship between the Pisano period of \(n\) and \(z(n)\). We provide an alternate proof using a minimal counterexample argument. ## 2. Auxiliary Results Here we include some needed results from previous papers. **Lemma 2.1**.: _Let \(n\) be a positive integer. Then \(z(n)=n\) if and only if \(n=5^{k}\) or \(n=12\cdot 5^{k}\) for some \(k\geq 0\)._ A proof of Lemma 2.1 can be found in [M1, SM]. **Lemma 2.2**.: _For all \(a\in\mathbb{Z},a\geq 3\), \(z\left(2^{a}\right)=2^{a-2}\cdot 3\). For all \(b\in\mathbb{Z}^{+},z(3^{b})=4\cdot 3^{b-1}\)._ Lemma 2.2 is Theorem 1.1 of [M2]. **Lemma 2.3**.: _Let \(n\geq 2\) be an integer with prime factorization \(n=p_{1}^{e_{1}}p_{2}^{e_{2}}\cdots p_{m}^{e_{m}}\) where \(p_{1},p_{2},\ldots,p_{m}\) are prime and \(e_{1},e_{2},\ldots,e_{m}\) are positive integers. Then_ \[z(n)\ =\ z(p_{1}^{e_{1}}p_{2}^{e_{2}}\cdots p_{m}^{e_{m}})\ =\ \operatorname{lcm} \left(z(p_{1}^{e_{1}}),z(p_{2}^{e_{2}}),\ldots,z(p_{m}^{e_{m}})\right). \tag{2.1}\] A proof of Lemma 2.3 can be found in Theorem 3.3 of [R]. Lemma 2.3 has been generalized as follows. **Lemma 2.4**.: _Let \(n\geq 2\) be an integer with prime factorization \(n=p_{1}^{e_{1}}p_{2}^{e_{2}}\cdots p_{m}^{e_{m}}\) where \(p_{1},p_{2},\ldots,p_{m}\) are prime and \(e_{1},e_{2},\ldots,e_{m}\) are positive integers. Then_ \[z\left(\operatorname{lcm}(m_{1},m_{2},\ldots,m_{n})\right)\ =\ \operatorname{lcm} \left(z(m_{1}),z(m_{2}),\ldots,z(m_{n})\right). \tag{2.2}\] A proof of Lemma 2.4 can be found in Lemma 4 of [Ty]. **Lemma 2.5**.: _For all primes \(p\), \(z(p)\leq p+1\)._ A proof of Lemma 2.5 can be found in Lemma 2.3 of [M1]. **Lemma 2.6**.: _For all positive integers \(n\), \(z(n)\leq 2n\), with equality if and only if \(n=6\cdot 5^{k}\) for some \(k\in\mathbb{Z}_{\geq 0}\)_ Lemma 2.6 is proven in [S]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \(n\setminus k\) & 1 & 2 & 3 & 4 & 5 \\ \hline 1 & **1** & & & & \\ 2 & 3 & 4 & 6 & **12** & \\ 3 & 4 & 6 & **12** & & \\ 4 & 6 & **12** & & & \\ 5 & 5 & & & & \\ 6 & **12** & & & & \\ 7 & 8 & 6 & **12** & & \\ 8 & 6 & **12** & & & \\ 9 & **12** & & & & \\ 10 & 15 & 20 & 30 & **60** & \\ 11 & 10 & 15 & 20 & 30 & **60** \\ 12 & **12** & & & & \\ \hline \end{tabular} \end{table} Table 1. Iterations of \(z\) on \(n\), numbers in bold are fixed points. **Lemma 2.7**.: _For all primes \(p\neq 5\), we have that \(\gcd\left(p,z(p)\right)=1\)._ Lemma 2.7 is proven in Lemma 2.3 of [M1]. **Lemma 2.8**.: _If \(n|F_{m}\), where \(F_{m}\) is the \(m^{\text{th}}\) number in the Fibonacci sequence, then \(z(n)|m\)._ Lemma 2.8 is Lemma 2.2 of [M1]. **Lemma 2.9**.: _For all odd primes \(p\), we have \(z\left(p^{e}\right)=p^{\max(e-a,0)}z(p)\) where \(a\) is the number of times that \(p\) divides \(F_{z(p)}\), \(a\geq 1\). In particular, \(z\left(p^{e}\right)=p^{r}z(p)\) for some \(0\leq r\leq e-1\)._ For a proof of Lemma 2.9, see Theorem 2.4 of [FM]. ## 3. 
Infinitely many integers take a given number of iterations to reach a fixed point In this section we first prove Lemma 3.1, which helps us show that when \(z^{i}(n)\) is written as the product of a constant relatively prime to \(5\) and a power of \(5\), then \(z^{i}\left(n\cdot 5^{a}\right)\) can be written as the product of that same constant and another power of \(5\). Table 2 lists the smallest \(n\) that takes exactly \(k\) iterations to reach a fixed point for positive integers \(k\) up to \(10\). **Lemma 3.1**.: _Let \(z^{i}(5^{a}\cdot n)=c_{(i,a,n)}5^{a_{i}}\), where \(c_{(i,a,n)}\) is a constant that is relatively prime to \(5\) and depends on \(i\) and \(n\), and \(a_{i}\in\mathbb{Z}^{+}\). Fix \(i,n\in\mathbb{Z}_{\geq 0}\). Then \(c_{(i,a,n)}\) remains the same for all choices of \(a\)._ Proof.: Let the prime factorization of an integer \(n\) be \(n=5^{e_{1}}p_{2}^{e_{2}}\cdots p_{m}^{e_{m}}\) where \(e_{1}\geq 0\) and each of the \(e_{2},e_{3},\ldots,e_{m}\geq 1\). We proceed by induction on the number of iterations of \(z\). First suppose \(i=1\) and let the prime factorization of \(\operatorname{lcm}\left(z(p_{2}^{e_{2}}),\ldots,z(p_{m}^{e_{m}})\right)\) equal \(5^{f_{1}}q_{2}^{f_{2}}\cdots q_{r}^{f_{r}}\) where \(f_{1}\geq 0\) and each of the \(f_{2},\ldots,f_{r}\geq 1\). Observe that \[z(5^{a}\cdot n) = \operatorname{lcm}\left(z(5^{e_{1}+a}),z(p_{2}^{e_{2}}),\ldots,z (p_{m}^{e_{m}})\right) \tag{3.1}\] \[= \operatorname{lcm}\left(z\left(5^{e_{1}+a}\right),\operatorname{ lcm}\left(z(p_{2}^{e_{2}}),\ldots,z(p_{m}^{e_{m}})\right)\right)\] \[= \operatorname{lcm}\left(5^{e_{1}+a},5^{f_{1}}q_{2}^{f_{2}} \cdots q_{r}^{f_{r}}\right)\] \[= q_{2}^{f_{2}}\cdots q_{r}^{f_{r}}\cdot 5^{\max(e_{1}+a,f_{1})}.\] Thus, \(c_{(1,a,n)}=q_{2}^{f_{2}}\cdots q_{r}^{f_{r}}\) for any non-negative integer \(a\) when \(n\) is not a power of \(5\) or when \(\operatorname{lcm}\left(z(p_{2}^{e_{2}}),\ldots,z(p_{m}^{e_{m}})\right)\) is not a power of \(5\), and \(c_{(1,a,n)}=1\) otherwise. \begin{table} \begin{tabular}{|c||c|c|} \hline k & n & FP \\ \hline 1 & 1 & 1 \\ 2 & 4 & 12 \\ 3 & 3 & 12 \\ 4 & 2 & 12 \\ 5 & 11 & 60 \\ 6 & 89 & 60 \\ 7 & 1069 & 60 \\ 8 & 2137 & 60 \\ 9 & 4273 & 60 \\ 10 & 59833 & 60 \\ \hline \end{tabular} \end{table} Table 2. First \(n\) that takes \(k\) iterations to reach a fixed point. Next, assume that for some \(i,n\in\mathbb{Z}^{+}\), we have \(z^{i}(5^{a}\cdot n)=c_{(i,a,n)}5^{a_{i}}\) where \(c_{(i,a,n)}\) is the same for all choices of \(a\in\mathbb{Z}_{\geq 0}\). First suppose \(c_{(i,a,n)}=1\). Then \[z^{i+1}(5^{a}\cdot n) = z(z^{i}(5^{a}\cdot n)) \tag{3.2}\] \[= z\left((c_{(i,a,n)})\cdot 5^{a_{i}}\right)\qquad\text{where $a_{i} \in\mathbb{Z}_{\geq 0}$}\] \[= \operatorname{lcm}\left(z(1),z(5^{a_{i}})\right)\] \[= 5^{a_{i}}.\] Therefore, for any choice of \(a\), \(c_{(i+1,a,n)}=1\). Now suppose \(c_{(i,a,n)}\neq 1\). Then let the prime factorization of \(c_{(i,a,n)}=q_{1}^{f_{1}}\cdots q_{r}^{f_{r}}\), where \(q_{1},\ldots,q_{r}\neq 5\) since \(\gcd\left(c_{(i,a,n)},5\right)=1\). Let \(\operatorname{lcm}\left(z(q_{1}^{f_{1}}),\ldots,z(q_{r}^{f_{r}})\right)=5^{g_ {1}}h_{2}^{g_{2}}\cdots h_{j}^{g_{j}}\) where \(h_{2},\ldots,h_{j}\) are primes not equal to \(5\). 
Then \[z^{i+1}(5^{a}\cdot n) = z(z^{i}(5^{a}\cdot n)) \tag{3.3}\] \[= z\left((c_{(i,a,n)})\cdot 5^{a_{i}}\right)\qquad\text{where $a_{i}\in\mathbb{Z}_{\geq 0}$}\] \[= \operatorname{lcm}\left(z(q_{1}^{f_{1}}),\ldots,z(q_{r}^{f_{r}}),z(5^{a_{i}})\right)\] \[= \operatorname{lcm}\left(\operatorname{lcm}\left(z(q_{1}^{f_{1}}),\ldots,z(q_{r}^{f_{r}})\right),z(5^{a_{i}})\right)\] \[= \operatorname{lcm}\left(5^{g_{1}}h_{2}^{g_{2}}\cdots h_{j}^{g_{j}},5^{a_{i}}\right)\] \[= h_{2}^{g_{2}}\cdots h_{j}^{g_{j}}\cdot 5^{\max(g_{1},a_{i})}\] \[= c_{(i+1,a,n)}\cdot 5^{\max(g_{1},a_{i})}. \tag{3.4}\] Since \(c_{(i+1,a,n)}=h_{2}^{g_{2}}\cdots h_{j}^{g_{j}}\) is determined by \(c_{(i,a,n)}\), which by the inductive hypothesis is the same for all choices of \(a\), the constant \(c_{(i+1,a,n)}\) is also the same for all choices of \(a\), completing the induction.

We use Lemma 3.1 in our proof of Theorem 1.3 to show that if there exists an integer \(n\) that takes exactly \(k\) iterations of \(z\) to reach a fixed point, then there are infinitely many integers that take exactly \(k\) iterations of \(z\) to reach a fixed point. The following lemma provides us with information on the \(k^{\text{th}}\) iteration of \(z\) on powers of \(10\), enabling us to find integers that require exactly \(k\) iterations of \(z\) to reach a fixed point for any positive integer \(k\). **Lemma 3.2**.: _For all \(k,m\in\mathbb{Z}\) with \(k\geq 1\), \(m\geq 4\) and \(2k+2\leq m\), \(z^{k}(10^{m})=3\cdot 5^{m}\cdot 2^{m-2k}\)._ Proof.: We proceed by induction on the number of iterations of \(z\). Observe that when \(k=1\), \[z\left(10^{m}\right) = \operatorname{lcm}\left(z\left(2^{m}\right),z\left(5^{m}\right)\right) \tag{3.5}\] \[= \operatorname{lcm}\left(3\cdot 2^{m-2},5^{m}\right)\] \[= 3\cdot 5^{m}\cdot 2^{m-2}.\] Now suppose that \(z^{k}(10^{m})\ =\ 3\cdot 5^{m}\cdot 2^{m-2k}\) for some positive integer \(k\). Then we have \[z^{k+1}(10^{m}) = z\left(z^{k}\left(10^{m}\right)\right) \tag{3.6}\] \[= z\left(3\cdot 5^{m}\cdot 2^{m-2k}\right)\] \[= \operatorname{lcm}\left(z\left(3\right),z\left(5^{m}\right),z(2^{m-2k})\right)\] \[= \operatorname{lcm}\left(4,5^{m},2^{m-2k-2}\cdot 3\right).\] By assumption, \(m\geq 2(k+1)+2=2k+4\), thus we have \(z^{k+1}(10^{m})\ =\ 3\cdot 5^{m}\cdot 2^{m-2(k+1)}\). Using Lemmas 3.1 and 3.2, we now prove Theorem 1.3: _For all positive integers \(k\), there exist infinitely many \(n\) with fixed point order \(k\)._ Proof of Theorem 1.3.: Let \(g,h\in\mathbb{Z}^{+},g>h\). Then \(g=h+\ell\) for some \(\ell\in\mathbb{Z}^{+}\). Suppose that \(z^{h}(n)\) is a fixed point. Then \(z^{g}(n)=z^{\ell}\left(z^{h}(n)\right)\), so \(z^{g}(n)\) is also a fixed point. Similarly, if \(z^{g}(n)\) is not a fixed point, then \(z^{h}(n)\) cannot be a fixed point for any \(h<g\). Note that by Lemma 3.2 \[z^{k}(10^{2k+2})\ =\ 3\cdot 5^{2k+2}\cdot 2^{(2k+2)-2k}\ =\ 12\cdot 5^{2k+2} \tag{3.7}\] and \[z^{k-1}(10^{2k+2})\ =\ 3\cdot 5^{2k+2}\cdot 2^{(2k+2)-2(k-1)}\ =\ 12\cdot 5^{2k+2}\cdot 2^{2}. \tag{3.8}\] Thus, \(10^{2k+2}\) takes exactly \(k\) iterations of \(z\) to reach a fixed point, as \(z^{k-1}(10^{2k+2})\) is not a fixed point. We prove that we can find infinitely many integers that take exactly \(k\) iterations to reach a fixed point once one such integer is identified (which we have just done). We first consider the case where an integer \(n\) goes to a fixed point of the form \(12\cdot 5^{a^{\prime}}\), where \(a^{\prime}\in\mathbb{Z}_{\geq 0}\), in exactly \(k\) iterations of \(z\). Thus, \(z^{k}(n)=12\cdot 5^{a^{\prime}}\) and \(z^{k-1}(n)=c\cdot 5^{b^{\prime}}\) for some non-negative integer \(b^{\prime}\) and positive integer \(c\) with \(\gcd(c,5)=1\) and \(c\neq 1,12\). Let \(r\) be an arbitrary positive integer.
By Lemma 3.1, we have \(z^{k}\left(5^{r}\cdot n\right)=12\cdot 5^{a^{\prime\prime}}\) and \(z^{k-1}\left(5^{r}\cdot n\right)=c\cdot 5^{b^{\prime\prime}}\) for non-negative integers \(a^{\prime\prime}\) and \(b^{\prime\prime}\). Thus, \(5^{r}\cdot n\) requires exactly \(k\) iterations to reach a fixed point. Next we consider the case where \(n\) goes to a fixed point of the form \(5^{a^{\prime}}\) in exactly \(k\) steps. Then, \(z^{k}(n)=5^{a^{\prime}}\) and \(z^{k-1}(n)=c\cdot 5^{b^{\prime}}\) for some non-negative integer \(b^{\prime}\) and positive integer \(c\neq 1,12\). Let \(r\) be an arbitrary positive integer. By Lemma 3.1, we have \(z^{k}\left(5^{r}\cdot n\right)=5^{a^{\prime\prime}}\) and \(z^{k-1}\left(5^{r}\cdot n\right)=c\cdot 5^{b^{\prime\prime}}\) for non-negative integers \(a^{\prime\prime}\) and \(b^{\prime\prime}\). Thus, \(5^{r}\cdot n\) requires exactly \(k\) iterations to reach a fixed point. As \(r\) is arbitrary, there are infinitely many integers with fixed point order \(k\) for any positive integer \(k\).

## 4. Infinitely many integers go to each fixed point

We begin this section with a proof about the \(k^{\text{th}}\) iteration of \(z\) on powers of \(2\). **Lemma 4.1**.: _For all \(k,a\in\mathbb{Z}\) such that \(2\leq k\) and \(4\leq a\), \(z^{k}(2^{a})=\operatorname{lcm}\left(2^{a-2k}\cdot 3,4\right)\)._ Proof.: We induct on \(k\); we use Lemma 2.2 to note that \(z(2^{a})=2^{a-2}\cdot 3\) (valid as \(a\geq 3\)) with base case \(k=2\): \[z^{2}(2^{a}) =\ z\left(z(2^{a})\right)\] \[=\ z(2^{a-2}\cdot 3)\] \[=\ \operatorname{lcm}\left(z(2^{a-2}),z(3)\right)\] \[=\ \operatorname{lcm}\left(2^{a-4}\cdot 3,4\right). \tag{4.1}\] For the inductive step, assume that \(z^{k}(2^{a})=\operatorname{lcm}\left(2^{a-2k}\cdot 3,4\right)\) for some \(k\). We show that \(z^{k+1}(2^{a})\ =\ \operatorname{lcm}\left(2^{a-2(k+1)}\cdot 3,4\right)\). First suppose that \(a>2k+2\). Then \[z^{k+1}(2^{a}) =\ z\left(z^{k}(2^{a})\right)\] \[=\ z\left(\operatorname{lcm}\left(2^{a-2k}\cdot 3,4\right)\right)\] \[=\ z\left(2^{a-2k}\cdot 3\right)\] \[=\ \operatorname{lcm}\left(z(2^{a-2k}),z(3)\right)\] \[=\ \operatorname{lcm}\left(2^{a-2k-2}\cdot 3,4\right)\] \[=\ \operatorname{lcm}\left(2^{a-2(k+1)}\cdot 3,4\right). \tag{4.2}\] Now suppose that \(a\leq 2k+2\). Then \[z^{k+1}(2^{a}) =\ z\left(z^{k}\left(2^{a}\right)\right)\] \[=\ z\left(\operatorname{lcm}\left(2^{a-2k}\cdot 3,4\right)\right)\] \[=\ z\left(12\right)\] \[=\ 12\] \[=\ \operatorname{lcm}\left(2^{a-2(k+1)}\cdot 3,4\right). \tag{4.3}\] We now use Lemma 4.1 in our proof of Lemma 4.2, which proves that all powers of \(2\) go to the fixed point \(12\) and determines how many iterations of \(z\) it takes for a power of \(2\) to reach \(12\). **Lemma 4.2**.: _For all \(a\in\mathbb{Z}^{+}\), \(2^{a}\) reaches the fixed point \(12\) in finitely many iterations of \(z\). For \(a\geq 4\), exactly \(\lceil\frac{a}{2}\rceil-1\) iterations of \(z\) are required to reach \(12\)._ Proof.: When \(a\leq 4\), the claim follows from straightforward computation. Notice that \(z^{4}(2)=12\), \(z^{2}(2^{2})=12\), \(z^{2}(2^{3})=12\), \(z(2^{4})=12\). We prove the claim for \(a>4\) using Lemma 4.1. Note that if \(a\) is even, then \(\lceil\frac{a}{2}\rceil=\frac{a}{2}\). Thus, in the case where \(a\) is even, \[z^{\lceil\frac{a}{2}\rceil-1}(2^{a})\ =\ \operatorname{lcm}(2^{a-2(\frac{a}{2}-1)}\cdot 3,4)\ =\ \operatorname{lcm}(2^{a-a+2}\cdot 3,4)\ =\ \operatorname{lcm}(2^{2}\cdot 3,4)\ =\ 12.
\tag{4.4}\] So \(2^{a}\) takes at most \(\lceil\frac{a}{2}\rceil-1\) iterations of \(z\) to reach a fixed point when \(a\) is even. We next show that it takes exactly \(\lceil\frac{a}{2}\rceil-1\) by showing that \(z^{(\lceil\frac{a}{2}\rceil-1)-1}(2^{a})\) is not a fixed point: \[z^{(\lceil\frac{a}{2}\rceil-1)-1}(2^{a})=\operatorname{lcm}(2^{a-2(\frac{a}{2 }-2)}\cdot 3,4)\ =\ \operatorname{lcm}(2^{a-a+4}\cdot 3,4)=\operatorname{lcm}(2^{4} \cdot 3,4)=12\cdot 2^{2}, \tag{4.5}\] which is not a fixed point. When \(a\) is odd, \(\lceil\frac{a}{2}\rceil-1=\frac{a-1}{2}\), giving us \[z^{\lceil\frac{a}{2}\rceil-1}(2^{a})\ =\ \operatorname{lcm}(2^{a-2(\frac{a-1}{2 })}\cdot 3,4)\ =\ \operatorname{lcm}(2^{a-a+1}\cdot 3,4)\ =\ \operatorname{lcm}(2\cdot 3,4)\ =\ 12. \tag{4.6}\] However \[z^{(\lceil\frac{a}{2}\rceil-1)-1}(2^{a})\ =\ \operatorname{lcm}(2^{a-2(\frac{a-1}{2 }-1)}\cdot 3,4)\ =\ \operatorname{lcm}(2^{a-a+1+2}\cdot 3,4)\ =\ \operatorname{lcm}(2^{3}\cdot 3,4)\ =\ 12\cdot 2, \tag{4.7}\] which is not a fixed point. Lemma 4.2 and Lemma 3.1 now yield Theorem 1.4: _Infinitely many integers \(n\) go to each fixed point of the form \(12\cdot 5^{k}\)._ Proof of Theorem 1.4.: Using Lemma 4.2, we know that \(z^{\lceil\frac{a}{2}\rceil-1}(2^{a}\cdot 5^{0})=12\). Thus, by Lemma 3.1, \(z^{\lceil\frac{a}{2}\rceil-1}(2^{a}\cdot 5^{b})=12\cdot 5^{b^{\prime}}\) for some nonnegative integer \(b^{\prime}\). We show that \(b=b^{\prime}\) by inducting on \(t\) to show that \(z^{t}(2^{a}\cdot 5^{b})=2^{a^{\prime}}\cdot 3\cdot 5^{b},a^{\prime}\in \mathbb{Z}^{+}\), for all \(a>t,a>2\). When \(t=1\), \[z(2^{a}\cdot 5^{b}) = \operatorname{lcm}\left(z(2^{a}),z(5^{b})\right) \tag{4.8}\] \[= \operatorname{lcm}\left(2^{a-2}\cdot 3,5^{b}\right)\] \[= 2^{a-2}\cdot 3\cdot 5^{b}.\] Now suppose that \(z^{t}(2^{a}\cdot 5^{b})=2^{a^{\prime}}\cdot 3\cdot 5^{b}\) for some positive integer \(a^{\prime}\). Then \[z^{t+1}(2^{a}\cdot 5^{b}) = z\left(z^{t}(2^{a}\cdot 5^{b})\right) \tag{4.9}\] \[= z\left(2^{a^{\prime}}\cdot 3\cdot 5^{b}\right)\] \[= \operatorname{lcm}\left(z(2^{a^{\prime}}),z(3),z(5^{b})\right).\] If \(a^{\prime}\leq 3\), then \(\operatorname{lcm}\left(z(2^{a^{\prime}}),z(3),z(5^{b})\right)=2^{2}\cdot 3 \cdot 5^{b}\). If \(a^{\prime}>3\), then \[\operatorname{lcm}\left(z(2^{a^{\prime}}),z(3),z(5^{b})\right) = \operatorname{lcm}\left(2^{a^{\prime}-2}\cdot 3,4,5^{b}\right) \tag{4.10}\] \[= 2^{a^{\prime}-2}\cdot 3\cdot 5^{b}.\] A straightforward calculation shows that \(2\cdot 5^{b}\) and \(2^{2}\cdot 5^{b}\) iterate to the fixed point \(12\cdot 5^{b}\) (see Appendix 1 for a proof). Therefore \(2^{a}\cdot 5^{b}\) iterates to the fixed point \(12\cdot 5^{b}\) for all \(a\in\mathbb{Z}^{+}\). ## 5. All integers have finite fixed point order We now prove that when \(a,b\) are relatively prime, \(z^{k}(ab)=\operatorname{lcm}(z^{k}(a),z^{k}(b))\). We will use this in the proof of Theorem 1.5. **Lemma 5.1**.: _Let \(n=ab\) where \(\gcd(a,b)=1\). Then \(z^{k}(n)=\operatorname{lcm}(z^{k}(a),z^{k}(b))\)._ Proof.: We first consider the case where \(n\) has only one prime in its prime factorization. Without loss of generality, suppose \(a=1\) and \(b=n\) and \(z^{k}(n)=\operatorname{lcm}(1,z^{k}(n))\). If \(n=1\), then \(a=b=1\) and \(z^{k}\left(1\right)=1=\operatorname{lcm}\left(z^{k}(1),z^{k}(1)\right)\). Next consider when \(n\) has at least two distinct primes in its prime factorization. 
Let the prime factorization of \(n\) be \(n=p_{1}^{e_{1}}\cdots p_{m}^{e_{m}}\), and let \(a=p_{1}^{e_{1}}\cdots p_{r}^{e_{r}}\), \(b=p_{r+1}^{e_{r+1}}\cdots p_{m}^{e_{m}}\) where \(1\leq r<m\). Note that the primes are not necessarily in increasing order. We proceed by induction. In the base case \(k=1\), using Lemma 2.3 we have: \[z(n) = \operatorname{lcm}\left(z\left(p_{1}^{e_{1}}\right),\ldots,z(p_{r}^{e_{r}}),z(p_{r+1}^{e_{r+1}}),\ldots,z(p_{m}^{e_{m}})\right) \tag{5.1}\] \[= \operatorname{lcm}\left(\operatorname{lcm}\left(z(p_{1}^{e_{1}}),\ldots,z(p_{r}^{e_{r}})\right),\operatorname{lcm}\left(z\left(p_{r+1}^{e_{r+1}}\right),\ldots,z(p_{m}^{e_{m}})\right)\right)\] \[= \operatorname{lcm}\left(z(p_{1}^{e_{1}}\cdots p_{r}^{e_{r}}),z(p_{r+1}^{e_{r+1}}\cdots p_{m}^{e_{m}})\right)\] \[= \operatorname{lcm}\left(z(a),z(b)\right).\] For the inductive step, assume that for some \(k\geq 1\), \(z^{k}(n)=\operatorname{lcm}\left(z^{k}(a),z^{k}(b)\right)\). We show that \(z^{k+1}(n)=\operatorname{lcm}\left(z^{k+1}(a),z^{k+1}(b)\right)\). We have \[z^{k+1}(n) =\ z\left(z^{k}(n)\right)\] \[=\ z\left(\operatorname{lcm}(z^{k}(a),z^{k}(b))\right)\] \[=\ \operatorname{lcm}\left(z(z^{k}(a)),z(z^{k}(b))\right)\qquad\text{by Lemma 2.4}\] \[=\ \operatorname{lcm}\left(z^{k+1}(a),z^{k+1}(b)\right),\] which completes the induction.

## Appendix

1. We first prove that \(2^{2}\cdot 5^{b}\) iterates to the fixed point \(12\cdot 5^{b}\). Observe: \[z^{2}\left(4\cdot 5^{b}\right) = z\left(z\left(4\cdot 5^{b}\right)\right)\] (6.1) \[= z\left(\operatorname{lcm}\left(z(4),z(5^{b})\right)\right)\] \[= z\left(\operatorname{lcm}\left(6,5^{b}\right)\right)\] \[= z\left(6\cdot 5^{b}\right)\] \[= \operatorname{lcm}\left(z(2),z(3),z(5^{b})\right)\] \[= \operatorname{lcm}\left(3,4,5^{b}\right)\] \[= 12\cdot 5^{b}.\]

2. Next we prove that \(2\cdot 5^{b}\) iterates to the fixed point \(12\cdot 5^{b}\). Observe: \[z^{4}(2\cdot 5^{b}) = z^{3}\left(z(2\cdot 5^{b})\right)\] (6.2) \[= z^{3}\left(\operatorname{lcm}\left(z(2),z(5^{b})\right)\right)\] \[= z^{3}\left(3\cdot 5^{b}\right)\] \[= z^{2}\left(z\left(3\cdot 5^{b}\right)\right)\] \[= z^{2}\left(\operatorname{lcm}\left(z(3),z(5^{b})\right)\right)\] \[= z^{2}\left(\operatorname{lcm}\left(4,5^{b}\right)\right)\] \[= z^{2}\left(4\cdot 5^{b}\right)\] \[= 12\cdot 5^{b},\] where the last equality is (6.1).
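As a numerical sanity check of the results above, the short Python sketch below (illustrative only: \(z(n)\) is computed by a brute-force scan of the Fibonacci sequence modulo \(n\), which is adequate for small \(n\) but not efficient, and the helper names are ours) iterates \(z\) to a fixed point, recovers the smallest \(n\) with a given fixed point order as in Table 2, and verifies the two appendix computations.

```python
def z(n):
    """Smallest positive m with n | F_m (Fibonacci entry point), by brute force."""
    if n == 1:
        return 1
    a, b, m = 0, 1, 0
    while True:
        a, b = b, (a + b) % n
        m += 1
        if a == 0:            # F_m is divisible by n
            return m

def fixed_point_order(n):
    """Smallest k >= 1 such that z^k(n) is a fixed point (the convention of Table 2)."""
    k, m = 1, z(n)
    while z(m) != m:
        m = z(m)
        k += 1
    return k, m               # (number of iterations, fixed point reached)

# Fixed point orders and fixed points for n = 1..12 (compare with Table 1).
print([fixed_point_order(n) for n in range(1, 13)])

# Smallest n with fixed point order k, for k = 1..6 (compare with Table 2);
# larger k are reachable too, but the brute-force z(n) becomes slow.
for k in range(1, 7):
    n = 1
    while fixed_point_order(n)[0] != k:
        n += 1
    print(k, n)

# Appendix check: 2*5^b and 4*5^b both iterate to the fixed point 12*5^b.
for b in range(1, 4):
    assert fixed_point_order(2 * 5**b)[1] == 12 * 5**b
    assert fixed_point_order(4 * 5**b)[1] == 12 * 5**b
```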
2309.11479
Did atmospheric thermal tides cause a daylength locking in the Precambrian? A review on recent results
After the initial suggestion by Zahnle and Walker (1987) that the torque accelerating the spin rate of the Earth and produced by the heating of the atmosphere by the Sun could counteract the braking luni-solar gravitational torque in the Precambrian, several authors have recently revisited this hypothesis. In these studies, it is argued that the geological evidence of the past spin state of the Earth supports this atmospheric tidal locking of the length of the day (LOD). In the present review of the recent literature, we show that the drawn conclusions depend crucially on the consideration of the stromatolite geological LOD estimates obtained by Pannella at 1.88 and 2.0 Ga, which are subject to large uncertainties. When only the most robust cyclostratigraphic estimates of the LOD are retained, the LOD locking hypothesis is not supported. Moreover, the consideration of the published General Circulation Model numerical simulations and of new analytical models for the thermal atmospheric tides suggests that the atmospheric tidal resonance, which is the crucial ingredient for the LOD locking in the Precambrian, was never of sufficiently large amplitude to allow for this tidal LOD lock.
Jacques Laskar, Mohammad Farhat, Margriet L. Lantink, Pierre Auclair-Desrotour, Gwenaël Boué, Matthias Sinnesael
2023-09-20T17:25:08Z
http://arxiv.org/abs/2309.11479v2
# Did atmospheric thermal tides cause a daylength locking in the Precambrian? ###### Abstract After the initial suggestion by Zahnle and Walker (1987) that the torque accelerating the spin rate of the Earth and produced by the heating of the atmosphere by the Sun could counteract the braking lunir-solar gravitational torque in the Precambrian, several authors have recently revisited this hypothesis. In these studies, it is argued that the geological evidences of the past spin state of the Earth play in favor of this atmospheric tidal locking of the length of the day (LOD). In the present review of the recent literature, we show that the drawn conclusions depend crucially on the consideration of the stromatolite geological LOD estimates obtained by Pannella at 1.88 and 2.0 Ga, which are subject to large uncertainties. When only the most robust cyclostatigraphic estimates of the LOD are retained, the LOD locking hypothesis is not supported. Moreover, the consideration of the published General Circulation Model numerical simulations and of new analytical models for the thermal atmospheric tides suggest that the atmospheric tidal resonance, which is the crucial ingredient for the LOD locking in the Precambrian, was never of sufficiently large amplitude to allow for this tidal LOD lock. M 25th Sept, 2023 * 1 Introduction * 1.1 Atmospheric thermal tides * 1.2 A possible lock of the length of the day * 2 Geological archives for Precambrian LOD estimates * 2.1 Stromatolites * 2.2 Tidal rhythmites * 2.3 Cycloststratigraphy * 3 Discussion of the recently published results * 3.1 Mitchell and Kirscher (2023) * 3.2 Wu et al (2023a) * 3.2.1 The modeled gravitational tides: artificial resonances? * 3.2.2 Atmospheric thermal tides: model limitations * 3.2.3 The asymmetry of the Lamb resonance * 3.2.4 The temperature problem * 3.3 Bao et al (2022) * 3.4 Farhat et al (2023) * 4 Conclusions ## 1 Introduction Since the work of Georges Darwin, it is known that the body tides exerted by the Sun and Moon on Earth slow down the spin of the Earth and make the Moon recede away (Fig.1) (Darwin, 1879; MacDonald, 1964; Goldreich, 1966; Kaula, 1964; Mignard, 1979, 1980; Hut, 1981; Touma and Wisdom, 1994; Neron De Surgy and Laskar, 1997). More elaborate tidal models take into account the oceanic tides which also slow down the rotation of the Earth and let the Moon go away (Webb, 1982; Green et al, 2017; Tyler, 2021; Daher et al, 2021), but none of these tidal models could fit both the present tidal recession of the Moon of \(3.83\pm 0.008\) cm/yr (Williams and Boggs, 2016) and the age of the Moon of \(4.425\pm 0.025\) Ga (Maurice et al, 2020). Elaborated along the lines of (Webb, 1982), the recent semi-analytical model of (Farhat et al, 2022) provides a coherent scenario for the Earth-Moon tidal evolution, with an excellent fit to the present recession rate and the age of the Moon. It starts with a global ocean in the ancient eons, and then switches to a hemispheric ocean model similar to (Webb, 1982), but which follows the continental evolution in the most recent times. Although no geological data was used for the elaboration of the model, it is independently in good agreement with geological estimates of the past Earth-Moon distance, and in particular of the geological constraints obtained by cycloststratigraphic methods (Fig.2). 
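The difficulty mentioned above with fitting both the present recession rate and the age of the Moon can be quantified with a classical back-of-the-envelope estimate: in a constant-lag Darwin-type model, \(da/dt\propto a^{-11/2}\), and calibrating the proportionality constant on the present recession rate and integrating backward collapses the lunar orbit only about 1.5 Gyr ago. The Python sketch below reproduces this estimate; it assumes the present values \(a\simeq 3.844\times 10^{8}\) m and \(da/dt\simeq 3.83\) cm/yr and, by construction, a time-independent dissipation, which is precisely the assumption that the models discussed here relax.

```python
# Classical "lunar timescale problem": extrapolate the lunar orbit backward
# with a constant-lag (Darwin-type) law, da/dt = K * a**(-11/2), calibrated
# on today's recession rate.  Illustrative only: real (ocean) tidal
# dissipation varies strongly with time, which is the point of the text.
SECONDS_PER_YEAR = 3.156e7
a_now = 3.844e8                            # present Earth-Moon distance [m]
dadt_now = 3.83e-2 / SECONDS_PER_YEAR      # present recession rate [m/s]

# Integrating a^(11/2) da = K dt from a ~ 0 up to a_now gives
#   T = (2/13) * a_now^(13/2) / K = (2/13) * a_now / (da/dt)_now
T_collapse = (2.0 / 13.0) * a_now / dadt_now          # [s]
print(round(T_collapse / (SECONDS_PER_YEAR * 1e9), 2), "Gyr")   # ~1.54 Gyr << 4.425 Gyr
```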
Nevertheless, following the original suggestion of (Zahnle and Walker, 1987), two recent papers propose that the spin of the Earth was trapped in a resonance between the thermal atmospheric torque and the solid and oceanic torque during the Precambrian Eon (Mitchell and Kirscher, 2023; Wu et al, 2023a). Both studies advocate that this locking of Precambrian daylength is supported by geological observations. However, such a scenario is incompatible with the Earth-Moon evolution presented in (Farhat et al, 2022). In this review, which is aimed to the stratigraphic community, we will explicit the differences in the modelling approaches and discuss the use of geological data that can be compared with the models. ### Atmospheric thermal tides Atmospheric thermal tides have been recognized since the XVIII th century (see Wilkes, 1949). Due to the heating of the atmosphere by the Sun, the atmosphere locally expands and the pressure decreases at the subsolar point, which induces a redistribution of the mass of the atmosphere, with two main components: a diurnal component, which at equilibrium is opposite to the subsolar point, and a semi-diurnal component, orthogonal to the Sun direction (Fig.3). As for the solid tides, as the Earth spin rotation is faster than its orbital motion around the Sun, the Earth rotation drags these atmospheric bulges with a Figure 1: In Darwin model, the Moon produces a tidal bulge on the Earth, but as the Earth is not totally elastic, this bulge is driven by the fast rotation of the Earth slightly off from the Moon direction. From this results a braking torque that slows down the spin of the Earth. By conservation of the angular momentum in the Earth-Moon system, the Moon goes away. positive offset from their equilibrium position. At present, the gravitational attraction of the Sun on these bulges induces an accelerating torque, opposite to the solid and ocean tides (Chapman and Lindzen, 1970; Goldreich and Soter, 1966; Gold and Soter, 1969; Ingersoll and Dobrovolskis, 1978; Dobrovolskis and Ingersoll, 1980; Correia and Laskar, 2001, 2003; Correia et al, 2003; Leconte et al, 2015; Auclair-Desrotour et al, 2017, 2019). ### A possible lock of the length of the day At present, the thermal atmospheric tidal torque is a small part (\(\sim 6.4\) %) of the solid and oceanic tidal friction (Volland, 1990; Farhat et al, 2023), but there are two elements that can change this ratio, in favor of the atmospheric thermal tides. First, the atmospheric torque amplitude is dependent on the spin rate of the Earth, and as for the oceanic tides, there exists a known resonance of the planetary Lamb wave (Bretherton, 1969), that we name here the Lamb resonance (Farhat et al, 2023), occurring for a faster spin value of the Earth, where the atmospheric torque is largely increased (Lindzen and Blake, 1972; Zahnle and Walker, 1987; Bartlett and Stevenson, 2016). In addition, the oceanic tidal friction is at present close to a resonance, but its value was smaller in the past, for a faster rotation spin value (e.g. Farhat et al, 2022). These elements led Zahnle and Walker (1987) to propose that at some time in the Precambrian, the Figure 4: LOD locking due to thermal atmospheric tides resonance for the scenarios of (Zahnle and Walker, 1987) (in red) and (Bartlett and Stevenson, 2016) (in grey). The geological indicators that are plotted are limited to pre-2016 published results, adapted from (Laskar, 2020) and (Williams, 2000) (see references therein). 
The dotted red line is the LOD provided by equation 41 of (Laskar et al, 2004) with a simple Darwin tidal model. The dotted black line is an empirical fit using a simplified tidal model adjusted to the geological data (Walker and Zahnle, 1986). The stromatolite data points at 1.88 and 2.0 Ga are from (Pannella, 1972a,b). Figure 3: Thermal atmospheric tides. Due to the heating of the Sun at the subsolar point, a redistribution of the atmosphere creates two main components: a diurnal component (in light blue), and a semi diurnal components (in blue) that are offset from their equilibrium position because of the Earth’s fast rotation. They create an accelerating torque on the Earth’s spin motion. Figure 2: Evolution of the length of day (LOD) in the past following the model of (Farhat et al, 2022). The nominal LOD values are in purple, with the uncertainty provided by the blue lines (adapted from fig.5 of (Farhat et al, 2022)). The circles represent the available cyclostratigraphic data with their uncertainty, when available, from various sources: references within (Farhat et al, 2022) (in blue), (Zhou et al, 2022) (in red), and (Zeeden et al, 2023) (in orange). Tidal rhythmites values are represented by yellow squares, with references from (Farhat et al, 2022) with the addition of a data point at 3.2 Ga from the Moodies group (Eulenfeld and Heubeck, 2023). accelerating thermotidal torque couter-acted the braking luni-solar gravitational tidal torque, which led to a lock of the length of the day (LOD) at about 21h for an extended period of more than 1 Ga. This assumption was probably motivated by the difficulty to fit the existing geological indicators of the past Earth-Moon distance and LOD with more simple models (e.g. Williams, 2000; Hinnov, 2018; Laskar, 2020). This hypothesis was recently revisited by Bartlett and Stevenson (2016). Both in (Zahnle and Walker, 1987) and (Bartlett and Stevenson, 2016), a large part of the observational constraints was provided from the Precambrian estimates of the LOD resulting from stromatolites deposit analysis (Pannella, 1972a,b) (Fig.4). ## 2 Geological archives for Precambrian LOD estimates In his review on geological archives for LOD estimates, Williams (2000) mentioned several varieties of biocarchives (bivalves, corals, brachiopods) which we will not consider here, as they were not present in the Precambrian. For these, one can also consult (Rosenberg et al, 1975) or (Lambeck, 1980). We will concentrate here on the stromatolites (2.1), the tidal rhythmites (2.2), and the promising cyclost stratigraphic records (2.3). ### Stromatolites The model of (Bartlett and Stevenson, 2016) relies heavily on the adjustment to stromatolite data (Pannella, 1972a,b), although the authors themselves warn the reader, and we can quote from (Bartlett and Stevenson, 2016): _However, these data, particularly the early stromatolite data (Pannella, 1972), should not be taken too seriously. (Zahnle andWalker, 1987) Paleontologists Scrutton (1978) and Hofmann (1973) also found these data to be unreliable and unsuitable for precise quantitative analysis._ These data have also been used in a crucial manner in the recent studies (Mitchell and Kirscher, 2023; Wu et al, 2023a; Bao et al, 2022). Stromatolites are layered organo-sedimentary structures formed by the capture of sediments by microbial organisms, typically formed in shallow marine and lacustrine environments. 
They are some of the oldest known forms of life, and are an important archive for studying Precambrian paleoenvironments. Studies of recent analogues have shown that daily rhythms of biological growth and binding of sediment can be preserved in stromatolites layering, as the microorganisms (algae) respond actively to daylight (e.g. Logan et al, 1964; Monty, 1967; Davies, 1970; Gebelein, 1969). In addition, environmental fluctuations influence rates of growth and the supply of material, i.e., layering thickness, including tidal and seasonal variations (Gebelein and Hoffman, 1968; Pannella, 1976, 1972b). As such, investigations have been made of diurnal growth layers interpreted from fossil stromatolites that can be structured in larger tidal and seasonal banding; these observations have in turn been used to estimate past LOD and the length of the lunar month. The analyses of Precambrian stromatolites by Pannella (1972a,b) are a well-known example of such a study, and, as discussed are often used in long-term reconstructions of the past Earth-Moon system (e.g. Bartlett and Stevenson, 2016; Bao et al, 2022; Mitchell and Kirscher, 2023; Wu et al, 2023a). The three oldest, most critical LOD data that are often referenced are based on stromatolite sequences from the ca. 1.88-Ga Gunflint Formation (Fralick et al, 2002), the correlative Biwabik Formation (Lake Superior region), and the Paleoproterozoic Great Slave Supergroup (Northwestern Territories); the exact stratigraphic origin and thus age of the latter sample is unknown, but is usually placed around 2 Ga, following the compilation figure of Williams (2000). However, as emphasized in many studies (e.g. Hofmann, 1973; Scrutton, 1978; Lambeck, 1980), and by Pannella himself in the original publication, these stromatolite-based estimates are rarely, if ever, suitable for precise quantitative interpretation. The formation of daily laminae depends on a fine environmental balance that can easily be disturbed, and therefore stromatolite growth patterns are rarely considered to be complete due to periods of non-deposition and/or post-depositional erosion by storms, for example. For this reason, counts or sequences of laminae should generally be interpreted as minimum estimates, and not as most likely values (Pannella, 1972a,b; Lambeck, 1980). Another source of uncertainty arises from ambiguities associated with determining the exact number of daily laminae per lunar monthly or annual bundle, which is often characterized by a low reproductibility rate, and further challenged by a lack of independent temporal controls. This is a challenge that is, however, not unique to the stromatolite archive (tidal rhythmites, for instance, feature similar challenges). Concerning the stromatolite specimens studied by Pannella (1972a,b), we specifically note that his counts of the number of diurnal laminae per larger seasonal growth band yield a significantly lower interpreted number of days per year than the observed number of daily increments between lesser growth marks that are believed to indicate the synodic month, established from the same sequences. In addition, Mohr (1975) arrived at a very different number and conflicting interpretation for time-equivalent specimens. In conclusion, while stromatolites can provide useful insights in past tidal dynamics, they should typically not be considered as providing accurate or precise numerical values for past LOD reconstructions. 
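For orientation, the arithmetic behind such estimates is simple: tidal friction redistributes angular momentum within the Earth-Moon system but leaves the duration of the year essentially unchanged, so a count of \(N\) solar days per year converts into a mean LOD of roughly \(8766/N\) hours. The snippet below illustrates the conversion for a few hypothetical counts (placeholders, not Pannella's actual values); the real difficulty, as discussed above, lies in establishing a reliable and complete count in the first place.

```python
# Convert a (hypothetical) count of solar days per year into a length of day,
# assuming the duration of the year itself has remained essentially constant.
HOURS_PER_YEAR = 365.25 * 24.0            # ~8766 h

def lod_from_days_per_year(n_days):
    return HOURS_PER_YEAR / n_days        # mean length of the solar day [h]

for n_days in (370, 400, 450):            # placeholder counts, for illustration only
    print(n_days, "days/yr ->", round(lod_from_days_per_year(n_days), 1), "h")
# 370 -> 23.7 h, 400 -> 21.9 h, 450 -> 19.5 h
```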
### Tidal rhythmites Tidal rhythmites are laminae deposits related to semi-diurnal or diurnal tidal cycles that can occur in estuaries or deltas. Silty, and muddy sediment is carried by ebb tidal currents. These currents transport the sediment in suspension through the main ebb channel to deeper offshore water, where it settles and forms graded layers. During slack water between tides, muddy caps can be deposited on the sandy laminae. The amount of sediment carried by ebb tidal currents and the effectiveness of tides in transporting and depositing sediment are directly related to the tidal amplitude (Williams, 2000). In an ideal scenario, the analysis of the time series of the thickness of these laminae should allow to recover all tidal periodicities: * The lunar day, interval between two passage of the Moon at the meridian. * The lunar synodic month, which separates two full Moons, and is thus recovered by the recognition of spring tides of large amplitude, occurring at syzygy, when the Moon and the Earth have same longitude, when seen from the Sun. * The tropical year, separating two passages of the Earth at the spring equinox (also called vernal equinox), possibly determined as the time of maximal tidal amplitude in the year. * Finally, if the record is sufficiently long, the nodal period of the Moon (18.6 yr at present) could be recognized (Walker and Zahnle, 1986). Although very promising, this method has its drawbacks. The locations where high-quality sequences of such laminae formations can be observed are rare. Moreover, they often led to divergent analyses when the deposits are analysed by different groups. The Weeli Wolli Formation in Western Australia, dated 2.45 Ga, was interpreted by (Walker and Zahnle, 1986) on the basis of the lunar nodal cycle and led to an Earth-Moon distance of \(51.9\pm 3.3\) Earth radius (\(R_{E}\)), while the analysis of (Williams, 1989, 1990) led to the much larger value of \(54.6\pm 1.8\)\(R_{E}\), with the analysis of laminae couplets, grouped in synodic fortnightly increments, and annual cycles. Although the results of these two studies are still consistent within uncertainty (given the very large error bars), these estimates are based on two fundamentally different and mutually incompatible interpretations of the same layering patterns. In the same way, the analysis of the Elatina Formation in South Australia, dated 620 Ma, led Williams (1989, 1990) to determine an Earth-Moon distance of \(58.16\pm 0.30R_{E}\) while Sonett and Chan (1998) derived \(56.04\pm 0.03R_{E}\) from their analysis of the same sequence. Sonett and Chan (1998) also re-analyzed their previous determination of the Earth-Moon distance for the Big Cottonwood Formation in Utah, at 900 Ma, and found a value corresponding to an Earth-Moon distance of \(55.06\pm 1.44R_{E}\), while their previous determination of the same sequence was \(57.1R_{E}\)(Sonett et al, 1996). We note that the new determinations from (Sonett and Chan, 1998) are in agreement with the new model of (Farhat et al, 2022) (Fig.2). However, given the difficulties associated with interpreting Earth-Moon parameters from these tidal sequences, additional independent studies should be required to further verify the determination of (Sonett and Chan, 1998). 
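The Earth-Moon distances quoted in this section can likewise be translated into approximate LOD values through conservation of the angular momentum of the Earth-Moon system, which is the standard first-order link between \(a_{M}\) and the Earth's spin rate. The Python sketch below is indicative only: it neglects the solar tidal torque (which slowly removes angular momentum from the system), the lunar eccentricity and inclination, and any change of the Earth's moment of inertia, and the numerical constants are rounded.

```python
import math

# Rough conversion: Earth-Moon distance -> length of day, by conserving the
# angular momentum of the (Earth spin + lunar orbit) system.
G = 6.674e-11
M_E, M_M = 5.972e24, 7.346e22            # Earth and Moon masses [kg]
R_E = 6.371e6                            # Earth radius [m]
C = 0.3307 * M_E * R_E**2                # Earth's polar moment of inertia
mu = M_E * M_M / (M_E + M_M)             # reduced mass
YEAR = 3.156e7                           # length of the year [s], taken fixed

def L_orbit(a):                          # lunar orbital angular momentum
    return mu * math.sqrt(G * (M_E + M_M) * a)

# Total angular momentum fixed to its present value.
a_now = 60.3 * R_E
omega_now = 2.0 * math.pi / 86164.0      # present sidereal spin rate [rad/s]
L_tot = C * omega_now + L_orbit(a_now)

def lod_hours(a_in_earth_radii):
    omega = (L_tot - L_orbit(a_in_earth_radii * R_E)) / C   # past spin rate
    p_sid = 2.0 * math.pi / omega                           # sidereal day [s]
    return p_sid / (1.0 - p_sid / YEAR) / 3600.0            # solar day [h]

for a in (58.16, 56.04, 51.9):           # rhythmite-based distances quoted above
    print(a, "R_E ->", round(lod_hours(a), 1), "h")         # e.g. 58.16 R_E -> ~22 h
```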
### Cycloststratigraphy Due to the gravitational interactions of the Earth with the other planets, the orbital plane of the Earth is moving in a complex motion composed of a slow rotation (precession of the node) and a composition of nearly periodic motions that make the inclination of the Earth orbital plane oscillate (e.g. Laskar, 2020). This induces as well an oscillation of the tilt of the Earth (or obliquity, \(\varepsilon\)) of present amplitude 1.3 degrees around its averaged value. The variation of insolation on the surface of the Earth depends also on the precession of perihelion, and of the variation of eccentricity, which are dominated by the so-called long eccentricity term of 405 kyr period and the short eccentricities of main periods 95 kyr and 124 kyr. Finally, the pull of the Moon and Sun on the equatorial bulge of the Earth induces a slow precessional motion of its spin axis at \(50.475838\) arcsec/yr that corresponds to a period of about 26kyr (Laskar, 2020). Neglecting the eccentricity of the Earth and Lunar orbit, and the inclination of the Moon, the lunisolar precession can be expressed as \(\alpha\cos\varepsilon\), where \(\varepsilon\) is the obliquity and where the precession constant \(\alpha\) is (eq.4.14 from (Laskar, 2020)) \[\alpha=\frac{3}{2}G\left[\frac{m_{\odot}}{a_{\odot}^{3}}+\frac{m_{M}}{a_{M}^{3 }}\right]\frac{E_{l0}}{\gamma_{0}^{2}}\gamma \tag{1}\] where G is the gravitational constant, \(\odot\) refers to the Sun, and \(M\) to the Moon, \(m\) and \(a\) are the masses and semi-major axes, \(\gamma\) the Earth's spin angular velocity, \(\gamma_{0}\) its present value, and \(E_{l0}\) the dynamical ellipticity at present. As \(a_{\odot}\) can be considered as constant, the precession constant \(\alpha\) thus depends mostly on the evolution of \(a_{M}\) and \(\gamma\) which evolve in time under tidal dissipation (highlighted in red in equation (1)). The resulting changes in insolation drive climatic changes on Earth (astronomical climate forcing) that can be recorded in the Earth's sedimentary archive. These sediments can today be studied (e.g. Gradstein et al, 2004, 2012, 2020; Montenari, 2018) and inform us on past astronomical changes. Over very long timescales, beyond 60 Ma, the planetary orbital motions can no longer be predicted with accuracy (Laskar et al, 2011b, a), but for the Earth-Moon evolution, the tidal dissipation will dominate, and a reconstruction of the past evolution of the Earth-Moon distance can still be achieved. The variation of \(a_{M}\) and \(\gamma\) will induce a change in the precession period that can be imprinted in the sedimentary record (Berger et al, 1992; Meyers and Malinverno, 2018). In a reverse way, the determination of the precession frequency from the sedimentary record, and the use of a dynamical model that will link the semi-major axis \(a_{M}\) to the angular spin velocity of the Earth \(\gamma\) can allow to retrieve both \(a_{M}\) and \(\gamma\). This determination requires a time scale for the sedimentary record, which can be provided either by absolute radiometric age dating, but also, and more often by the use of the 405 kyr eccentricity period as a metronome for stratigraphic cycles (see (Laskar, 2020) and references therein). In recent years, this technique for determining the past state of the Earth-Moon system has made large progress, and many groups have obtained converging results using various methods for the determination of the precession frequency (e.g. 
Meyers and Malinverno, 2018; Lantink et al, 2022; Zeeden et al, 2023). Moreover, these data are in good agreement with the tidal model of (Farhat et al, 2022) (see fig.6 of (Farhat et al, 2022) and fig.5 of (Zeeden et al, 2023)). Another crucial advantage of cyclost stratigraphy, relative to, for example, stromatolites and tidal rhythmites, is the potential of independent age control by, for example, radioistotopic geochronology and integrated stratigraphic approaches. Using an integrated stratigraphic approach it is also possible to verify interpretations in time-equivalent sections that should have the same time-dependent astronomical signatures (e.g. Olsen et al, 2019; Sinnesael et al, 2019). The coherence of these sets of data leads us to consider that they are the most robust among the geological proxies for the determination of the past precession frequency of the Earth and determination of Earth-Moon system parameters. ## 3 Discussion of the recently published results The solution of (Farhat et al, 2022) is in agreement with the most recent determinations of tidal rhythmites (Sonett and Chan, 1998) and with the recent cycloststratigraphic data (Meyers and Malinverno, 2018; Lantink et al, 2022; Zhou et al, 2022; Zeeden et al, 2023) (Fig.2). One could thus think that the fate of the thermal tides locking hypothesis was settled. But the recent publication of the two papers (Mitchell and Kirscher, 2023; Wu et al, 2023) in major journals requires some additional discussion to clarify the situation. ### Mitchell and Kirscher (2023) In their compilation of geological constraints of the Preamphrian length of the day, Mitchell and Kirscher (2023) have included most of the available data. Their analysis is purely empirical. They search for the best linear fit, made by pieces over sequences of data. One could wonder on the status of their fit, which is not continuous, as a piecewise linear model would be. The goal is thus not to find an empirical model for the Earth-Moon evolution, but to search for the best fitted trends in the LOD over extended periods. From these fits, they conclude to a probable lock of the LOD between 1 Ga and 2 Ga. When comparing to the results of (Farhat et al, 2022) (Fig.5), one can observe that the stromatolites data at 1.88 Ga and 2 Ga from (Pannella, 1972b, a) are essential for the conclusions of (Mitchell and Kirscher, 2023). If these data, which are questionable as we discuss in section 2.1, are not taken into account, the fit will no longer lead to this locked value of the LOD between 1 and 2 Ga. It is also puzzling that the cycloststratigraphic point of (Grotzinger, 1986) is nearly exactly on the (Farhat et al, 2022) curve (Fig.5). It should be noted, however, that the datum point of (Grotzinger, 1986) was not originally given in that paper but was derived by Mitchell and Kirscher (2023). Grotzinger (1986) only proposed that there is eustatic sea-level cyclicity within the Milankovitch frequency band recorded in platform carbonates from the Rocknest Formation, at a scale of \(1-15\) m and possibly of \(75-100\) m or \(75-200\) m. (Mitchell and Kirscher, 2023) then assumed that a 10 m cycle represents climatic precession and 87.5 m is related to short eccentricity. However, this interpretation is poorly constrained. 
Due to the large uncertainty of this data point, we should consider this as a simple coincidence, until some new quantitative analysis in the spirit of (Mevers and Malinverno, 2018) is performed on the same 1.89 Ga Rocknest Formation sample (Mitchell and Kirscher, 2023). Figure 5: Comparison of the data and fit of (Mitchell and Kirscher, 2023) (black and red lines) with the model of (Farhat et al, 2022) (purple curve with uncertainty in blue). As in Fig.2, tidal rhythmites are yellow squares, and cycloststratigraphic data are color circles (light blue with references in (Farhat et al, 2022), red from (Zhou et al, 2022) and orange from (Zeeden et al, 2023)). The stromatolite data from (Pannella, 1972a,b) are highlighted with a dark green circle while the cycloststratigraphic data from (Grotzinger, 1986) is circled in light green. Adapted from Fig.2 of (Mitchell and Kirscher, 2023). ### Wu et al (2023a) In the recent work of Wu et al (2023a), the authors presented a new analytical model of thermal tides to address the resonant locking hypothesis. The model's free parameters were constrained such that the resulting thermotidal torque drives an LOD history that best fits their compilation of LOD geological proxies. As previously, in figure 6, we compare (Wu et al, 2023a) (black curve) with (Farhat et al, 2022) (purple curve). Here again, one can see that the model of (Wu et al, 2023a) relies heavily on the stromatolite data of (Pannella, 1972a,b) to establish the resonance locking of the LOD1. Moreover, (Wu et al, 2023a) curve misses entirely the new cycloststratigraphic determinations of the Earth-Moon state at 2.46 Ga obtained by Lantink et al (2022) in the Joffre Gorge, Australia. Footnote 1: Note that these data points from Pannella (Pannella, 1972a,b) occur at a different age position in the figures \(1-3\) of (Wu et al, 2023a), compare to previous LOD compilations (e.g. Williams, 2000; Bartlett and Stevenson, 2016; Mitchell and Kirscher, 2023). In particular, we note that the two closely spaced points at ca. 1.63 Ga most likely represent Pannella’s analyses of the time-correlative Gunflint (1972a) and Biwabik (1972b) Formations dated at ca. 1.88 Ga (Fralick et al, 2002). However, used literature data were not provided in (Wu et al, 2023a) to verify this observation. Wu et al (2023a) elaborated a physical model to support their claims. Moreover, the authors performed a suite of GCM (General Circulation Model) numerical simulations, using the LMD-G (Hourdin et al, 2006) and PlaSim (Fraedrich et al, 2005) GCMs, to infer the Earth's Figure 6: Comparison of the data and fit of (Wu et al, 2023a) (in black) with the model of (Farhat et al, 2022) (purple curve with uncertainty in blue). As in Fig.2, tidal rhythmites are yellow squares, and cyclost stratigraphic data are color circles (light blue with references in (Farhat et al, 2022), red from (Zhou et al, 2022) and orange from (Zeeden et al, 2023)). The stromatolite data from (Pannella, 1972a,b) are highlighted with a dark green circle. Note that these data points seem to be misplaced in the (Wu et al, 2023a) figure reproduced here (see note 1). Adapted from Fig.3 of (Wu et al, 2023a). paleo-temperature evolution that is required to generate the constrained history of the thermotidal torque. We dedicate the rest of this section to discuss the details behind the adopted model in (Wu et al, 2023a) and its predictions. #### 3.2.1 The modeled gravitational tides: artificial resonances? 
The dynamical evolution of the Earth's rotational motion in (Wu et al, 2023a) is driven by the luni-solar gravitational tidal torque and the solar thermotidal torque. For the former, the authors used the tidal history of (Webb, 1982), where Laplace's Tidal Equations (the equations describing the tidal response of a shallow fluid layer; LTEs hereafter) were solved semi-analytically over a hemispherical equatorial ocean on the surface of the Earth. While the work of Webb (1982) was seminal in coupling LTEs with the dynamical evolution of the Earth-Moon system, the modeled history of the lunar orbit in (Webb, 1982) yielded a lunar formation epoch that is incompatible with the geologically constrained lunar age (see Fig.3 of (Webb, 1982)). To efficiently remedy the latter discrepancy, Wu et al (2023a) tweak the tidal dissipation history of Webb (1982) (see their Fig. S1) by normalizing it with a constant factor, such that the resultant orbital history of the Moon features its proper temporal origin. As a byproduct of this modeling choice, the authors have modified the spectrum of oceanic normal modes in such a way that tidal resonances are characterized with artificial amplitudes (see Green et al, 2017; Daher et al, 2021; Farhat et al, 2022). Though the authors focus on modeling thermal tides, gravitational tides remain the dominant driver of the Earth's rotational evolution, providing the background of the tidal torque upon which the thermotidal counterpart would significantly contribute only in the vicinity of the Lamb resonance. As such, since the authors are constraining the history of the total torque to fit a compilation of geological LOD proxies, an artificial modeled spectrum of gravitational tidal dissipation may yield an artificial spectrum of thermal tides. Namely, the resultant thermotidal history could be characterized by either an artificial timing of the Lamb resonance occurrence, an artificial amplitude of the Lamb resonance, or both. #### 3.2.2 Atmospheric thermal tides: model limitations For the thermotidal contribution to the Earth's rotational history evolution, Wu et al (2023a) develop a simplified analytic model of thermal tides that is used to compute the thermotidal torque (Eqs. S28-S29 therein). The model is parameterized by a number of free parameters (16 in total) that are constrained such that the resulting thermotidal torque, added to the gravitational tidal torque, would drive an LOD history that fits the compilation of LOD geological proxies (see Figure 6). The developed model essentially resembles a band-pass filter, similar to that developed in Bartlett and Stevenson (2016). It ignores the Coriolis force, which may be significant in the case of a fast rotator like the Earth, along with the vertical velocity of tidal waves. The model also assumes an isothermal structure of the atmosphere. This choice is classical in the literature of atmospheric dynamics as it simplifies the mathematical framework of the rather complex theory (e.g., Chapman and Lindzen, 1970; Lindzen and Blake, 1972; Auclair-Desrotour et al, 2019). However, for the Earth, atmospheric temperature measurements (e.g., Figures 2.1-2.3 of Pierrehumbert, 2010) show that the massive troposphere (\(\sim\)80\(\%\) of atmospheric mass) controlling the tidal mass redistribution is characterized by a negative temperature gradient. The latter is in fact closer to an idealised adiabatic profile than it is to an idealised isothermal profile. 
These modeling choices could deliver inaccuracies in the determination of the resonant period (Farhat et al, 2023). However, this was somewhat compensated by the authors in modeling the resonant period as a free parameter that is constrained by the geological data. The other essential quantity of interest is the amplitude of the thermotidal torque when the resonance is encountered. The latter is dependent on several variables, of which the least constrained in the case of the Earth is the rate of energy dissipation by the atmosphere. Namely, as the atmosphere is heated by the shortwave incident stellar flux and the infrared emission from the ground, it dissipates energy via multiple pathways including radiative cooling and frictional interactions with the surface. As it is difficult to properly model these mechanisms in the analytical theory, energy dissipation is usually modeled by a free parameter (the parameter \(Q_{\rm th}\) in the work of Wu et al, 2023a). This unconstrained parameter predominantly controls the amplitude and the spectral width of the resonant thermotidal torque and, consequently, the lifetime of the Lamb resonance and whether it was sufficient to counteract the gravitational tide. Dissipative radiative transfer and atmospheric cooling, however, can be properly accommodated in GCM simulations. To that end, Wu et al (2023a) presents, to date, the first study that uses GCMs to simulate the Lamb resonance specifically for the Earth. Their results, using the two aforementioned GCMs, estimate the dissipation parameter to be \(Q_{\rm th}\approx 10\), which would render the maximum amplitude of the torque insufficient for the LOD locking. For the LOD evolution, however, the authors used values of \(Q_{\rm th}\) that are one order of magnitude larger (\(Q_{\rm th}\approx 100\)) such that the thermotidal torque would be sufficient to counteract the gravitational counterpart. Consequently, the used thermotidal torque is amplified by a factor of \(\sim\)\(30\) relative to its present value2. The author's reasoning lies in the need for such a large thermotidal torque so that the LOD proxies, specifically the stromatolites in (Pannella, 1972a,b), can be explained. This brings us back to Section 2.1 in questioning the reliability of this data set as a robust constraint for informing dynamical models, especially when present with evidence from GCMs to the contrary. Footnote 2: which means that the amplitude of the surface pressure anomaly is amplified by a factor of \(\sim\)60 as can be inferred from Farhat et al (2023). #### 3.2.3 The asymmetry of the Lamb resonance An interesting signature of the GCM simulations of the Lamb resonance in (Wu et al, 2023a) lies in the spectrum of the thermotidal torque shown in their Figure S4. We reproduce this spectrum in Figure 7. The GCM spectrum, shown by the black dots, features an asymmetry in the peaks of the Lamb resonance whereby the two peaks of the torque around the resonance do not share the same amplitude. Namely, the accelerating part of the torque has an amplitude that is almost half that of the decelerating part. The former part is required to occur with a sufficient amplitude such that it counteracts the decelerating gravitational torque, but it appears from this spectrum to be reduced. The authors, however, ignored this signature present in the GCM simulations in favor of the spectrally symmetric Lamb resonance obtained from their analytical model, which is shown by the black curve in Figure 7. 
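Both threads of this discussion, the overall amplitude of the resonant torque set by \(Q_{\rm th}\) and the reduction of its accelerating peak by the asymmetry just described, feed into a single dynamical question: can the total torque vanish, so that an equilibrium (the LOD lock) exists near the resonance? The toy sketch below illustrates only this logic; the dispersive resonance shape and every number in it are arbitrary stand-ins and do not reproduce the models of Wu et al (2023a) or Farhat et al (2023).

```python
# Toy illustration of the locking criterion.  The resonance shape and all
# numbers are arbitrary stand-ins, NOT the models discussed in the text.
# The only point is the logic: a lock of the spin rate requires the
# accelerating thermotidal peak to exceed, in amplitude, the (roughly
# constant) braking gravitational torque.
LOD_RES = 22.0                                   # toy resonant LOD [h]
T_GRAV = -1.0                                    # braking torque (arbitrary units)

def toy_thermal_torque(lod, peak, q):
    """Antisymmetric toy resonance: extrema of +/- peak, width set by q."""
    x = q * (lod - LOD_RES) / LOD_RES
    return peak * 2.0 * x / (1.0 + x**2)

for peak, q in ((0.3, 10.0), (3.0, 100.0)):      # "weak" vs "strong" resonance
    lods = [18.0 + 0.004 * i for i in range(2001)]            # 18 h .. 26 h
    totals = [T_GRAV + toy_thermal_torque(l, peak, q) for l in lods]
    # The total torque can only vanish (a prerequisite for a lock; stability
    # of that equilibrium is a further condition) if it changes sign somewhere:
    can_lock = any(t > 0.0 for t in totals)
    print(f"peak={peak}, Q={q}: equilibrium possible -> {can_lock}")
```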
In the recent work of Farhat et al (2023), the authors propose that such an asymmetry can be obtained if one accounts for the thermal inertia budget in the ground and the lowermost atmospheric layer. Namely, due to the thermal inertia in these layers, the infrared heating of the atmosphere by the ground becomes asynchronous with the incident solar flux. This delayed ground response is shown to be responsible for maneuvering the atmospheric tidal bulge in such a way that creates an amplitude asymmetry between the two peaks. In Figure 7, we show by the red curve how the model of Farhat et al (2023) can properly explain the spectral asymmetry of the GCM-produced spectrum when taking into account the thermal inertia effects. It is important to note that the reduction of the positive peak of the torque goes hand in hand with the relative contribution of the ground in heating the atmosphere. Namely, the more abundant the greenhouse gases are in the atmosphere, which is predicted for the Precambrian from various geological proxies (see e.g., Catling and Zahnle, 2020), the more the atmosphere would be prone to infrared thermotidal heating, and consequently the more the accelerating thermotidal torque would be reduced. Figure 7: The spectrum of the thermotidal torque as a function of the length of day adapted from Fig.S4 of (Wu et al, 2023a). The black dots are simulated using the PlaSim GCM, while the solid black curve shows the prediction of the analytical model of Wu et al (2023a). The red curve shows a fit to the GCM results using the analytical model of Farhat et al (2023), where the physical effects of the delayed thermal response of the ground were taken into account. These effects proved to induce the notable amplitude asymmetry between the peaks around the Lamb resonance. Namely, the accelerating peak of the tidal torque is reduced in amplitude relative to the decelerating peak. #### 3.2.4 The temperature problem One naturally wonders how the discussed modeling limitations carry over to the model predictions. The resulting timing of the Lamb resonance occurrence requires a mean Earth temperature in the Proterozoic of \(40-55^{\circ}\)C, computed by the authors using the PlaSim GCM (see Figure 7 of Wu et al, 2023). Though a warm climatic interval fits evidence on a Proterozoic glacial gap (e.g., Hoffman et al, 2017), such extreme temperatures are in contrast with geochemical analysis using phosphates (e.g., Blake et al, 2010), geological carbon cycle models (e.g., Sleep and Zahnle, 2001; Krissansen-Totton et al, 2018), numerical results of 3D GCMs (e.g., Charnay et al, 2020), and the fact that solar luminosity was 10-25% lower during the Precambrian (e.g., Gough, 1981). Such extreme temperature estimates would also require elevated amounts of partial pressure of CO\({}_{2}\), reaching 200 mbar. This exceeds inferred estimates from various geochemical proxies (see the review by Catling and Zahnle, 2020, and references therein). More importantly, however, this temperature increase will enhance the asynchronous thermotidal heating of the atmosphere by the ground in the infrared, as we describe in Section 3.2.2. The latter would significantly attenuate the peak of the tidal torque near resonance, rendering it insufficient for the LOD locking. The adopted model in (Wu et al, 2023) did not account for this feedback effect. 
Moreover, atmospheric dissipation is also enhanced with the increased temperature, as discussed in (Farhat et al, 2023), which has an additional effect of attenuating the resonant amplitude of the torque. ### Bao et al (2022) While they do not invoke a thermal tides LOD trapping, Bao et al (2022) try also to reconciliate the LOD history with the (Pannella, 1972, 2) data. This time, they propose that between 2 Ga and 1.5 Ga, a sudden growth of the Earth core led to a reduction of the spin rate of the Earth. In figure 8, we have compared their solution with (Farhat et al, 2022) (purple curve). In this case again, the scenario depends crucially on the stromatolite data of (Pannella, 1972, 2). If these data are removed, there is no longer the necessity to search for some peculiar scenario, and it can be recognized that the model of (Farhat et al, 2022) fits most of the reliable geological data in a satisfactory manner. ### Farhat et al (2023) In their recent work, Farhat et al (2023) have revisited the atmospheric thermal tides computations for rocky planets and in particular for the Earth. They have constructed an ab initio model of thermal tides on rocky planets with a neutrally stratified atmosphere. This feature is a major change with respect to previous models, where closed-form solutions are usually obtained assuming that the atmosphere is isothermal (Lindzen and McKenzie, 1967; Chapman and Lindzen, 1970; Lindzen and Blake, 1972; Auclair-Desrotour et al, 2019; Wu et al, 2023). Although both atmospheric structures provide appreciable mathematical simplifications, neutral stratification appears to better capture the negative temperature gradient that characterises the troposphere of the Earth, which contains most of the atmospheric mass. As the stability of stratification with respect to convection determines the strength of the Archimedean force exerted on fluid particles in the vertical direction, the neutral stratification approximation annihilates the buoyancy effects in the tidal response. The upward-travelling internal gravity waves are thus filtered out from the solution, leaving only the horizontal compressibility forces responsible for the propagation of the Lamb wave. Another major change with respect to previous models (Lindzen and McKenzie, 1967; Chapman and Lindzen, 1970; Lindzen and Blake, 1972; Ingersoll and Dobrovolskis, 1978; Dobrovolskis and Ingersoll, 1980; Auclair-Desrotour et al, 2019; Wu et al, 2023) is the consideration of heat absorption near the ground level and heat exchange between Figure 8: Comparison of the data and fit of (Bao et al, 2022) (in solid blue) with the model of (Farhat et al, 2022) (purple curve with uncertainty in green). As in Fig.2, tidal rhythmites are yellow squares, and cyclost stratigraphic data are color circles (light blue with references in (Farhat et al, 2022), red from (Zhou et al, 2022) and orange from (Zeeden et al, 2023)). The stromatolite data from (Pannella, 1972, 2) are highlighted with a dark green circle. Adapted from Fig.11 of (Bao et al, 2022). the atmosphere and ground that takes into account the thermal diffusive processes in the planetary surface layer. This model allows to obtain a closed-form solution for the frequency-dependent atmospheric tidal torque, and is in agreement with simulations using GCMs, both for Earth-like and Venus-like planets. 
Specifically, when applied to the Earth, their model predicts a resonant rotational period of 22.8 hr, which is in agreement with a recent analysis of pressure data on global scales (Sakazaki and Hamilton, 2020)3, and the GCM prediction of Wu et al (2023). As such, the model predicts the occurrence of the Lamb resonance not in the Precambrian, but in the Phanerozoic, with an amplitude that is insufficient to counteract the luni-solar gravitational tidal torque. This does not exclude the occurrence of the crossing of the resonance, but as the luni-solar gravitational tidal torque remains larger than the thermoidal torque, no LOD trapping can occur. The crossing of the resonance then results only in a small change of the Earth's rotational decceleration: the spin decceleration rate is slightly increased before the resonance, and then reduced to roughly its previous value after the crossing of the resonance. Footnote 3: This value is in agreement with the \(11.38\pm 0.16\,\mathrm{hr}\) semi-diurnal period obtained by analyzing the spectrum of normal modes using pressure data on global scales (see Table 1 of Sakazaki and Hamilton, 2020, first symmetric gravity mode of wavenumber \(k=-2\)) The (Farhat et al, 2023) model depends on two parameters (\(\sigma_{0},\alpha_{\mathrm{A}}\)), which are the cooling frequency and opacity parameters, respectively. The frequency \(\sigma_{0}\) is the inverse of the timescale associated with energy dissipation, which is assumed to result from radiative cooling in the model. The higher \(\sigma_{0}\), the more efficient is energy dissipation. The frequency \(\sigma_{0}\) is thus tightly associated to the amplitude of the Lamb resonance (thus related to the parameter \(Q_{\mathrm{th}}\) appearing in (Wu et al, 2023)). The opacity parameter \(\alpha_{\mathrm{A}}\) quantifies the fraction of incident Solar flux that is transferred to the atmosphere in the thermal tidal forcing. Consequently, this parameter takes its values between 0 (no tidal forcing) and 1 (maximal tidal forcing). Other model parameters are related to the present day atmospheric gas mixture and surface temperature, and they are therefore well constrained. For the thermotid torque to cancel the gravitational torque in the Precambrian (and thus the LOD locking to occur in the Precambrian), the \((\sigma_{0},\alpha_{\mathrm{A}})\) pair needs to be below the associated black solid curve of (Fig.9). Nevertheless, the observation of the present thermal atmospheric response and the constraint on the cooling frequency \(\sigma_{0}\) deduced from the cooling timescale estimated by Leconte et al (2015), impose to the \((\sigma_{0},\alpha_{\mathrm{A}})\) pair to be inside the intersection of the two shaded regions, above the LOD lock threshold. Moreover, (Farhat et al, 2023) show that the crossing of the resonance, within the limitations of their analytical model, most probably occurred in the Mesozoic, and not in the Precambrian. In this case, the curve to consider is the dashed line, which leads to an even less probable LOD locking. ## 4 Conclusions The famous astronomer Carl Sagan (1934-1996) used to say that extraordinary claims require extraordinary evidences, which is another version of the Occam's razor in science, stating that simpler explanations should be preferred to more complicated ones, in absence of strong Figure 9: Amplitude of the Lamb resonance with respect to the two parameters \(\sigma_{0}\) and \(\alpha_{\mathrm{A}}\) of the (Farhat et al, 2023) model. 
Adapted to the present problem of the evolution of the Earth-Moon distance and LOD, it can be expressed as: Do we need a LOD lock by thermal tides to explain the evolution of the Earth-Moon system over its age? Is there strong evidence for a LOD lock in the Precambrian? The answer to the first question is clearly negative, as the (Farhat et al, 2022) model provides a coherent scenario for the tidal history of the Earth-Moon system, without the need of a resonant atmospheric tidal lock. We have also seen how all papers advocating for a LOD lock by thermal tides (Bartlett and Stevenson, 2016; Mitchell and Kirscher, 2023; Wu et al, 2023a), or the alternate scenario of a growing Earth's core (Bao et al, 2022), rely critically on the stromatolite LOD estimates of Pannella (1972a,b). However, as emphasised by Pannella himself, and by several authors who studied these data afterwards, the validity of the stromatolite-based LOD estimates derived from the Paleoproterozoic Gunflint-Biwabik Formations and Great Slave Supergroup should be questioned (see section 2.1). There is thus at present no reliable geological evidence to support these alternate scenarios. Moreover, (Mitchell and Kirscher, 2023) presented a cyclostratigraphy-based datum from cyclicities in the 1.9-Ga Rocknest Formation (Grotzinger, 1986) which is not compatible with the stromatolite data of Pannella. In addition, the solution of (Wu et al, 2023a) complies with the questionable stromatolite data of Pannella (1972a,b) but not with the more reliable cyclostratigraphic data of (Lantink et al, 2022). The crucial role of the stromatolite data at 1.88 Ga and 2.0 Ga in previous models is an important motivation for the search for alternative estimates of the LOD in this time interval, or more generally in the interval of 1.5 Ga to 2.0 Ga. A preference should be given to high-resolution cyclostratigraphic data, in the spirit of (Meyers and Malinverno, 2018). In particular, it would be very useful to re-analyze the cyclostratigraphic data at 1.9 Ga of (Grotzinger, 1986). More generally, we would like here to emphasize the importance of taking dating (age) uncertainty into consideration when fitting variables to any type of empirical estimate derived from the geological record.

The two recent analytical, semi-analytical, and numerical studies of Wu et al (2023a) and Farhat et al (2023), although reaching opposite conclusions, have improved our understanding of the possibility of atmospheric thermotidal daylength locking in the Precambrian. The problem addressed in Wu et al (2023a) can be summarized as follows: two parameterized spectra of tidal torques, one gravitational and one thermal, are combined, and the parameters of the two counterparts are constrained such that the combined torque drives an LOD evolution that fits a compilation of geological proxies.
Much can be appreciated in that work, especially in highlighting the significance of Earth-Moon angular momentum depletion via thermal tides, simulating the Lamb resonance for the Precambrian Earth using GCMs, and establishing for the first time, using GCMs, a correlation between the resonant period, temperature evolution, and atmospheric compositional variations. Moreover, in a limited sense, the analytical models adopted in (Wu et al, 2023a), which lay down the torque spectra used, appear to capture the fundamental dynamical behavior of oceanic and atmospheric tides. However, a closer look at the hierarchy of modeling assumptions in the two models and at the constraints imposed by the geological proxies reveals that the story is much more nuanced. In short, stringent constraints on the LOD history were imposed by a subset of quantitatively questionable proxies, as we discuss in Section 2.1. The latter were combined with a spectrum of oceanic tides that does not physically describe the tidal response of the Earth's paleo-oceans. As such, the modeled atmosphere was constrained to encounter the Lamb resonance with an unrealistic amplitude for the torque, and in a whistle-stop fashion, such that the stromatolite records in the Proterozoic can be explained.

Using a neutrally stratified analytical model that is better adapted to the Earth's atmosphere than the usual isothermal model, and taking into account the heat diffusion mechanism in the vicinity of the ground interface, Farhat et al (2023) conclude that the amplitude of the Lamb resonance is not sufficient for the thermotidal torque to counteract the luni-solar gravitational tidal torque in the Precambrian. Moreover, their analysis concludes that the crossing of the Lamb resonance most probably occurred in the Mesozoic, and not during the Precambrian (see section 3.4), with even less chance of daylength locking. Interestingly, the numerical GCM simulations of Wu et al (2023a) make it possible to strengthen the analytical model of Farhat et al (2023) by probing the asymmetry of the Lamb resonance (Fig.7). These two studies should provide the basis of future improved models for atmospheric thermal tides, but for now, on the basis of both the geological evidence and the comparison of theoretical models, we should conclude that there is no clear argument supporting the claim that LOD locking occurred in the past history of the Earth.

**Figures credits.** Figure 3 from (Mitchell and Kirscher, 2023), figures 3 and S4 from (Wu et al, 2023a), and figure 11 from (Bao et al, 2022) were adapted according to the CC BY licence ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)).

**Acknowledgements.** This project has been supported by the French Agence Nationale de la Recherche (AstroMeso ANR-19-CE31-0002-01) and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Advanced Grant AstroGeo-885250). MLL acknowledges funding from the Heising-Simons grant no. 2021-2797.
2307.16862
Modulation-Enhanced Excitation for Continuous-Time Reinforcement Learning via Symmetric Kronecker Products
This work introduces new results in continuous-time reinforcement learning (CT-RL) control of affine nonlinear systems to address a major algorithmic challenge due to a lack of persistence of excitation (PE). This PE design limitation has previously stifled CT-RL numerical performance and prevented these algorithms from achieving control synthesis goals. Our new theoretical developments in symmetric Kronecker products enable a proposed modulation-enhanced excitation (MEE) framework to make PE significantly more systematic and intuitive to achieve for real-world designers. MEE is applied to the suite of recently-developed excitable integral reinforcement learning (EIRL) algorithms, yielding a class of enhanced high-performance CT-RL control design methods which, due to the symmetric Kronecker product algebra, retain EIRL's convergence and closed-loop stability guarantees. Through numerical evaluation studies, we demonstrate how our new MEE framework achieves substantial improvements in conditioning when approximately solving the Hamilton-Jacobi-Bellman equation to obtain optimal controls. We use an intuitive example to provide insights on the central excitation issue under discussion, and we demonstrate the effectiveness of the proposed procedure on a real-world hypersonic vehicle (HSV) application.
Brent A. Wallace, Jennie Si
2023-07-31T17:19:53Z
http://arxiv.org/abs/2307.16862v1
Modulation-Enhanced Excitation for Continuous-Time Reinforcement Learning via Symmetric Kronecker Products ###### Abstract This work introduces new results in continuous-time reinforcement learning (CT-RL) control of affine nonlinear systems to address a major algorithmic challenge due to a lack of persistence of excitation (PE). This PE design limitation has previously stilled CT-RL numerical performance and prevented these algorithms from achieving control synthesis goals. Our new theoretical developments in symmetric Kronecker products enable a proposed modulation-enhanced excitation (MEE) framework to make PE significantly more systematic and intuitive to achieve for real-world designers. MEE is applied to the suite of recently-developed excitable integral reinforcement learning (EIRL) algorithms, yielding a class of enhanced high-performance CT-RL control design methods which, due to a symmetric Kronecker product algebra, retain EIRL's convergence and closed-loop stability guarantees. Through numerical evaluation studies, we demonstrate how our new MEE framework achieves substantial improvements in conditioning when approximately solving the Hamilton-Jacobi-Bellman equation to obtain optimal controls. We use an intuitive example to provide insights on the central excitation issue under discussion, and we demonstrate the effectiveness of the proposed procedure on a real-world hypersonic vehicle (HSV) application. Optimal control, reinforcement learning (RL), adaptive control, aerospace. ## I Introduction & Motivation Adaptive dynamic programming (ADP) [1, 2, 3, 4] has proven a vital application of reinforcement learning (RL) [5, 6] to complex decision and control problems. ADP uses approximation and learning to solve the optimal control problem for both continuous-time (CT) and discrete-time (DT) dynamical systems, tackling the central "curse of dimensionality" which has plagued the field of dynamic programming (DP) [7] and limited applications in optimal control [8]. On one hand, review of DT-RL algorithms [9, 10] shows that they have demonstrated excellent stability, convergence, and approximation guarantees. For representative results, see, e.g., [11, 12, 13, 14, 15, 16]. DT-RL algorithms have also demonstrated great successes in addressing complex real-world control applications, such as robot position control [17, 18], power system stability enhancement [19, 20, 21], helicopter stabilization, tracking, and reconfiguration control [22, 23, 24], waste water treatment [25], and wearable prostheses [26, 27, 28, 29, 30, 31]. On the other hand, CT-RL algorithms [32, 33, 34, 35] have seen fewer theoretical developments and almost no applications successes when compared to their DT-RL counterparts. Recent comprehensive numerical analysis of prevailing ADP-based CT-RL algorithms [36] shows that not only do they suffer from significant algorithm complexity issues, they also struggle with persistence of excitation (PE) as a central design limitation. This fundamental limitation results in crippling numerical performance issues; in particular, poor conditioning of the underlying learning regression. Altogether, these design limitations stifle the real-world synthesis performance of current CT-RL algorithms. We thus are still in search of formal CT-RL control design methods [36]. In response to this great PE issue, the authors in [37] develop a suite of excitable integral reinforcement learning (EIRL) algorithms, especially the decentralized variant dEIRL. 
The original dEIRL study rigorously proves convergence and closed-loop stability, and it demonstrates real-world synthesis guarantees [37]. Although dEIRL has demonstrated significant reductions in conditioning relative to prior CT-RL methods, there is still a further underlying barrier to achieving PE [37]. In particular, previous empirical studies reveal that learning regression conditioning suffers due to physical constraints such as actuator saturations, high-frequency model uncertainties, and unit intermingling (e.g., m, m/s in translational loops and deg, deg/s in rotational loops). These constraints force a gap between the excitation level permissible by the underlying physical process and the excitation level required for good algorithm numerics [37]. Filling this gap requires new theoretical developments that can potentially elevate control synthesis relying on PE from a conceptual idea to a practically useful tool for designers. This work develops new properties of the symmetric Kronecker product, which, compared to the standard Kronecker product [38, 39], has received very little theoretical attention and has only been studied by a handful of important works [40, 41, 42, 43, 44] (cf. Section IV for a summary of prior results/developments). These new algebraic results are essential to the proposed work; crucially, they ensure that MEE preserves dEIRL's convergence and closed-loop stability guarantees [37]. Furthermore, the symmetric Kronecker product results uncover substantial parallels in algebraic structure between dEIRL and the algebraic Lyapunov equation (ALE) approach of Kleinman's classical control framework [45]. With these new theoretical developments, MEE allows designers to apply first-principles insights into the dynamics to modulate the learning regression via nonsingular transformations of the state variables. When applied to the dEIRL algorithm, MEE may be used systematically in conjunction with dEIRL's multi-injection and decentralization, comprising an unparalleled three-prong approach to tackle the CT-RL curses of dimensionality and conditioning [36, 37]. The contributions of this work are threefold: 1) We develop a new modulation-enhanced excitation (MEE) framework to substantively address long-standing PE issues in CT-RL control. 2) We apply MEE to the suite of EIRL algorithms, and we numerically demonstrate on a motivating example and a real-world hypersonic vehicle study how MEE may be used as an intuitive design tool to yield significant numerical improvements while preserving EIRL's convergence and stability guarantees. 3) To develop the MEE framework, we derive a new rectangular-matrix version of the symmetric Kronecker product and introduce the symmetric Kronecker sum operation, proving new fundamental algebraic and spectral results for both maps. The remainder of this work is organized as follows. We first establish background and a formulation of the dEIRL algorithm in Section II. We then motivate the need for the developed MEE framework via an intuitive example in Section III. Subsequently, we derive the required symmetric Kronecker product algebra in Section IV, using this algebra to apply MEE to the dEIRL algorithm in Section V. We demonstrate MEE in our evaluation studies of Section VI. Finally, we conclude this work with a discussion in Section VII.

## II Background

**Notation.** We denote \(\left\langle\cdot,\cdot\right\rangle_{F}\) as the Frobenius inner product on \(\mathbb{R}^{m\times n}\).
Let \(\otimes\), vec denote the usual Kronecker product and vectorization operations, respectively, and \(\text{mat}=\text{vec}^{-1}\)[39]. For any concepts pertaining to differential geometry, this work follows the notational conventions of the standard text [46]. For \(n\in\mathbb{N}\), let \(\text{GL}(n)\subset\mathbb{R}^{n\times n}\) denote the (real) general linear group of square invertible \(n\times n\) matrices. Let \(\mathbb{S}^{n}\subset\mathbb{R}^{n\times n}\) denote the subspace of symmetric matrices, and let \(\underline{n}={}_{n}P_{2}=\frac{n(n+1)}{2}\) denote the dimension of \(\mathbb{S}^{n}\). ### _Problem Formulation_ **System.** We consider the continuous-time nonlinear time-invariant affine systems \((f,g)\) affording a decentralized dynamical structure with \(N\in\mathbb{N}\) loops, which we present in the \(N=2\) case here for simplicity: \[\left[\begin{array}{c}\dot{x}_{1}\\ \dot{x}_{2}\end{array}\right]=\left[\begin{array}{c}f_{1}(x)\\ f_{2}(x)\end{array}\right]+\left[\begin{array}{cc}g_{11}(x)&g_{12}(x)\\ g_{21}(x)&g_{22}(x)\end{array}\right]\left[\begin{array}{c}u_{1}\\ u_{2}\end{array}\right], \tag{1}\] where \(x\in\mathbb{R}^{n}\) is the state vector, \(u\in\mathbb{R}^{m}\) is the control vector, \(x_{j}\in\mathbb{R}^{n_{j}}\), \(u_{j}\in\mathbb{R}^{m_{j}}\)\((j=1,2\triangleq N)\) with \(n_{1}+n_{2}=n\), \(m_{1}+m_{2}=m\). We assume that \(f(0)=0\), and that \(f\) and \(g\) are Lipschitz on a compact set \(\Omega\subset\mathbb{R}^{n}\) containing the origin \(x=0\) in its interior. Define \(g_{j}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n_{j}\times m}\), \(g_{j}(x)=\left[\begin{array}{cc}g_{j1}(x)&g_{j2}(x)\end{array}\right]\). **LQR Problem.** In the LQR problem, we consider the continuous-time linear time-invariant system \((A,B)\), partitioned analogously to the nonlinear system \((f,g)\) (1): \[\left[\begin{array}{c}\dot{x}_{1}\\ \dot{x}_{2}\end{array}\right]=\left[\begin{array}{cc}A_{11}&A_{12}\\ A_{21}&A_{22}\end{array}\right]\left[\begin{array}{c}x_{1}\\ x_{2}\end{array}\right]+\left[\begin{array}{cc}B_{11}&B_{12}\\ B_{21}&B_{22}\end{array}\right]\left[\begin{array}{c}u_{1}\\ u_{2}\end{array}\right]. \tag{2}\] where \(x\in\mathbb{R}^{n}\), \(u\in\mathbb{R}^{m}\) are the state and control vectors, respectively, \(A\in\mathbb{R}^{n\times n}\), and \(B\in\mathbb{R}^{n\times m}\). We assume that \((A,B)\) is stabilizable [47], and we denote \((A,B)\) as the linearization of \((f,g)\) (1). LQR considers the quadratic cost functional \[J(x_{0})=\int_{0}^{\infty}(x^{T}Qx+u^{T}Ru)\,d\tau, \tag{3}\] where \(Q\in\mathbb{S}^{n}\), \(Q\geq 0\) and \(R\in\mathbb{S}^{m_{j}}\), \(R>0\) are the state and control penalty matrices, respectively. It is assumed that \((Q^{1/2},A)\) is detectable [47]. For decentralization, we impose the block-diagonal cost structure \[Q=\left[\begin{array}{cc}Q_{1}&0\\ 0&Q_{2}\end{array}\right],\quad R=\left[\begin{array}{cc}R_{1}&0\\ 0&R_{2}\end{array}\right], \tag{4}\] where \(Q_{j}\in\mathbb{S}^{n_{j}}\), \(Q_{j}\geq 0\), and \(R_{j}\in\mathbb{S}^{m_{j}}\), \(R_{j}>0\)\((j=1,2)\). Under the above assumptions, the LQR optimal control \(u^{*}\) associated with the quadruple \((A,B,Q,R)\) exists, is unique, and assumes the form of a full-state feedback control law [47] \[u^{*}=-K^{*}x, \tag{5}\] where \(K^{*}=R^{-1}B^{T}P^{*}\), and \(P^{*}\in\mathbb{S}^{n}\), \(P^{*}>0\) is the unique positive definite solution to the control algebraic Riccati equation (CARE) \[A^{T}P^{*}+P^{*}A-P^{*}BR^{-1}B^{T}P^{*}+Q=0. 
\tag{6}\]

**Kleinman's Algorithm [45].** Suppose that \(K_{0}\in\mathbb{R}^{m\times n}\) is such that \(A-BK_{0}\) is Hurwitz. For iteration \(i\) \((i=0,1,\ldots)\), let \(P_{i}\in\mathbb{S}^{n}\), \(P_{i}>0\) be the symmetric positive definite solution of the ALE \[(A-BK_{i})^{T}P_{i}+P_{i}(A-BK_{i})+K_{i}^{T}RK_{i}+Q=0. \tag{7}\] Having solved the ALE (7) for \(P_{i}\), the controller \(K_{i+1}\in\mathbb{R}^{m\times n}\) is updated recursively as \[K_{i+1}=R^{-1}B^{T}P_{i}. \tag{8}\]

**Theorem II.1** (Stability, Convergence of Kleinman's Algorithm [45]): _Let the preceding assumptions of this section hold. Then we have the following:_ 1. \(A-BK_{i}\) _is Hurwitz for all_ \(i\geq 0\)_._ 2. \(P^{*}\leq P_{i+1}\leq P_{i}\) _for all_ \(i\geq 0\)_._ 3. \(\lim\limits_{i\rightarrow\infty}K_{i}=K^{*}\)_,_ \(\lim\limits_{i\rightarrow\infty}P_{i}=P^{*}\)_._

### _Decentralized Excitable Integral Reinforcement Learning (dEIRL)_

The original EIRL work [37] develops a suite of learning algorithms. In this section, we focus on the flagship decentralized method, dEIRL, but note that the results here apply to the full suite just as readily. Inspired by Kleinman's approach, dEIRL iteratively solves the CARE associated with the linearization of the _nonlinear_ system (1) via a sequence of simpler linear regression problems, reducing the dimension of the regressions by taking advantage of the decentralized dynamical structure (1). In order to solve these regressions, dEIRL uses state-action data \((x_{j},u_{j})\) generated in each decentralized loop \(1\leq j\leq N\) under the initial stabilizing controller \(K_{0}\), collecting \(l\) data samples at the sample period \(T_{s}\). This data forms a learning regression related to Kleinman's ALE, which is solved for \(i=i^{*}\) iterations to produce the final controller [37].

**Operators.** The following maps are necessary for this study.

**Definition II.1**: _For \(l\in\mathbb{N}\) and a strictly increasing sequence \(\{t_{k}\}_{k=0}^{l}\), whenever \(x,y:[t_{0},t_{l}]\rightarrow\mathbb{R}^{n}\), define the matrix \(\delta_{x,y}\in\mathbb{R}^{l\times\underline{n}}\) as_ \[\delta_{x,y}=\left[\begin{array}{c}\left(x(t_{1})+y(t_{0})\right)^{T}\\ \vdots\end{array}\right].\]

approximately ten times the amplitude of the response \(x_{2}(t)\) in the low-bandwidth loop, resulting in scaling and thus conditioning issues in the regression matrix \(\mathbf{A}_{i,j}\) (13).
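To make this scaling defect concrete, the following short NumPy sketch is a purely illustrative stand-in (it is not the dEIRL regressor \(\mathbf{A}_{i,j}\) of (13); the signals and dimensions are hypothetical): it builds a quadratic-monomial regressor from two responses whose amplitudes differ by a factor of ten, as in the scenario just described, and reports the resulting column scaling and condition number.

```python
import numpy as np

# Illustrative only: two loop responses whose amplitudes differ by a factor of ten.
t = np.linspace(0.0, 10.0, 200)
x1 = np.exp(-0.5 * t) * np.cos(4.0 * t)          # high-bandwidth loop, amplitude ~1
x2 = 0.1 * np.exp(-0.2 * t) * np.cos(0.5 * t)    # low-bandwidth loop, amplitude ~0.1

# Columns are the quadratic monomials [x1^2, sqrt(2)*x1*x2, x2^2] at each sample,
# i.e., the entries of svec(x x^T) -- a stand-in for a symmetric-matrix regression.
A_reg = np.column_stack([x1**2, np.sqrt(2.0) * x1 * x2, x2**2])

print("column norms    :", np.linalg.norm(A_reg, axis=0))  # spread over ~2 orders of magnitude
print("condition number:", np.linalg.cond(A_reg))
```

The roughly hundredfold spread in column norms is exactly the kind of scaling defect that the modulation framework developed in Section V is designed to remove.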
The designer's insight to fix the issue is clear: The state response \(x_{2}(t)\) in the low-bandwidth loop needs to be scaled up by a factor of ten to improve scaling. This raises the central questions: How may we address this significant scaling issue in a systematic design framework which leverages physical insights (in this case, our saturation constraints) while achieving excitation and thus good numerical conditioning? Crucially, how can we ensure that such a framework preserves dEIRL's key theoretical convergence and closed-loop stability guarantees? As will be shown below, MEE is the answer to these questions. First, however, we must develop some essential symmetric Kronecker product results. In a real-world analogue to this scenario, the designer oftentimes has no physical means of recourse to address these conditioning issues: The excitation level in the high-bandwidth loop \(j=1\) cannot be reduced without degrading PE and hence learning performance in this loop. Furthermore, oftentimes unit scaling between unlike physical measurements renders the equilibration of responses physically intractable (e.g., in the HSV example studied in Section VI-B, velocity oscillations on the order of 100 ft/s are needed to achieve good PE in the translational loop, yet flightpath angle oscillations on the order of 100 deg in the rotational loop are nonsensical). This simple example illustrates that the problem runs deeper: Even when the system has been excited to the greatest possible extent, physical constraints and/or unit intermingling may still leave the learning regression poorly conditioned. These fundamental design concerns make the symmetric Kronecker product results of the next section all the more vital. ## IV The Symmetric Kronecker Product & Symmetric Kronecker Sum In this section, we first provide an overview of the symmetric Kronecker product, summarizing the notable developments to-date. We then derive a construction of the map and prove new key properties necessary for the development of the proposed MEE framework. ### _Overview_ The symmetric Kronecker product was originally devised in [40] for application to semidefinite programming as an operation on square-symmetric matrices. In this context, it was shown that the symmetric Kronecker product \(\otimes\) is symmetric as a bilinear form: \(A\mathop{\otimes}B=B\mathop{\otimes}A\), and that \((A\mathop{\otimes}A)^{-1}=A^{-1}\mathop{\otimes}A^{-1}\) in the case \(A\) is invertible. The spectrum of \(A\mathop{\otimes}B\) was identified in the case that \(A,B\) are symmetric and commute. The symmetric Kronecker product was then extended in [41] to an operation on arbitrary square matrices. [41] identified many key algebraic properties analogous to those of the standard Kronecker product, including the usual transposition, mixed product, and mixed vector product identities. The spectrum of \(A\mathop{\otimes}A\) was identified in the general square matrix case. [42] then identified eigenpair relationships and definiteness characterizations: that positive (semi)definiteness of \(A\mathop{\otimes}B\) is equivalent to that of \(A\mathop{\otimes}B\). More recently, the works [43, 44] provide spectral interlacing properties of the related Jordan-Kronecker product. Notably, prior works to date have treated the symmetric Kronecker product as an operation only on square matrices \(A,B\in\mathbb{R}^{n\times n}\), which we here generalize to rectangular matrices \(A,B\in\mathbb{R}^{m\times n}\). 
Among other advantages, this allows us to identify the eigenstructure of \(A\mathop{\otimes}B\) as relating to the symmetric Kronecker products \(x\mathop{\otimes}y\) of eigenvectors \(x,y\) of \(A\) and \(B\) - a critical parallel to the well-known result of the standard Kronecker product. We also prove new properties in the square case which will be essential to the development of MEE. Importantly, we introduce the concept of the symmetric Kronecker sum \(\mathop{\oplus}\), proving algebraic, spectral, and exponentiation properties, as well as its role in characterizing existence/uniqueness of solutions to ALEs. ### _Construction_ Prior formulations of the symmetric Kronecker product [40, 41, 42, 43, 44] first define the product implicitly, but here we move straight to an explicit construction. For \(n\in\mathbb{N}\), let \(\{E_{i}\}_{i=1}^{\underline{n}}\) denote the orthonormal basis for \((\mathbb{S}^{n},\langle\cdot,\cdot\rangle_{F})\) enumerated as follows. Define \(s:\{0,\ldots,n\}\rightarrow\{0,\ldots,\underline{n}\}\), \(r,c:\{1,\ldots,\underline{n}\}\rightarrow\{1,\ldots,n\}\) by \[s(j) =\sum_{i=1}^{j}(n-(i-1)), \tag{17}\] \[r(j) =p,\qquad s(p-1)<j\leq s(p),\] (18) \[c(j) =(r(j)-1)+\left(j-s\big{(}r(j)-1\big{)}\right). \tag{19}\] When necessary, we will add subscripts \(s_{n}\), \(r_{n}\), \(c_{n}\) to these maps to make their associated dimension \(n\) explicit. Note that \(\{(r(j),c(j))\}_{j=1}^{\underline{n}}\) is given by \((1,1),(1,2)\),..., \((1,n)\), \((2,2),(2,3)\),..., \((2,n)\), \((3,3),\ldots,(n-1,n),(n,n)\). This associates the index \(1\leq j\leq\underline{n}\) with its corresponding Fig. 1: Visualization of the sum, row, and column indexing maps \(s\) (17), \(r\) (18), and \(c\) (19), respectively, for \(n=3\). row/column index \((r(j),c(j))\) on/above the diagonal, beginning at the first row/column and moving left to right, up to down (cf. Figure 1). These maps have not been defined explicitly in the constructions of prior works [40, 41, 42, 43, 44]; however, subsequently they will show great utility in indexing operations for proving properties of the symmetric Kronecker product, especially in developing our results for the rectangular-matrix case. Letting \(\{e_{i}\}_{i=1}^{n}\) denote the standard basis on \(\mathbb{R}^{n}\), we are now ready to enumerate the orthonormal basis \(\{E_{j}\}_{j=1}^{n}\) as \[E_{j}=\begin{cases}e_{r(j)}e_{c(j)}^{T},&r(j)=c(j),\\ \frac{\sqrt{2}}{2}\left(e_{r(j)}e_{c(j)}^{T}+e_{c(j)}e_{r(j)}^{T}\right),&r(j) <c(j).\end{cases} \tag{20}\] Define \(W\in\mathbb{R}^{2\times n^{2}}\) as \[W=\left[\begin{array}{c}\text{vec}^{T}(E_{1})\\ \vdots\\ \text{vec}^{T}(E_{\underline{n}})\end{array}\right]. \tag{21}\] Whenever necessary, we will also add a subscript \(W_{n}\in\mathbb{R}^{2\times n^{2}}\) to this matrix to make its dimensions explicit. **Definition IV.1** (Symmetric Vectorization, Orthogonal Projection): _Define \(\text{vec}:\mathbb{S}^{n}\to\mathbb{R}^{\underline{n}}\) and \(\pi:\mathbb{R}^{n\times n}\to\mathbb{S}^{n}\) by_ \[\text{svec}(P) =\begin{bmatrix}p_{1,1},\,\sqrt{2}p_{1,2},\ldots,\,\sqrt{2}p_{1, n},\\ &\qquad\qquad p_{2,2},\,\sqrt{2}p_{2,3},\ldots,\,\sqrt{2}p_{n-1,n},\,p_{n,n} \end{bmatrix}^{T}\] \[=\begin{bmatrix}\left\langle P,E_{1}\right\rangle_{F},\ldots, \left\langle P,E_{\underline{n}}\right\rangle_{F}\end{bmatrix}^{T}, \tag{22}\] \[\pi(A) =\frac{A+A^{T}}{2}, \tag{23}\] _and define \(\text{smat}=\text{svec}^{-1}:\mathbb{R}^{\underline{n}}\to\mathbb{S}^{n}\). 
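As a sanity check on this enumeration, the following NumPy sketch implements the indexing maps (17)-(19), the basis (20), and the matrix \(W\) of (21) directly from the formulas above, and verifies that the rows of \(W\) are orthonormal and that \(\text{svec}\) preserves the Frobenius norm. The helper names are ours, and the script is a minimal illustration rather than a library implementation.

```python
import numpy as np

def s_map(n, j):
    # Cumulative count of on/above-diagonal entries in the first j rows, eq. (17).
    return sum(n - (i - 1) for i in range(1, j + 1))

def rc_maps(n, j):
    # Row/column indices (r(j), c(j)) of the j-th on/above-diagonal entry, eqs. (18)-(19).
    p = 1
    while not (s_map(n, p - 1) < j <= s_map(n, p)):
        p += 1
    return p, (p - 1) + (j - s_map(n, p - 1))

def sym_basis(n):
    # Orthonormal basis {E_j} of (S^n, <.,.>_F), enumerated as in eq. (20).
    basis = []
    for j in range(1, n * (n + 1) // 2 + 1):
        r, c = rc_maps(n, j)
        E = np.zeros((n, n))
        if r == c:
            E[r - 1, c - 1] = 1.0
        else:
            E[r - 1, c - 1] = E[c - 1, r - 1] = np.sqrt(2.0) / 2.0
        basis.append(E)
    return basis

n = 3
print([rc_maps(n, j) for j in range(1, 7)])      # (1,1),(1,2),(1,3),(2,2),(2,3),(3,3)

W = np.vstack([E.flatten() for E in sym_basis(n)])   # rows are vec(E_j)^T, eq. (21)
print(np.allclose(W @ W.T, np.eye(n * (n + 1) // 2)))  # rows are orthonormal

rng = np.random.default_rng(0)
P = rng.standard_normal((n, n)); P = (P + P.T) / 2.0
print(np.isclose(np.linalg.norm(W @ P.flatten()), np.linalg.norm(P)))  # svec is an isometry
```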
We will discuss the properties of these operators shortly (cf. Proposition IV.1).

**Definition IV.2** (The Symmetric Kronecker Product): _Define the symmetric Kronecker product \(\underline{\otimes}:\mathbb{R}^{m\times n}\times\mathbb{R}^{m\times n}\to \mathbb{R}^{\underline{m}\times\underline{n}}\) as_ \[A\,\underline{\otimes}\,B=W_{m}\,(A\otimes B)\,W_{n}^{T}. \tag{24}\]

**Definition IV.3** (The Symmetric Kronecker Sum): _Define the symmetric Kronecker sum \(\underline{\oplus}:\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times n}\to \mathbb{R}^{\underline{n}\times\underline{n}}\) as_ \[A\,\underline{\oplus}\,B=A\,\underline{\otimes}\,I+I\,\underline{\otimes}\,B=(A+B)\,\underline{\otimes}\,I. \tag{25}\]

### _Properties_

We begin this section by outlining the interaction of the vectorization operations \(\text{vec}\), \(\text{svec}\) with the Frobenius inner product on matrix spaces.

**Proposition IV.1** (Vectorization and Frobenius Hilbert Space Structure): \(\text{vec}:(\mathbb{R}^{m\times n},\left\langle\cdot,\cdot\right\rangle_{F}) \to(\mathbb{R}^{mn},\left\langle\cdot,\cdot\right\rangle)\) _is a Hilbert space isomorphism; i.e., a linear bijection for which \(\text{vec}^{T}(A)\,\text{vec}(B)=\left\langle A,B\right\rangle_{F}\), \(A,B\in\mathbb{R}^{m\times n}\). In the square-matrix case, the operators \(\text{vec},\text{svec}\) interact with the Hilbert space structure of \((\mathbb{R}^{n\times n},\left\langle\cdot,\cdot\right\rangle_{F})\) via a commutative diagram relating \(\text{vec}\), \(\text{svec}\), the orthogonal projection \(\pi\), and the matrix \(W\) of (21)._
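The two definitions above are easy to exercise numerically. The sketch below (helper names are ours; \(W_{n}\) is built with the same row-wise on/above-diagonal enumeration as (21)) constructs \(A\,\underline{\otimes}\,B\) and \(A\,\underline{\oplus}\,B\) for small random matrices and spot-checks the symmetry of the product, the inverse identity for \(A\,\underline{\otimes}\,A\), and the equality of the two expressions in (25). It is a minimal sketch under these assumptions, not a reference implementation.

```python
import numpy as np

def W_matrix(n):
    # Rows are vec(E_j)^T for the orthonormal basis of S^n, enumerated row-wise on/above
    # the diagonal as in (20)-(21); a fixed row-major flattening plays the role of vec.
    rows = []
    for r, c in zip(*np.triu_indices(n)):
        E = np.zeros((n, n))
        if r == c:
            E[r, c] = 1.0
        else:
            E[r, c] = E[c, r] = np.sqrt(2.0) / 2.0
        rows.append(E.flatten())
    return np.vstack(rows)

def skron(A, B):
    # Symmetric Kronecker product of Definition IV.2; A and B must share the same shape.
    m, n = A.shape
    return W_matrix(m) @ np.kron(A, B) @ W_matrix(n).T

def sksum(A, B):
    # Symmetric Kronecker sum of Definition IV.3, using the form (A + B) (x)_s I.
    return skron(A + B, np.eye(A.shape[0]))

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
print(np.allclose(skron(A, B), skron(B, A)))                   # symmetry as a bilinear form

C = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)              # invertible square matrix
print(np.allclose(np.linalg.inv(skron(C, C)),
                  skron(np.linalg.inv(C), np.linalg.inv(C))))  # inverse identity for A = B

D = rng.standard_normal((3, 3))
print(np.allclose(sksum(C, D),
                  skron(C, np.eye(3)) + skron(np.eye(3), D)))  # both forms in (25) agree
```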
If \(A,B\neq 0\) and \(A\otimes B\) is diagonal, then \(A,B\) are diagonal. 13. For \(A\in\mathbb{R}^{m\times m}\), \(B\in\mathbb{R}^{n\times n}\), \(A\otimes B=I_{mn}\) if and only if \(A=\lambda I_{m}\), \(B=\frac{1}{\lambda}I_{n}\) for some \(\lambda\neq 0\). 14. The map \(\Phi:\text{GL}(n)\to\text{GL}^{+}(n^{2})\), \[\Phi(A)=A\otimes A,\qquad A\in\text{GL}(n),\] (27) is a Lie group homomorphism with \(\ker\Phi=\{\pm I\}\). \(\Phi_{|\text{GL}^{+}(n)}:\text{GL}^{+}(n)\to\text{GL}^{+}(n^{2})\) is an injective Lie group homomorphism if and only if \(n\) is odd. In the case \(n\) is odd, \(\Phi(\text{GL}(n))=\Phi(\text{GL}^{+}(n))\hookrightarrow\text{GL}^{+}(n^{2})\) is connected. In the case \(n\) is even, \(\Phi(\text{GL}(n))\) has two connected components \(\Phi(\text{GL}^{+}(n)),\Phi(\text{GL}^{-}(n))\hookrightarrow\text{GL}^{+}(n^{ 2})\). _Proof:_ 1)-13) are standard results; see, e.g., [38, 39]. Enumerating \(A=\{a_{i,j}\}_{i,j=1}^{m}\), \(B=\{b_{k,l}\}_{k,l=1}^{n}\), 12) and 13) follow from the Kronecker product indexing identity: \[(A\otimes B)_{(i-1)n+k,(j-1)n+l}=a_{i,j}b_{k,l},\] \[i,j=1,\ldots,m,\quad k,l=1,\ldots,n. \tag{28}\] For 14), that \(\Phi\) is a group homomorphism follows from 6), and that \(\ker\Phi=\{\pm I\}\) follows from 13). For smoothness, identifying \(\mathbb{R}^{n\times m}\oplus\mathbb{R}^{n^{2}}\), \(\mathbb{R}^{n^{2}\times n^{2}}\cong\mathbb{R}^{n^{4}}\), the map \(A\mapsto A\otimes A:\mathbb{R}^{n\times n}\to\mathbb{R}^{n^{2}\times n^{2}}\) is polynomial in its coordinates, hence smooth. Thus, since \(\text{GL}(n)\hookrightarrow\mathbb{R}^{n\times n}\) is an open subset, it follows that \(\Phi:\text{GL}(n)\to\mathbb{R}^{n^{2}\times n^{2}}\) is smooth by restriction of the domain [46, Theorem 5.27]. But that \(\Phi(\text{GL}(n))\subset\text{GL}^{+}(n^{2})\) follows from 11), so since \(\text{GL}^{+}(n^{2})\hookrightarrow\text{GL}(n^{2})\to\mathbb{R}^{n^{2}\times n ^{2}}\), we may then restrict the codomain as well [46, Theorem 5.29], yielding \(\Phi:\text{GL}(n)\to\text{GL}^{+}(n^{2})\) is smooth. The remaining claims are straightforward, noting that \(-I\in\text{GL}^{-}(n)\) if and only if \(n\) is odd. _Proposition 4.3 (Symmetric Kronecker Product Properties):_ The symmetric Kronecker product has the following properties developed previously in the in the square-matrix case [41], generalized here to rectangular matrices: 1. \(\underline{\otimes}:\mathbb{R}^{m\times n}\times\mathbb{R}^{m\times n}\to \mathbb{R}^{m\times n}\) is bilinear. 2. \(\underline{\otimes}\) is symmetric; i.e., \(A\,\underline{\otimes}\,B=B\,\underline{\otimes}\,A\), \(A,B\in\mathbb{R}^{m\times n}\). 3. \((A\,\underline{\otimes}\,B)\,\text{sec}(\pi(C))=\text{svec}(\pi(B\pi(C)A^{T}))\), \(A,B\in\mathbb{R}^{m\times n}\), \(C\in\mathbb{R}^{n\times n}\). 4. \((A\,\underline{\otimes}\,B)^{T}=A^{T}\,\underline{\otimes}\,B^{T}\), \(A,B\in\mathbb{R}^{m\times n}\). 5. \((A\,\underline{\otimes}\,A)^{-1}=A^{-1}\,\otimes\,A^{-1}\), \(A\in\text{GL}(n)\). However, \((A\,\underline{\otimes}\,B)^{-1}\neq A^{-1}\,\underline{\otimes}\,B^{-1}\) for \(A,B\in\text{GL}(n)\), in general. Indeed, \(A,B\in\text{GL}(n)\) does not imply \(A\,\underline{\otimes}\,B\in\text{GL}(\underline{n})\). 6. a) \((A\,\underline{\otimes}\,B)(C\,\underline{\otimes}\,D)=\frac{1}{2}\,(AC\, \underline{\otimes}\,BD+AD\,\underline{\otimes}\,BC)\), \(A,B\in\mathbb{R}^{m\times n}\), \(C,D\in\mathbb{R}^{n\times p}\). 
\(\underline{\otimes}\,B)(C\,\underline{\otimes}\,C)=AC\,\underline{\otimes}\,BC\), \(A,B\in\mathbb{R}^{m\times n}\), \(C\in\mathbb{R}^{n\times p}\). 7. \((C\,\underline{\otimes}\,C)(A\,\underline{\otimes}\,B)=CA\,\underline{\otimes}\,CB\), \(A,B\in\mathbb{R}^{m\times n}\), \(C\in\mathbb{R}^{p\times m}\). 8. a) For a square matrix \(A\in\mathbb{R}^{n\times n}\), if \(\sigma(A)=\{\lambda_{i}\mid i=1,\ldots,n\}\), then \(\sigma(A\,\underline{\otimes}\,A)=\{\lambda_{i}\lambda_{j}\mid 1\leq i\leq j\leq n\}\). Furthermore, if \(x_{i},x_{j}\in\mathbb{C}^{n}\) are eigenvectors corresponding to the eigenvalues \(\lambda_{i},\lambda_{j}\) of \(A\,\underline{\otimes}\,A\), respectively, then \(x_{i}\,\underline{\otimes}\,x_{j}\) is an eigenvector corresponding to the eigenvalue \(\lambda_{i}\lambda_{j}\) of \(A\,\underline{\otimes}\,A\). 9. Suppose that \(A,B\in\mathbb{R}^{n\times n}\) are simultaneously diagonalizable with common basis of eigenvectors \(\{x_{i}\}_{i=1}^{n}\). If \(\sigma(A)=\{\lambda_{i}\mid i=1,\ldots,n\}\) and \(\sigma(B)=\{\mu_{j}\mid j=1,\ldots,n\}\) are the eigenvalues of \(A\) and \(B\) corresponding to the respective eigenvectors \(\{x_{i}\}_{i=1}^{n}\), then \(\sigma(A\,\underline{\otimes}\,B)=\left\{\frac{1}{2}(\lambda_{i}\mu_{j}+ \lambda_{j}\mu_{i})\mid 1\leq i\leq j\leq n\right\}\). Furthermore, \(x_{i}\,\underline{\otimes}\,x_{j}\) is an eigenvector corresponding to the eigenvalue \(\frac{1}{2}(\lambda_{i}\mu_{j}+\lambda_{j}\mu_{i})\) of \(A\,\underline{\otimes}\,B\). 10. Suppose that \(A,B\in\mathbb{R}^{n\times n}\) share two eigenvectors \(x,y\in\mathbb{C}^{n}\). If \(Ax=\lambda_{1}x\), \(Bx=\mu_{1}x\), \(Ay=\lambda_{2}y\), \(By=\mu_{2}y\), then \(x\,\underline{\otimes}\,y\) is an eigenvector of \(A\,\underline{\otimes}\,B\) corresponding to the eigenvalue \(\frac{1}{2}(\lambda_{1}\mu_{2}+\lambda_{2}\mu_{1})\). 1 7Sa), 7Sb) were originally proved in [41] and are well-understood, but because prior works on the symmetric Kronecker product define it only as an operation on square matrices, they have missed that \(x_{i}\mathop{\underline{\otimes}}x_{j}\) constitute the eigenvectors of \(A\mathop{\underline{\otimes}}B\) - an important and intuitive result paralleling that of the usual Kronecker product (cf. Proposition 4.2 7)). 7Sb) was proved in [41] in the case of commuting square matrices \(A,B\in\mathbb{S}^{n}\), but simultaneous diagonalizability is the key property enabling this result. Underpinning the arguments in 7Sa) and 7Sb) is 7Sc), which we prove here because it will be illustrative subsequently. With all terms as in the hypotheses of 7Sc), we first note the subtlety that \(x,y\neq 0\) implies \(x\mathop{\underline{\otimes}}y\neq 0\), by 1OS) (proven below, independently of this result). Next, applying the _now-generalized_ mixed product identity 6S), we have \[(A\mathop{\underline{\otimes}}B)(x\mathop{\underline{\otimes}}y) =\frac{1}{2}\left(Ax\mathop{\underline{\otimes}}By+Ay\mathop{ \underline{\otimes}}Bx\right)\] \[=\frac{1}{2}(\lambda_{1}\mu_{2}+\lambda_{2}\mu_{1})\;x\mathop{ \underline{\otimes}}y. \tag{30}\] The authors are unaware of 11S)-14S) being proved previously. 11S) follows from 7Sa). For 1OS), 12S)-14S), we employ the indexing maps \(r\) (18) and \(c\) (19), which together with the mixed product identity 6S) yield the symmetric Kronecker product indexing identity (31). Straightforward application of (31) yields 10S), 12S), and 13S). 
Finally, 14S) follows from 12S) and 13S) in an analogous argument to the one presented in the proof of Proposition 4.2 14). \(\blacksquare\) **Remark IV.1** (On the Eigenstructure of the Symmetric Kronecker Product): _Equation (30) elucidates a key issue surrounding the eigenstructure of the symmetric Kronecker product: In general, given eigenvectors \(Ax=\lambda_{1}x\), \(By=\mu_{2}y\) of \(A,B\in\mathbb{R}^{n\times n}\), the first term in the expansion \(Ax\mathop{\underline{\otimes}}By=\lambda_{1}\mu_{2}\;x\mathop{\underline{ \otimes}}y\) always factors in the desired fashion. Yet, the second term \(Ay\mathop{\underline{\otimes}}Bx=Bx\mathop{\underline{\otimes}}Ay\) need not be a scalar multiple of \(x\mathop{\underline{\otimes}}y\), since \(x\) is not an eigenvector of \(B\) and \(y\) is not an eigenvector of \(A\), in general. Naturally, this makes the eigenstructure of the symmetric Kronecker product a significantly more complicated object of study than that of the usual Kronecker product, cf. [41, 43, 44]._ As a note, the eigenstructure results of Proposition 7S) require the symmetric Kronecker product as a map on complex matrices (specifically, when eigenvectors are complex-valued). As is the case with the standard Kronecker product, the necessary results may developed for the complex case. Following the practice of previous works [40, 41, 42, 43, 44], we avoid carrying out this process explicitly here to maintain scope. **Remark IV.2**: _For a counterexample illustrating the point of Proposition 4.3 5S), consider \(A=\mathtt{diag}(1,-1),B=I_{2}\in\text{GL}(2)\). Then \(A\mathop{\underline{\otimes}}B=\frac{1}{2}A\mathop{\underline{\oplus}}A= \mathtt{diag}(1,0,-1)\notin\text{GL}(2)\). The key here is that \(A\) possesses eigenvalues \(\sigma(A)=\{\pm 1\}\) symmetric with respect to the origin (cf. Proposition 4.6). Note further on this point that \(\sigma(A\mathop{\underline{\oplus}}A)=\{1+1,1-1,-1-1\}\)._ **Remark IV.3**: _The strengthened hypotheses for the converse direction of Proposition 4.3 12S) in relation to Proposition 4.2 12) are necessary. Indeed, in the case \(n=2\), consider \(A=e_{2}e_{1}^{T}\in\mathbb{R}^{2\times 2}\). Then \(A,A^{T}\neq 0\), and neither of these matrices are diagonal, yet \(A\mathop{\underline{\otimes}}A^{T}=\mathtt{diag}(0,\frac{1}{2},0)\) is diagonal. Note that \(A,A^{T}\) are zero on their diagonals._ **Remark IV.4** (Lie Group Homomorphisms \(\Phi\), \(\underline{\Phi}\)): _The Lie Group homomorphism \(\underline{\Phi}\) in Proposition 4.3 14S) is relevant to the MEE framework developed in Section V. To maintain subsequent emphasis on the symmetric Kronecker product algebra, we will after this section avoid labeling this map explicitly. We have included construction of its Kronecker product counterpart \(\Phi\) in Proposition 4.2 14) for completeness. By virtue of the bilinearity of the (symmetric) Kronecker product, these homomorphisms are homogeneous of degree two. For intuition, consider the case \(n=1\). Then \(\underline{n}=1\), \(r_{1}\equiv 1\) (18), \(c_{1}\equiv 1\) (19), \(\{E_{i}\}_{i=1}^{1}=\{1\}\) (20), and \(W_{1}=1\) (21). In all, \(\otimes=\mathop{\underline{\otimes}}\) are both given by scalar multiplication, and \(\Phi(a)=\underline{\Phi}(a)=a^{2}\) (we will thus focus on \(\underline{\underline{\Phi}}\)). Here, \(\underline{\Phi}:\text{GL}(1)=\mathbb{R}\backslash\{0\}\rightarrow\text{GL}^ {+}(1)=(0,\infty)\). 
This is a group homomorphism: \(\underline{\Phi}(ab)=abab=aabb=\underline{\Phi}(a)\underline{\Phi}(b)\), which is polynomial in the global coordinate on \(\mathbb{R}\backslash\{0\}\rightarrow\mathbb{R}\) and on \((0,\infty)\hookrightarrow\mathbb{R}\), hence smooth. Note also that \(\underline{\Phi}(a)=a^{2}=1\) if and only if \(a\in\{\pm 1\}\). Finally, \(\underline{\Phi}|_{\text{GL}^{+}(1)}:\text{GL}^{+}(1)=(0,\infty)\rightarrow\text{ GL}^{+}(1)=(0,\infty)\) is a Lie group isomorphism onto its image \(\underline{\Phi}|_{\text{GL}^{+}(1)}\left((0,\infty)\right)=\underline{\Phi}((0, \infty))=(0,\infty)\) (a connected subgroup of \(\text{GL}^{+}(1)=(0,\infty)\)); in particular, the map \(a\mapsto a^{2}:(0,\infty)\rightarrow(0,\infty)\) is a diffeomorphism._ In the above, \(\underline{\Phi}|_{\text{GL}^{+}(1)}:\text{GL}^{+}(1)\rightarrow\text{GL}^{+}( \underline{1})\) is a Lie group isomorphism in its own right. However, in the case \(n>1\), \(\underline{\Phi}|_{\text{GL}^{+}(n)}:\text{GL}^{+}(n)\rightarrow\text{GL}^{+}( \underline{n})\) is not a Lie group isomorphism. In the case \(n\) is even, it fails to be injective. Meanwhile, for all \(n>1\), \(\underline{\Phi}\) fails to be onto \(\text{GL}^{+}(\underline{n})\). For otherwise \(\underline{\Phi}\) would be a surjective map of constant rank, hence a submersion by the global rank theorem [46, Theorem 4.14]; i.e., \(\text{rank}(\underline{\Phi})=\underline{n}\) - a contradiction of the fact that \(\text{rank}(\underline{\Phi})\leq\min\{n,\underline{n}\}=n<\underline{n}\) always. A similar argument prevails for \(\Phi\). Having discussed the (symmetric) Kronecker product, we now move on to the (symmetric) Kronecker sum. We first recall the spectral result in the standard case: **Proposition IV.4** (Eigenstructure of The Kronecker Sum [38, Theorem 4.4.5]): _For square matrices \(A\in\mathbb{R}^{n\times n}\) and \(B\in\mathbb{R}^{m\times m}\), if \(\sigma(A)=\{\lambda_{i}\;|\;i=1,\ldots,n\}\) and \(\sigma(B)=\{\mu_{j}\;|\;j=1,\ldots,m\}\), then \(\sigma(A\mathop{\underline{\otimes}}B)=\{\lambda_{i}+\mu_{j}\;|\;i=1,\ldots,n,\,j=1, \ldots,m\}\). Furthermore, if \(x_{i}\in\mathbb{C}^{n}\), \(y_{j}\in\mathbb{C}^{m}\) are eigenvectors corresponding to the eigenvalues \(\lambda_{i}\) of \(A\) and \(\mu_{j}\) of \(B\), respectively, then \(x_{i}\otimes y_{j}\) is an eigenvector corresponding to the eigenvalue \(\lambda_{i}+\mu_{j}\) of \(A\oplus B\). While the eigenstructure of the Kronecker sum is quite intuitive, the eigenstructure of the symmetric Kronecker sum is more complicated, owing to the complications inherited from the symmetric Kronecker product (cf. Remark 4.1). In the simultaneously-diagonalizable case, the result of Proposition 7Sb), developed originally in [41], may be applied to the symmetric Kronecker sum as follows: **Proposition 4.5** (Eigenstructure of The Symmetric Kronecker Sum (Simultaneously Diagonalizable Case)): _Suppose that \(A,B\in\mathbb{R}^{n\times n}\) are simultaneously diagonalizable with common basis of eigenvectors \(\{x_{i}\}_{i=1}^{n}\). If \(\sigma(A)=\{\lambda_{i}\mid i=1,\ldots,n\}\) and \(\sigma(B)=\{\mu_{j}\mid j=1,\ldots,n\}\) are the eigenvalues of \(A\) and \(B\) corresponding to the respective eigenvectors \(\{x_{i}\}_{i=1}^{n}\), then \(\sigma(A\mathop{\underline{\otimes}}B)=\big{\{}\frac{1}{2}(\lambda_{i}+\mu_{ i}+\lambda_{j}+\mu_{j})\mid 1\leq i\leq j\leq n\}\). 
Furthermore, \(x_{i}\mathop{\underline{\otimes}}x_{j}\) is an eigenvector corresponding to the eigenvalue \(\frac{1}{2}(\lambda_{i}+\mu_{i}+\lambda_{j}+\mu_{j})\) of \(A\mathop{\underline{\otimes}}B\)._ For our purposes, Proposition 4.5 is too restrictive. The following property will be useful shortly: **Lemma 4.1** (Partial Eigenstructure of The Symmetric Kronecker Sum): _Suppose that \(A,B\in\mathbb{R}^{n\times n}\) share two eigenvectors \(x,y\in\mathbb{C}^{n}\). If \(Ax=\lambda_{1}x\), \(Bx=\mu_{1}x\), \(Ay=\lambda_{2}y\), \(By=\mu_{2}y\), then \(x\mathop{\underline{\otimes}}y\) is an eigenvector of \(A\mathop{\underline{\otimes}}B\) corresponding to the eigenvalue \(\frac{1}{2}(\lambda_{1}+\mu_{1}+\lambda_{2}+\mu_{2})\)._ _Proof:_ Follows from Proposition 4.3 7Sc). \(\blacksquare\) Lemma 4.1 allows us to enumerate the eigenstructure of \(A\mathop{\underline{\oplus}}A\), a special case relevant to ALEs. **Proposition 4.6** (Eigenstructure of The Symmetric Kronecker Sum \(A\mathop{\underline{\oplus}}A\)): _For a square matrix \(A\in\mathbb{R}^{n\times n}\), if \(\sigma(A)=\{\lambda_{i}\mid i=1,\ldots,n\}\), then \(\sigma(A\mathop{\underline{\oplus}}A)=\{\lambda_{i}+\lambda_{j}\mid 1\leq i\leq j \leq n\}\). Furthermore, if \(x_{i},x_{j}\in\mathbb{C}^{n}\) are eigenvectors corresponding to the eigenvalues \(\lambda_{i},\lambda_{j}\) of \(A\), respectively, then \(x_{i}\mathop{\underline{\otimes}}x_{j}\) is an eigenvector corresponding to the eigenvalue \(\lambda_{i}+\lambda_{j}\) of \(A\mathop{\underline{\oplus}}A\)._ _Proof:_ Follows from Lemma 4.1. \(\blacksquare\) Having discussed eigenstructure, we move on to the key exponentiation identity involving the (symmetric) Kronecker sum: **Proposition 4.7** (Exponentiation of the Kronecker Sum [38]): _Let \(A\in\mathbb{R}^{m\times m}\), \(B\in\mathbb{R}^{n\times n}\) be given._ 1. \((A\mathop{\underline{\otimes}}I)^{k}=A^{k}\mathop{\underline{\otimes}}I\)_, and_ \((I\mathop{\underline{\otimes}}B)^{k}=I\mathop{\underline{\otimes}}B^{k}\)_,_ \(k\geq 0\)_._ 2. \(\exp(A\oplus B)=\exp(A)\mathop{\underline{\otimes}}\exp(B)\)_._ _The analogue holds for the symmetric Kronecker sum in the case \(A=B\):_ **Proposition 4.8** (Exponentiation of the Symmetric Kronecker Sum): _Let \(A,B\in\mathbb{R}^{n\times n}\) be given._ 1. \((A\mathop{\underline{\otimes}}I)^{k}=(I\mathop{\underline{\otimes}}A)^{k}\) _is given by the following binomial expansion_ \[(A\mathop{\underline{\otimes}}I)^{k}=\frac{1}{2^{k}}\sum_{i=0}^{k}\binom{k}{ i}A^{k-i}\mathop{\underline{\otimes}}A^{i},\qquad k\geq 0.\] (32) 2. \(\exp(A\mathop{\underline{\otimes}}A)=\exp(A)\mathop{\underline{\otimes}}\exp (A)\)_. However, in general_ \(\exp(A\mathop{\underline{\oplus}}B)\neq\exp(A)\mathop{\underline{\otimes}} \exp(B)\)_._ _Proof:_ Proving that (32) holds is a quick algebraic check following from the mixed product identity of Proposition 4.3 6S). 2S) follows from (32) after examining the partial sums of \(\exp(A\mathop{\underline{\oplus}}A)\) and \(\exp(A)\mathop{\underline{\otimes}}\exp(A)\). \(\blacksquare\) **Remark 4.5**: _For a counterexample illustrating the point of Proposition 4.8 2S), consider the same matrices as in Remark 4.2: \(A=\texttt{diag}(1,-1)\), \(B=I_{2}\). Then_ \[\exp(A\mathop{\underline{\oplus}}B) =\texttt{diag}(e^{2},e,1),\] \[\exp(A)\mathop{\underline{\otimes}}\exp(B) =\texttt{diag}\left(e^{2},\,\frac{e^{2}+1}{2},\,1\right). 
\tag{33}\]

### _Symmetric Kronecker Products in Algebraic Lyapunov Equations (ALEs)_

As is well-known, the Kronecker product plays an important role in characterizing existence and uniqueness of solutions to ALEs [38]. We illustrate in this section that the symmetric Kronecker product algebra developed above also provides this same characterization under symmetric conditions. Substantively, the algebra is structurally identical to the standard case.

**Definition 4.4** (Algebraic Lyapunov Equation (ALE)): _Given \(A\in\mathbb{R}^{n\times n},B\in\mathbb{R}^{n\times n}\), consider the following algebraic Lyapunov equation (ALE)_ \[A^{T}X+XA+B=0. \tag{34}\]

**Proposition 4.9** (ALE Existence and Uniqueness of Solutions): _Let \(\sigma(A)=\{\lambda_{i}\mid i=1,\ldots,n\}\). There exists a unique solution \(X\in\mathbb{R}^{n\times n}\) of the ALE (34) if and only if \(\lambda_{i}+\lambda_{j}\neq 0\) for all \(1\leq i,j\leq n\)._

_Proof:_ This proof is quite standard; see, e.g., [38, 39]. However, we include it here to illustrate structural parallels to the analogous results developed shortly for the symmetric Kronecker product. Applying the identities in Proposition 4.2, we see that (34) is equivalent to \[\text{vec}(A^{T}X+XA)=(A\oplus A)^{T}\,\text{vec}(X)=-\,\text{vec}(B). \tag{35}\] Thus, the ALE (34) has a unique solution if and only if \((A\oplus A)^{T}\in\text{GL}(n^{2})\). Applying Proposition 4.4, \(\sigma((A\oplus A)^{T})=\{\lambda_{i}+\lambda_{j}\mid i,j=1,\ldots,n\}\), from which the result follows. \(\blacksquare\)

**Proposition 4.10** (ALE Existence and Uniqueness of Solutions: Stable Systems [47, Proposition 5.2.1]): _Suppose \(A\in\mathbb{R}^{n\times n}\) is Hurwitz, and \(Q\in\mathbb{S}^{n}\). Consider the ALE_ \[A^{T}P+PA+Q=0. \tag{36}\] 1. _The unique solution is the symmetric matrix_ \[P=\int_{0}^{\infty}e^{A^{T}t}Qe^{At}\,dt. \tag{37}\] 2. _If \(Q\) is positive (semi)definite, then \(P\) is positive (semi)definite._ 3. _If \(Q\) is positive semidefinite, then \(P\) is positive definite if and only if \((Q^{1/2},A)\) is detectable._

**Remark 4.6** (Symmetric Kronecker Algebra of the ALE (36)): _Consider the ALE (36). Applying Proposition 4.10, we know \(P\in\mathbb{S}^{n}\). We may then apply the symmetric Kronecker product algebra in Proposition 4.3, yielding_ \[\text{svec}(A^{T}P+PA)=-\,\text{svec}(Q). \tag{38}\] Now, applying Proposition 4.3 3S), the left-hand side of (38) becomes \[2\,\text{svec}(\pi(PA))=2(A^{T}\mathop{\underline{\otimes}}I)\,\text{svec}(P)=(A \mathop{\underline{\oplus}}A)^{T}\,\text{svec}(P). \tag{39}\] Altogether, the ALE (36) is equivalent to the following: \[\text{svec}(A^{T}P+PA)=(A\mathop{\underline{\oplus}}A)^{T}\,\text{svec}(P)=-\, \text{svec}(Q). \tag{40}\] The reader is encouraged to compare Equations (35) and (40), which precisely motivates our definition of the symmetric Kronecker sum \(\mathop{\underline{\oplus}}\) as the natural analogue to the Kronecker sum \(\oplus\). The structural parallels extend further: Note by Proposition 4.6 that \(\sigma((A\mathop{\underline{\oplus}}A)^{T})=\{\lambda_{i}+\lambda_{j}\mid 1\leq i\leq j\leq n\}\). Thus, in the case \(Q\in\mathbb{S}^{n}\), the symmetric Kronecker sum may be used to characterize existence and uniqueness of solutions to the ALE (36) by an entirely similar argument to the one used in the proof of Proposition 4.9. Here, the square-symmetric nature of the matrix \(Q\in\mathbb{S}^{n}\) has enabled an effective reduction in dimensionality of the problem from \(n^{2}\) to \(\underline{n}\).
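Remark 4.6 can be verified directly in a few lines. The sketch below (helper names are ours) assembles \((A\mathop{\underline{\oplus}}A)^{T}\) from its action \(\text{svec}(S)\mapsto\text{svec}(A^{T}S+SA)\) implied by (39)-(40), solves the \(\underline{n}\)-dimensional system (40) for \(\text{svec}(P)\), and confirms that the result solves the ALE (36) and agrees with the full \(n^{2}\)-dimensional Kronecker-sum solve (35). It is an illustrative sketch with a randomly generated test matrix.

```python
import numpy as np

def svec(S):
    r, c = np.triu_indices(S.shape[0])
    return np.where(r == c, 1.0, np.sqrt(2.0)) * S[r, c]

def smat(v, n):
    S = np.zeros((n, n))
    r, c = np.triu_indices(n)
    S[r, c] = np.where(r == c, 1.0, 1.0 / np.sqrt(2.0)) * v
    return S + np.triu(S, 1).T

def sym_kron_sum_T(A):
    # (A (+)_s A)^T assembled column by column from its action
    # (A (+)_s A)^T svec(S) = svec(A^T S + S A), cf. (39)-(40).
    n = A.shape[0]
    nbar = n * (n + 1) // 2
    return np.column_stack([svec(A.T @ smat(e, n) + smat(e, n) @ A) for e in np.eye(nbar)])

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) - 3.0 * np.eye(n)     # shifted left, Hurwitz with high probability
Q = rng.standard_normal((n, n)); Q = Q @ Q.T + np.eye(n)

# Reduced nbar-dimensional solve of (40).
P = smat(np.linalg.solve(sym_kron_sum_T(A), -svec(Q)), n)
print(np.allclose(A.T @ P + P @ A + Q, np.zeros((n, n))))   # P solves the ALE (36)

# Full n^2-dimensional solve via the ordinary Kronecker sum, eq. (35), for comparison.
I = np.eye(n)
P_full = np.linalg.solve((np.kron(A, I) + np.kron(I, A)).T,
                         -Q.flatten()).reshape(n, n)
print(np.allclose(P, P_full))                               # expected: True
```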
## V Modulation-Enhanced Excitation (MEE) Framework Let a decentralized loop \(1\leq j\leq N\) be given, and suppose that \(K_{0,j}\in\mathbb{R}^{m_{j}\times n_{j}}\) is such that \(A_{ij}-B_{jj}K_{0,j}\) is Hurwitz in loop \(j\). We may then apply Kleinman's algorithm (Section II-A), yielding sequences \(\{P_{i,j}\}_{i=0}^{\infty}\) in \(\mathbb{R}^{n_{j}\times n_{j}}\) and \(\{K_{i,j}\}_{i=0}^{\infty}\) in \(\mathbb{R}^{m_{j}\times n_{j}}\) from the ALE \[A_{i,j}^{T}P_{i,j}+P_{i,j}A_{i,j}+Q_{i,j}=0. \tag{41}\] where the matrices \(A_{i,j}\) and \(Q_{i,j}\) are given by (11) and (14), respectively. We have seen, vis. (40), that the ALE (41) is equivalent to the following vectorized ALE regression \[(A_{i,j}\mathop{\underline{\oplus}}A_{i,j})^{T}\,\text{svec}(P_{i,j})=-\, \text{svec}(Q_{i,j}). \tag{42}\] Now, suppose \(S=\text{diag}(S_{1},\ldots,S_{N})\in\text{GL}(n)\), \(S_{j}\in\text{GL}(n_{j})\) (\(j=1,\ldots,N\)), is any nonsingular coordinate transformation \(\tilde{x}=Sx\), partitioned in the decentralized form \[\tilde{x}_{j}=S_{j}x_{j}. \tag{43}\] This induces the following transformed LQR problem in loop \(j\), associated with the quadruple \((\tilde{A}_{jj},\tilde{B}_{jj},\tilde{Q}_{j},R_{j})\), where \[\tilde{A}_{jj}=S_{j}A_{jj}S_{j}^{-1},\ \ \tilde{B}_{jj}=S_{j}B_{jj},\ \ \tilde{Q}_{j}=S_{j}^{-T}Q_{j}S_{j}^{-1}. \tag{44}\] By similarity, the controller \(\tilde{K}_{i,j}=K_{i,j}S_{j}^{-1}\) is such that \(\tilde{A}_{i,j}=\tilde{A}_{jj}-\tilde{B}_{jj}\tilde{K}_{i,j}\) is Hurwitz. This motivates the following modulated ALE \[\tilde{A}_{i,j}^{T}\tilde{P}_{i,j}+\tilde{P}_{i,j}\tilde{A}_{i,j}+\tilde{Q}_{ i,j}=0. \tag{45}\] Modulation by nonsingular coordinate transformations is common practice in the study of matrix equations, oftentimes offering significant theoretical/numerical advantages for purposes of solving [38]. At this point, two questions are natural: 1) How do the original sequences \(\{P_{i,j}\}_{i=0}^{\infty}\), \(\{K_{i,j}\}_{i=0}^{\infty}\) output by Kleinman's algorithm relate to the modulated sequences \(\{\tilde{P}_{i,j}\}_{i=0}^{\infty}\), \(\{\tilde{K}_{i,j}\}_{i=0}^{\infty}\)? Noting by Theorem 2.2 that dEIRL and Kleinman's algorithm are equivalent, this first question also addresses the relations between the respective sequences produced by dEIRL. And, 2) How does prescaling interact with the symmetric Kronecker product algebra developed in Section IV? That is, how does prescaling affect the terms in the ALE regression (42) and the dEIRL regression (12), and what structural parallels exist between the two? ### _Kleinman's Algorithm & Modulation_ **Theorem 5.1** (Kleinman's Algorithm: Modulation Invariance): \(P_{i,j}\in\mathbb{S}^{n_{j}}\)_, \(P_{i,j}>0\) satisfies the ALE (41) if and only if \(\tilde{P}_{i,j}=S_{j}^{-T}P_{i,j}S_{j}^{-1}\) satisfies the modulated ALE (45)._ _Proof:_ We have seen, vis. (40), that the modulated ALE (45) is equivalent to \[(\tilde{A}_{i,j}\mathop{\underline{\oplus}}\tilde{A}_{i,j})^{T}\,\text{svec}( \tilde{P}_{i,j})=-\,\text{svec}(\tilde{Q}_{i,j}). \tag{46}\] Applying the symmetric Kronecker product algebra of Proposition 4.3, we may expand (46) as \[(S_{j}\mathop{\underline{\oplus}}S_{j})^{-T}(A_{i,j}\mathop{ \underline{\oplus}}A_{i,j})^{T}(S_{j}\mathop{\underline{\oplus}}S_{j})^{T}\, \text{svec}(\tilde{P}_{i,j})\] \[=-(S_{j}\mathop{\underline{\oplus}}S_{j})^{-T}\,\text{svec}(Q_{ i,j}). 
\tag{47}\] By Proposition 4.3 5S), we may multiply both sides by \((S_{j}\mathop{\underline{\ominus}}S_{j})^{T}\in\text{GL}(\underline{n}_{j})\), yielding the equivalent regression \[(A_{i,j}\mathop{\underline{\oplus}}A_{i,j})^{T}(S_{j}\mathop{ \underline{\ominus}}S_{j})^{T}\,\text{svec}(\tilde{P}_{i,j})=-\,\text{svec}(Q_ {i,j}). \tag{48}\] However, from comparison of (48) and the symmetric vectorization of the original ALE (42), we conclude that \((S_{j}\mathop{\underline{\ominus}}S_{j})^{T}\,\text{svec}(\tilde{P}_{i,j})= \text{svec}(P_{i,j})\). Applying Proposition 4.3 again, \[(S_{j}\mathop{\underline{\ominus}}S_{j})^{T}\,\text{svec}(\tilde{P }_{i,j}) =\text{svec}(\pi(S_{j}^{T}\tilde{P}_{i,j}S_{j}))\] \[=\text{svec}(S_{j}^{T}\tilde{P}_{i,j}S_{j}). \tag{49}\] In all, \(S_{j}^{T}\tilde{P}_{i,j}S_{j}=P_{i,j}\), implying the desired result. The reverse direction follows by a symmetric argument. \(\blacksquare\) We now have a powerful answer to question 1) posed above: Kleinman's algorithm (and hence the dEIRL algorithm) is invariant with respect to nonsingular state modulation in the sense that if the sequences \(\{\tilde{P}_{i,j}\}_{i=0}^{\infty}\), \(\{\tilde{K}_{i,j}\}_{i=0}^{\infty}\) are generated under the modulated problem with potentially-improved numerics, then the solution sequences \(\{P_{i,j}\}_{i=0}^{\infty}\), \(\{K_{i,j}\}_{i=0}^{\infty}\) of the original problem may be backed out by \[P_{i,j}=S_{j}^{T}\tilde{P}_{i,j}S_{j},\qquad K_{i,j}=\tilde{K}_{i,j}S_{j}. \tag{50}\] Furthermore, the above proof also answers question 2) in the case of Kleinman's algorithm: The modulated ALE regression (46) is equivalent to (48), in which we observe the that the original ALE regression matrix \((A_{i,j}\mathop{\underline{\oplus}}A_{i,j})^{T}\in\text{GL}(\underline{n}_{j})\) (42) is multiplied on the right by the modulation matrix \((S_{j}\mathop{\underline{\ominus}}S_{j})^{T}\in\text{GL}(\underline{n}_{j})\). The regression target vector \(-\,\text{svec}(Q_{i,j})\in\mathbb{R}^{2\mu}\) is unchanged between the original regression (42) and equivalent modulated regression (48). ### _dEIRL & Modulation: MEE Framework_ Now, consider the analogue in the dEIRL algorithm. Associate with the nonsingular coordinate transformation \(S_{j}\in\text{GL}(n_{j})\) the transformed problem \((\tilde{f}_{j},\tilde{g}_{j},\tilde{Q}_{j},R_{j})\) in loop \(j\), where \[\tilde{f}_{j}=S_{j}\circ f_{j}\circ S^{-1},\quad\tilde{g}_{j}=S_{j}\circ g_{j} \circ S^{-1}. \tag{51}\] This induces the following modulated dEIRL least-squares regression, analogous to (12), which we term the MEE regression for brevity: \[\tilde{\mathbf{A}}_{i,j}\,\text{svec}(\tilde{P}_{i,j})=\tilde{\mathbf{b}}_{i,j}. \tag{52}\] The symmetric Kronecker product algebraic results developed in Section IV are essential to the derivation of the MEE regression (52). In particular: **Proposition V.1**: _The operations \(\delta_{x,y}\) (9) and \(I_{x,y}\) (10) satisfy the following:_ 1. \(\delta_{Ax,Ay}=\delta_{x,y}(A\mathop{\underline{\otimes}}A)^{T},\,A\in\mathbb{R} ^{m\times n}\)_._ 2. \(I_{Ax,Ay}=I_{x,y}(A\mathop{\underline{\otimes}}A)^{T},\,A\in\mathbb{R}^{m \times n}\)_._ 3. 
\(I_{Ax,Bx}=I_{x,x}(A\mathop{\underline{\otimes}}B)^{T},\,A,B\in\mathbb{R}^{m\times n}\)_._

_Proof:_ Follows from Proposition 4.3 6S). \(\blacksquare\)

These key algebraic properties enable the following fundamental result, the basis of our proposed MEE framework:

**Theorem 5.2** (MEE Framework & the dEIRL Algorithm: Modulation Invariance): \(P_{i,j}\in\mathbb{S}^{n_{j}}\), \(P_{i,j}>0\) satisfies the dEIRL regression (12) if and only if \(\tilde{P}_{i,j}=S_{j}^{-T}P_{i,j}S_{j}^{-1}\) satisfies the MEE regression (52). Furthermore, the original regression (12) and MEE regression (52) are related by \[\tilde{\mathbf{A}}_{i,j}=\mathbf{A}_{i,j}(S_{j}\mathop{\underline{\otimes}}S_{j})^{T},\qquad\tilde{\mathbf{b}}_{i,j}=\mathbf{b}_{i,j}. \tag{53}\]

_Proof:_ The first assertion follows immediately from Theorems 2.2 and 5.1. The relation (53) follows from application of the symmetric Kronecker product algebra developed in Propositions 4.3 and 5.1. \(\blacksquare\)

Theorem 5.2 definitively concludes our answer to question 2) posed at the beginning of this section for the dEIRL algorithm and our proposed MEE framework, revealing substantial parallels to the classical Kleinman's algorithm. Crucially, the dEIRL regression matrix \(\mathbf{A}_{i,j}\in\mathbb{R}^{l_{j}\times\underline{n}_{j}}\) (13) is multiplied on the right by the _same_ modulation matrix \((S_{j}\mathop{\underline{\otimes}}S_{j})^{T}\in\text{GL}(\underline{n}_{j})\) to form the MEE regression matrix \(\tilde{\mathbf{A}}_{i,j}\) (52). As is the case with Kleinman's algorithm, the regression target vector \(\mathbf{b}_{i,j}\in\mathbb{R}^{l_{j}}\) (14) remains unchanged under MEE. Furthermore, this vector is given by \(\mathbf{b}_{i,j}=-I_{x_{j},x_{j}}\,\text{svec}(Q_{i,j})\), which is simply the product of the integral matrix \(I_{x_{j},x_{j}}\) (10) with the ALE regression target vector \(-\,\text{svec}(Q_{i,j})\) of (42). The parallel ways in which these two algorithms interact with the symmetric Kronecker product algebra developed in this work present a significant practical advantage to real-world control designers: the same physics-based prescaling insights which readily apply to solving classical control problems may be ported directly to dEIRL's MEE framework. We summarize these key algebraic properties in Table I.

## VI Evaluation Studies

Having developed the algebraic properties of dEIRL's MEE framework, we now demonstrate how MEE may be used as an intuitive, practical tool for real-world designers. We begin by addressing the motivating linear example first presented in Section III to illustrate key MEE design principles, and then we apply these insights to a real-world HSV example in Section VI-B. In both cases, using little more than physics-based dynamical insights, MEE offers at least an order of magnitude reduction in dEIRL problem conditioning. These evaluations were performed in MATLAB R2022b, on an NVIDIA RTX 2060, Intel i7 (9th Gen) processor. All numerical integrations in this work are performed with MATLAB's adaptive ode45 solver to ensure solution accuracy. Code for the dEIRL algorithm can be found at [49].

### _Evaluation 1: Motivating Example_

Consider the motivating linear example (16) first discussed in Section III, with identical hyperparameter selections. We present the resulting peak condition number data in Table II, and the corresponding iteration-wise conditioning response in Figure 1(a).
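Before examining the conditioning results, the modulation relations above can be checked numerically with a short sketch (ours; written in Python rather than the MATLAB toolchain of [49], and the helper names `svec`, `smat`, and `svec_op` are our own). Assuming the scaled (isometric) symmetric vectorization convention, it builds the matrices of the symmetric Kronecker sum and product, verifies the vectorized ALE regression (42), and confirms that under a modulation \(S_{j}\) the regression matrix is right-multiplied by \((S_{j}\mathop{\underline{\otimes}}S_{j})^{T}\) while the target \(-\,\text{svec}(Q_{i,j})\) is unchanged, with the original solution recovered as \(P_{i,j}=S_{j}^{T}\tilde{P}_{i,j}S_{j}\) in the spirit of (50).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def svec(X):
    """Scaled symmetric vectorization (an isometry from S^n to R^{n(n+1)/2})."""
    i, j = np.tril_indices(X.shape[0])
    return np.where(i == j, 1.0, np.sqrt(2.0)) * X[i, j]

def smat(v, n):
    """Inverse of svec."""
    X = np.zeros((n, n))
    i, j = np.tril_indices(n)
    X[i, j] = v / np.where(i == j, 1.0, np.sqrt(2.0))
    return X + np.tril(X, -1).T

def svec_op(fun, n):
    """Matrix, in the svec basis, of a linear map 'fun' acting on symmetric matrices."""
    m = n * (n + 1) // 2
    return np.column_stack([svec(fun(smat(np.eye(m)[:, k], n))) for k in range(m)])

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # make the closed-loop matrix Hurwitz
Q = np.eye(n) + 0.1 * np.ones((n, n))                        # a symmetric positive definite Q

ksum = svec_op(lambda X: A @ X + X @ A.T, n)                 # symmetric Kronecker sum of A with itself
kprod = lambda S: svec_op(lambda X: S @ X @ S.T, n)          # symmetric Kronecker product of S with itself

P = solve_continuous_lyapunov(A.T, -Q)                       # ALE: A^T P + P A + Q = 0
assert np.allclose(ksum.T @ svec(P), -svec(Q))               # vectorized ALE regression (42)

S = np.diag([1.0, 10.0, 0.1])                                # an example diagonal state modulation
Pt = np.linalg.inv(S).T @ P @ np.linalg.inv(S)               # candidate modulated solution
assert np.allclose(ksum.T @ kprod(S).T @ svec(Pt), -svec(Q)) # matrix rescaled, target unchanged
assert np.allclose(S.T @ Pt @ S, P)                          # recovery of P, as in (50)
print("svec is an isometry:", np.isclose(np.linalg.norm(svec(P)), np.linalg.norm(P, "fro")))
```

The same construction can be reused offline to inspect how a candidate modulation changes the conditioning of a given regression matrix before any learning is run.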
As noted previously, the EIRL algorithm converges to within \(1.62\times 10^{-9}\) of the optimal \(K^{*}\); however, it exhibits large peak condition number of 138.47 (Table II). This is caused by saturation constraints in the low-bandwidth loop \(j=2\), which result in a factor of ten separation between the state response \(x_{2}(t)\) in the low-bandwidth loop \(j=2\) and the response \(x_{1}(t)\) in the high-bandwidth loop \(j=1\). Intuition offers a clear solution: The state response \(x_{2}(t)\) in the low-bandwidth loop needs to be scaled up by a factor of ten to improve scaling. This is precisely where MEE offers immense practical numerical benefits to designers. Indeed, choosing the natural modulation matrix \(S=\texttt{diag}(1,10)\in\text{GL}^{+}(2)\) drastically improves EIRL conditioning, reducing it by a factor of ten from 138.47 before MEE to 14.05 after MEE (Table II), a reduction seen iteration-wise across the board (Figure 1(a)). Thus, using little beyond common-sense principles, MEE can offer conditioning reductions of an order of magnitude to designers using the EIRL/dEIRL algorithm, cementing this framework's already substantial numerical performance guarantees [37]. We conclude this section by employing a decentralized design (i.e., dEIRL) in each of the \(N=2\) loops. Using identical hyperparameters, the resulting final controllers \(K_{i^{*},1}\), \(K_{i^{*},2}\) converge to within \(1.38\times 10^{-11}\) and \(1.49\times 10^{-9}\) of the optimal controllers \(K_{1}^{*}\), \(K_{2}^{*}\) in each loop, respectively. Furthermore, dEIRL has unity conditioning in each loop (since the dimension of each is \(\underline{n}_{1}=\underline{n}_{2}=1\)), illustrating the general principle that dEIRL's use of physically-motivated dynamical insights enables further learning performance improvements. ### _Evaluation 2: Hypersonic Vehicle (HSV) Example_ Having now motivated the significant numerical benefits of MEE on an illustrative example in Section VI-A, we now demonstrate how these principles may be readily applied to a real-world nonlinear, nonminimum phase HSV system. The HSV model considered was developed from NASA Langley aeropropulsive data [50] and has proven a standard testbed for seminal works such as [51, 52, 53, 54]. We offer a complete analysis of the model in the original dEIRL work [37], so we omit the dynamical equations and discussion here for sake of brevity. In sum, the HSV is fifth-order, with states \(x=\left[V,\,\gamma,\,\theta,\,q,\,h\right]^{T}\), where \(V\) is the vehicle airspeed, \(\gamma\) is the flightpath angle (FPA), \(\theta\) is the pitch attitude, \(q\) is the pitch rate, and \(h\) is the altitude. The controls are \(u=\left[\delta_{T},\,\delta_{E}\right]^{T}\), where \(\delta_{T}\) is the throttle setting, and \(\delta_{E}\) is the elevator deflection. We examine the outputs \(y=\left[V,\,\gamma\right]^{T}\). The HSV is naturally a two-loop system consisting of the weakly-coupled velocity subsystem \(j=1\) (associated with the airspeed \(V\) and throttle control \(\delta_{T}\)) and rotational subsystem \(j=2\) (associated with the FPA \(\gamma\), attitude \(\theta,q\), and elevator control \(\delta_{E}\)). For decentralized design, we augment the plant at the output with the integrator bank \(z=\int y\,d\tau=\left[z_{V},\,z_{\gamma}\right]^{T}=\left[\int V\,d\tau,\,\int \gamma\,d\tau\right]^{T}\). 
The state/control vectors are thus partitioned as \(x_{1}=\left[z_{V},\,V\right]^{T}\), \(u_{1}=\delta_{T}\) (\(n_{1}=2\), \(m_{1}=1\)) and \(x_{2}=\left[z_{\gamma},\,\gamma,\,\theta,\,q\right]^{T}\), \(u_{2}=\delta_{E}\) (\(n_{2}=4\), \(m_{2}=1\)). Running dEIRL with identical hyperparameter selections to those enumerated in [37, Section VII-A], the resulting final controllers \(K_{i^{*},1}\), \(K_{i^{*},2}\) converge to within \(1.07\times 10^{-6}\) and \(2.85\times 10^{-5}\) of the optimal controllers \(K_{1}^{*}\), \(K_{2}^{*}\) in each loop, respectively - a significant synthesis guarantee for this real-world aerospace example. We include the max/min conditioning data in Table III and corresponding conditioning response in Figure 2b. As a technical note, the numerical conditioning data presented here varies slightly from that of the original dEIRL study [37] due to our re-scaling the map suc (22) to make this operator an isometry (cf. Proposition IV.1). Examination of Table III shows that worst-case conditioning is already acceptable in the velocity loop \(j=1\) at 124.38. Thus, no modulation \(S_{1}=I_{2}\) in loop \(j=1\) is necessary. However, conditioning in the higher-dimensional, unstable, nonminimum phase FPA loop \(j=2\) is worse at 5517.97. Although this represents a substantial reduction of fourteen orders of magnitude from prevailing ADP-based CT-RL methods [36, 37], conditioning reductions in this loop are still desired for real-world numerical reliability. Furthermore, just as in the motivating example studied in Section VI-A, a few minutes of investigation yields a physically-intuitive explanation of the cause of the conditioning issue. Within the FPA loop \(j=2\) is the FPA subsystem \(\gamma\) itself (stable, nonminimum phase), alongside the attitude subsystem \(\theta,q\) (unstable, minimum phase). The FPA dynamics have a bandwidth roughly a decade below that of the attitude dynamics. As a result, the pitch \(\theta\) generally exhibits larger responses than the FPA, and the pitch rate \(q\) by virtue of differentiation magnifies this response amplitude discrepancy. Fig. 2: Conditioning number \(\kappa(\mathbf{A}_{i,j})\) (13) versus iteration count \(i\), with and without prescaling. (a): Linear second-order system (Section VI-A). (b): HSV system (Section VI-B). As in the simple linear example, the designer course of action is clear here: The attitude states \(\theta,q\) need to be scaled down to equilibrate their amplitudes with that of the FPA response \(\gamma\) and thereby improve scaling in the regression matrix \(\mathbf{A}_{i,2}\) (13). Generally, it is common for angular state variables to be expressed in degrees for the sake of flight control implementation [48, 55, 56, 57]. Thus, a remedy a designer may likely choose is to simply convert the pitch \(\theta\) and pitch rate \(q\) to radians for the purposes of the MEE regression (52), while keeping the FPA \(\gamma\) and integral augmentation \(z_{\gamma}\) in degrees: \(S_{2}=\texttt{diag}(1,1,\pi/180,\pi/180)\in\text{GL}^{+}(4)\). After the MEE regression is complete, the pitch \(\theta\) and pitch rate \(q\) may then be converted back to degrees for control implementation via the inverse transformation \(S_{2}^{-1}\)in (50) while preserving the convergence/stability of the resulting controller, a result guaranteed by the MEE framework in Theorem 5.2. We include this MEE conditioning data in the FPA loop \(j=2\) in Table III and Figure 2b. 
As can be seen, this simple radians/degrees conversion reduces worst-case conditioning by factor of 25 from 5517.97 without MEE to 220.13 with MEE, a conditioning reduction observed iteration-wise across the board in Figure 2b. In light of the higher dimension and dynamical challenges associated with the FPA loop \(j=2\), a near equalization of the conditioning in this loop with that of the velocity loop \(j=1\) is a substantial real-world numerical result. Whereas in our previous study we illustrated the motivation, method, and results of MEE on a simple academic example, here we show definitively that the same first-principles intuitions of the dynamics may be extended to MEE on significant, challenging practical applications - with potentially even greater factors of performance improvement. We demonstrate how MEE may be used systematically in conjunction with decentralization and multi-injection, equipping designers with an unrivaled suite of practical numerical capabilities. ## VII Conclusion & Discussion This work presents a novel modulation-enhanced excitation (MEE) framework to address fundamental PE issues in continuous-time reinforcement learning control. We apply this MEE framework to the cutting-edge suite of EIRL algorithms, enabling numerical performance enhancements while preserving their key convergence/stability guarantees via new symmetric Kronecker product algebra. Using simple design principles, MEE is demonstrated to improve conditioning properties of dEIRL by at least an order of magnitude in numerical studies - by a factor of 25 on the significant real-world hypersonic vehicle example. When MEE is combined with the multi-injection and decentralization of dEIRL, this method now offers a three-pronged designer approach for maximizing algorithm numerical performance, enabling control synthesis results unprecedented in CT-RL [36, 37]. To enable the MEE framework, we present novel results on the symmetric Kronecker product [40, 41, 42, 43, 44]. This work also motivates the concept of the symmetric Kronecker sum, which we demonstrate is the natural analogue to its standard counterpart in its algebraic, spectral, and exponentiation properties, as well as its central role in solving ALEs.
2309.13246
Can I Trust the Explanations? Investigating Explainable Machine Learning Methods for Monotonic Models
In recent years, explainable machine learning methods have been very successful. Despite their success, most explainable machine learning methods are applied to black-box models without any domain knowledge. By incorporating domain knowledge, science-informed machine learning models have demonstrated better generalization and interpretation. But do we obtain consistent scientific explanations if we apply explainable machine learning methods to science-informed machine learning models? This question is addressed in the context of monotonic models that exhibit three different types of monotonicity. To demonstrate monotonicity, we propose three axioms. Accordingly, this study shows that when only individual monotonicity is involved, the baseline Shapley value provides good explanations; however, when strong pairwise monotonicity is involved, the Integrated gradients method provides reasonable explanations on average.
Dangxing Chen
2023-09-23T03:59:02Z
http://arxiv.org/abs/2309.13246v1
Can I Trust the Explanations? Investigating Explainable Machine Learning Methods for Monotonic Models ###### Abstract. In recent years, explainable machine learning methods have been very successful. Despite their success, most explainable machine learning methods are applied to black-box models without any domain knowledge. By incorporating domain knowledge, science-informed machine learning models have demonstrated better generalization and interpretation. But do we obtain consistent scientific explanations if we apply explainable machine learning methods to science-informed machine learning models? This question is addressed in the context of monotonic models that exhibit three different types of monotonicity. To demonstrate monotonicity, we propose three axioms. Accordingly, this study shows that when only individual monotonicity is involved, the baseline Shapley value provides good explanations; however, when strong pairwise monotonicity is involved, the Integrated gradients method provides reasonable explanations on average. explainable machine learning, monotonicity, fairness, neural networks + Footnote †: journal: Information Systems ## 1. Introduction In recent decades, machine learning (ML) models have achieved many successes. In comparison with traditional methods, machine learning models are often capable of increasing accuracy at the expense of black-box functionality. The importance of model explanation is particularly important for highly regulated industries such as the financial sector (Beng et al., 2016). As an example, the Consumer Financial Protection Bureau (CFPB) confirmed that anti-discrimination law requires companies to provide detailed explanations when denying an application for credit when using machine learning methods 1. In response to the growing regulatory requirements, researchers are investigating explainable machine learning methods. Footnote 1: [https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-model-using-complex-algorithms?](https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-model-using-complex-algorithms?) Explainable machine learning (XML) methods have been successfully used in the past to achieve great success in machine learning. Popular used methods include SHapley Additive exPlanations (SHAP) (Krishna et al., 2017), Local Interpretable Model-Agnostic Explanations (LIME) (Krishna et al., 2018), Integrated Gradients (IG) (Zhou et al., 2018), Anchors (Zhou et al., 2018), and Sensitivity-based methods (Krishna et al., 2018). As a result of these methods, we have gained a better understanding of how ML models function (Beng et al., 2016; Chen et al., 2017; Chen et al., 2018). In this paper, we address the attribution problem, which involves allocating the prediction score of a model for a given input to its base features. The attribution to a base feature can be understood as the importance of the feature in the prediction. Credit scoring, for instance, can utilize attribution to understand how each feature contributes to the credit score. The SHAP and IG have been successfully applied to the attribution problem. Further, their use has been demonstrated to comply with a number of theoretical properties, as discussed in (Krishna et al., 2017; Krishna et al., 2018; Zhou et al., 2018; Zhou et al., 2018). 
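To make the two attribution methods discussed above concrete, the following sketch (a toy construction of ours, not one of the models or datasets studied in this paper) computes Integrated Gradients along the straight-line path from a baseline, and the baseline Shapley value by exact enumeration of feature coalitions, for a small monotonically decreasing score in which the first feature is penalised more heavily than the second. Both satisfy completeness, i.e., the attributions sum to \(f(x)-f(x^{\prime})\); for this additively separable toy score the two methods coincide, although they differ in general.

```python
import numpy as np
from itertools import combinations
from math import factorial

def f(x):
    """Toy monotonic score: each feature decreases the output, and feature 0
    (e.g., a more serious delinquency) is weighted more heavily than feature 1."""
    x = np.atleast_2d(x)
    return 5.0 - 2.0 * x[:, 0] - 1.0 * x[:, 1] - 0.5 * x[:, 2] ** 2

def integrated_gradients(f, x, baseline, steps=200):
    """IG_i = (x_i - x'_i) * integral_0^1 df/dx_i(x' + a(x - x')) da, trapezoid rule."""
    a = np.linspace(0.0, 1.0, steps + 1)[:, None]
    path, eps = baseline + a * (x - baseline), 1e-5
    grads = np.zeros_like(path)
    for i in range(x.size):                       # central-difference partial derivatives
        d = np.zeros(x.size); d[i] = eps
        grads[:, i] = (f(path + d) - f(path - d)) / (2 * eps)
    return (x - baseline) * 0.5 * (grads[:-1] + grads[1:]).mean(axis=0)

def baseline_shapley(f, x, baseline):
    """BShap: Shapley value with v(S) = f(features in S at x, the rest at the baseline)."""
    n, phi = x.size, np.zeros(x.size)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                z = baseline.copy(); z[list(S)] = x[list(S)]
                v_without = f(z)[0]
                z[i] = x[i]
                phi[i] += w * (f(z)[0] - v_without)
    return phi

x, x0 = np.array([1.0, 3.0, 2.0]), np.zeros(3)
print("IG   :", integrated_gradients(f, x, x0))   # attributions sum to f(x) - f(x0) = -7
print("BShap:", baseline_shapley(f, x, x0))
```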
While extensive analyses have been conducted, the majority of results have been based on black-box machine learning models without domain knowledge. Modeling and ensuring conceptual soundness require domain knowledge: the true model should be consistent with the underlying theories. A number of studies have demonstrated that physics-informed machine learning (PIML) (Krishna et al., 2018; Krishna et al., 2018) improved black-box machine learning models in terms of interpretation and accuracy by enforcing conservation laws, for example. Finance and other applications often require monotonicity. A person's credit score should be decreased when there is one more past due balance on the account, for example. It is possible to achieve better generalization and interpretation when monotonicity is successfully enforced (Krishna et al., 2017; Krishna et al., 2018; Zhou et al., 2018; Zhou et al., 2018). Such models can be categorized as finance-informed machine learning models (FIML) or more generally science-informed machine learning models (SIML). In addition, monotonicity is often associated with fairness. As an example, with other factors being equal, a person with more past dues should have a lower credit score. The violation of monotonicity may result in unfair consequences that could cause damage to our society. In this paper, we ask the following question. **Can attribution methods deliver consistent scientific explanations if SIML models contain certain scientific knowledge? If so, to what extent?** Specifically, do attribution methods preserve monotonicity for monotonic ML models? In the past, monotonicity has been considered for XML methods. In (Zhou et al., 2018), it is shown that among different SHAP methods, the baseline Shapley value (BShap) method, which is a generalization of Shapley-Shubik (Shubik, 2018), preserves demand individual monotonicity. However, as recently highlighted in (Beng et al., 2016; Chen et al., 2017; Chen et al., 2018), individual monotonicity is not the only problem to be addressed; pairwise monotonicity is just as important. The concept of pairwise monotonicity refers to the comparison of features within a given pair. For instance, a past due of more than 60 days should be considered more serious than one of less than 60 days. Pairwise monotonicity is a requirement of fairness that is informed by domain knowledge. It is unfortunate that pairwise monotonicity has been neglected in the existing literature. This paper extends monotonicity results to a broader range of cases. As a summary, we have made the following contributions: 1. We propose three new axioms concerning the average preservation of three types of monotonicity. Accordingly, a good attribution method should satisfy these axioms if the underlying model is monotonic.
2309.15637
Platinum-Decorated Graphene: Experimental Insight into Growth Mechanisms and Hydrogen Adsorption Properties
The potential of graphene for hydrogen storage, coupled with the established role of Platinum as a catalyst for the hydrogen evolution reaction and the spillover effect, makes Pt-functionalized graphene a promising candidate for near-ambient hydrogen storage. This paper focuses on examining the process of Pt cluster formation on epitaxial graphene and assesses the suitability of the system as a hydrogen storage material. Scanning tunneling microscopy unveils two primary pathways for Pt cluster growth. In the initial phase, up to ~1 ML of Pt coverage, Pt tends to randomly disperse and cover the graphene surface, while the cluster height remains essentially unchanged. Beyond a coverage of 3 ML, the nucleation of new layers on existing clusters becomes predominant. Then, the clusters mainly grow in height. Thermal desorption spectroscopy on hydrogenated Pt-decorated graphene reveals the presence of multiple hydrogen adsorption mechanisms, manifested as two Gaussian peaks superimposed on a linearly increasing background. We attribute the first peak at 150°C to hydrogen physisorbed on the surface of Pt clusters. The second peak at 430°C is attributed to chemisorption of hydrogen on the surface of the clusters, while the linearly increasing background is assigned to hydrogen bonded in the bulk of the Pt clusters. These measurements demonstrate the ability of Pt-functionalized graphene to store molecular hydrogen at temperatures that are high enough for stable hydrogen binding at room temperature.
Letizia Ferbel, Stefano Veronesi, Ylea Vlamidis, Antonio Rossi, Leonardo Sabattini, Camilla Coletti, Stefan Heun
2023-09-27T13:12:26Z
http://arxiv.org/abs/2309.15637v1
Platinum-Decorated Graphene: Experimental Insight into Growth Mechanisms and Hydrogen Adsorption Properties ###### Abstract The potential of graphene for hydrogen storage, coupled with the established role of Platinum as a catalyst for the hydrogen evolution reaction and the spillover effect, makes Pt-functionalized graphene a promising candidate for near-ambient hydrogen storage. This paper focuses on examining the process of Pt cluster formation on epitaxial graphene and assesses the suitability of the system as hydrogen storage material. Scanning tunneling microscopy unveils two primary pathways for Pt cluster growth. In the initial phase, up to ~1 ML of Pt coverage, Pt tends to randomly disperse and cover the graphene surface, while the cluster height remains essentially unchanged. Beyond a coverage of 3 ML, the nucleation of new layers on existing clusters becomes predominant. Then, the clusters mainly grow in height. Thermal desorption spectroscopy on hydrogenated Pt-decorated graphene reveals the presence of multiple hydrogen adsorption mechanisms, manifested as two Gaussian peaks superimposed on a linearly increasing background. We attribute the first peak at 150degC to hydrogen physisorbed on the surface of Pt clusters. The second peak at 430degC is attributed to chemisorption of hydrogen on the surface of the clusters, while the linearly increasing background is assigned to hydrogen bonded in the bulk of the Pt clusters. These measurements demonstrate the ability of Pt-functionalized graphene to store molecular hydrogen at temperatures that are high enough for stable hydrogen binding at room temperature. Graphene, Platinum, Hydrogen, Energy Storage, Metal Functionalization ## 1 Introduction Hydrogen stands out as the most promising renewable energy carrier, offering a viable alternative to fossil fuels [1]. However, its widespread adoption faces several technical challenges, with storage being a significant obstacle [1-3]. To address this issue, solid state storage solutions have been explored, aiming to enable high-density hydrogen storage through physical or chemical means at near-ambient pressure and temperature [3, 4]. Graphene, with its remarkable chemical stability, lightweight nature, large surface area, and favorable physical-chemical properties for hydrogen adsorption, has emerged as a particularly captivating option. Although pristine graphene has a limited capacity for storing molecular hydrogen, especially under near-ambient conditions, the introduction of metal functionalization opens up the possibility to achieve high gravimetric densities [5-7]. Several studies, both theoretical and experimental, have examined the potential of metal functionalization of graphene for hydrogen storage using various metals. Alkaline earth metals [8, 9] and alkali metals [10-12] tend to disperse on the graphene surface or readily intercalate the graphene sheet. However, they often form metal hydrides, posing challenges for reversible hydrogen storage [13]. In contrast, transition metals (TMs) exhibit a favorable interaction known as Kubas interaction, enabling them to bind hydrogen molecules at energies conducive to room temperature storage [14-22]. TMs have also been recognized as catalysts for the dissociation of H\({}_{2}\) molecules, enabling the spillover effect when appropriately supported [23, 24]. However, TMs have a greater tendency to cluster, reducing the available active metal surface area and thereby decreasing the number of hydrogen binding sites. 
Consequently, in practice the storage capacity is lower than what is theoretically calculated [7, 25, 26]. Among transition metals, Platinum has long been regarded as the most efficient catalyst for the hydrogen evolution reaction and for the spillover effect [27, 28]. However, the commercial potential of this catalyst is limited by its high cost and limited availability. To address this, a promising strategy involves the use of single metal atoms or small metal clusters dispersed on supports, effectively reducing the required amount of material while maintaining the catalytic activity [29, 30]. Ab-initio calculations employing density functional theory (DFT) have demonstrated that the adsorption of Pt atoms on graphene enhances the binding energy of hydrogen molecules, from physisorption (with approximate binding energy of -0.1 eV) [6, 27, 31] in pristine graphene to chemisorption (with approximate binding energy of -0.8 eV) [31] in Pt-decorated graphene. Remarkably, it results that each Pt atom has the ability to bind up to four hydrogen molecules, in the atomic form, thereby enabling the potential for achieving high gravimetric densities at near-ambient conditions [31, 32]. The purpose of this paper is to experimentally investigate the properties and capacity of graphene functionalized with Pt for hydrogen storage applications and to verify the theoretical predictions. ## 2 Materials and Methods Epitaxial graphene was obtained by thermal decomposition of 6H-SiC(0001) crystals at high-temperature in a BM reactor (Aikron) under Ar atmosphere. With this method, the graphene surfaces consist of a mixture of monolayer (ML) and bilayer (BL) graphene, with ML coverage of at least 70%. The quality, composition, homogeneity, and precise thickness of the graphene was first evaluated by atomic force microscopy (AFM) and Raman spectroscopy. Typical morphology of the graphene samples used for this study are reported in Fig. S1 in the Supplementary Information. All experiments reported here were conducted under ultra-high vacuum (UHV) conditions, maintaining a base pressure better than 5x10\({}^{-11}\) mbar. Prior to Pt deposition, the graphene samples were annealed, via direct current heating, at 600\({}^{\circ}\)C for several hours, followed by 10 minutes at 800\({}^{\circ}\)C to eliminate adsorbents and achieve clean surfaces. All temperatures were measured using a thermocouple mounted on the sample holder, directly in contact with the sample, and cross-calibrated with a pyrometer. The high quality of the pristine graphene films was confirmed by atomically resolved scanning tunneling microscopy (STM) images. The scanning tunneling microscope employed for these experiments is a VT-RHK-STM, working in constant current mode. Gwyddion software package was used to analyze the STM images [33]. Platinum deposition on graphene was carried out at room temperature using a commercial electron-beam evaporator, with the Pt coverage calibrated via STM imaging. Hydrogenation of Pt clusters on graphene for thermal desorption spectroscopy (TDS) measurements was accomplished by exposing them to molecular deuterium for 5 min at a pressure of 1x10\({}^{7}\) mbar at 25\({}^{\circ}\)C. Deuterium (D\({}_{2}\), mass 4) is chemically identical to hydrogen (H\({}_{2}\), mass 2). Still, it is less abundant in the residual atmosphere of the vacuum chamber, thus leading to a better signal-to-noise ratio in TDS. 
For the TDS measurements, the samples were positioned in front of a mass spectrometer (SRSR-RGA) and heated at a constant rate of ~10\({}^{\circ}\)C/s to the target temperature (either 400\({}^{\circ}\)C or 750\({}^{\circ}\)C), while recording the mass 4-channel of the mass spectrometer. ## 3 Results and Discussion First, we analyzed the Pt growth on graphene. In Fig. 1(a) we show the Pt area coverage (i.e., the amount of graphene surface area covered by Pt clusters with respect to the total area scanned, namely (100 nm x 100 nm)) versus the total volume of the Pt clusters, upon room temperature Pt deposition at constant flux. Herein, 1 ML is defined as the Pt atom density in Pt(111). A lattice constant of 0.392 nm yields 1 ML = 1.52\(\times\)10\({}^{15}\) atoms/cm2 [34]. Whereas the interlayer distance is about 0.23 nm. Figure 1(a)-(c) shows three STM images of the epitaxial graphene surface with 0.53 ML, 1.39 ML, and 5.4 ML of Pt. These scans are representative of the surface morphology in the Pt growth regimes that we identified and which will be discussed in the following. Up to a Pt coverage of about 1 ML, Pt particles tend to randomly disperse on the graphene surface. The cluster density increases from just a few clusters to about 185 clusters per 100x100 nm2, while their average height remains essentially unchanged. A typical surface morphology in this regime is shown in Fig. 1(b). This scan corresponds to a surface with 0.53 ML of Pt covering 26% of the graphene area. The Pt clusters have diameters between 1.3 nm and 10 nm, and the resulting average diameter is ~4 nm, while the cluster height spreads from 0.4 nm to 2 nm, resulting in an average height of ~0.6 nm. In this growth regime, we observe a linear relation between Pt volume and area coverage with intercept zero. Afterwards, for Pt coverages > 3 ML the clusters merge, and due to coalescence their density decreases from ~35 clusters up to a single cluster per 100x100 nm2, at full area coverage, while their average height increases remarkably. With a Pt content of 5.4 ML, thus 93% area coverage (as shown in Fig. 1 (d)), the average height increases to 1.5 nm, while the maximum height reaches 4.5 nm. Notably, also in this regime we observe a linear relation between Pt volume and area coverage but with a reduced slope and intercept at about 50% area coverage. For large volume, the curve asymptotically goes to 100% area coverage. These two regimes of growth kinetics can be better understood if we take into account the strength of interaction of Pt with the substrate (i.e., graphene) as opposed to Pt-Pt (the cohesive energy). It is important to emphasize that no Pt intercalation was observed during room temperature deposition. At low coverage, the growth of the clusters primarily involves competition between adsorption and desorption on the graphene surface. Additionally, if all evaporated Pt atoms adhered to the graphene surface and the film would grow with a layer-by-layer mechanism, we would have reached area coverage saturation at 1 ML. However, the linear fit for the first growth regime yields full coverage at ~1.85 ML, indicating that during the initial stages of growth the average cluster height is almost 2 layers, "approximately consistent" with the measured value of 0.6 nm. At higher coverage, the uncovered graphene surface area diminishes, and the dominant mechanism of growth shifts to nucleation of new layers on pre-existing clusters. 
Furthermore, since the flux of the evaporator is maintained constant, the non-zero intercept obtained from a linear fit of the second growth regime supports and implies that the Pt-graphene interaction strength is lower than that of Pt-Pt. This is indeed consistent with the reported values of Pt-Pt cohesive energy of 5.84 eV [35] which is more than two times larger than 2.16 eV, the computed binding energy of Pt on graphene [31]. We then analyzed the variation of hydrogen adsorption on Pt-covered graphene with increasing Pt content. For this measurement, we consistently evaporated increasing Pt amounts onto the sample. After each Pt deposition, we exposed the sample to molecular hydrogen (D2) and performed TDS. The sample was heated to 400\({}^{\circ}\)C, a temperature at which the shape and distribution of the Pt islands remained largely unchanged, as confirmed by STM analysis. Higher temperatures lead to significant and non-reversible morphological changes (refer to Fig. S2 in the Supplementary Information), thus the choice to stop TDS at ~400\({}^{\circ}\)C. Figure 2 reports the resulting temperature-dependent desorption curves of D2 as a function of Pt coverage. Figure 1: (a) Area coverage [%] vs. Total cluster volume [ML]. All data points are the result of a statistical analysis of several STM images of size (100 nm \(\times\) 100 nm). The error is calculated as the standard deviation for each dataset. Bottom: STM scans of (100 nm \(\times\) 100 nm) areas representative of the surface with a Pt content (b) 0.53 ML - 26% (green), (c) 1.39 ML - 49% (blue), and (d) 5.4 ML - 93% (red). Colors for the frames of STM images (b)-(d) have been used accordingly to the colors of the data points highlighted in (a). Scale bar: 20 nm. Figure 2: TDS spectra of hydrogen desorption from Pt clusters supported on graphene. Inset: Integral intensity under the TDS spectra (from 25\({}^{\circ}\)C to 400\({}^{\circ}\)C) as a function of Pt coverage. Pt coverage ranging from 0 to 6.34 ML. Pure graphene exposed to molecular hydrogen showed little (see 0% in Fig. 2) to no D2 adsorption (refer to Fig. 51(d) in the Supplementary Information), consistent with the fact that molecular hydrogen does not stick to defect-free graphene at room temperature [6]. The D2 desorption spectra of Pt-clusters supported on graphene shown in Fig. 2, exhibit a peak at ~150degC followed by a broad shoulder of increasing signal. As the Pt coverage increases, the peak at 150degC in the desorption spectra becomes more pronounced, and the full integrated desorption signal continues to rise steadily, as demonstrated in the inset to Fig. 2. However, it eventually levels off around 87% of Pt area coverage. For a more quantitative and detailed analysis of the desorption spectra of Pt-covered graphene, we performed a TDS experiment up to 750degC. For this purpose, we prepared a sample with a Pt coverage of 1.4 ML, corresponding to a Pt area coverage of 49% (see Fig. 1(c)). The TDS data is shown in Fig. 3. Comparing the first part of the TDS spectrum up to 400degC in Fig. 3 with the TDS spectra discussed above and presented in Fig. 2, we find consistent results between the two samples and can, again, identify a maximum in the desorption spectrum at ~150degC followed by a broad shoulder, which can now be resolved as an additional desorption peak. 
The line profile analysis of this TDS spectrum actually reveals the presence of two Gaussian hydrogen desorption peaks, labeled as \(\alpha\) and \(\beta\), superimposed on a linearly increasing background. The \(\alpha\)-peak is centered at T\({}_{\alpha}\) = (155\(\pm\)7)\({}^{\circ}\)C, while the \(\beta\)-peak is centered at T\({}_{\beta}\) = (432\(\pm\)13)\({}^{\circ}\)C. Assuming first-order desorption, we can use the relation \(A\,t_{d}\,e^{-E_{d}/(k_{B}T_{d})}=1\), i.e., \(E_{d}=k_{B}T_{d}\ln(A\,t_{d})\) [36], to evaluate an approximate desorption energy barrier (E\({}_{d}\) with \(d=\alpha,\beta\)). Here, \(t_{d}\) is the time from the start of the desorption ramp to the moment at which the desorption temperature T\({}_{d}\) is reached, and k\({}_{\rm B}\) is the Boltzmann constant. We use for the Arrhenius constant (A) a typical value of 10\({}^{12.3}\) s\({}^{-1}\). Then, we obtain E\({}_{\alpha}\) = 1.14 eV/molecule (0.57 eV/atom) and E\({}_{\beta}\) = 1.96 eV/molecule (0.98 eV/atom). Using the Redhead equation [37] we obtain similar results for the desorption barriers, E\({}_{\alpha}\) = 0.71 eV/atom and E\({}_{\beta}\) = 1.14 eV/atom. The origin of these multiple hydrogen desorption peaks can be better understood by comparing these data with the literature. Molecular hydrogen can spontaneously dissociate into individual H atoms on the platinum surface [31, 32]. These H atoms can then chemisorb on the surface of the Pt clusters. Depending on the number of Pt atoms forming the active cluster and the Pt-support interaction, the strength of the bond between Pt and H atoms varies. The upper limit is set by an isolated Pt atom, which can chemisorb up to 8 H atoms with adsorption energies in the range from -1.47 eV to -0.62 eV per H atom [31]. As these adsorption energies are reduced by the size of the cluster and the interaction of the cluster with the substrate [38], here epitaxial graphene grown on 6H-SiC(0001), they are in good agreement with the desorption energy we obtained for the \(\beta\)-peak. Thus, we can assign the \(\beta\)-peak to H atoms chemisorbed on the surface of the Pt clusters. The relatively large width of the peak is consistent with this interpretation. Hydrogen can alternatively physisorb on the Pt cluster or chemisorb on the graphene surface by the spillover effect. TDS measurements of H\({}_{2}\) adsorption on Pt(111) reported the occurrence of physisorption as a single peak located at around 150-200\({}^{\circ}\)C [39, 40]. Chemisorption of atomic hydrogen on graphene has been reported to fall in the same energy range [41, 42]. For the latter to happen, the H\({}_{2}\) molecule dissociated by the Pt cluster has to leave the cluster and diffuse as an atom on the graphene substrate to form C-H bonds. According to theory, H can migrate from the metal cluster to the support only if this is thermodynamically more favorable. Theoretical works have reported activation energies for this process above 2 eV, thus concluding that this effect is unlikely to happen [31, 43, 44]. At the same time, there are other theoretical proposals and experimental reports that reveal the occurrence of spillover in TM-doped graphene [45, 46, 47, 48, 49]. In any case, since the spillover effect requires both an active catalyst and a support, the H\({}_{2}\) desorption peak associated with it must decrease to zero as the Pt reaches full coverage of the graphene surface.
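As an illustration of the line-profile analysis and the desorption-energy estimate described above, the following sketch (ours; the synthetic spectrum only mimics the shape of the measured data and is not the experimental curve) fits two Gaussian peaks plus a linear background with `scipy.optimize.curve_fit`, and converts the fitted peak temperatures into first-order desorption barriers via \(E_{d}=k_{B}T_{d}\ln(A\,t_{d})\). The prefactor and ~10 °C/s heating rate are the values assumed in the text; the resulting energies depend strongly on these assumptions and should only roughly reproduce the barriers quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

KB = 8.617e-5  # Boltzmann constant [eV/K]

def tds_model(T, a1, t1, s1, a2, t2, s2, c0, c1):
    """Two Gaussian desorption peaks superimposed on a linearly increasing background."""
    g = lambda a, t, s: a * np.exp(-0.5 * ((T - t) / s) ** 2)
    return g(a1, t1, s1) + g(a2, t2, s2) + c0 + c1 * T

# synthetic spectrum standing in for the measured one (temperature axis in Celsius)
T = np.linspace(25, 750, 600)
rng = np.random.default_rng(1)
y = tds_model(T, 1.0, 155, 40, 0.8, 432, 80, 0.05, 4e-4) + 0.02 * rng.standard_normal(T.size)

p0 = [1, 150, 30, 1, 430, 60, 0, 1e-4]              # rough initial guesses for the fit
popt, _ = curve_fit(tds_model, T, y, p0=p0)
T_alpha, T_beta = popt[1], popt[4]

def desorption_energy(T_peak, prefactor=10**12.3, ramp=10.0, T_start=25.0):
    """First-order estimate E_d = k_B * T_d * ln(A * t_d), with t_d the time needed to
    ramp from T_start to the peak temperature at the given heating rate [deg C / s]."""
    t_d = (T_peak - T_start) / ramp
    return KB * (T_peak + 273.15) * np.log(prefactor * t_d)

print(f"alpha peak: {T_alpha:5.1f} C  ->  E ~ {desorption_energy(T_alpha):.2f} eV/molecule")
print(f"beta  peak: {T_beta:5.1f} C  ->  E ~ {desorption_energy(T_beta):.2f} eV/molecule")
```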
Since the \(\alpha\)-peak does not disappear at full area coverage, therefore spillover is not likely to be the dominant mechanism responsible for this peak. Thus, we assign it to physisorbed hydrogen on the surface of Pt clusters. Finally, for what concerns the linearly increasing background, we suggest that it is due to a "delayed effect" of desorption of hydrogen absorbed in the bulk of the Pt clusters [50]. This contribution should scale with the volume of the clusters, as the distance that the hydrogen has to travel before reaching the surface increases accordingly. As such, we should observe a significant increase Figure 3: TDS spectrum of hydrogen desorption from Pt clusters supported on graphene. Two Gaussian functions (red and blue curves) and a linear background (green curve) have been used for the fitting procedure. of this contribution as the volume of the clusters becomes more important. From the TDS spectra presented in Fig. 2 we have an indication of an increasing shoulder which becomes more important at coverages higher than 75%. This is the same coverage at which we observed by STM an increased volumetric growth of the clusters and the onset of the second growth regime. However, since in the TDS spectra in Fig. 2 we observe a superposition of two Gaussian peaks with the background, the individual contributions are difficult to separate and thus our data cannot provide a conclusive demonstration of the origin of the background. Nevertheless, the interpretation fits well with what has been previously reported in literature [50, 51]. ## 4 Conclusions We presented an in-situ experiment on the growth and the hydrogen adsorption properties of Pt clusters supported on epitaxial graphene grown on 6H-SiC(0001). As we increased the amount of deposited Pt, we monitored the cluster size and height distribution via scanning tunneling microscopy imaging. We identified two main growth modes which become competing at intermediate coverages (~2 ML). At low coverages (~1 ML), Pt tends to adsorb and randomly disperse on the graphene surface and forms small clusters with an average height of ~0.6 nm. At higher coverages (~3 ML), nucleation on pre-existing clusters becomes the dominant growth mechanism, and the cluster height tends to increase, reaching a maximum height of ~4.5 nm at 6.3 ML when full area coverage is reached. With a deeper knowledge of the Pt dispersion on epitaxial graphene, we then studied via thermal desorption spectroscopy the hydrogen adsorption properties of the system. We show that the Pt-functionalized graphene system can adsorb hydrogen at temperatures that are high enough for stable binding at room temperature. The amount of stored hydrogen increases with the Pt content of the sample until it levels off at the onset of the second Pt growth regime. We discovered the presence of multiple hydrogen desorption processes, revealed as two Gaussian peaks (at about 155\({}^{\circ}\)C and 432\({}^{\circ}\)C) superimposed onto a linearly increasing background. From a comparison of the obtained TDS spectra with the data available in literature, we attribute the first Gaussian peak to hydrogen physisorbed on the surface of the Pt clusters. We attribute the second peak, instead, to dissociative chemisorption of the hydrogen molecules on the surface of the Pt clusters. On the other hand, the linear background is due to absorption of hydrogen in the bulk of the Pt clusters. 
Further studies are needed to better separate these contributions and to solve the controversial question whether hydrogen spillover can or cannot occur in this system. ## Acknowledgements The research leading to these results has received founding from the European Union's Horizon 2020 research and innovation program under grant agreement no. 881603-GrapheneCore3.
2309.12158
Towards Robust and Truly Large-Scale Audio-Sheet Music Retrieval
A range of applications of multi-modal music information retrieval is centred around the problem of connecting large collections of sheet music (images) to corresponding audio recordings, that is, identifying pairs of audio and score excerpts that refer to the same musical content. One of the typical and most recent approaches to this task employs cross-modal deep learning architectures to learn joint embedding spaces that link the two distinct modalities - audio and sheet music images. While there has been steady improvement on this front over the past years, a number of open problems still prevent large-scale employment of this methodology. In this article we attempt to provide an insightful examination of the current developments on audio-sheet music retrieval via deep learning methods. We first identify a set of main challenges on the road towards robust and large-scale cross-modal music retrieval in real scenarios. We then highlight the steps we have taken so far to address some of these challenges, documenting step-by-step improvement along several dimensions. We conclude by analysing the remaining challenges and present ideas for solving these, in order to pave the way to a unified and robust methodology for cross-modal music retrieval.
Luis Carvalho, Gerhard Widmer
2023-09-21T15:11:16Z
http://arxiv.org/abs/2309.12158v1
# Towards Robust and Truly Large-Scale Audio-Sheet Music Retrieval ###### Abstract A range of applications of multi-modal music information retrieval is centred around the problem of connecting large collections of sheet music (images) to corresponding audio recordings, that is, identifying pairs of audio and score excerpts that refer to the same musical content. One of the typical and most recent approaches to this task employs cross-modal deep learning architectures to learn joint embedding spaces that link the two distinct modalities - audio and sheet music images. While there has been steady improvement on this front over the past years, a number of open problems still prevent large-scale employment of this methodology. In this article we attempt to provide an insightful examination of the current developments on audio-sheet music retrieval via deep learning methods. We first identify a set of main challenges on the road towards robust and large-scale cross-modal music retrieval in real scenarios. We then highlight the steps we have taken so far to address some of these challenges, documenting step-by-step improvement along several dimensions. We conclude by analysing the remaining challenges and present ideas for solving these, in order to pave the way to a unified and robust methodology for cross-modal music retrieval. ## 1 Task, Basic Approach, and Challenges A fundamental paradigm in the field of Music Information Retrieval (MIR) is consists in searching and retrieving items of different modalities, for example video clips, live and studio recordings, scanned sheet music, and album covers. Moreover, the large amounts of music-related contents that are currently available in the digital domain demand for the development of _fast_ and _robust_ retrieval methods that allow such extensive and rich collections to be searched and explored in a content-based way. A central and challenging problem in many cross-modal retrieval scenarios is known as _audio-sheet music retrieval_. The goal here is to, given a query fragment in one of the two modalities (a short audio excerpt, for example), retrieve the relevant music documents in the counterpart modality (sheet music scans). In addition, it is typically the case that no metadata or machine-readable information (i.e. MIDI or MusicXML formats) is available: one has to work directly with raw music material, i.e., scanned music sheet images and digitised audio recordings. Figure 0(a) illustrates the retrieval task when searching an audio recording within a sheet music collection. A key step towards audio-sheet music retrieval is to define a convenient joint representation in which both modalities can be readily compared. The common approaches for defining such shared space rely on handcrafted mid-level representations [12], such as chroma-based features [10], symbolic fingerprints [1], or the bootleg score [14], the latter one being a coarse codification of the major note-heads of a sheet music image. However, in order to generate such representations a number of error-prone pre-processing steps are still needed, i.e., automatic music transcription [13] for the audio part, and optical music recognition [3] on the sheet music side. A solution avoiding such problematic pre-processing components was proposed in [8], by designing a deep convolutional network (CNN) that can learn an embedding space that is shared between the two underlying modalities. 
As sketched in Figure 0(b), this architecture has two independent convolutional pathways, each being responsible for encoding short fragments of its respective music modality into a 32-dimensional embedding vector. This network is fed with pairs of short snippets of sheet music images and magnitude spectrograms, and the embedding space is obtained by minimising the cosine distance between pairs of matching audio-sheet music snippets, while maximising the distance between non-matching pairs. Training is done by optimising a pairwise ranking loss function, and the final canonically correlated layer (CCA) [9] forces the embeddings computed from matching pairs to be correlated to each other in the shared latent space. Then, when the training is finished, snippet-wise retrieval reduces to nearest neighbour search in the joint space (see Figure 0(c)), which is a simple and fast procedure. This general retrieval framework based on short segments (snippets) extracted from the larger original documents (audio recordings, complete scores) supports a variety of possible applications, from piece identification to version detection and music recommendation. The deep learning approach is still in its early stages, and a number of obstacles and open problems prevent robust and large-scale deployment under real-world conditions, some of which we have already begun to solve: * **Variable tempo and context discrepancies.** Global and local tempo deviations are inherent in performed music and require careful design of the amount of temporal context to be provided to a retrieval system during training. * **Strongly-aligned data constraint.** Obtaining matching pairs of short excerpts for training deep learning-based models requires finer alignments between audio and sheet music. Such data is complex and of expensive nature, and as a result synthetic data is used for training. * **Generalisation to real-world (noisy) data** The large numbers of precisely aligned pairs of audio and score snippets required for training are currently obtained by synthesising them in a controlled way from machine-readable scores and corresponding MIDI files. Generalising to real performance recordings and imperfect score scans turns out to be very challenging. * **Temporal dependencies between subsequent snippets.** When handling entire documents, consecutive snippets exhibit strong temporal correspondences, which should be exploited for more robust identification and retrieval. * **Public large-scale datasets.** Up to this date, there is no license-free and truly large audio-sheet music dataset for evaluation of current algorithms. * **Efficient structures for fast retrieval.** Quick cross-modal retrieval algorithms are essential when one is browsing through large-scale and heterogeneous music collections. This aspect can be often overlooked when the main focus is on retrieval quality metrics such as precision and recall. * **Instrumentation and genre.** Current methods have been developed specifically for classical and, even more specifically, piano music data. Other types of scores (e.g., orchestral), instruments, and genres will present new complications. In this article, we examine these challenges one by one. We first summarise our efforts to address some of the points above, as well as the improvements we obtained over the first and original system architecture. 
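To make the embedding-based retrieval pipeline concrete, the sketch below (ours; mock embeddings rather than outputs of the CNN pathways of [8], and a simplified in-batch max-margin variant of the pairwise ranking loss) illustrates the two ingredients just described: a ranking loss that pushes the cosine similarity of matching audio/sheet snippet pairs above that of non-matching pairs, and snippet retrieval by nearest-neighbour search in the shared space, with R@1 and MRR computed on the toy data.

```python
import numpy as np

def unit(E):
    return E / np.linalg.norm(E, axis=1, keepdims=True)

def pairwise_ranking_loss(audio, sheet, margin=0.7):
    """In-batch max-margin ranking loss on cosine similarity: each matching
    (audio, sheet) pair should score higher than every mismatched pair."""
    sim = unit(audio) @ unit(sheet).T             # batch x batch cosine similarities
    pos = np.diag(sim)[:, None]
    hinge = np.maximum(0.0, margin - pos + sim)   # contrast positives with all in-batch negatives
    np.fill_diagonal(hinge, 0.0)
    return hinge.mean()

def retrieve(queries, database):
    """Nearest-neighbour snippet retrieval in the shared embedding space."""
    return np.argsort(-(unit(queries) @ unit(database).T), axis=1)

# mock 32-dimensional embeddings for 1000 matching snippet pairs
rng = np.random.default_rng(0)
sheet = rng.standard_normal((1000, 32))
audio = sheet + 0.3 * rng.standard_normal((1000, 32))    # noisy "audio view" of the same content

ranking = retrieve(audio, sheet)
rank_of_match = np.where(ranking == np.arange(1000)[:, None])[1] + 1
print("R@1 :", np.mean(rank_of_match <= 1))
print("MRR :", np.mean(1.0 / rank_of_match))
print("loss:", pairwise_ranking_loss(audio, sheet))
```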
We then turn to the still open problems and propose concrete ideas to address these remaining challenges, aiming to establish a unified and robust methodology for cross-modal music retrieval in the context of truly large collections of musical materials. ## 2 Some First Solutions ### Variable tempo and context discrepancies A key limitation of the baseline deep learning solution relates to the temporal context (or field of view) that is input to the network: both audio and sheet music snippets are fixed in size (see the inputs of the main model in Figure 0(b)) for a visual example). For the audio part, the fragments span roughly 2.1 seconds, which corresponds to 42 spectrogram frames. For the scores, snippets span \(160\times 180\) pixels, after sheet music pages being re-scaled to a \(1181\times 835\) resolution. This implies that the amount of actual musical content within the fragments can vary significantly due to the duration of the notes and the tempo in which the piece is being performed. For instance, a sheet music snippet with longer notes played slowly would cover a substantially larger duration in the audio than another one with shorter notes that has been played faster. As a consequence, generalisation issues can occur due to differences between what the network sees during training and the data it will see at application time: the musical content fed to the CNN may exhibit considerably less information than fragments it has been trained on. To address this problem, we proposed in [2] to let the network learn to adjust the temporal content of a given audio excerpt by using a separate _soft-attention mechanism_. First, the audio excerpt size is considerably expanded, up to four times the original duration. We then append to the audio network an attention pathway which, taking as input the audio magnitude spectrogram query, generates a 1-D probability density function that has the same number of frames as the input spectrogram and acts as an attention mask. Then, before the spectrogram excerpt is fed into the original audio embedding network, each frame thereof is multiplied by its attention mask, in this way cancelling out irrelevant parts of the query excerpt and focusing on the important information that should be embedded. In [2] we conducted a series of quantitative and qualitative experiments on synthesised piano music data, with the results indicating that the attention mechanism is capable of increasing the robustness of the audio-sheet music retrieval system. Table 1 summarises the main experimental results for a snippet-wise retrieval scenario: given an audio frag ment as a search query, we desire to retrieve the matching sheet music snippet within a pool of 10,000 candidates from the MSMD dataset [8]. We compare the baseline network BL1 from [8] with an upgraded version BL2 of it (which replaces the last global pooling layer of each modality pathway with a dense layer) and check for retrieval improvements when adding the attention mechanism (+AT) and increasing the duration of the audio excerpts from a short context (SC, 2.1 sec) to a long context (LC, 8.4 sec). No improvement is at first observed when only expanding the temporal context of the second baseline (from BL2-SC to BL2-LC). However, when appending the attention mechanism to BL2, we notice a boost in retrieval performance, with the MRR increasing from 0.63 to 0.75. 
When comparing the main baseline BL1-SC with our best model configuration, we observe a substantial improvement in all evaluation metrics (MRR increases by 0.44 points). ### Strongly-aligned data constraint In addition to the fixed-size snippet issues discussed above, another limitation of the deep learning approach proposed in [8] relates to its supervised nature. In order to generate a large number of matching pairs of short audio and sheet music snippets for training, one requires big collections of music data with strong labels (alignment annotations), which means fine-detailed mappings between note onsets in the audio recordings and their respective note coordinates in sheet music images. Since obtaining such data is labour-consuming and not trivial, the embedding learning models rely on synthesised data (this limitation will be re-visited in the upcoming subsections). In [6] we propose to address both shortcomings in one, by designing a recurrent network that is can learn compact and fixed-sized embeddings from longer and variable-length passages of audio and sheet music. The key motivation for this is twofold: by operating with variable-length passages, the cross-modal pairs can span the same music content leading to more robust representations; and by allowing longer excerpts, we could relax the required annotations from strong to weak labels, meaning that now only the corresponding passage boundaries are needed. We performed quantitative and qualitative experiments in diverse retrieval scenarios with artificial and real data, with the results indicating a superior performance of the recurrent architectures over the purely convolutional baseline. ### Generalisation to real-world (noisy) data As already hinted at above, obtaining training data in the form of audio-sheet music datasets with appropriate fine-grained alignments is tedious and time-consuming, and also requires specialised annotators with proper musical training. As a consequence, the embedding learning approaches rely on synthetic music data generated from the Multi-Modal Sheet Music Dataset (MSMD) [8]. This is a collection of classical piano pieces with rich and varied data, in \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Model** & **R@1** & **R@5** & **R@25** & **MRR** & **MR** \\ \hline BL1-2s & 19.12 & 44.16 & 66.63 & 0.31 & 8 \\ \hline BL2-2s & 48.91 & 67.22 & 78.27 & 0.57 & 2 \\ BL2-8s & 43.46 & 68.38 & 82.84 & 0.55 & 2 \\ \hline BL2-2s + AT & 55.43 & 72.64 & 81.05 & 0.63 & 1 \\ BL2-8s + AT & 66.71 & 84.43 & 91.19 & 0.75 & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: Retrieval results of attention-based models. (R@K = Recall at k, MRR = Mean Reciprocal Rank, MR = Median Rank) Figure 1: (a) Illustration of the retrieval application. (b) Architecture of the embedding learning network. (c) Simplified visualisation of the embedding space. cluding score sheets (PDF) engraved via Lilypond1 and respective audio recordings synthetised from MIDI with several types of piano soundfonts. With over 400 pieces from several renowned composers, including Mozart, Beethoven and Schubert, and covering more than 15 hours of audio, the MSMD has detailed audio-sheet music alignments allowing us to obtain perfectly matching audio-sheet snippet pairs. On the downside, the generated scores and audios completely lack real-world artefacts such as scan inaccuracies or room acoustics, and the audios exhibit perfectly steady tempo and dynamics, which is far from how real-world performances would sound. 
Footnote 1: [https://lilypond.org](https://lilypond.org) Using the synthetic MSMD severely affects the capacity of the model from Figure 1 to generalise to realistic retrieval scenarios when real music data is presented. In [4] we proposed to alleviate this problem via _self-supervised contrastive learning_. Inspired by the SimCLR framework [7], we pre-trained each independent convolutional pathway (see Figure 1b) by contrasting differently augmented versions of short snippets of audio or sheet music images. As a key advantage of this approach, the data required for the pre-training step needs no annotations, which means we can use real music data scraped from the Web. We applied self-supervised contrastive pre-training for both modalities, taking the following steps: 1. Given a sample \(\mathbf{x}\) from the training mini-batch of a given modality, two stochastic sets of data augmentation transforms are applied to \(\mathbf{x}\), generating the positive pair \(\tilde{\mathbf{x}}_{i}\) and \(\tilde{\mathbf{x}}_{j}\). 2. A network composed of a CNN encoder and a multi-layer perceptron head then computes a latent representation \(\mathbf{z}_{i}=e(\tilde{\mathbf{x}}_{i})\) for each augmented sample. 3. Finally, the normalized-temperature cross-entropy (_NT-Xent_) loss function is applied and summed over all positive augmented pairs \((\tilde{\mathbf{x}}_{i},\tilde{\mathbf{x}}_{j})\) within the mini-batch: \[\mathcal{L}=-\sum_{(i,j)}\log\frac{\exp(\mathrm{sim}(z_{i},z_{j})/\tau)}{\sum_{v=1}^{2N}\mathds{1}_{[v\neq i]}\exp(\mathrm{sim}(z_{i},z_{v})/\tau)},\] (1) where \(\mathrm{sim}(\cdot)\) is the cosine similarity between \(z_{i}\) and \(z_{j}\) and the temperature parameter \(\tau\in\mathbb{R}_{+}\) is adjusted to prioritise poorly embedded snippet pairs. As for the augmentations used for pre-training, we applied the following transforms to the sheet music snippets: horizontal and vertical shifts, resizing and rotation, additive Gaussian and Perlin noises, and small and large elastic deformations. Figure 2 shows examples of two pairs of augmented sheet music snippets when applying all transforms randomly. The augmentations used on the audio excerpts are: time shift, polarity inversion, additive Gaussian noise and gain change, time and frequency masking, time stretching, and a 7-band equaliser. In order to investigate the effect of self-supervised contrastive pre-training on generalisation from synthetic to real data, we prepare three evaluation datasets: (I) a fully artificial one, using the MSMD; (II) a partially real one combining the MSMD with scans of real sheet music; and (III) an entirely real set with audio recordings and scanned sheet music images. We conduct experiments on snippet retrieval as in Section 2.2, in both search directions (audio-to-sheet and sheet-to-audio), and compare the baseline model trained purely on MSMD as in [8] with fine-tuned versions of it in which the audio and sheet music convolutional pathways were pre-trained. To pre-train the individual pathways, we used raw data acquired from the web and public datasets: 200 hours of piano recordings were obtained from the MAESTRO dataset [11] and 7,000 pages of sheet music were collected from the IMSLP online platform2. Footnote 2: [https://imslp.org](https://imslp.org) Table 2 provides a summary of the main snippet retrieval results in the audio-to-sheet direction. The baseline model (BL) is compared to its fine-tuned version in which both audio and sheet music CNN pathways were pre-trained using self-supervised contrastive learning (BL+A+S).
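To make the objective in Eq. (1) concrete, the following minimal sketch (illustrative PyTorch code, not the pre-training pipeline of [4]; batch size, embedding dimension, and temperature are arbitrary) evaluates the NT-Xent loss for a mini-batch of N positive pairs whose two augmented views have already been embedded:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss for N positive pairs (z1[i], z2[i]); z1, z2 have shape (N, d)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / tau                                 # scaled cosine similarities
    n2 = sim.shape[0]
    mask = torch.eye(n2, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))            # exclude the v == i term
    pos = torch.arange(n2).roll(n2 // 2)                  # index of each sample's positive partner
    # cross-entropy against the positive partner averages -log softmax over all 2N samples
    return F.cross_entropy(sim, pos)

# toy example with 16 random positive pairs of 32-dimensional embeddings
z1, z2 = torch.randn(16, 32), torch.randn(16, 32)
print(nt_xent(z1, z2))
```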
The pre-trained models outperform the baseline in all scenarios for all evaluation metrics. In [4] the same trend is reported for the sheet-to-audio direction, indicating that self-supervised pre-training is beneficial in our retrieval task. We note, however, that there is still a substantial degradation when going from synthetic to partly and, in particular, fully real data. The MRR and MR values in the fully real scenario are still far too low for real-world use. In [4], we also evaluated the models on the task of cross-modal _piece identification_, by aggregating snippet embeddings, and also observed better identification results (e.g. over 100% improvement of the MRR on the task of identifying a piece from an arbitrary recording) when using the pre-trained models. Figure 2: Examples of data augmentation. ### Temporal dependencies between subsequent snippets As briefly mentioned at the end of Section 2.3 and implied in the previous paragraph, a popular task scenario in the audio-sheet music retrieval realm is _cross-modal piece identification_: given an unknown music document in one modality (e.g., a full audio recording), we wish to identify which piece it is, based on a collection of documents in another modality (e.g., a database of scanned sheet music images). For deep-learning-based embedding methods such as [8], choosing how to aggregate snippet embeddings extracted from full documents is essential in order to achieve robust piece identification. The basic identification method proposed in [8] is as follows. Taking the audio-to-sheet search direction without loss of generality, let \(\mathcal{D}\) be a collection of \(L\) sheet music documents, and \(Q\) an unknown audio query. Each document \(D_{i}\in\mathcal{D}\) is segmented into a set of image snippets, which are embedded using the sheet music pathway of Figure 1(b), generating a set of sheet music embeddings \(\{y_{1}^{i},y_{2}^{i},...,y_{M_{i}}^{i}\}\) for each piece. Analogously, the full audio query is segmented into short spectrogram excerpts, from which a set of query audio embeddings \(\{x_{1},x_{2},...,x_{N}\}\) is computed. Then, for each audio snippet query \(x_{j}\), its nearest neighbour among all embedded image snippets is selected via cosine distance. Each retrieved sheet snippet then votes for the piece it originated from, resulting in a ranked list of piece candidates. A limitation of this vote-based identification procedure is that it completely ignores the temporal relationships between subsequent snippet queries, which are inherent in, and constitutive of, music. In [5], a matching strategy is presented that aligns the sequences of embeddings obtained from the query document and the search database items. The sequence of embedded snippets \(\{y_{1}^{i},y_{2}^{i},...,y_{M_{i}}^{i}\}\) of each piece \(D_{i}\in\mathcal{D}\) from the database is aligned to the query sequence \(\{x_{1},x_{2},...,x_{N}\}\) via dynamic time warping (DTW), using the cosine distance as a cost function. The DTW alignment cost between query \(Q\) and piece \(D_{i}\) is regarded as the matching cost \(c_{i}=\mathrm{DTW}(Q,D_{i})\). A ranked list is then computed based on the matching cost of each piece to the query, with the best matching piece having the lowest alignment cost. Experiments with real and noisy music data reported in [5] reveal that the proposed DTW-based matching strategy improves identification results by a large margin compared with the simple vote-based approach.
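The DTW-based matching described above can be sketched in a few lines. The code below is a simplified illustration (not the implementation of [5]); it ranks database pieces by the DTW alignment cost between embedding sequences under a cosine cost:

```python
import numpy as np

def cosine_cost(X, Y):
    """Pairwise cosine distances between query embeddings X (N, d) and piece embeddings Y (M, d)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return 1.0 - Xn @ Yn.T                          # (N, M)

def dtw_cost(X, Y):
    """Classic DTW accumulated cost between two embedding sequences."""
    C = cosine_cost(X, Y)
    N, M = C.shape
    D = np.full((N + 1, M + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            D[i, j] = C[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[N, M]

def rank_pieces(query, database):
    """Return piece indices sorted by DTW matching cost (best match first)."""
    costs = [dtw_cost(query, piece) for piece in database]
    return np.argsort(costs), np.sort(costs)

# Toy example: a query of 20 snippet embeddings against 3 candidate pieces
rng = np.random.default_rng(0)
query = rng.normal(size=(20, 32))
database = [rng.normal(size=(m, 32)) for m in (60, 45, 80)]
print(rank_pieces(query, database)[0])
```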
However, a number of additional shortcomings of this matching strategy arise: first, even though there are fast implementations of the DTW algorithm, the retrieval time scales up considerably as the search database grows. Moreover, DTW does not handle typical structural differences between audio performances and scores, caused by, e.g., repeats that are, or are not, played. Therefore, we believe the next steps in this direction should target algorithms that can scale to large music collections in terms of processing time, and that are flexible and robust in dealing with structural mismatches between audio and sheet music. ## 3 Remaining Challenges In this section, we briefly discuss the remaining obstacles and open problems, and identify promising directions for future research. ### Public large-scale datasets Large and licence-free datasets are invaluable resources in audio-sheet music retrieval research, enabling the training of deep learning models, facilitating benchmarking and comparative studies, promoting reproducibility, encouraging innovation, and ensuring the relevance of developed methods to real-world applications. For the audio part, existing public datasets such as MAESTRO [11] provide a considerable number of piano audio recordings. However, when targeting truly large-scale databases, YouTube3 can be a valuable source for collecting audio recordings, via its API or web-scraping techniques. Together with large amounts of curated sheet music PDFs obtained from online libraries like IMSLP, researchers can create large-scale audio-sheet music datasets that enable the development and evaluation of robust retrieval methodologies. However, it is important to ensure compliance with copyright laws, respect the data usage policies of the platforms involved, and provide appropriate attribution when using data from third-party sources. \begin{table} \begin{tabular}{l c c c c} \hline \hline & **R@1** & **R@25** & **MRR** & **MR** \\ \hline \hline (I) & MSMD (Fully synthetic data) & & & \\ \hline BL & 0.54 & 0.91 & 0.653 & 1 \\ BL+A+S & 0.57 & 0.93 & 0.687 & 1 \\ \hline \hline (II) & Partially real data & & & \\ \hline BL & 0.28 & 0.67 & 0.375 & 7 \\ BL+A+S & 0.37 & 0.79 & 0.481 & 3 \\ \hline (III) & Fully real data & & & \\ \hline BL & 0.10 & 0.36 & 0.156 & 76 \\ BL+A+S & 0.15 & 0.48 & 0.226 & 29 \\ \hline \hline \end{tabular} \end{table} Table 2: Audio-to-sheet snippet retrieval results on three types of datasets: (I) fully synthetic, (II) partially real and (III) entirely real. ### Efficient structures for fast retrieval Quick responses are pivotal in audio-sheet music retrieval research, particularly in large-scale scenarios, as they enhance efficiency, improve the user experience, ensure scalability, enable practical deployment, and support real-time feedback and iterative refinement. A potential direction is to use compact cross-modal fingerprints, which allow fast search in music collections as in [1, 14]. Moreover, such algorithms should be flexible enough to handle any kind of structural mismatch between an audio performance and a printed score, as discussed in Subsection 2.4. ### Instrumentation and genre Incorporating diverse instrumentation and genres in audio-sheet music retrieval research enables the development of more inclusive, adaptable, and effective retrieval methods that align with real-world scenarios and user expectations. It broadens the scope of the field and promotes advancements that cater to the diverse musical landscape found in big and heterogeneous music collections.
Most retrieval methods use classical piano music as a case study, since this type of data is easier to collect due to its abundance and popularity. More complex types of scores, such as orchestral and jazz music, will require more sophisticated and flexible methods, which could be assisted, for example, by optical music recognition [3]. ## 4 Conclusion This article examined the current developments in audio-sheet music retrieval via deep learning methods. We have identified the main obstacles on the road towards robust and large-scale retrieval and have discussed the steps taken to address some of these challenges. While there has been steady progress in the field over the past years, there are still open problems that hinder the large-scale deployment of this methodology. To make progress towards a unified and robust methodology for cross-modal music retrieval, we believe it is crucial to address these remaining challenges, thereby unlocking new possibilities for connecting large and heterogeneous music collections and contributing to the enrichment of music information retrieval applications. ## Acknowledgments This work is supported by the European Research Council (ERC) under the EU's Horizon 2020 research and innovation programme, grant agreement No. 101019375 (_Whither Music?_), and the Federal State of Upper Austria (LIT AI Lab).
2309.08666
Variational Embeddings for Many Body Quantum Systems
We propose a variational scheme to represent composite quantum systems using multiple parameterized functions of varying accuracies on both classical and quantum hardware. The approach follows the variational principle over the entire system, and is naturally suited for scenarios where an accurate description is only needed in a smaller subspace. We show how to include quantum devices as high-accuracy solvers on these correlated degrees of freedom, while handling the remaining contributions with a classical device. We demonstrate the effectiveness of the protocol on spin chains and small molecules and provide insights into its accuracy and computational cost.
Stefano Barison, Filippo Vicentini, Giuseppe Carleo
2023-09-15T18:00:05Z
http://arxiv.org/abs/2309.08666v2
# Embedding Classical Variational Methods in Quantum Circuits ###### Abstract We introduce a novel quantum-classical variational method that extends the quantum devices capabilities to approximate ground states of interacting quantum systems. The proposed method enhances parameterized quantum circuit ansatzes implemented on quantum devices with classical variational functions, such as neural-network quantum states. The quantum hardware is used as a high-accuracy solver on the most correlated degrees of freedom, while the remaining contributions are treated on a classical device. Our approach is completely variational, providing a well-defined route to systematically improve the accuracy by increasing the number of variational parameters, and performs a global optimization of the two partitions at the same time. We demonstrate the effectiveness of the protocol on spin chains and small molecules and provide insights into its accuracy and computational cost. We prove that our method is able to converge to exact diagonalization results via the addition of classical degrees of freedom, while the quantum circuit is kept fixed in both depth and width. ## I Introduction Quantum simulation is a fundamental tool for understanding, predicting, and designing the properties of innovative artificial materials or molecules. The rapid scaling of the resources required to treat such many-body problems computationally has led in the past decades to advanced algorithms and approximations [1; 2; 3]. More recently, new approaches have been proposed such as machine learning models [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14] and quantum computers [15; 16; 17; 18]. The latter are a promising alternative for quantum simulation because of the polynomial scaling in resources they offer. While several successful experiments have been conducted on quantum devices, the limited resources in both the number of qubits and coherence times have restricted demonstrations to relatively small molecules or lattices [19; 20; 21; 22; 23]. Motivated by these limitations, several techniques to partition large quantum systems into weakly coupled clusters of few interacting qubits have been proposed [24; 25; 26; 27; 28; 29]. The interactions between those clusters may then be treated via classical processing, usually with an overhead in the number of circuits to be executed. As long as the interactions remain weak, those partitioning methods have been demonstrated to prepare quantum states efficiently [28]. In this manuscript, we will build upon the intuition that such circuit knitting strategies developed in the field of quantum computation are conceptually close to classical partitioning schemes that have long been used in computational physics. Those strategies treat the clusters with high-accuracy methods, while the interactions between them are introduced in a simplified way. Over the years, these embedding schemes have shown remarkable results in the study of physical and chemical quantum systems [30; 31; 32]. Nevertheless, the computational cost of the accurate method still limits the size of the relevant subspace. Given the potential scalability they offer, quantum devices represent a natural choice as high-accuracy solvers. We could use circuit knitting techniques not only to partition the computational graph but also rely on the physical intuition to simplify the problem ab-initio. Indeed, as pointed out in Ref. 
[28], a range of important physical systems possess a natural way to define the partitions, such as chemical systems [33; 34; 35], or strongly interacting systems in a quantum bath [36; 37]. Several classical-quantum integrations have been proposed, showing that even at modest scales, quantum computers could provide insight into relevant physical problems [38; 39]. However, how to systematically improve the results of those embedding schemes in size and accuracy remains unclear. Variational methods often offer a way to systematically improve simulation results, achieving state-of-the-art accuracies while possessing a broad applicability [40]. However, a scheme to embed quantum circuits into those unbiased descriptors has not been thoughtfully investigated. This manuscript presents an ansatz that embeds parameterized quantum circuits within classical variational models. On the one hand, we offload resources to the classical side and enable the usage of quantum devices in simulations of large quantum systems. On the other, integrating a quantum device might allow the study of highly correlated fragments of even bigger size efficiently to include relevant low-energy states. The proposed method is completely variational and global, meaning that the parameters of both models are optimized simultaneously. The optimization of the quantum state is performed in an ab-initio setting, meaning that no data is required other than the specification of the Hamiltonian. We start by studying the combination of quantum circuits with standard variational models. Then we focus on the integration with the Neural Network Quantum States ansatz [5]. We demonstrate the algorithm on spin and small molecular systems, studying its accuracy with respect to exact methods. The structure of this paper is as follows: in Section II we present the hybrid ansatz and its applications to the study of physical and chemical systems, while in Section III we draw our conclusions on the proposed results, together with future outlooks. Section IV presents a detailed overview of the ansatz structure and the methods used to optimize the variational parameters. ## II Results ### Mixed neural variational ansatz Consider a physical system governed by a Hamiltonian \(H\) acting on the Hilbert space \(\mathcal{H}\). Without loss of generality, we can partition every Hilbert space \(\mathcal{H}\) into two subspaces that we call _active_\(\mathcal{H}_{A}\) and _bath_\(\mathcal{H}_{B}\), such that \(\mathcal{H}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\). This partitioning naturally induces the following factorization of the Hamiltonian \[H=H_{A}\otimes\mathbb{I}_{B}+\mathbb{I}_{A}\otimes H_{B}+H_{\text{int}}\,, \tag{1}\] where \(H_{A}\) (\(H_{B}\)) contains all the terms of the original Hamiltonian acting non-trivially on the _active_ (_bath_) sub-space, while \(H_{\text{int}}=\sum_{j}H_{\text{int}}^{A,j}\otimes H_{\text{int}}^{B,j}\) contains all the terms with nontrivial action on both sub-spaces. We note that the physical system is separable if the interaction term \(H_{\text{int}}=0\). Having partitioned the Hilbert space, we employ a different variational encoding for each subsystem, with potentially different accuracy targets and computational requirements. For the _bath_ space, we consider a classical model \[\phi(\delta):\left\{0,1\right\}^{N_{B}}\rightarrow\mathbb{C}\,, \tag{2}\] where \(N_{B}\) is the number of spin degrees of freedom of this subspace. 
In particular, the variational function \(\phi(\delta)\) takes as input binary strings \(\sigma\in\left\{0,1\right\}^{N_{B}}\) associated with the _bath_ basis state \(\ket{\sigma}_{B}\) and outputs complex amplitudes \(\phi(\delta;\sigma)\). For the _active_ space we consider a parameterized quantum circuit \[U(\theta;\sigma)\ket{0}_{A}=V(\theta)\,C(\sigma)\ket{0}_{A}, \tag{3}\] where \(C(\sigma)\) is the part of the circuit that depends on a classical binary string \(\sigma\), and \(V(\theta)\) is an ordinary parameterized unitary. Here, \(\delta\) and \(\theta\) are sets of variational parameters of the classical and quantum models, respectively. The overall ansatz reads \[\ket{\Psi(\delta,\theta)}=\sum_{\sigma}U(\theta;\sigma)\ket{0}_{A}\otimes \phi(\delta;\sigma)\ket{\sigma}_{B} \tag{4}\] which is not necessarily normalized, since, in general, \(\sum_{\sigma}|\phi(\delta;\sigma)|^{2}\neq 1\). We highlight that the quantum circuit in Eq. (4) has an explicit dependence on the _bath_ configurations \(\sigma\) in order to forge entanglement between the classical and the quantum partition. If \(C\) does not depend on \(\sigma\), the ansatz describes a factorized state. In Section IV.1 we provide more details on the presented model and we show how \(U(\theta;\sigma)\) can be implemented as a quantum circuit, while in Fig. 4 we present schematic representations of the variational ansatz. Figure 1: Sketch of the hybrid algorithm. We study a quantum system by locating a _bath_ and an _active_ partition. A classical model, i.e. a Neural Network Quantum State, is used to sample configurations from the _bath_ partition. Then, the sampled configurations determine the circuit to be prepared on the quantum device, which encodes the _active_ partition. From now on we will omit the explicit dependency on the variational parameters, indicating \(\phi(\delta;\sigma)=\phi_{\sigma}\) and \(U(\theta;\sigma)=U_{\sigma}\). Given the hybrid ansatz in Eq. (4) and a set of variational parameters \(\{\theta,\delta\}\), we can estimate the expectation value \(\left\langle O\right\rangle\) of an observable \(O=O_{A}\otimes O_{B}\) as \[\left\langle O\right\rangle=\sum_{\sigma}p_{\sigma}\,O_{\text{loc}}(\sigma)\,, \tag{5}\] where we have defined the normalized distribution \(p_{\sigma}=\left|\phi_{\sigma}\right|^{2}/\sum_{\sigma^{\prime}}\left|\phi_{\sigma^{\prime}}\right|^{2}\) and the local observable \[O_{\text{loc}}(\sigma)=\sum_{\sigma^{\prime}}\frac{\phi_{\sigma^{\prime}}}{\phi_{\sigma}}\left\langle 0\right|U_{\sigma}^{\dagger}O_{A}U_{\sigma^{\prime}}\left|0\right\rangle\left\langle\sigma\right|O_{B}\left|\sigma^{\prime}\right\rangle\,. \tag{6}\] This suggests that the expectation value \(\left\langle O\right\rangle\) can be estimated by taking the sample mean of \(O_{\text{loc}}(\sigma)\) on a set of polynomially many samples \(\sigma\). The samples can be generated from \(p_{\sigma}\) using the Metropolis-Hastings algorithm [41], or via direct exact sampling if the associated neural-network parameterization allows for it [42]. To compute the local observable, one must determine all the connected configurations \(\sigma^{\prime}\) such that \(\left\langle\sigma\right|O_{B}\left|\sigma^{\prime}\right\rangle\neq 0\). For every connected configuration, we evaluate \(\left\langle 0\right|U_{\sigma}^{\dagger}O_{A}U_{\sigma^{\prime}}\left|0\right\rangle\) on quantum hardware. For \(K\)-local operators1, the number of connected elements does not depend on the system size.
If \(\sigma=\sigma^{\prime}\), this reduces to the measurement of \(O_{A}\) on the circuit \(U_{\sigma}\left|0\right\rangle\). Otherwise, the two unitaries are different and we have to resort to more general schemes, such as the Hadamard test [43]. More details on the evaluation of the quantum term are given in Appendix A. Applying the chain rule to Eq. (5), it is possible to show that the gradient with respect to the variational parameters of an observable \(O\) can also be estimated as a classical expectation value over the same distribution \(p_{\sigma}\). A detailed discussion of the gradient calculation is given in Section IV.2. We highlight that every step of this procedure can be accomplished efficiently in polynomial time with respect to the system size. Footnote 1: \(K\)-local operators are operators that act non-trivially on \(K\) degrees of freedom, and therefore have at most \(2^{K}\) connected configurations. Every observable \(O\) acting on the entire Hilbert space \(\mathcal{H}\) can be written as \(O=\sum_{k}c_{k}O_{k}=\sum_{k}c_{k}O_{A,k}\otimes O_{B,k}\), where every \(O_{k}\) can be computed using the procedure detailed above. As usual in quantum computing, the computational cost grows linearly with the number of terms \(O_{k}\). For most physical observables (e.g. total magnetization, local occupation, dipole moments, ...), the number of terms is polynomial in the system size. The statistical error of the estimation of \(\left\langle O\right\rangle\) using \(N_{k}\) classical samples is \[\epsilon=\sqrt{\sum_{k}|c_{k}|^{2}\text{Var}\left[O_{k}\right]/N_{k}}. \tag{7}\] Contrary to what happens in the context of standard Variational Monte Carlo, \(\text{Var}\left[O_{k}\right]\neq 0\) even when the variational states get close to the eigenstates of \(O\). For this reason, every \(\left\langle O_{k}\right\rangle\) has to be evaluated with high accuracy. With this method, we are able to estimate the expectation value of the Hamiltonian in Eq. (1) and its gradient with respect to the variational parameters. Finally, we use this information to optimize the parameters and provide a variational approximation of the ground state of the system. A sketch of the algorithm can be found in Fig. 1. ### Transverse Field Ising Model As a first benchmark for our hybrid algorithm, we consider the problem of finding the ground state of a non-homogeneous Transverse Field Ising Model on a chain with open boundary conditions: \[H_{\text{Ising}}=\sum_{i=1}^{N-1}J_{i}\sigma_{i}^{z}\sigma_{i+1}^{z}+\sum_{i=1}^{N}h_{i}\sigma_{i}^{x}\,. \tag{8}\] The first term accounts for interactions between spins, while the second represents a local magnetic field along the transverse direction \(x\). We focus on bipartite systems where partition \(A\) has \(J_{i}=h_{i}\), which would correspond to the (strongly correlated) quantum critical point in the thermodynamic limit, and where partition \(B\) has \(J_{i}<h_{i}\). The Hamiltonian in Eq. (8) then reads \[H_{\text{Ising}}=\sum_{i,i+1\in A}\left[J_{A}\sigma_{i}^{z}\sigma_{i+1}^{z}+h_{A}\sigma_{i}^{x}\right]+\sum_{i,i+1\in B}\left[J_{B}\sigma_{i}^{z}\sigma_{i+1}^{z}+h_{B}\sigma_{i}^{x}\right]+\sum_{\begin{subarray}{c}i\in A,\,j\in B\\ |i-j|=1\end{subarray}}J_{\text{int}}\sigma_{i}^{z}\sigma_{j}^{z}\,, \tag{9}\] where \(J_{A}\) (\(J_{B}\)) and \(h_{A}\) (\(h_{B}\)) represent the interactions and the local fields in the _active_ (_bath_) subspace, while \(J_{\text{int}}\) is the strength of the operators that act non-trivially on both subspaces. We consider \(J_{A},h_{A},h_{B}=1\), \(J_{B}=0.25\), and \(J_{\text{int}}=0.5\).
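As a concrete illustration of Eq. (9) and of the parameter choices above, the following sketch (illustrative code, not the authors' implementation) assembles the dense Hamiltonian matrix for a short chain and computes the exact ground-state energy by direct diagonalization; the chain length is reduced here purely to keep the example tractable (the benchmark in the paper uses 13 spins):

```python
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def op(single, site, n):
    """Embed a single-site operator at `site` in an n-spin chain via Kronecker products."""
    mats = [single if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def tfim_hamiltonian(n_active=3, n_bath=5, J_A=1.0, h_A=1.0, J_B=0.25, h_B=1.0, J_int=0.5):
    """Open chain: sites 0..n_active-1 are 'active', the remaining ones form the 'bath'."""
    n = n_active + n_bath
    H = np.zeros((2**n, 2**n))
    for i in range(n):                        # transverse fields
        h = h_A if i < n_active else h_B
        H += h * op(sx, i, n)
    for i in range(n - 1):                    # nearest-neighbour couplings
        if i + 1 < n_active:
            J = J_A                           # both sites active
        elif i >= n_active:
            J = J_B                           # both sites in the bath
        else:
            J = J_int                         # the bond crossing the partition
        H += J * op(sz, i, n) @ op(sz, i + 1, n)
    return H

H = tfim_hamiltonian()
print("exact ground-state energy:", np.linalg.eigvalsh(H).min())
```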
We explore the interplay between the quantum and the classical models in the context of variational ground-state optimization using combinations of different classical and quantum ansatzes. For the _bath_ sub-space we compare a mean-field ansatz, a Jastrow ansatz with nearest- and next-to-nearest-neighbour interactions, and Restricted Boltzmann Machine (RBM) Neural Quantum States with a varying number of parameters [5]. For the _active_ sub-space, we employ a hardware-efficient circuit with alternating layers of single-qubit \(R_{y}\) rotations and two-qubit CNOT gates, as shown in Fig. 4. Increasing the circuit depth with more alternating layers improves the overall ansatz expressibility. We perform the optimization of the hybrid variational models using a noiseless state-vector simulation, estimating expectation values with \(10^{4}\) Monte Carlo samples. The results are reported in Fig. 2. The plot shows that the final results depend on both the classical and quantum models, as expected. When the classical model has limited expressibility, as in the mean-field and Jastrow cases, the final accuracy of the results depends weakly on the depth of the quantum circuit. On the other hand, when a highly expressive classical ansatz is employed, the choice of quantum circuit depth is crucial to improve the accuracy of the final result. Indeed, when an RBM is combined with a deep enough quantum circuit, the method is able to reach an error on the final energy that is compatible with the statistical uncertainty set by the number of samples used to estimate it. Given the high expressive power obtained by combining the variational quantum circuits with a Neural Quantum State, we apply the hybrid method to study a molecular system. ### Molecular systems We consider the molecular Hamiltonian in the Born-Oppenheimer approximation. In the second quantization formalism, the Hamiltonian has the form \[H=\sum_{pq}h_{pq}a_{p}^{\dagger}a_{q}+\sum_{pqrs}V_{pqrs}a_{p}^{\dagger}a_{q}^{\dagger}a_{r}a_{s}\,, \tag{10}\] where \(a^{(\dagger)}\) are the fermionic annihilation (creation) operators, defined by the anticommutation relation \(\{a_{i}^{\dagger},a_{j}\}=\delta_{ij}\), whereas \(h_{pq}\) and \(V_{pqrs}\) represent the one- and two-body integrals, respectively. First, we identify the molecular system we want to study, together with the basis set we want to use. In this manuscript, we study the ammonia molecule (NH\({}_{3}\)) using the Intrinsic Atomic Orbitals (IAO) [44] minimal basis set obtained from a mean-field calculation performed on the bigger aug-cc-pVQZ basis [45]. We consider a configuration in which one of the N\(-\)H bonds is stretched to a bond length of 1.5 Å. This stretching enhances the strong correlation in the electronic structure, as the atomic-like character of the constituent atoms is increased. The _active_ space is represented by the Highest Occupied and the Lowest Unoccupied Natural Orbitals (HONO/LUNO) [46], for a total of four spin-orbitals. We freeze the lowest-energy orbital, corresponding to the \(1s\) of the nitrogen atom. Additional details regarding the ammonia simulation can be found in Appendix B. To leverage the variational encoding discussed in the previous section, we map the fermionic Hamiltonian of Eq. (10) onto a spin Hamiltonian using the Jordan-Wigner transformation [47] (though other transformations could also be employed [48, 49, 50, 51]). We explicitly constrain the ansatz to be particle-preserving, like the original Hamiltonian (see Section IV.1). In Fig.
3 we consider two different hybrid encodings: the first one has no particle restriction in each subspace, and only the conservation of the total number of particles is imposed, while in the second one, we fix the number of particles in each subspace to the value of the initial Hartree-Fock calculation (HF [52]). Figure 2: Error of the ground-state energy of the Transverse Ising Model relative to the exact values for different classical variational ansatzes and quantum circuit depths. The chain has 13 spins, with 10 spins in the _bath_ partition and 3 in the _active_ one. We perform each optimization with \(10^{4}\) Monte Carlo samples and an ideal quantum simulator. The error bars indicate the statistical error arising from the sampling. The number of parameters of the RBM ansatz is determined by the ratio between the hidden and the visible nodes, indicated as \(\alpha\). Starting from the result obtained using the Variational Quantum Eigensolver (VQE, [53]) in the active subspace, we investigate the addition of the other orbitals as _bath_. Each experiment is performed on a classical simulator, and expectation values are evaluated using \(2\cdot 10^{4}\) Monte Carlo samples. As we report in Fig. 3, when more orbitals are considered in the ansatz, the energy improves with respect to HF, following the value obtained with exact diagonalization in the same variational space (Complete Active Space Configuration Interaction, CASCI). As the size of the active space is increased, the CASCI value decreases towards the exact diagonalization result on the entire molecule (Full Configuration Interaction, FCI [52]) with the frozen-core approximation. We start in the active space with a quantum circuit including one double-excitation and two single-excitation gates, resulting in 3 parameters to optimize. Then, the ansatz is augmented with an RBM having real parameters. We found that a hidden-layer density of \(\alpha=1\) was sufficient to converge to the exact diagonalization results. We stress that for this particular choice of hybrid ansatz, the sign of the configurations is entirely determined by the quantum circuit, while the classical partition contributes only to the wave function amplitudes. More details about the hybrid model can be found in Section IV.1. We remark that the addition of classical degrees of freedom allows us to improve upon results obtained with the VQE on the entire ammonia molecule. Indeed, even though it is theoretically possible to get extremely accurate results with a chemically-inspired ansatz such as the quantum Unitary Coupled Cluster (qUCCSD), such ansatzes cannot be used in practice because of limitations on coherence times and connectivity. Currently, we are restricted to hardware-efficient circuits, which are able to capture only a few electronic correlations (see Appendix C). Finally, we show the results obtained by fixing the number of particles in each subspace. In this scenario, we can no longer reach the FCI energy, since we are excluding important contributions from the wave function. However, this strategy still improves upon the active-space calculations and has a reduced overhead in measurements with respect to the full model. ## III Discussion In this manuscript, we have introduced a hybrid variational approach that combines quantum circuits with classical methods to determine the ground state of interacting quantum systems from first principles. We have successfully tested our method on spin systems and molecular Hamiltonians, demonstrating its potential.
By augmenting variational quantum circuits with classical parameterizations, this approach makes it possible to expand the capabilities of quantum hardware and to systematically enhance simulation accuracy by increasing the parameter count. The method leverages broadly accessible and reliable classical resources while restricting the use of expensive quantum resources to a minimum, making it particularly relevant for NISQ devices. Even in an age of fault-tolerant quantum computation, however, offloading the treatment of weakly entangled partitions to a classical computer might still be advantageous. Many paths for research can be envisaged for the near future. Alternative neural network architectures for representing quantum systems, beyond Restricted Boltzmann Machines [10, 11, 54, 14], can be explored. Additionally, the variational ansatz presented here is quite versatile and can also be used in a purely classical setting, by combining classical variational representations of different levels of accuracy. From the quantum computation perspective, the effect of hardware noise on the results should be investigated. Indeed, the addition of a classical tunable model might provide some robustness during the optimization. All these improvements may come paired with the development of new techniques to efficiently embed quantum hardware measurements inside the classical sampling scheme. Finally, strategies to optimally partition the physical system of interest into the _active_ and _bath_ subspaces are worth exploring. Figure 3: Energy of the variational hybrid state approximating the ammonia molecule ground state as a function of the _bath_ orbitals considered. Each optimization is performed on a quantum simulator, using \(2\cdot 10^{4}\) Monte Carlo samples. When an orbital is not in the variational subspace, its occupancy is frozen to the one obtained with Hartree-Fock (HF). We start from a VQE calculation in the active space (VQE, AS), then add the remaining orbitals using a Restricted Boltzmann Machine. The "hybrid, fixed" markers indicate the results obtained with the subspace particle-conserving model explained in the main text. The blue dash-dotted line represents the exact diagonalization results in the variational subspace (CASCI). ## IV Methods ### The model In this section, we give a more detailed description of the hybrid model encoding used in Section II. As mentioned previously, the _bath_ sub-space can be encoded in an arbitrary classical variational state which is computationally tractable [55], meaning that un-normalized complex amplitudes may be queried in polynomial time, and the Born probability distribution may be sampled in polynomial time. In particular, here we focus on Neural Quantum States (NQS) as variational representations due to the promising combination of state-of-the-art performance and scalability that they offer. Recently, many different types of neural networks have been proposed as NQS, from Restricted Boltzmann Machines [5], to Convolutional Neural Networks [56] and Autoregressive Neural Networks [6; 9; 42]. As a first demonstration, we employ a Restricted Boltzmann Machine (RBM). An RBM is an artificial network with \(N_{B}\) nodes corresponding to the number of spin degrees of freedom in the classical subspace, and \(M_{B}\) hidden units. The total number of parameters in the model depends on the hyperparameter \(\alpha\), representing the ratio between the number of hidden and visible nodes.
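For concreteness, a minimal sketch of how such an RBM assigns un-normalized complex amplitudes to bath configurations is given below (illustrative numpy code, not the NetKet-based implementation used in this work); it evaluates the standard log-amplitude with the hidden units traced out:

```python
import numpy as np

class RBM:
    """RBM amplitude phi(sigma) for sigma in {0, 1}^N_B.
    log phi(sigma) = a.sigma + sum_i log(2 cosh(b_i + (W sigma)_i))."""
    def __init__(self, n_visible, alpha=1, seed=0):
        rng = np.random.default_rng(seed)
        n_hidden = alpha * n_visible
        scale = 0.01
        self.a = scale * (rng.normal(size=n_visible) + 1j * rng.normal(size=n_visible))
        self.b = scale * (rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden))
        self.W = scale * (rng.normal(size=(n_hidden, n_visible))
                          + 1j * rng.normal(size=(n_hidden, n_visible)))

    def log_amplitude(self, sigma):
        theta = self.b + self.W @ sigma
        return self.a @ sigma + np.sum(np.log(2.0 * np.cosh(theta)))

rbm = RBM(n_visible=10, alpha=1)
sigma = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
print(rbm.log_amplitude(sigma))        # complex log phi_sigma for this bath configuration
```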
The nonlocal correlations induced by the hidden units make the RBM an ansatz well suited to studying quantum systems in arbitrary dimensions. Except where explicitly indicated, we consider \(\alpha=1\) in this work. Given a configuration \(\sigma\) of the _bath_ sub-space, the amplitude \(\phi_{\sigma}\) is obtained by evaluating the RBM [5]. This is used in a Markov-Chain Monte-Carlo sampling procedure to generate a set of configurations \(\{\sigma\}\) distributed according to the probability distribution \(|\phi_{\sigma}|^{2}\). Then, we prepare a different quantum circuit \(U_{\sigma}(\theta)\) for every configuration \(\sigma\). We partition the quantum operations into two subsets: those depending on the classical configurations \(\sigma\) and the independent ones. While the number of _bath_ degrees of freedom scales polynomially with the system size, _bath_ configurations are exponentially many and discrete. This would result in exponentially many sets of parameters for our quantum circuit. Moreover, given that quantum gates are usually determined by continuously varying rotation angles, we define a sample-to-angle function \[S:\{0,1\}^{N_{B}}\rightarrow\mathbb{R}^{N_{\sigma\text{-angle}}}\,, \tag{11}\] where \(N_{\sigma\text{-angle}}\) is the number of quantum variational parameters controlled by the classical configurations. However, the precise form of this function is generally unknown a priori. Therefore, at the beginning of the optimization of the variational wave function, we can propose an ansatz for it and relate it to the specific problem at hand. In this context, artificial neural networks have become a popular choice due to their powerful expressive capabilities. The parameters of the sample-to-angle neural network are optimized by measuring the gradient on the quantum device and applying the chain rule. Indeed, given a configuration \(\sigma\) and the corresponding set of angles \(S(\sigma)_{i}\), with \(i=1,\dots,N_{\sigma\text{-angle}}\), we can evaluate the derivative with respect to a parameter \(\gamma\) of the neural network as \[\frac{\partial\left|\Psi(\delta,\theta,S(\sigma))\right\rangle}{\partial\gamma}=\sum_{i=1}^{N_{\sigma\text{-angle}}}\frac{\partial\left|\Psi(\delta,\theta,S(\sigma))\right\rangle}{\partial S(\sigma)_{i}}\,\frac{\partial S(\sigma)_{i}}{\partial\gamma}\,, \tag{12}\] where we have expressed the \(S(\sigma)\) dependence explicitly. From Eq. (12) we note that each angle \(S(\sigma)_{i}\) can be treated as a normal variational parameter of the quantum circuit, and its derivative is evaluated as explained in Section IV.2. Then, the derivative of the neural network architecture with respect to each of its parameters can be evaluated entirely on a classical device using standard automatic differentiation algorithms [57]. In every experiment presented in this manuscript, we employ a feed-forward neural network with one hidden layer and \(\alpha=1\) or \(2\). Finally, we discuss the structure of the parameterized quantum circuit. This structure depends on the quantum system under study and the amount of quantum resources at our disposal. For the calculations on the Transverse Field Ising Model we use a hardware-efficient ansatz consisting of a layer of \(R_{y}\) rotations followed by a layer of CNOTs with linear connectivity, both repeated one or multiple times.
The subset of classically-controlled rotations is composed of a rotation \(u_{3}(\theta,\phi,\lambda)=R_{z}(\phi)R_{y}(\theta)R_{z}(\lambda)\) acting on the qubits encoding spins that have nearest neighbor interactions with spins that are in the _bath_ subspace. A scheme of the circuit can be found in Fig. 4. To parameterize the ground state of the molecular system we implement a different, particle-preserving ansatz. When no artificial constraint is imposed, only the total number of particles is conserved, but there is no such guarantee in each individual subspace. In order to conserve the total number of particles in this mixed setting, we first restrict the RBM to sample only physical configurations by fixing a maximum and a minimum amount of electrons that may be present in the partition. Then, the sample-to-angle function is extended to output the number of missing electrons in order to correctly initialize the quantum circuit. Finally, we build the variational quantum circuit using only particle-preserving gates, in particular single and double excitation gates [58; 59]. If we now want to fix the number of particles in each subspace as in the results of Fig. 3, we constrain the RBM to output only physical configurations with a precise number of electrons. These modifications reduce the complexity of the problem and are readily extendable to bigger molecular systems. A scheme of this circuit can be found in Fig. 4. The actual implementation of the particle preserving gates will depend on the quantum hardware used for the experiment [22, 58]. ### Calculating gradients of the hybrid ansatz In this section we show how to compute the gradient of the variational wave function in Eq. (4). For simplicity, we consider a classical ansatz \(\phi\) which is holomorphic with respect to its \(n_{c}\) complex parameters \(\delta\in\mathbb{C}^{n_{c}}\). The unitaries \(U_{\theta}\) defining the circuit, instead, have a set of \(n_{q}\) real parameters \(\theta\in\mathbb{R}^{n_{q}}\). We will now give the expression for the gradient of an expectation value with respect to the two different sets of parameters \(\{\delta,\theta\}\). Starting from the expression of the expectation value of \(O\) in Eq. (5) we obtain \[\begin{cases}\nabla_{\delta}\langle O\rangle=\sum_{\sigma}p_{ \sigma}\bigg{[}\sum_{\sigma^{\prime}}O^{B}_{\sigma\sigma^{\prime}}\frac{\phi_ {\sigma^{\prime}}}{\phi_{\sigma}}O^{A}_{\sigma\sigma^{\prime}}\nabla_{\delta} \log\phi_{\sigma^{\prime}}-\\ -\langle O\rangle\nabla_{\delta}\log\phi_{\sigma}\bigg{]}\\ \nabla_{\theta}\langle O\rangle=\sum_{\sigma}p_{\sigma}\left[\sum_{\sigma^{ \prime}}O^{B}_{\sigma\sigma^{\prime}}\frac{\phi_{\sigma^{\prime}}}{\phi_{ \sigma}}\nabla_{\theta}O^{A}_{\sigma\sigma^{\prime}}\right]\end{cases} \tag{13}\] where we indicated, for compactness, the matrix element on the classical partition \(\langle\sigma|O_{B}|\sigma^{\prime}\rangle=O^{B}_{\sigma\sigma^{\prime}}\), the quantum expectation values \(\langle 0|\,U^{\dagger}_{\sigma}O_{\sigma^{\prime}}|0\rangle=O^{A}_{\sigma \sigma^{\prime}}\) and the complex amplitudes \(\phi_{\delta}(\sigma)=\phi_{\sigma}\), \(p_{\sigma}=|\phi_{\sigma}|^{2}/\sum_{\sigma^{\prime}}|\phi_{\sigma^{\prime}} |^{2}\), \(U_{\sigma}(\theta)=U_{\sigma}\) as in Section II. Similarly to what happens in Variational Monte Carlo, if the classical model has real parameters \(\delta\in\mathbb{R}^{n_{c}}\), the first case of Eq. 
(13) becomes \[\nabla_{\delta}\langle O\rangle=\sum_{\sigma}p_{\sigma}\left\{2\,\text{Re}\left[\sum_{\sigma^{\prime}}O^{B}_{\sigma\sigma^{\prime}}\frac{\phi_{\sigma^{\prime}}}{\phi_{\sigma}}O^{A}_{\sigma\sigma^{\prime}}\nabla_{\delta}\log\phi_{\sigma^{\prime}}-\langle O\rangle\nabla_{\delta}\log\phi_{\sigma}\right]\right\}\,. \tag{14}\] Recently it has been shown that this term is affected by a systematic statistical bias or exponential sample complexity when the wave function contains some (possibly approximate) zeros [13]. This scenario is likely to occur in ground-state calculations of fermionic systems, such as the molecular system we reported in Section II.3. For this reason, we also implemented an unbiased estimator for the gradient of the classical parameters, \[\nabla_{\delta}\langle O\rangle_{\text{unbiased}}=\sum_{\sigma}p_{\sigma}\left\{2\,\text{Re}\left[\sum_{\sigma^{\prime}}O^{B}_{\sigma\sigma^{\prime}}\frac{\nabla_{\delta}\phi_{\sigma^{\prime}}}{\phi_{\sigma}}O^{A}_{\sigma\sigma^{\prime}}-\langle O\rangle\nabla_{\delta}\log\phi_{\sigma}\right]\right\}. \tag{15}\] The quantum term \(\nabla_{\theta}O^{A}_{\sigma\sigma^{\prime}}\) appearing in the second case of Eq. (13), on the other hand, is evaluated on the quantum computer. We evaluate this quantum term using an extension of the parameter shift rule [60, 61, 62, 63, 64] to the Hadamard test. More explicitly, we can evaluate the derivative with respect to the \(i\)-th component of the parameter vector \(\theta\) of the real and imaginary part of the overlap separately. For the real part, we have \[\frac{\partial}{\partial\theta_{i}}\mathrm{Re}\left[O^{A}_{\sigma,\sigma^{\prime}}(\theta)\right]=\frac{1}{2}\left\{\mathrm{Re}\left[O^{A}_{\sigma,\sigma^{\prime}}\left(\theta+\tfrac{\pi}{2}e_{i}\right)\right]-\mathrm{Re}\left[O^{A}_{\sigma,\sigma^{\prime}}\left(\theta-\tfrac{\pi}{2}e_{i}\right)\right]\right\}, \tag{16}\] where we indicated \(O^{A}_{\sigma,\sigma^{\prime}}(\theta)=\bra{0}U^{\dagger}_{\sigma}(\theta)O_{A}U_{\sigma^{\prime}}(\theta)\ket{0}\), and the real (or imaginary) part is evaluated using the procedure presented in Appendix A. Repeating the same procedure for every component of the parameter vector, we obtain an estimation of the quantum term of the gradient. Figure 4: Sketch of the variational circuits used as ansatzes for the _active_ subspace in the study of the Transverse Field Ising Model and the ammonia molecule. The light orange gates that are repeated multiple times are controlled by the _bath_ configurations. ### Optimization details Once the gradient of the cost function has been measured, we choose a classical optimizer to tune the variational parameters of the hybrid ansatz. The optimization of the parameters of the RBM is performed using the stochastic reconfiguration protocol detailed in [5]. We use a learning rate \(\eta\in[0.005,0.01]\) and a regularization factor of \(0.001\) for the S matrix. For the sample-to-angle neural network and the quantum circuit, we used the first-order optimizer ADAM [65], with default values for the hyperparameters and a starting learning rate \(\eta=0.01\). All the neural networks are initialized from a random normal distribution with zero mean and a standard deviation of \(0.01\). ###### Acknowledgements. This research was supported by the NCCR MARVEL, a National Centre of Competence in Research, funded by the Swiss National Science Foundation (grant number 205602). S.B. acknowledges G. Gentinetta, F. Metz, A. Sinibaldi, S. Battaglia and J.R. Moreno for insightful discussions.
## Data availability All the data to reproduce the plots and the simulations presented in the manuscript can be found on GitHub [66]. The colors used for the plots are part of the scientific colormap database [67]. ## Code availability Simulations presented in the manuscript are performed using custom modifications of the open source libraries Netket [68] and Pennylane [69]. The molecular Hamiltonian of the ammonia molecule has been obtained with PySCF [70] and mapped to a qubit operator using Qiskit [71]. The code is available on GitHub [66].
2309.10029
Polarized anisotropic synchrotron emission and absorption and its application to Black Hole Imaging
Low-collisionality plasma in a magnetic field generically develops anisotropy in its distribution function with respect to the magnetic field direction. Motivated by the application to radiation from accretion flows and jets, we explore the effect of temperature anisotropy on synchrotron emission. We derive analytically and provide numerical fits for the polarized synchrotron emission and absorption coefficients for a relativistic bi-Maxwellian plasma (we do not consider Faraday conversion/rotation). Temperature anisotropy can significantly change how the synchrotron emission and absorption coefficients depend on observing angle with respect to the magnetic field. The emitted linear polarization fraction does not depend strongly on anisotropy, while the emitted circular polarization does. We apply our results to black hole imaging of Sgr A* and M87* by ray-tracing a GRMHD simulation and assuming that the plasma temperature anisotropy is set by the thresholds of kinetic-scale anisotropy-driven instabilities. We find that the azimuthal asymmetry of the 230 GHz images can change by up to a factor of 3, accentuating ($T_\perp > T_\parallel$) or counteracting ($T_\perp < T_\parallel$) the image asymmetry produced by Doppler beaming. This can change the physical inferences from observations relative to models with an isotropic distribution function, e.g., by allowing for larger inclination between the line of sight and spin direction in Sgr A*. The observed image diameter and the size of the black hole shadow can also vary significantly due to plasma temperature anisotropy. We describe how the anisotropy of the plasma can affect future multi-frequency and photon ring observations. In Appendices we calculate kinetic anisotropy-driven instabilities (mirror, whistler, and firehose) for relativistically hot plasmas.
Alisa Galishnikova, Alexander Philippov, Eliot Quataert
2023-09-18T18:00:01Z
http://arxiv.org/abs/2309.10029v1
# Polarized anisotropic synchrotron emission and absorption and its application to Black Hole Imaging ###### Abstract Low-collisionality plasma in a magnetic field generically develops anisotropy in its distribution function with respect to the magnetic field direction. Motivated by the application to radiation from accretion flows and jets, we explore the effect of temperature anisotropy on synchrotron emission. We derive analytically and provide numerical fits for the polarized synchrotron emission and absorption coefficients for a relativistic bi-Maxwellian plasma (we do not consider Faraday conversion/rotation). Temperature anisotropy can significantly change how the synchrotron emission and absorption coefficients depend on observing angle with respect to the magnetic field. The emitted linear polarization fraction does not depend strongly on anisotropy, while the emitted circular polarization does. We apply our results to black hole imaging of Sgr A* and M87* by ray-tracing a GRMHD simulation and assuming that the plasma temperature anisotropy is set by the thresholds of kinetic-scale anisotropy-driven instabilities. We find that the azimuthal asymmetry of the 230 GHz images can change by up to a factor of 3, accentuating (\(T_{\perp}>T_{\parallel}\)) or counteracting (\(T_{\perp}<T_{\parallel}\)) the image asymmetry produced by Doppler beaming. This can change the physical inferences from observations relative to models with an isotropic distribution function, e.g., by allowing for larger inclination between the line of sight and spin direction in Sgr A*. The observed image diameter and the size of the black hole shadow can also vary significantly due to plasma temperature anisotropy. We describe how the anisotropy of the plasma can affect future multi-frequency and photon ring observations. In Appendices we calculate kinetic anisotropy-driven instabilities (mirror, whistler, and firehose) for relativistically hot plasmas. ## 1 Introduction Synchrotron emission produced by relativistic electrons in the presence of a magnetic field appears in many astrophysical systems. It is the source of emission across much of the electromagnetic spectrum in pulsar wind nebulae and jets from neutron stars and black holes (BHs). Synchrotron emission is also the source of the mm-wavelength radio emission observed on event-horizon scales in M87* and Sgr A* by the Event Horizon Telescope (EHT) (Event Horizon Telescope Collaboration et al., 2019, 2022). Models of synchrotron emission from astrophysical plasmas typically assume that the plasma has a thermal or power-law distribution function or a hybrid of the two, such as a kappa distribution function. The latter two are motivated by the power-law (non-thermal) synchrotron spectra often observed from astrophysical sources. Another explicit assumption typically made is that the electron distribution function is isotropic relative to the local magnetic field, i.e., that the electrons have the same temperature or energy density in all directions.1 Footnote 1: An exception to this is in very strongly magnetized plasmas such as neutron star magnetospheres where the synchrotron cooling time is so short that the perpendicular energy is nearly instantaneously radiated away. In this paper we are focused on applications with weaker magnetic fields, such as black hole accretion flows and jets. In the presence of dynamically strong magnetic fields, the assumption of an isotropic electron distribution function is not theoretically or observationally well-motivated. 
By dynamically strong here, we mean an energy density in the magnetic field similar to or larger than that in the plasma. Such magnetized collisionless (and weakly collisional) plasmas can readily depart from thermal equilibrium and develop anisotropies with respect to the local magnetic field direction (Quataert et al., 2002). Although the distribution function will in general be gyrotropic (isotropic in the plane perpendicular to the magnetic field), it can have significant anisotropies parallel and perpendicular to the local magnetic field (Kulsrud, 1983). There is extensive observational evidence for such anisotropy in the solar corona and solar wind (Bale et al., 2009). In the most extreme cases, Oxygen ions in the solar corona have perpendicular temperatures that are a factor of \(\sim 10-100\) times that of their parallel temperature (Cranmer et al., 1999). This anisotropy is in fact critical to interpreting spectroscopy of the solar corona. By analogy, one might expect that anisotropy in the electron distribution function could be important for interpreting synchrotron radiation from astrophysical plasmas. This is particularly true in high spatial resolution observations where our viewing angle relative to the local magnetic field likely changes significantly across the image (e.g., the EHT or radio interferometry more generally). The anisotropy in a plasma's distribution function cannot, however, grow without bound. It is limited by kinetic-scale instabilities such as the mirror, whistler, firehose, and ion cyclotron instabilities (Rosenbluth, 1956; Southwood and Kivelson, 1993; Chandrasekhar et al., 1958; Rudakov and Sagdeev, 1961; Sudan, 1963; Gary, 1992). When the anisotropy in the distribution function becomes too large (relative to the threshold of the instability2), such instabilities rapidly grow, driving the anisotropy towards the instability threshold. This endows the plasma with an effective collisionality that acts to partially isotropize the distribution function. A very rough rule of thumb is that instabilities set in vigorously when the fractional temperature anisotropy satisfies \(|\Delta T/T|\gtrsim\mathcal{O}(\beta^{-1})\) (where \(\Delta T\) is the temperature anisotropy and \(\beta\) is the ratio of thermal to magnetic energy). Anisotropy can thus be much larger in strongly magnetized plasmas with \(\beta\lesssim 1\). Anisotropy in the distribution function is thus expected to be particularly important in jets and in models of accretion flows with dynamically strong magnetic fields, such as the Magnetically Arrested Disc (MAD) models favored by EHT observations of M87* (Event Horizon Telescope Collaboration et al., 2021). Footnote 2: Some instabilities, e.g., the ion cyclotron instability, formally do not have a threshold, but their growth rate becomes sufficiently small at low anisotropies that in practice they do. Observations of protons and electrons in the solar wind show that they obey the expected anisotropy-driven instability thresholds and that the anisotropy is larger at lower \(\beta\)(Bale et al., 2009) (however, the measured anisotropy is smaller than the instability thresholds at \(\beta\lesssim 0.1\)). We expect that in accretion flows and jets, inflow, outflow, and heating of the plasma will likewise drive temperature anisotropies to the point that instabilities set in (Foucart et al., 2017). 
Global axisymmetric GR kinetic simulations of collisionless plasma accreting onto a BH indeed find the growth of the mirror and firehose instabilities and that they regulate the plasma's temperature anisotropy (Galishnikova et al., 2023). Motivated by the potential importance of an anisotropic distribution function in synchrotron emitting plasmas, in this paper we theoretically calculate emission and absorption of polarized synchrotron radiation for a physically motivated gyrotropic distribution function. The study of polarized synchrotron radiation dates back to the work of Westfold (1959), who studied emission from an ultra-relativistically gyrating electron. General formulae for Stokes parameters for ultra-relativistic synchrotron emission from an ensemble of electrons can be found in the review of Ginzburg and Syrovatskii (1965), who noted that a substantial amount of circular polarization is present only in the case of a highly anisotropic pitch-angle distribution. Melrose (1971) presented the general equations for Stokes parameters for an arbitrary anisotropic distribution function separable in momentum and pitch-angle, while Sazonov (1972) focused on the case of a power-law momentum distribution with a separable pitch-angle anisotropy. In the last few decades, the study of synchrotron radiation was extended to a broader range of validity and a number of different distribution functions via numerical integration methods (Mahadevan et al., 1996; Shcherbakov, 2008; Leung et al., 2011; Pandya et al., 2016, 2018; Dexter, 2016). This is useful for improving analytical results at arbitrary frequency, emission direction with respect to the magnetic field, and distribution function. These works provide fits for the Stokes emissivities, absorptivities, and rotativities that have been widely used in modeling polarized synchrotron radiation from accreting black holes, particularly in the context of the EHT sources M87* and Sgr A* [e.g., Dexter (2016); Moscibrodzka and Gammie (2018); White (2022) and others; see also Gold et al. (2020)]. However, no pitch-angle anisotropy was considered in these studies. In this paper, we extend previous work on synchrotron radiation by studying the intrinsic emission from an ensemble of electrons with an anisotropic relativistic distribution function. We focus on the case of a relativistic generalization of a bi-Maxwellian that has different temperatures perpendicular and parallel to the local magnetic field (§2) and provide fits for the polarized emissivity and absorption coefficients in Section 2.2. We defer the case of an anisotropic power-law distribution function, as well as the calculation of Faraday rotation and conversion coefficients for an anisotropic distribution function, to future work. We then implement these expressions in a GR radiative transfer code to ray trace GRMHD MAD simulations and study the impact of pitch-angle anisotropy on the observable quantities (Section 3). Finally, in Section 4 we summarise the application of our results to current and future EHT observations.

## 2 Synchrotron Emission from Gyrotropic Distribution Functions

In this section, we describe radiation transfer and emission produced by electrons with a gyrotropic distribution function \(f(\gamma,\xi)\) in the presence of a background magnetic field \(\mathbf{B}\); \(\gamma\) and \(\xi\) denote the Lorentz factor of electrons and pitch angle with the magnetic field respectively; we will use \(\mu=\cos\xi\) and \(\xi\) interchangeably in what follows.
Throughout the paper, \(m_{e}\), \(e\), and \(c\) are constants that stand for electron mass, electron charge, and speed of light. Therefore, the momentum of a particle with velocity \(\mathbf{v}\) is \(\mathbf{p}=m_{e}\gamma\mathbf{v}\) and \(\beta=\mathbf{v}/c\). In what follows, we normalize the frequency of emission \(\nu\) by a non-relativistic cyclotron frequency given by \(\nu_{c}=eB/2\pi m_{e}c\). The angle between the propagation direction along the wavevector \(\mathbf{k}\) and the background magnetic field \(\mathbf{B}\) is set by \(\theta_{B}\). Polarized emission is described in the Stokes basis as \(I_{a}=\{I,Q,U,V\}^{T}\), where \(I\) stands for intensity, \(Q\) and \(U\) describe linear polarization, and \(V\) describes circular polarization. Given emissivities \(j_{a}=\{j_{I},j_{Q},j_{U},j_{V}\}^{T}\), absorption coefficients \(\alpha_{a}=\{\alpha_{I},\alpha_{Q},\alpha_{U},\alpha_{V}\}^{T}\), and Faraday rotativities \(\rho_{a}=\{\rho_{Q},\rho_{U},\rho_{V}\}^{T}\), the polarized emission can then be found using [see, e.g, Leung et al. (2011)] \[\frac{dI_{a}}{ds}=j_{a}-M_{ab}I_{b}, \tag{1}\] where \(M_{ab}\) is the Mueller matrix, \[M_{ab}=\begin{pmatrix}\alpha_{I}&\alpha_{Q}&\alpha_{U}&\alpha_{V}\\ \alpha_{Q}&\alpha_{I}&\rho_{V}&-\rho_{U}\\ \alpha_{U}&-\rho_{V}&\alpha_{I}&\rho_{Q}\\ \alpha_{V}&\rho_{U}&-\rho_{Q}&\alpha_{I}\end{pmatrix}, \tag{2}\] where \(U\) components vanish if \(\mathbf{B}\) is aligned with \(U\): \(j_{U}=0\), \(\alpha_{U}=0\), and \(\rho_{U}=0\). Then \(I\) components of \(j_{a}\) and \(\alpha_{a}\) describe total emission, \(Q\) components describe linearly polarised emission and \(V\) describe circularly polarised emission, while \(\rho_{Q}\) and \(\rho_{V}\) account for Faraday conversion and rotation respectively. In this work, we focus on emissivities \(j_{a}\) and absorption coefficients \(\alpha_{a}\), while Faraday rotativities \(\rho_{a}\) will be studied in future work. We need to evaluate \(j_{a}\), \(\alpha_{a}\), and \(\rho_{a}\) through \(f(\gamma,\xi)\) to describe the radiation emission and transfer. In the Stokes basis at frequency \(\nu\) [see, e.g., Leung et al. (2011)]: \[j_{a} =\frac{2\pi e^{2}\nu^{2}}{c}\int d^{3}pf(\gamma,\xi)\sum_{n=1}^{ \infty}\delta(y_{n})K_{a}(z), \tag{3}\] \[\alpha_{a} =\frac{2\pi\nu}{m_{e}c^{2}}\int d^{3}pDf(\gamma,\xi)\sum_{n=1}^{ \infty}\delta(y_{n})K_{a}(z),\] where \(\delta(y_{n})\) is a delta function of argument \(y_{n}=n\nu_{c}/\gamma-\nu(1-\beta\cos\xi\cos\theta_{B})\), \(z=\nu\gamma\beta\sin\theta_{B}\sin\xi/\nu_{c}\), \(d^{3}p=2\pi m_{e}^{3}c^{3}\gamma^{2}\beta d\gamma d\cos\xi\) for a gyrotropic \(f(\gamma,\xi)\), and \(Df\) is an operator that includes a full derivative of the distribution function: \[Df\equiv\left(k_{\parallel}\frac{\partial}{\partial p_{\parallel} }+\frac{\omega-k_{\parallel}v_{\parallel}}{v_{\perp}}\frac{\partial}{ \partial p_{\perp}}\right)f(\gamma,\xi) \tag{4}\] \[=\frac{2\pi\nu}{m_{e}c^{2}}\left(\frac{\partial}{\partial\gamma }+\frac{\beta\cos\theta_{B}-\cos\xi}{\beta^{2}\gamma}\frac{\partial}{\partial \cos\xi}\right)f(\gamma,\xi),\] In Equation 3, \(K_{a}\) is defined as \[K_{a}=\begin{cases}M^{2}J_{n}^{2}(z)+N^{2}J_{n}^{\prime 2}(z),a=I,\\ M^{2}J_{n}^{2}(z)-N^{2}J_{n}^{\prime 2}(z),a=Q,\\ 0,a=U,\\ 2MNJ_{n}(z)J_{n}^{\prime}(z),a=V,\end{cases} \tag{5}\] where \(J_{n}\) is a Bessel function of the first kind, \(M=(\cos\theta_{B}-\beta\cos\xi)/\sin\theta_{B}\), and \(N=\beta\sin\xi\). 
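To make the bookkeeping of Equations 1 and 2 concrete, the following is a minimal numerical sketch (assuming a simple explicit Euler update along the ray with arbitrary illustrative coefficients; this is not the integration scheme of any particular radiative transfer code) of advancing the Stokes vector with a given set of emissivities, absorptivities, and rotativities.

```python
# Minimal sketch of one step of dI_a/ds = j_a - M_ab I_b (Equations 1-2).
import numpy as np

def mueller_matrix(alpha_I, alpha_Q, alpha_U, alpha_V, rho_Q, rho_U, rho_V):
    """Mueller matrix M_ab of Equation 2."""
    return np.array([
        [alpha_I,  alpha_Q,  alpha_U,  alpha_V],
        [alpha_Q,  alpha_I,  rho_V,   -rho_U  ],
        [alpha_U, -rho_V,    alpha_I,  rho_Q  ],
        [alpha_V,  rho_U,   -rho_Q,    alpha_I],
    ])

def transfer_step(I, j, M, ds):
    """Advance the Stokes vector I = (I, Q, U, V) by a path length ds (explicit Euler)."""
    return I + ds * (j - M @ I)

# Example: pure emission and absorption, no Faraday rotation/conversion.
I = np.zeros(4)
j = np.array([1.0, -0.3, 0.0, 0.01])                  # (j_I, j_Q, j_U, j_V), arbitrary values
M = mueller_matrix(0.5, -0.1, 0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(2000):
    I = transfer_step(I, j, M, ds=1e-2)
print(I)   # relaxes towards the solution of j_a = M_ab I_b
```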
Given \(f(\gamma,\xi)\), one can find \(j_{a}\) and \(\alpha_{a}\) through Equations 3, 4, and 5. ### Anisotropic electron distribution function We will use an anisotropic distribution function \(f(\gamma,\xi)\) for emitting electrons, written in cgs units: \[f(p_{\perp},p_{\parallel})=\frac{n_{e}\eta^{1/2}}{4\pi m_{e}^{3}c ^{3}\epsilon_{\perp}K_{2}(1/\epsilon_{\perp})}\times \tag{6}\] \[\exp{(-\sqrt{1+(p_{\perp}/m_{e}c)^{2}+\eta(p_{\parallel}/m_{e}c)^ {2}}/\epsilon_{\perp})},\] where \(n_{e}\) is the electron number density, \(\epsilon_{\perp}=kT_{\perp,e}/m_{e}c^{2}\) is the dimensionless perpendicular electron temperature, \(K_{2}\) is the modified Bessel function of the second kind, \(p_{\perp}\) and \(p_{\parallel}\) stand for the relativistic momentum perpendicular and parallel to the magnetic field direction. Here \(\eta\) is a measure of anisotropy, with \(\eta=1\) corresponding to an isotropic relativistic Maxwellian distribution function. In the non-relativistic limit, \(T_{\perp,e}/T_{\parallel,e}=\eta\), while \(T_{\perp,e}/T_{\parallel,e}\approx\eta^{0.8}\) in the ultra-relativistic limit (see Appendix C for a detailed fit). Transforming \(f(p_{\perp},p_{\parallel})\) to \(\gamma-\xi\) variables: \[f(\gamma,\xi)=\frac{n_{e}\eta^{1/2}}{4\pi m_{e}^{3}c^{3}\epsilon_{ \perp}K_{2}(1/\epsilon_{\perp})}\times \tag{7}\] \[\exp{(-\sqrt{1+(\gamma^{2}-1)(\sin^{2}\xi+\eta\cos^{2}\xi)}/ \epsilon_{\perp})}.\] In the limit of high \(\gamma\): \[f(\gamma,\xi) =\frac{n_{e}\eta^{1/2}}{4\pi m_{e}^{3}c^{3}\epsilon_{\perp}K_{2}(1/ \epsilon_{\perp})}\times \tag{8}\] \[\exp{(-\gamma\sqrt{1+(\eta-1)\cos^{2}\xi}/\epsilon_{\perp})}\] \[=\frac{n_{e}\eta^{1/2}}{4\pi m^{3}c^{3}\epsilon_{\perp}K_{2}(1/ \epsilon_{\perp})}\times\exp{(-\gamma/\epsilon_{\perp}^{*})},\] where \[\epsilon_{\perp}^{*}=\epsilon_{\perp}^{*}(\xi)=\frac{\epsilon_{\perp}}{\sqrt{ 1+(\eta-1)\cos^{2}\xi}} \tag{9}\] is the new renormalized temperature. Thus, in the high-\(\gamma\) limit, the temperature in the distribution function depends on both the anisotropy \(\eta\) and pitch angle or \(\mu=\cos\xi\). In the isotropic case \(\eta=1\), the temperature is described by \(\epsilon=\epsilon_{\perp}=\epsilon_{\parallel}\) in all directions. Note that in the analytical fitting functions in the next subsection, \(\epsilon_{\perp}^{*}\) will be evaluated at \(\xi=\theta_{B}\) because the radiation is beamed along the local direction of motion of relativistic electrons (as is standard in synchrotron radiation, see Appendix A for details). In our numerical evaluations, however, we integrate and sum over \(\xi\) and \(\theta_{B}\) separately using Equation 3. The total derivative \(Df\) that is used in calculating \(\alpha_{a}\) in Equation 3 contains \[\partial_{\gamma}f(\gamma,\xi) =-\frac{\gamma}{\epsilon_{\perp}}\frac{1+\mu^{2}(\eta-1)}{\sqrt{ \gamma^{2}+(\gamma^{2}-1)(\eta-1)\mu^{2}}}f(\gamma,\xi), \tag{10}\] \[\partial_{\mu}f(\gamma,\xi) =-\frac{(\gamma^{2}-1)(\eta-1)\mu}{\epsilon_{\perp}\sqrt{\gamma^{ 2}+(\gamma^{2}-1)(\eta-1)\mu^{2}}}f(\gamma,\xi).\] While \(\partial_{\mu}f(\gamma,\xi)\) is non-zero, we find that the absorption coefficients change negligibly if we include this term. This is due to the prefactor it goes with in equation 4 since \(\gamma\gg 1\) and the absorption is mainly concentrated around \(\xi\approx\theta_{B}\) (see Appendix A for details). 
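As a quick numerical check of Equations 7-9, the sketch below (with momenta in units of \(m_{e}c\), so the \(m_{e}^{3}c^{3}\) factor is absorbed into the normalization, and \(n_{e}=1\)) evaluates the anisotropic distribution function and verifies the high-\(\gamma\) limit of Equation 8.

```python
# Sketch of the relativistic bi-Maxwellian of Equation 7 and the effective
# temperature of Equation 9; momenta in units of m_e c, n_e = 1.
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def f_aniso(gamma, xi, eps_perp, eta):
    """f(gamma, xi) of Equation 7."""
    norm = np.sqrt(eta) / (4.0 * np.pi * eps_perp * kv(2, 1.0 / eps_perp))
    arg = np.sqrt(1.0 + (gamma**2 - 1.0) * (np.sin(xi)**2 + eta * np.cos(xi)**2))
    return norm * np.exp(-arg / eps_perp)

def eps_perp_star(eps_perp, eta, xi):
    """Pitch-angle-dependent effective temperature, Equation 9."""
    return eps_perp / np.sqrt(1.0 + (eta - 1.0) * np.cos(xi)**2)

gamma, xi, eps_perp, eta = 50.0, 0.3, 10.0, 4.0
norm = np.sqrt(eta) / (4.0 * np.pi * eps_perp * kv(2, 1.0 / eps_perp))
print(f_aniso(gamma, xi, eps_perp, eta))                          # exact Equation 7
print(norm * np.exp(-gamma / eps_perp_star(eps_perp, eta, xi)))   # high-gamma limit, Equation 8
```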
### Emissivities and absorption coefficients

We obtain the following fits for emissivities and absorption coefficients for a relativistic plasma with an anisotropic bi-Maxwellian distribution function (see Appendix A for details on the derivation): \[j_{a}=\begin{cases}\eta^{1/2}\mathcal{K}_{a}(\epsilon_{\perp}^{*},\epsilon_{\perp})j_{a,\text{iso}}(\epsilon=\epsilon_{\perp}^{*},\nu/\nu_{c},\theta_{B}),a=\{I,Q\}\\ \eta^{3/2}\mathcal{K}_{a}(\epsilon_{\perp}^{*},\epsilon_{\perp})j_{a,\text{iso}}(\epsilon=\epsilon_{\perp}^{*},\nu/\nu_{c},\theta_{B}),a=V,\end{cases} \tag{11}\] where \[\mathcal{K}_{a}(\epsilon_{\perp}^{*},\epsilon_{\perp})=\begin{cases}(\epsilon_{\perp}^{*}/\epsilon_{\perp})[K_{2}(1/\epsilon_{\perp}^{*})/K_{2}(1/\epsilon_{\perp})],a=\{I,Q\}\\ (\epsilon_{\perp}^{*}/\epsilon_{\perp})^{3}[K_{2}(1/\epsilon_{\perp}^{*})/K_{2}(1/\epsilon_{\perp})],a=V,\end{cases} \tag{12}\] and \(K_{2}(1/\epsilon_{\perp})\approx 2\epsilon_{\perp}^{2}\) when \(\epsilon_{\perp}\gg 1\). Here \(\epsilon_{\perp}^{*}\) is evaluated at \(\xi=\theta_{B}\), and \(j_{a,\text{iso}}(\epsilon=\epsilon_{\perp}^{*},\nu/\nu_{c},\theta_{B})\) and \(\alpha_{a,\text{iso}}(\epsilon=\epsilon_{\perp}^{*},\nu/\nu_{c},\theta_{B})\) correspond to emission and absorption in the case of an isotropic relativistic Maxwellian. Absorption coefficients \(\alpha_{a}\) can be obtained via Kirchhoff's law for a thermal distribution function: \[j_{a,\text{iso}}-\alpha_{a,\text{iso}}B_{\nu}=0, \tag{13}\] where \(B_{\nu}(T_{\perp,e})=\frac{2h\nu^{3}}{c^{2}}(\exp{(h\nu/kT_{\perp,e})}-1)^{-1}\) for an anisotropic distribution has the same functional form as in the isotropic case (with \(T_{\perp,e}=T_{e}\) for an isotropic distribution).

Figure 1: Emissivity \(j_{I}\) (a), absorption \(\alpha_{I}\) (b), and emitted polarization fractions (c) as functions of the angle between propagation direction and magnetic field \(\theta_{B}\) at different anisotropy values: \(T_{\perp}<T_{\parallel}\) (\(\eta<1\), blue), isotropic (\(\eta=1\), black), and \(T_{\perp}>T_{\parallel}\) (\(\eta>1\), red). The free parameters are \(\nu/\nu_{c}=10^{3}\) and \(\epsilon_{\perp}=10\) (near the peak of the optically thin synchrotron spectrum for an isotropic distribution function). In (c) the emitted linear \(|j_{Q}|/j_{I}\) and circular \(|j_{V}|/j_{I}\) polarization fractions are shown by solid and dashed lines respectively. In (a) the \(\sin^{2}\theta_{B}\) dependence is shown by a black dotted line.

Equations 11 correspond to total intensity and linearly polarized emissivities with the same functional form as in an isotropic plasma but with a temperature \(\epsilon_{\perp}^{*}\) that depends on the observing angle \(\theta_{B}\) due to the anisotropy in the distribution function relative to the magnetic field (the factors of \(\eta^{1/2}\mathcal{K}_{a}(\epsilon_{\perp}^{*},\epsilon_{\perp})\approx\eta^{1/2}(\epsilon_{\perp}^{*}/\epsilon_{\perp})^{3}\) in Equation 11 reflect the change in normalization of the distribution function due to the different number of particles whose radiation is beamed in the direction of the observer). By contrast, the Stokes V (circular polarization) emissivity in Equation 11 differs by a larger factor because of a change in the efficiency of producing circularly polarized radiation for an anisotropic distribution function (see Eq. A1 and A9 in Appendix A).
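A sketch of how Equations 11-13 would be evaluated in practice is given below. The isotropic fits \(j_{a,\text{iso}}\) themselves (Appendix A; available numerically in codes such as symphony) are not reproduced here, so `j_iso` is a user-supplied placeholder callable rather than a real library function, and the units of the result follow whatever units `j_iso` returns.

```python
# Sketch of the anisotropic emissivity and absorption fits, Equations 11-13.
# `j_iso(a, eps, x, theta_B)` must return the isotropic thermal emissivity for
# Stokes component a at dimensionless temperature eps, x = nu/nu_c, and angle
# theta_B; it stands in for the Appendix A fits and is not defined here.
import numpy as np
from scipy.special import kv
from scipy.constants import h, c, electron_mass as m_e  # SI constants

def eps_perp_star(eps_perp, eta, theta_B):
    """Equation 9, evaluated at xi = theta_B."""
    return eps_perp / np.sqrt(1.0 + (eta - 1.0) * np.cos(theta_B)**2)

def j_aniso(a, eps_perp, eta, x, theta_B, j_iso):
    """Equation 11 for a in {'I', 'Q', 'V'}."""
    eps_star = eps_perp_star(eps_perp, eta, theta_B)
    bessel_ratio = kv(2, 1.0 / eps_star) / kv(2, 1.0 / eps_perp)   # K_2(1/eps*)/K_2(1/eps)
    if a in ('I', 'Q'):
        K_a = (eps_star / eps_perp) * bessel_ratio                 # Equation 12, a = I, Q
        return np.sqrt(eta) * K_a * j_iso(a, eps_star, x, theta_B)
    K_a = (eps_star / eps_perp)**3 * bessel_ratio                  # Equation 12, a = V
    return eta**1.5 * K_a * j_iso(a, eps_star, x, theta_B)

def alpha_aniso(a, eps_perp, eta, nu, nu_c, theta_B, j_iso):
    """Kirchhoff's law (Equation 13) with the Planck function at T_perp,e;
    units must be consistent with those returned by j_iso."""
    B_nu = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (eps_perp * m_e * c**2))
    return j_aniso(a, eps_perp, eta, nu / nu_c, theta_B, j_iso) / B_nu
```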
Given Equation 11, it is straightforward to derive the location of the peak of the optically thin emission \(\nu j_{I}(\epsilon,\nu,\theta_{B},\eta)\) as \[\nu_{\rm peak}\approx 36.7\nu_{c}\epsilon_{\perp}^{*2}\sin\theta_{B}\simeq \frac{36.7\nu_{c}\epsilon_{\perp}^{2}\sin\theta_{B}}{1+(\eta-1)\cos^{2}\theta _{B}}, \tag{14}\] which shifts to lower (higher) frequencies with increasing (decreasing) \(\eta\) at a fixed \(\epsilon_{\perp}\) and \(\theta_{B}\) (though we show below in Figure 1 that \(\eta\) changes the efficiency of producing radiation as a function of \(\theta_{B}\)). We find good agreement between our analytic expressions and numerical calculation over a wide parameter range using a publicly available code symphony, which integrates Equations 3 for a given distribution function \(f(\gamma,\xi)\). For a detailed derivation of the fits given by Equations 11, full expressions for \(j_{a,\rm iso}\) and \(\alpha_{a,\rm iso}\), and the comparison with numerical solutions see Appendix A. The fits presented here become inaccurate for \(\epsilon_{\perp}^{+}\lesssim 3\) and low frequencies \(\nu/\nu_{c}\lesssim 10\), where the isotropic fits that we scale to in Equation 11 themselves become inaccurate. We demonstrate the resulting emission properties in Figure 1, where \(j_{I}\) (a), \(\alpha_{I}\) (b), and the emitted linear and circular polarization fractions (solid and dashed lines in (c) respectively) are shown as functions of the angle between propagation direction and magnetic field \(\theta_{B}\) at different values of anisotropy parameter \(\eta\), represented by different colors (we intentionally choose relatively large anisotropy to highlight the large differences in synchrotron radiation possible in this limit). The parameters used in this Figure are a high frequency of \(\nu/\nu_{c}=10^{3}\) and temperature of \(\epsilon_{\perp}=10\). The isotropic case (black line) closely follows a \(\sin^{2}\theta_{B}\) dependence of \(j_{I}\) (dotted) in panel (a) due to the frequency being near the peak of the synchrotron emissivity. Figure 1 shows that there are significant differences in the synchrotron emission/absorption for an anisotropic plasma distribution, compared to the isotropic case. This change can be understood as a renormalization of the number of relativistic particles emitting toward the observer at \(\theta_{B}\). In particular, the plasma is less prone to emitting along the magnetic field at \(\eta>1\) (\(T_{\perp}>T_{\parallel}\), red lines), hence the rapid fall off of \(j_{\nu,I}\) with decreasing \(\theta_{B}\), compared to \(\eta\equiv 1\). That is, the emission is even more concentrated towards \(\theta_{B}=90^{\circ}\) when \(\eta>1\). For the opposite anisotropy, \(\eta<1\) and \(T_{\parallel}>T_{\perp}\) (blue lines), the number of particles capable of emitting along the magnetic field direction increases. Thus, more emission can be produced at smaller \(\theta_{B}\) (along the magnetic field), relative to the isotropic case with \(\eta\equiv 1\). The unpolarised absorption coefficient \(\alpha_{I}\) (b) shows a similar but smaller dependence on \(\eta\) as \(j_{I}\). The polarization fractions have a weaker dependence on \(\eta\). This is shown in Figure 1 (c) with solid and dashed lines for the intrinsic linear \(|j_{Q}|/j_{I}\) and circular \(|j_{V}|/j_{I}\) polarization fractions, respectively. 
Quantitatively, both \(|j_{Q}|/j_{I}\) and \(|j_{V}|/j_{I}\) are higher for higher \(\eta\) but the change is particularly modest for the intrinsic linear polarization \(|j_{Q}|/j_{I}\). Since most of the emission comes from small (large) angles for \(\eta<1\) (\(\eta>1\)), the emitted circular polarisation degree can significantly vary with \(\eta\) due to the change in which pitch angles dominate the emission. In particular, \(\eta>1\) is significantly more circularly polarised, and \(\eta<1\) is less circularly polarized, compared to emission from an isotropic plasma. This is because \(\eta>1\) decreases the effective temperature \(\epsilon_{\perp}^{*}\) by suppressing the parallel temperature at a fixed \(\epsilon_{\perp}\). ## 3 Black hole imaging In this section we study the observational implications of synchrotron emission by a plasma with anisotropic temperatures in the context of black hole accretion flows. Specifically, we focus on the application to the EHT targets Sgr A* and M87* (Event Horizon Telescope Collaboration et al., 2019, 2022). Our goal in this initial study is to determine the rough magnitude of the effect and which observables are most sensitive to electron temperature anisotropy. The exact electron temperature anisotropy in the near-horizon plasma is uncertain so we will use general stability arguments to bound the anisotropy and thus the effect of anisotropy on the synchrotron radiation. ### Method We use a publicly available radiative transfer code blacklight(White, 2022) to ray trace synchrotron emission in GRMHD simulations and study the resulting intensity and polarization images. We implement the formulas for the emissivity and absorption coefficients of hot electrons with an anisotropic distribution function discussed in SS2 (we use the limit of high temperature such that \(K_{2}(1/x)\approx 2x^{2}\)). Since EHT observational constraints favor highly magnetized models (Event Horizon Telescope Collaboration et al., 2021), we restrict our study to a MAD simulation of plasma accreting onto a spinning BH with dimensionless spin parameters of \(a=0.98\) and \(0.5\). Our results are averaged over 100 snapshots which span a time of \(1000r_{g}/c\) when the accretion rate and magnetic flux on the horizon are in approximate steady-state (see Appendix D for details on the simulation setup and choice of this time period). Since the MHD method cannot handle vacuum, our GRMHD simulations have a ceiling plasma magnetization parameter of \(\sigma=B^{2}/[4\pi\rho c]=100\), and we ignore emission from \(\sigma>10\) regions.3 We choose a BH mass and distance to the BH to match M87*, \(M_{BH}=6.5\times 10^{9}M_{\odot}\) and \(d=1.67\times 10^{7}\)pc, unless otherwise specified. In the GRMHD simulations, the plasma number density normalization is a free parameter, which we choose such that the total flux of the image \(F_{\nu}\) at 230 GHz matches EHT observations of M87*, i.e., 0.66 Jy. The raytraced images have a resolution of \(128\times 128\) cells, with a point camera located at \(100r_{g}\) and inclination [observing angle] of \(\theta\). We consider both \(\theta=163\deg\), appropriate for M87*, as well as less face-on viewing angles to demonstrate the change with viewing angle. Footnote 3: The magnetization in the jet region can, in reality, be significantly larger than the ceiling value set in our GRMHD simulations. These low-density regions with \(\sigma\gtrsim 10\) are, however, not expected to contribute significantly to the observed flux at 230 GHz. 
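The density normalization mentioned above is found by rescaling the simulation's number density until the ray-traced 230 GHz flux matches the observed value. A schematic root-find is sketched below; `total_flux_jy` is a toy stand-in for a full ray-tracing call (it is not a blacklight API), included only so the example runs.

```python
# Schematic determination of the density normalization: rescale n_e until the
# ray-traced 230 GHz flux equals the target (0.66 Jy for M87*).
import numpy as np
from scipy.optimize import brentq

F_TARGET_JY = 0.66   # observed 230 GHz flux of M87*

def total_flux_jy(density_scale):
    """Toy stand-in for ray tracing the snapshot with n_e -> density_scale * n_e.
    In practice this would be the integrated 230 GHz image flux; here we only
    assume it increases monotonically with the density scale."""
    return 0.05 * density_scale

def normalize_density(scale_lo=1e-3, scale_hi=1e3):
    # bracketing root-find on log(scale), assuming monotonic flux(scale)
    g = lambda log_s: total_flux_jy(np.exp(log_s)) - F_TARGET_JY
    return np.exp(brentq(g, np.log(scale_lo), np.log(scale_hi)))

print(normalize_density())   # scale factor at which the model flux matches 0.66 Jy
```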
Since the GRMHD equations evolve a single fluid, while in the plasmas of interest the electrons and ions likely have different temperatures, we have the freedom to set the electron temperature. The heating of collisionless electrons should depend on local plasma parameters, in particular, the magnetic field strength (Quataert & Gruzinov, 1999) via \(\beta_{\rm th}=P_{\rm th}/P_{B}\) - the ratio of thermal pressure to magnetic pressure. To parameterize the electron temperature, we use the widely-employed \(R_{\rm high}-R_{\rm low}\) model (Moscibrodzka et al., 2016). In this model, the ion-to-electron temperature ratio is set by \[R=\frac{T_{i}}{T_{e}}=\frac{\beta_{\rm th}^{2}R_{\rm high}+R_{\rm low}}{1+\beta_{\rm th}^{2}}, \tag{15}\] where \(\beta_{\rm th}=P_{\rm th}/P_{B}\) is the plasma \(\beta_{\rm th}\) for an MHD fluid, and \(R_{\rm high}\) and \(R_{\rm low}\) are ion-to-electron temperature ratios in the high and low-\(\beta_{\rm th}\) regions respectively. The fluid GRMHD temperature is \(T=(T_{i}+T_{e})/2\), and thus \(T_{e}=2T/(1+R)\). In this work, we explore three cases: \(R_{\rm high}=1\), \(10\) and \(100\), while \(R_{\rm low}\) is set to \(1\) always.

Figure 2: Plasma-\(\beta_{\rm th}\) (column 1, a and g), plasma temperature \(P/2\rho\) (column 2, b and h), anisotropy \(\eta\) (columns 3 and 4 for mirror and firehose respectively), and normalized electron perpendicular temperature \(\epsilon_{\perp}\) (columns 5 and 6 for mirror and firehose respectively) for \(a=0.98\) (at time of \(14300r_{g}/c\), top row) and \(a=0.5\) (at time of \(16000r_{g}/c\), bottom row).

To study the effect of the anisotropy of the plasma on images, we also have the freedom to set the anisotropy parameter \(\eta\sim T_{\perp,e}/T_{\parallel,e}\) since the GRMHD simulations have no information about plasma anisotropy. The anisotropy of the plasma is limited by kinetic-scale instability thresholds, which allow for a large anisotropy in low-\(\beta_{\rm th}\) regions. Ion-scale mirror and firehose instabilities are clearly present in global GR kinetic simulations of collisionless plasma accreting onto a BH (Galishnikova et al., 2023) and in kinetic shearing box simulations (Kunz et al., 2014; Riquelme et al., 2015). The electrons also contribute to driving mirror, firehose, and whistler instabilities, which are important for setting the electron temperature anisotropy. Since the magnitude of the electron temperature anisotropy in the near-horizon environment is not fully understood, we consider all three limiting cases - where the plasma sits at the mirror (\(\eta>1\)), whistler (\(\eta>1\)), or firehose (\(\eta<1\)) instability thresholds everywhere. We then compare these limiting cases to the usually considered isotropic plasma distribution. This should bracket the magnitude of the effect introduced by an anisotropic electron distribution function. We note that in the single-fluid global "extended GRMHD" simulations of Foucart et al. (2017) in which the pressure anisotropy is a dynamical variable, most of the plasma was near the mirror threshold. If generically true, and applicable to electrons, this would suggest that the mirror and whistler instability thresholds are the most important.
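To make the electron model concrete, the sketch below combines Equation 15 with the instability-threshold prescription described in the next paragraph. The threshold function `g_mirror_like` is only an illustrative placeholder (a \(\sim 1+1/\beta_{e}\) form capped at \(T_{\perp,e}/T_{\parallel,e}=10\)); the forms actually used are derived in Appendices B and C.

```python
# Sketch: from the GRMHD beta_th and fluid temperature, obtain the electron
# anisotropy eta and perpendicular temperature for one limiting case.
import numpy as np

def ion_to_electron_ratio(beta_th, R_high, R_low=1.0):
    """R = T_i/T_e of Equation 15."""
    return (beta_th**2 * R_high + R_low) / (1.0 + beta_th**2)

def g_mirror_like(beta_e):
    """Placeholder threshold T_perp,e/T_par,e = g(beta_e) for an eta > 1 model."""
    return np.minimum(1.0 + 1.0 / beta_e, 10.0)

def electron_state(beta_th, T_fluid, R_high, g=g_mirror_like):
    """Return (eta, T_perp_e); T_fluid and T_perp_e share the simulation's
    temperature units (the conversion to eps_perp = k T_perp,e / m_e c^2 is omitted)."""
    R = ion_to_electron_ratio(beta_th, R_high)
    T_e = 2.0 * T_fluid / (1.0 + R)              # from T = (T_i + T_e)/2
    beta_e = 2.0 * beta_th / (1.0 + R)           # electron plasma beta
    ratio = g(beta_e)                            # T_perp,e / T_par,e at the threshold
    eta = ratio**(1.0 / 0.8)                     # relativistic limit: ratio ~ eta^0.8
    T_perp_e = 3.0 * T_e / (2.0 + 1.0 / ratio)   # from T_e = (T_par,e + 2 T_perp,e)/3
    return eta, T_perp_e

print(electron_state(beta_th=0.1, T_fluid=0.2, R_high=10.0))
```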
The microinstability thresholds can be expressed as \(T_{\perp,e}/T_{\parallel,e}=g(\beta_{e})\), where \(g(\beta_{e})\) is a function of the electron plasma-\(\beta\), different for each of the anisotropy-driven instabilities [see Appendix B for a derivation of relativistic mirror, firehose, and whistler instabilities and Appendix C for additional details on \(g(\beta_{e})\) for each instability]. Therefore, our procedure for obtaining \(T_{\perp,e}\) and \(\eta\) for each of the four instability cases (hereafter mirror, whistler, isotropic, firehose) is as follows. We first compute the electron-\(\beta\) as \(\beta_{e}=2\beta_{\rm th}/(1+R)\), with \(R\) given by Eq. 15. Knowing \(\beta_{e}\) allows us to calculate \(T_{\perp,e}/T_{\parallel,e}=g(\beta_{e})\) for each case of interest and thus \(\eta\) (we consider the relativistic limit, where \(T_{\perp,e}/T_{\parallel,e}\approx\eta^{0.8}\)). The perpendicular temperature can then be separately determined from the definition \(T_{e}=(T_{\parallel,e}+2T_{\perp,e})/3=T_{\perp,e}(\eta^{-0.8}+2)/3=T_{i}/R\). Now that we have \(\eta\) and \(\epsilon_{\perp}\) for electrons, we can then calculate \(j_{a}\) and \(\alpha_{a}\), given by Equations 11 (with \(\epsilon_{\perp}=kT_{\perp,e}/m_{e}c^{2}\)). In Figure 2 we show an example of the inferred physical conditions in MAD accretion flows from GRMHD simulations: MHD-\(\beta_{\rm th}\) (first column), plasma temperature \(P/2\rho\) (second column), and the resulting \(\eta\) (third and fourth columns for mirror and firehose respectively) and electron \(\epsilon_{\perp}\) (fifth and sixth columns for mirror and firehose respectively). The top row is for \(a=0.98\) while the bottom row is for \(a=0.5\). Grey regions indicate \(\sigma\geq\sigma_{\rm cut}=10\). Since MAD simulations are highly magnetized with low plasma-\(\beta\) in much of the volume, the mirror (c and i, \(\eta>1\)) and firehose (d and j, \(\eta<1\)) instabilities allow for a large temperature anisotropy in much of the volume. The mirror case also results in a higher electron temperature \(\epsilon_{\perp}\), while the firehose case results in a lower and more uniform \(\epsilon_{\perp}\). Future observations aim to probe not only the direct emission from the BH but also the lensed emission associated with the "photon ring" (Johnson et al., 2023). The latter can be decomposed into a series of sub-rings labeled by the ray order \(n\) - the number of half-orbits a photon traveled to the observer, defined as \(\lfloor\Delta\phi_{\rm ray}/\pi\rfloor\), where \(\Delta\phi_{\rm ray}\) is the change in the angular coordinate \(\phi_{\rm ray}\) along the ray in the plane of its orbit. To distinguish \(n=0\) (direct) and \(n=1\) ("photon ring" of order 1) in the ray tracing, we track \(dz/d\lambda\) along each ray, where \(z\) and \(\lambda\) are the Cartesian Kerr-Schild coordinate along the spin axis and the coordinate along the ray, respectively. The number of times that \(dz/d\lambda\) crosses zero for a particular ray defines the order of this ray \(n\), allowing us to approximately distinguish \(n=0\) and \(n=1\).

### 230 GHz images

Total intensity images observed at \(\theta=163^{\circ}\), expected for M87* (Mertens et al., 2016), with \(R_{\rm high}=10\) are shown in Figure 3 for \(a=0.98\). The top row (a-d) shows the brightness blurred with a \(20\mu\)as FWHM Gaussian kernel on a linear scale to match current EHT observations.
Each column represents a different anisotropy model: mirror, whistler, isotropic, and firehose (from left to right, from largest to smallest \(\eta\)). The density normalization is roughly the same for each of these cases at fixed \(R_{\rm high}\) and observing angle \(\theta\), with the density in the firehose model being larger than in the isotropic case by a factor of a few. The three bottom rows in Figure 3 show unblurred full emission (second row), which is decomposed into the direct emission (\(n=0\), third row) and the \(n=1\) photon ring (fourth row) on a logarithmic scale (as appropriate for future higher dynamic range measurements). The azimuthal anisotropy in the images in Figure 3 is due to a combination of two effects: Doppler beaming and differences in the angle \(\theta_{B}\) relative to the local magnetic field that photons are emitted at, in order to arrive at a given location in the observed image. Anisotropy in the electron distribution function can significantly change the synchrotron emission as a function of \(\theta_{B}\), thus changing this second source of azimuthal image anisotropy. Figure 3 shows that, compared to the isotropic case (c), the mirror and whistler images [\(\eta>1\), (a) and (b)] are more azimuthally asymmetric, while plasma at the firehose instability threshold [\(\eta<1\), (d)] results in a more symmetric image. This is also noticeable in the unblurred case, as well as separately in \(n=0\) and \(n=1\) images. Figure 3: Synchrotron emission of accreting plasma raytraced from a MAD simulation for a BH with \(a=0.98\) at \(R_{\rm high}=10\) and inclination of \(\theta=163^{\circ}\). Each column represents different plasma anisotropy: mirror instability threshold (column 1), whistler instability threshold (column 2), isotropic plasma distribution (column 3), and firehose instability threshold (column 4). The first row represents the full image blurred with \(20\mu\)as FWHM Gaussian kernel on a linear scale (a-d), the second row shows a full unblurred image on a logarithmic scale, from which \(I_{\nu,0}\) (\(n=0\)) and \(I_{\nu,1}\) (\(n=1\)) are decoupled on the third (i-l) and forth (m-p) rows respectively. Figure 4: Synchrotron emission of accreting plasma raytraced from a MAD simulation at an inclination of \(\theta=135^{\circ}\) for a BH with \(a=0.98\). As in Fig. 3, each column represents different plasma anisotropy. The first and second rows represent \(R_{\rm high}=1\) and 100 models respectively. Figure 5: Similar to Fig. 3 but for a BH with spin parameter of \(a=0.5\). Inclination of \(\theta=163^{\circ}\) and \(R_{\rm high}=10\) are identical to Figure 3. The first row represents the full image blurred with \(20\mu\)as FWHM Gaussian kernel on a linear scale (a-d), and the second row shows a full unblurred image on a logarithmic scale. The dependence of the azimuthal image symmetry on plasma anisotropy is also more apparent with increasing viewing angle, i.e. as we look more "edge-on" instead of "face-on". Additionally, the effect of the anisotropy of the distribution function is more prominent for larger \(R_{\rm high}\). This is because larger \(R_{\rm high}\) suppresses the emission from high-\(\beta\) regions (where distribution function anisotropies are constrained to be smaller) relative to low-\(\beta\) regions (where distribution function anisotropies can be larger). 
The more azimuthally asymmetric images at higher inclination and higher \(R_{\rm high}\) are demonstrated in Figure 4, where we show the full intensity images at \(R_{\rm high}=1\) (top) and \(R_{\rm high}=100\) (bottom) at a higher inclination relative to the spin axis of \(\theta=135^{\circ}\) for \(a=0.98\). Table 1 shows the image-averaged, emission-weighted ratio of the two components of anisotropic temperature, \(\langle j_{\nu}T_{\perp}/T_{\parallel}\rangle/\langle j_{\nu}\rangle\), for \(a=0.98\), \(\theta=163^{\circ}\), \(R_{\rm high}=1\), 10, and 100. As \(R_{\rm high}\) increases, the anisotropic temperature ratio approaches our maximum allowed values of 10 and 0.1 for the mirror and firehose models respectively. The significant changes in image morphology found here thus require large temperature anisotropy in the emitting plasma. To better understand the interplay between Doppler-induced asymmetry and magnetic field viewing angle-induced asymmetry, we also consider the case of a moderately spinning BH, \(a=0.5\), shown in Figure 5, where the Doppler effect is smaller than for \(a=0.98\) studied above. This figure is organized identically to Fig. 3, and the viewing angle relative to the spin axis and the choice of \(R_{\rm high}=10\) are the same. We find that the asymmetry of the image due to the plasma temperature anisotropy is still pronounced, similar to the case of a highly spinning BH. As in the \(a=0.98\) case, mirror and whistler anisotropies make the image more asymmetric, while temperature anisotropy near the firehose boundary results in a more symmetric image. Our calculations show that the anisotropic temperature distribution of plasma sitting at the firehose and mirror thresholds leads to a more azimuthally symmetric or asymmetric synchrotron image, respectively. At first glance, it is not entirely obvious why the firehose sense of anisotropy (rather than the mirror sense of anisotropy) should be associated with a more symmetric image. Our interpretation of this is that if the rotation rate of the magnetic field lines is small relative to the rotation rate of the plasma, then in ideal MHD models, the plasma velocity is approximately parallel to the magnetic field direction (see, e.g., eq. E148b of Chael et al., 2023b for a relativistic version of this expression). For nearly (but not exactly) face-on viewing angles, the Doppler effect and the effect of changing viewing angle relative to the magnetic field are "in phase": the brightening and dimming produced by the two effects peak in roughly the same places in the image plane (this follows, e.g., from the analytic model in Narayan et al., 2021). The firehose instability sense of anisotropy counteracts this by making the emission a significantly weaker function of angle relative to the magnetic field (Fig. 1), thus making the overall emission more isotropic. Another key difference between images with different electron temperature anisotropy is the image diameter; this is noticeable at both spin values in Figures 3 and 5: the size of the bright region in the image increases as \(\eta\) increases. Additionally, \(a=0.5\) shows variations in the size of the BH shadow between different models in Fig. 5. Both of these effects, as well as the asymmetry of the images, are quantified below. Figure 6 shows the emissivity-weighted angle between the magnetic field and photon direction along the ray, \(\langle j_{\nu}\theta_{B}\rangle/\langle j_{\nu}\rangle\).
This angle is larger for \(a=0.98\) (a) than for \(a=0.5\) (b,c) in the inner region of the image. Physically, for roughly face-on viewing angles, the magnetic field in the accretion flow onto a BH with a smaller spin is more vertical than onto a highly spinning BH (where the field is wrapped up to be more azimuthal). This leads to the average angle between the propagation direction and the local magnetic field decreasing for \(a=0.5\) relative to \(a=0.98\). A less face-on viewing angle produces a similar effect (c). Figure 6 shows results for the isotropic emission model but can be used to gain insight into why the central "shadow" is noticeably different in the firehose and mirror cases in Figures 3 and 5. In particular, the lower average angle between the photon and magnetic field in Figure 6 at lower spin and observer viewing angle implies (via Figure 1) that in the mirror (firehose) case the emission in the shadow should be suppressed (enhanced). This is exactly what is seen in the images. Plasma anisotropy could thus have an effect on observational efforts to infer physical properties of the black hole using the "inner shadow" (Chael et al., 2021).

\begin{table} \begin{tabular}{c c c c} \(R_{\rm high}\) & mirror & whistler & firehose \\ \hline \hline 1 & 3.13 & 1.47 & 0.52 \\ 10 & 5.3 & 2.0 & 0.110 \\ 100 & 8.9 & 2.9 & 0.102 \\ \end{tabular} \end{table} Table 1: Values of image-averaged emission-weighted temperature anisotropy \(T_{\perp}/T_{\parallel}\) [pixel-averaged \(\langle j_{\nu}T_{\perp}/T_{\parallel}\rangle/\langle j_{\nu}\rangle\)] for our three temperature anisotropy cases at \(a=0.98\), an inclination of \(\theta=163\deg\), and three \(R_{\rm high}\) values.

Figure 6: Average angle with the magnetic field along the ray, measured by emission-weighted sine of \(\theta_{B}\) for different spins and viewing angles. Lower spin decreases the average angle of the emitted photons relative to the magnetic field. This in turn enhances the effects of plasma anisotropy on the observed image (Figures 3-5).

We now quantify the effects of changing image size and asymmetry for different plasma anisotropy models. Following Event Horizon Telescope Collaboration et al. (2019), we measure the image diameter \(d\) as twice the distance from the center of the image to the peak of \(I_{\nu}\) averaged over all directions, and the width \(w\) as the Full Width at Half Maximum (FWHM) of \(I_{\nu}\) averaged over all directions. We can then infer \(r_{\rm in}=(d-w)/2\) and \(r_{\rm out}=(d+w)/2\) - the inner and outer radii of the image. The asymmetry parameter \(A\) of the image, defined in image plane coordinates \(r_{\rm im}-\phi_{\rm im}\), is \[A=\left\langle\frac{\int_{0}^{2\pi}I(\phi_{\rm im})e^{i\phi_{\rm im}}d\phi_{\rm im}}{\int_{0}^{2\pi}I(\phi_{\rm im})d\phi_{\rm im}}\right\rangle_{r_{\rm im}\in[r_{\rm in},r_{\rm out}]}, \tag{16}\] where \(I(\phi_{\rm im})\) is the brightness profile across image coordinate \(\phi_{\rm im}\) at a fixed radial coordinate \(r_{\rm im}\). A fully symmetric image has \(A=0\), while an antisymmetric image has \(A=1\). The asymmetry \(A\) and diameter \(d\) measured from raytraced images are shown in Figure 7 with different models represented by different colors, identical across all panels; 230 GHz results and the variation with frequency are shown in panels (a,b,d) and (c) respectively. Top panels show \(A\) (a) and \(d\) (b) for an M87* observing angle \(\theta=163^{\circ}\) as functions of \(R_{\rm high}\).
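For concreteness, a simplified implementation of the metrics just defined (the diameter \(d\), width \(w\), and asymmetry \(A\) of Equation 16) is sketched below; unlike the EHT procedure, it measures \(d\) and \(w\) from the angle-averaged radial profile rather than per azimuthal angle, and the modulus of the complex azimuthal mode is taken in Equation 16.

```python
# Sketch of the image-morphology metrics: diameter d, width w, and asymmetry A
# (Equation 16), evaluated from a 2D image by resampling onto polar coordinates
# about the image center.
import numpy as np

def ring_metrics(image, fov_uas, n_r=200, n_phi=360):
    ny, nx = image.shape
    dx, dy = fov_uas / nx, fov_uas / ny
    r = np.linspace(0.0, fov_uas / 2.0, n_r)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    R, PHI = np.meshgrid(r, phi, indexing="ij")

    # nearest-grid-point sampling of I(r, phi); an interpolator could be used instead
    ix = np.clip(np.round(R * np.cos(PHI) / dx + nx / 2).astype(int), 0, nx - 1)
    iy = np.clip(np.round(R * np.sin(PHI) / dy + ny / 2).astype(int), 0, ny - 1)
    I_rphi = image[iy, ix]                       # shape (n_r, n_phi)

    prof = I_rphi.mean(axis=1)                   # angle-averaged radial profile
    d = 2.0 * r[np.argmax(prof)]                 # diameter: twice the peak radius
    half = r[prof >= 0.5 * prof.max()]
    w = half.max() - half.min()                  # FWHM of the radial profile
    r_in, r_out = (d - w) / 2.0, (d + w) / 2.0

    ring = (r >= r_in) & (r <= r_out)
    # Equation 16: modulus of the m = 1 azimuthal mode, averaged over ring radii
    num = np.abs((I_rphi[ring] * np.exp(1j * phi)).sum(axis=1))
    den = I_rphi[ring].sum(axis=1)
    A = float(np.mean(num / den))
    return d, w, A
```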
Panel (d) shows \(A\) for \(R_{\rm high}=10\) as a function of observing angle \(\theta\) for \(a=0.98\) (solid lines) and \(a=0.5\) (thin dotted lines) for Sgr A*. Shaded regions indicate the allowed range as inferred from observations for M87* (a-b) (Event Horizon Telescope Collaboration et al., 2019,c) and Sgr A* (d) (Event Horizon Telescope Collaboration et al., 2022). Panel (c) shows \(A\) measured from unblurred images as a function of frequency for an M87* viewing angle.

Figure 7: Asymmetry \(A\) (a) and diameter \(d\) (b) as functions of \(R_{\rm high}\) for \(a=0.98\) and M87* observing angle, \(\theta=163^{\circ}\), for images at 230 GHz blurred with 20\(\mu\)as FWHM Gaussian kernel. (c): Asymmetry of unblurred images at \(\theta=163^{\circ}\) and \(R_{\rm high}=10\) as a function of observing frequency, \(a=0.98\). (d): Asymmetry at \(R_{\rm high}=10\) as a function of observing angle \(\theta\) for \(a=0.98\) (solid lines) and \(a=0.5\) (thin dotted lines) for Sgr A*. The green regions highlight EHT constraints for M87* (a,b) and Sgr A* (d). In each panel, the color of the lines represents 4 limiting cases: mirror instability, whistler instability, isotropic plasma distribution, and firehose instability.

As expected, the difference in the asymmetry \(A\) between the models becomes larger with increasing \(R_{\rm high}\) (a) since larger \(R_{\rm high}\) suppresses the emission from high-\(\beta\) regions relative to low-\(\beta\) regions, where the plasma
The images used for panel (d) are produced for Sgr A* with \(M=4.3\times 10^{6}M_{\odot}\), and the density is normalized such that the total flux matches EHT observations, i.e., \(F_{\nu}=2.4\)Jy at 230 GHz and distance of \(d=8178\)pc. A quantitatively similar trend, however, is also present for our M87* models. We also show the EHT constraints on \(A\) for Sgr A* by the shaded region in (d) [\(A\approx 0-0.5\)]. As before, plasma at the firehose anisotropy limit leads to a more symmetric image, compared to mirror and whistler limits, at any observing angle. Note the quite isotropic image (small A) at the firehose limit even at \(\theta=135^{\circ}\), especially for the lower spin case \(a=0.5\). This effect can significantly change the constraint on our viewing angle relative to Sgr A* suggested by the EHT data, allowing for larger observing angles than for an isotropic plasma. We will now quantify the imprint of the anisotropy of the plasma distribution function on the direct emission \(I_{0}\) and the \(n=1\) photon ring \(I_{1}\) separately. As seen in Fig. 3 (i-p), both \(n=0\) and \(n=1\) emission have their azimuthal asymmetry modified with varying \(\eta\) in a way that is similar to the full blurred image. Both are more symmetric at the firehose limit with \(\eta<1\) and more asymmetric at the mirror and whistler limits with \(\eta>1\), compared to the isotropic plasma distribution case. To distinguish the imprint of the plasma anisotropy on the two components, we show angular profiles of \(n=0\) and \(n=1\) emission (\(I_{0}\) and \(I_{1}\) as functions of the polar angle in the image plane \(\phi_{\rm im}\), top row, a and b) and their ratio (\(I_{1}/I_{0}\), bottom row, c and d). The polar angle is plotted such that the dimmest region of the image, \(\phi_{\rm im}\sim 0\), is in the center of the profile. This is for an observing angle of \(\theta=163^{\circ}\) for both of our spin values of 0.98 and 0.5 (thick and thin lines, respectively); \(R_{\rm high}=1\) and 100 are shown in the left and right columns respectively. As expected, the \(R_{\rm high}=100\) case shows a stronger dependence of \(I_{0}\) and \(I_{1}\) on plasma anisotropy than \(R_{\rm high}=1\) due to the higher anisotropy in the low-\(\beta_{\rm th}\) regions. The quantitative dependence of the \(n=0\) and \(n=1\) intensities on plasma anisotropy differ because the \(n=0\) and \(n=1\) photons at the same place in the image plane are emitted at different directions relative to the local magnetic field. The largest difference between \(I_{0}\) and \(I_{1}\) is reached in the case of smaller electron temperatures at the mirror limit. In principle, measurements of the azimuthal intensity profiles at \(n=0\) and \(n=1\) could Figure 8: Angular profiles of \(n=0\) and \(n=1\) brightness (\(I_{0}\) [solid] and \(I_{1}\) [dotted] as functions of \(\phi_{\rm im}\) in the image plane, top row) and their ratio (bottom row) at observing angle of \(\theta=163^{\circ}\) at spin of 0.98 (thick lines) and 0.5 (thin lines). The first and second columns represent \(R_{\rm high}=1\) and 100 respectively. The color of the lines, as in Fig. 7, represents 4 limiting cases: mirror, whistler, isotropic, and firehose. thus be used to constrain plasma anisotropy though it is unclear if this is feasible in practice given uncertainties in black hole spin, the electron temperature, degree of Doppler beaming, etc. 
In addition to calculating the synchrotron emission and absorption produced by an anisotropic distribution function, we have also calculated how the emitted linear and circular polarization depends on plasma anisotropy. Because we do not consider the impact of plasma anisotropy on Faraday rotation and conversion in this paper, we defer a detailed discussion of the polarization due to plasma anisotropy to future work. We can, however, quantify the change in intrinsic linear and circular polarization, i.e. neglecting the effects of Faraday rotation and conversion. We find that the image-averaged linear polarisation fraction can change by up to roughly \(+10\%\) or \(-10\%\) for the mirror and firehose limits respectively, compared to the isotropic case. Circular polarization exhibits the same trend, but the mirror case can be 5 times more circularly polarized compared to the isotropic case, at \(R_{\rm high}=100\). We also note that because models with plasma at the firehose anisotropy have smaller \(\epsilon_{\perp}\), a higher density is required to match the observed EHT flux. This leads to an increase in pixel-averaged optical depth, e.g.: \(1.1\times 10^{-3}\), \(1.2\times 10^{-3}\), \(1.3\times 10^{-3}\), and \(3.7\times 10^{-3}\) for the mirror, whistler, isotropic, and firehose cases respectively at an inclination of \(163^{\circ}\) and \(R_{\rm high}=10\). Thus, \(\tau\) is by a factor of \(3-4\) larger in the firehose case, compared to other cases, which might also lead to a higher Faraday depolarization. ### Multi-wavelength observations Future mm interferometric observations will include 2 more frequencies, 345 GHz and 86 GHz (Johnson et al., 2023), with the latter (former) expected to be more (less) optically thick (Chael et al., 2023). We thus explore the impact of an anisotropic plasma distribution function on observable images and spectra at these frequencies. In Figure 9 we show intensity images for a BH with \(a=0.98\) at 345 GHz on top (a-b) and 86 GHz on the bottom (c-d), with the parameters being identical to Figure 3 - \(\theta=163^{\circ}\) and \(R_{\rm high}=10\). The mirror and firehose models are shown in the first (a,c) and second (b,d) columns respectively. The respective images at 230 GHz are shown in Figure 3 for mirror (e) and firehose (h) cases. The differences between the mirror and firehose in Figure 9 at 345 GHz are similar to the differences at 230 GHz in Figure 3: the mirror case is more azimuthally asymmetric than the firehose case. Images at 345 GHz (a-b) are particularly similar to their 230 GHz counterparts because the emission is predominantly optically thin in both cases. At lower frequency (c-d), however, the higher synchrotron optical depth somewhat suppresses the differences between the mirror and firehose limits and overall makes the emission more azimuthally symmetric. Figure 7c quantifies the asymmetry \(A\) as a function of frequency for the 4 different distribution function models - the difference in the asymmetry between the different distribution function models persists at all frequencies though the overall asymmetry is largest at high frequencies. In the firehose model at 86 GHz, Figure 9d also shows that the photon ring emission is much less evident. This is because the firehose model has a lower temperature and higher density (at fixed 230 GHz flux) than the other plasma anisotropy models, and so the emission is optically thick at 86 GHz. The same trend, i.e. 
optically thin emission at high frequencies (345 GHz and 230 GHz) and optically thick emission at 86 GHz in the firehose case, persists at a lower spin parameter of \(a=0.5\) (not shown here).

Figure 9: Synchrotron emission of accreting plasma raytraced from a MAD simulation with \(a=0.98\) at an inclination of \(\theta=163^{\circ}\) and \(R_{\rm high}=10\) at frequencies of 345 GHz (a-b) and 86 GHz (c-d). The first and second columns represent two limiting cases (mirror and firehose respectively).

We also calculate the synchrotron emission spectra from \(10^{10}\) to \(10^{15}\) Hz, shown in Figure 10 at \(135^{\circ}\) (left, a-b) and \(163^{\circ}\) (right, c-d) for a spin of 0.98 (solid lines) and 0.5 (dotted lines) at two \(R_{\rm high}\) values of 10 (a,c) and 100 (b,d) (shown on the left and right side panels for each angle respectively). The different spectra for different black hole spins are due to the higher temperatures found in more rapidly spinning GRMHD simulations (Moscibrodzka et al., 2009). The color of the lines is organized as in previous plots, with different colors representing different plasma anisotropy. The firehose case shows a significantly different spectrum for both \(a=0.98\) and \(a=0.5\). The change is minor at low frequencies, with the firehose model being slightly fainter than the other models. The peak of the spectrum, however, can significantly shift to lower frequencies, steepening the spectral slope just below the peak. At higher frequencies, the emission in the firehose model is substantially fainter and the spectral slope is steeper, compared to other cases. The qualitative results do not depend on the value of \(R_{\rm high}\). Our physical interpretation of this is that at fixed GRMHD temperature, the firehose model (with \(T_{\parallel,e}>T_{\perp,e}\)) has a lower value of \(T_{\perp,e}\). This suppresses the peak frequency of the synchrotron emission as given by Equation 14, leading to a more rapid decline in emission at high frequency.

## 4 Summary and Conclusions

Magnetized collisionless plasmas are prone to developing anisotropies in their distribution function with respect to the magnetic field direction: the distribution function is isotropic in the plane perpendicular to the magnetic field because of rapid cyclotron motion ("gyrotropic"), but can be very different along and perpendicular to the local magnetic field. In this work we have calculated the synchrotron radiation from distribution functions with anisotropy of this form. We are motivated by the application to low accretion rate black holes such as those found in Sgr A* and M87*, but we anticipate that the synchrotron radiation calculations presented here will have broader applicability. First, we have derived and provided fits for synchrotron emissivities and absorption coefficients for relativistic thermal electrons with an anisotropic distribution function (Eq. 11). The distribution function we choose (Eq. 7) is a natural relativistic generalization of a non-relativistic bi-Maxwellian and allows for arbitrary temperature anisotropies relative to the local magnetic field \(T_{\perp}/T_{\parallel}\) via a parameter \(\eta\).
The derived fits we present are accurate to \(\sim 10\%\) or better compared to numerical solutions using the publicly available synchrotron code symphony (Pandya et al., 2016) in the parameter range of interest (high frequency and high temperature); the main source of error is the inaccuracy of the fits for synchrotron emission and absorption for an isotropic thermal plasma, which our fits are scaled to. The change in synchrotron emission as the plasma transitions from an isotropic to an anisotropic distribution function at a fixed perpendicular temperature \(T_{\perp}\) can be understood as a renormalization of the number of particles that emit toward the observer. The reason is that synchrotron emission emitted at an angle \(\theta_{B}\) relative to the local magnetic field is produced primarily by particles whose momenta make an angle \(\approx\theta_{B}\) with the magnetic field, or, equivalently, whose pitch angle is \(\xi\approx\theta_{B}\). The emission thus depends on the distribution function at pitch angle \(\xi\approx\theta_{B}\). For a plasma with an isotropic distribution function the temperature is independent of \(\xi\), but temperature anisotropy in the distribution function implies that the temperature is now effectively a function of pitch angle \(\xi\) and thus viewing angle \(\theta_{B}\) (Eq. 9). For an isotropic plasma, synchrotron emission is peaked near \(\theta_{B}\sim 90\,\)deg, i.e., orthogonal to the magnetic field. This trend is _enhanced_ for \(T_{\perp}>T_{\parallel}\) (anisotropy parameter \(\eta>1\)) while for \(T_{\parallel}>T_{\perp}\) (\(\eta<1\)) the emission can peak at significantly smaller observing angles, depending on the exact value of \(\eta\) (Fig. 1). The case of \(\eta<1\) also shows more uniform emission across observing angles than does \(\eta>1\). In addition to calculating the total emitted synchrotron radiation at a given frequency, we have also calculated the emitted linear and circular polarization fractions as a function of plasma anisotropy. We find that the intrinsic linear polarization degree depends only weakly on the plasma anisotropy \(\eta\). On the other hand, circular polarization, which is very weak in synchrotron emission from relativistic isotropic plasmas, increases significantly for \(T_{\parallel}<T_{\perp}\) at a fixed \(T_{\perp}\) (\(\eta>1\)). In addition, since most of the emission comes from large (small) angles relative to the magnetic field for \(\eta>1\) (\(\eta<1\)), the respective angle-averaged circular polarization degree is higher (lower).

Figure 10: Synchrotron emission spectra for BHs with \(a=0.98\) (solid lines) and \(a=0.5\) (thin dotted lines) viewed at inclinations of \(\theta=135^{\circ}\) (a,b) and \(163^{\circ}\) (c,d) at \(R_{\rm high}=10\) (a,c) and \(100\) (b,d). As in Fig. 7, different models are represented by different colors.

We have employed the newly developed fits for synchrotron emission and absorption by anisotropic electrons in a GR radiative transfer code blacklight, capable of propagating synchrotron radiation in curvilinear space-time. To assess how anisotropy of the accreting plasma affects mm-wavelength observations of Sgr A* and M87*, we ray-trace GRMHD MAD simulations - the accretion model most favored observationally (Event Horizon Telescope Collaboration et al., 2021). Other accretion models, such as Standard and Normal Evolution (SANE) models, are also possible. In such models the plasma-\(\beta\) is considerably higher.
This suggests that the effect of plasma anisotropies is relatively smaller in SANE models compared to MAD models, but more detailed work in the future is required to assess this quantitatively. Since the ideal MHD approach describes a collisional isotropic fluid, the main source of uncertainty in this work is the temperature and temperature anisotropy of the synchrotron-emitting electrons. In particular, the ion-to-electron temperature ratio, which we approximate by the widely-used \(R_{\rm high}-R_{\rm low}\) model and the electron's anisotropy \(\eta\), are the main free parameters in our study. Since \(\eta\) is a prescribed quantity, absent in our ideal GRMHD simulations, the conclusions of this work should be thought of as qualitative rather than quantitative. The temperature anisotropy in a collisionless plasma cannot grow without bound because small-scale instabilities set in and limit the magnitude of the temperature anisotropy. We thus examine the effect of an anisotropic synchrotron emitting plasma on observed emission by considering 3 limiting cases, defined by the anisotropy thresholds of three anisotropy-driven instabilities: the mirror and whistler instabilities (\(\eta>1\)) and the firehose instability (\(\eta<1\)). We present relativistic derivations of these thresholds in Appendix B. In particular, we derive a fully kinetic mirror instability threshold in the case of anisotropic relativistic electrons with anisotropy parameter \(\eta\) (and anisotropic non-relativistic ions with a different anisotropy parameter \(\eta_{i}\)). The temperature anisotropy allowed by kinetic-scale instabilities is larger for stronger magnetic fields, i.e., smaller \(\beta\) (the ratio of thermal to magnetic energy density). The effects of temperature anisotropy on observed synchrotron emission are thus likely to be the largest when the emission is dominated by regions with \(\beta\lesssim 1\), as is often the case in magnetically-arrested disk models favored on theoretical and observational grounds. We find that anisotropy in the accreting plasma can significantly modify the observed synchrotron emission in horizon-scale images, including the azimuthal asymmetry in the image plane and size of the image. This is primarily due to the following two effects. The first effect is that the emission and absorption for different distribution anisotropies are concentrated at different observing angles with the magnetic field, with \(\eta<1\) emitting more uniformly across all angles as \(\eta\) decreases, and \(\eta>1\) emission/absorption being more concentrated near \(\theta_{B}\sim 90\deg\) (Fig. 1). This can significantly modify the azimuthal asymmetry in the image plane because different parts of the image contain radiation that was initially emitted at different angles relative to the local magnetic field. The second key effect is that the local perpendicular temperature \(T_{\perp}\) of the electrons changes with an assumed anisotropy \(\eta\) at a given total fluid temperature \(T\) given by the GRMHD solution (Fig. 2). Models with \(\eta>1\) (\(\eta<1\)) have a larger (smaller) \(T_{\perp}\), compared to the isotropic case. Higher (lower) temperatures produce larger (smaller) 230 GHz images because the emission at 230 GHz occurs over a larger (smaller) range of radii (Fig. 3). 
Higher (lower) temperatures also lead to a smaller (higher) density of the accreting plasma at a fixed 230 GHz flux and thus more optically thin (thick) emission; this is especially pronounced for \(\eta<1\), i.e., the firehose regime, in which the image-averaged optical depth can increase by a factor of \(3-4\). More specifically, we find that emission from plasma with \(\eta<1\) (\(\eta>1\)) produces a more azimuthally symmetric (asymmetric) image, up to a factor of 3 difference in the asymmetry parameter \(A\). This result is of particular interest in application to Sgr A*, where the observed EHT azimuthal asymmetry is surprisingly modest given expectations for a random viewing angle. This appears to suggest we are observing Sgr A* closer to face-on than not, which is a priori surprising. Models with \(\eta<1\) have significantly less variation in the synchrotron emissivity with photon direction relative to the magnetic field. This produces a more azimuthally symmetric image, alleviating the restrictive constraints on viewing angle (Fig. 7d). Anisotropy in the plasma distribution function also changes the image diameter and the size of the central flux depression (or the observed "BH shadow"). The smaller perpendicular temperature \(T_{\perp}\) in the \(\eta<1\) firehose model results in a reduced image diameter (Fig. 7b). At lower BH spins, the viewing angle relative to the magnetic field is also smaller in the near-horizon region. This suppresses (enhances) the emission in the image center interior to the true photon ring (i.e., the critical curve) for \(\eta>1\) (\(\eta<1\)). The BH "shadow" therefore appears to be larger in low spin models with \(\eta>1\) (Fig. 5). Chael et al. (2021) showed that the size and shape of the "inner shadow" depend on BH spin and our viewing angle relative to the BH spin, potentially providing a route to measuring these quantities. Our results show that anisotropy in the distribution function in this region close to the event horizon may be important to consider as well. In this paper we have not calculated the Faraday conversion coefficients for an anisotropic plasma. We defer this to future work. We have, however, calculated the emitted linear and circular polarization fractions and how they depend on plasma anisotropy. We find that the image-averaged emitted linear polarization fraction can increase (decrease) by up to 10% in the mirror and whistler (firehose) cases. The emitted circular polarization fraction shows a similar trend, although the magnitude of the effect is much larger, with the \(T_{\perp}>T_{\parallel}\) regime showing an emitted circular polarization in the mm that is up to 5 times larger than in an isotropic plasma. The high frequency synchrotron emission is particularly sensitive to plasma anisotropy. As a result, interpreting and modeling GRAVITY observations of Sgr A* may require incorporating the effects of plasma anisotropy; this emission is also likely non-thermal, however, so an extension of our results to non-thermal distribution functions would be valuable. We have also assessed how the anisotropy of the plasma affects future multi-frequency and \(n=1\) photon ring observations. We find that the effect of the plasma distribution function on the azimuthal image asymmetry persists throughout the frequencies of interest to future ngEHT observations, i.e., 86 GHz and 345 GHz, though the effect is more pronounced at higher frequencies (Fig. 7c). 
We also find that the \(n=1\) photon ring emission is even more azimuthally asymmetric (symmetric) for \(\eta>1\) (\(\eta<1\)) than the direct \(n=0\) emission, leading to an increased (decreased) ratio of photon ring to direct emission brightness, up to a factor of 6 in intensity ratio relative to the isotropic distribution function case for the parameter range we considered. Anisotropy in the distribution function has a particularly large effect on the ratio of the \(n=1\) to \(n=0\) emission because plasma anisotropy directly changes the emissivity as a function of viewing angle relative to the magnetic field, and the \(n=0\) and \(n=1\) images contain emission emitted at different angles relative to the local magnetic field. The largest limitation of the present study as applied to modeling Sgr A*, M87*, and related sources is that the true electron temperature anisotropy in the near-horizon environment is poorly constrained. In this work we have attempted to bracket the magnitude of the effect that temperature anisotropy can produce on near-horizon synchrotron radiation by considering the extreme limit in which all of the plasma is at the temperature anisotropy associated with the instability thresholds for the mirror, whistler, or firehose instabilities. The image-averaged emission-weighted electron temperature anisotropies in these models are given in Table 1 and range from \(\sim 0.1-9\). Real systems likely do not follow just one of the limiting anisotropy models considered here since different temperature anisotropies can co-exist in different parts of the accretion flow. In magnetically dominated jet regions, the plasma is in principle capable of developing large anisotropy in its distribution function. This could occur due to differential parallel and perpendicular heating and/or as a result of outflow-driven expansion of the jet (as in the solar wind). Consequently, it would be interesting to apply the methods developed here (likely extended to non-thermal distribution functions) to model and interpret the emission from spatially extended jets (e.g., Lu et al., 2023). Fortunately, there is a clear path forward for improving our understanding of the role of temperature anisotropy in the radiation from accretion flows and jets. Global "extended" MHD models that evolve the pressure anisotropy as a dynamical variable can predict \(T_{\perp}/T_{\parallel}\) as a function of time and space (Foucart et al., 2017), removing the need to specify the temperature anisotropy in post-processing as we have done here; such models will, however, need to be extended to consider both electron and proton temperature anisotropies. Global GRPIC simulations can go one step further and predict the full distribution function in the accretion flow and outflow, including temperature anisotropy and deviations from a Maxwellian (Galishnikova et al., 2023). One aspect that is important to account for in future modeling is that in plasmas with \(T_{p}>T_{e}\), the mirror and fluid firehose instabilities are most sensitive to the proton temperature anisotropy (see Appendix B). As a result it is plausible that the electron anisotropy is set primarily by resonant instabilities such as the whistler and resonant firehose instabilities. We thank Alex Lupsasca for useful conversations and for sharing his analytical calculations of source-plane emission angle for different photon sub-rings. 
We also thank Chris White for useful conversations and for sharing the details of his GRMHD MAD simulations, and Charles Gammie and Michael Johnson for useful conversations. This research was facilitated by the Multimessenger Plasma Physics Center (MPPC), NSF grant PHY-2206610. A.P. acknowledges support by NASA ATP grant 80NSSC22K1054. This work was also supported in part by a Simons Investigator Grant from the Simons Foundation (EQ), and was completed during EQs stay at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452. ## Appendix A Comparison of Analytical Synchrotron Expressions with Numerical Results In this Appendix we analytically calculate the synchrotron emission and absorption coefficients for our assumed gyrotropic distribution function and compare the resulting analytic expressions to full numerical evaluations of Equations 3. The analytic calculations are carried out in the limit of high Lorentz factors for the emitting electrons, the same regime in which analytic progress can be made for an isotropic distribution function [see, e.g., (Ginzburg & Syrovatskii, 1965; Melrose, 1971)]. ### Derivation of Analytical Fits for Total Intensity, Linear Polarization, and Circular Polarization Under the assumption of high Lorentz factor \(\gamma\) (or energy \(E\)) for the emitting electrons, the emission is predominantly concentrated in a narrow cone around the pitch angle \(\mu\simeq\cos(\theta_{B})\) where \(\theta_{B}\) is the viewing angle with respect to the magnetic field. Following Melrose (1971), the electron emissivity in the Stokes basis from Equations 3 and 5 in the main text can be expressed in tensor form as \[\begin{split} j^{\alpha\beta}=\int d^{3}pf(E,\xi)\eta^{\alpha \beta}=\int_{0}^{\infty}d Ef(E,\theta_{B})\frac{\sqrt{3}e^{2}\nu_{c}\sin\theta_{B}}{8\pi c}H^{ \alpha\beta}(X),\\ H^{11}(X)=X\left[\int_{X}^{\infty}dtK_{5/3}(t)+K_{2/3}(X) \right],\\ H^{22}(X)=X\left[\int_{X}^{\infty}dtK_{5/3}(t)-K_{2/3}(X) \right],\\ H^{12}(X)=-H^{21}=-\frac{2i\cot\theta_{B}}{3\gamma}\left[(2+g( \theta_{B}))\int_{X}^{\infty}dtK_{1/3}(t)+2XK_{1/3}(X)\right],\end{split}\] (A1) where \(\nu_{c}=eB/2\pi m_{e}c\) is a non-relativistic cyclotron frequency, \(X=\nu/\nu_{cr}\) and \(\nu_{cr}=(3/2)\nu_{c}\gamma^{2}\sin\theta_{B}\). The first expression in Equation A1 is general while in the second expression we have integrated over pitch angle \(\xi\) by assuming \(\xi\simeq\theta_{B}\). The Stokes emissivities are related to Equation A1 as \(j_{I}=j^{22}+j^{11}\), \(j_{Q}=j^{22}-j^{11}\), \(j_{U}=j^{12}+j^{21}\equiv 0\), and \(j_{V}=i(j^{12}-j^{21})\). Here, unlike in Melrose (1971), we define \(g(\theta_{B})\) for a general non-separable gyrotropic distribution function, which for our choice of the distribution (Equation 7 in the main text) is \[g(\theta_{B})=\tan\theta_{B}\left.\frac{df(E,\xi)}{d\xi}\right|_{\xi=\theta_{ B}}\frac{1}{f(E,\theta_{B})}=\frac{\gamma}{\epsilon_{\perp}^{*}}\frac{(\eta-1) \sin^{2}\theta_{B}}{1+(\eta-1)\cos^{2}\theta_{B}}=\frac{\gamma}{\epsilon_{ \perp}^{*}}\left(\frac{\epsilon_{\perp}^{*}}{\epsilon_{\perp}}\right)^{2}( \eta-1)\sin^{2}\theta_{B}\equiv A\gamma.\] (A2) In the last equality in Equation A2 we have defined the anisotropy parameter A (a function of \(\eta\)) that will appear below. We now proceed analytically evaluating the emissivities \(j_{I}\), \(j_{Q}\) and \(j_{V}\), beginning with \(j_{I}\). 
The expression for \(j_{I}\) that follows from Equation A1 can be rewritten as: \[j_{I}(\epsilon_{\perp},\nu/\nu_{c},\theta_{B},\eta)=\frac{\sqrt{3}Bm_{e}^{2}ce^{3}\sin(\theta_{B})}{8\pi}\int d\gamma\gamma^{2}\beta f(\gamma,\theta_{B})F(X)=\eta^{1/2}\frac{\sqrt{3}n_{e}Be^{3}\sin(\theta_{B})}{32\pi^{2}m_{e}c^{2}\epsilon_{\perp}K_{2}(1/\epsilon_{\perp})}\int d\gamma\gamma^{2}\beta e^{-\gamma/\epsilon_{\perp}^{*}}F(X), \tag{19}\] where \[F(X)=X\int_{X}^{\infty}dtK_{5/3}(t)=\begin{cases}2^{2/3}\Gamma(2/3)X^{1/3}+\mathcal{O}(X),\ X\ll 1,\\ \sqrt{\frac{\pi}{2}X}e^{-X}(1+\mathcal{O}(1/X)),\ X\gg 1\end{cases} \tag{20}\] is the asymptotic behavior of the synchrotron power at low and high frequencies and \(\Gamma(a)\) is the gamma function. To express the emissivity in terms of the new temperature \(\epsilon_{\perp}^{*}=\epsilon_{\perp}^{*}(\xi=\theta_{B})=\epsilon_{\perp}/\sqrt{1+(\eta-1)\cos^{2}\theta_{B}}\), as given by the distribution in Equation 8, we consider separately the low and high frequency limits in Equation 20 applied to Equation 19. In the low-frequency limit, \[j_{I}(\epsilon_{\perp},\nu/\nu_{c},\theta_{B},\eta)\propto\eta^{1/2}\int_{1}^{\infty}d\gamma\gamma^{2}\beta\frac{e^{-\gamma/\epsilon_{\perp}^{*}}}{\epsilon_{\perp}K_{2}(1/\epsilon_{\perp})}\gamma^{-2/3}\approx\frac{\eta^{1/2}}{\epsilon_{\perp}K_{2}(1/\epsilon_{\perp})}\int_{1}^{\infty}d\gamma\gamma^{4/3}e^{-\gamma/\epsilon_{\perp}^{*}}\approx\eta^{1/2}\frac{\epsilon_{\perp}^{*\,7/3}}{\epsilon_{\perp}K_{2}(1/\epsilon_{\perp})}. \tag{21}\] Therefore, the final expression for the emissivity is \[j_{I}(\epsilon_{\perp},\nu/\nu_{c},\theta_{B},\eta)\approx\frac{2^{4/3}\pi\eta^{1/2}}{3}\frac{n_{e}e^{2}\nu_{s}^{*}}{cK_{2}(1/\epsilon_{\perp})}\left(\frac{\nu}{\nu_{s}^{*}}\right)^{1/3}\left(\frac{\epsilon_{\perp}^{*}}{\epsilon_{\perp}}\right)=\eta^{1/2}\frac{\epsilon_{\perp}^{*}}{\epsilon_{\perp}}\frac{K_{2}(1/\epsilon_{\perp}^{*})}{K_{2}(1/\epsilon_{\perp})}j_{I,\text{iso}}(\epsilon=\epsilon_{\perp}^{*},\nu/\nu_{c},\theta_{B}), \tag{22}\] where \(\nu_{s}^{*}=\frac{2}{9}\nu_{c}\epsilon_{\perp}^{*2}\sin\theta_{B}\) and \(\epsilon_{\perp}^{*}=\epsilon_{\perp}^{*}(\xi=\theta_{B})\). This calculation was done in the limit of low \(\nu\), but the same expression can be obtained in the limit of high \(\nu\) as well. The integral over Lorentz factor in Equation 19 now becomes \[j_{I}(\epsilon_{\perp},\nu/\nu_{c},\theta_{B},\eta)\propto\eta^{1/2}\int_{1}^{\infty}d\gamma\gamma\beta\frac{e^{-\gamma/\epsilon_{\perp}^{*}-X}}{\epsilon_{\perp}K_{2}(1/\epsilon_{\perp})}. \tag{23}\] The maximum of the exponent in Equation 23 occurs at \(\gamma_{0}=(2B\epsilon_{\perp}^{*})^{1/3}\), where \(B=(\nu/\nu_{cr})\gamma^{2}=\frac{2}{3}(\nu/\nu_{c})/\sin\theta_{B}\gg 1\). The integral over \(\gamma\) can then be carried out using the method of steepest descent (as in the case of an isotropic distribution function), leading again to \[j_{I}(\epsilon_{\perp},\nu/\nu_{c},\theta_{B},\eta)=\eta^{1/2}\frac{\epsilon_{\perp}^{*}}{\epsilon_{\perp}}\frac{K_{2}(1/\epsilon_{\perp}^{*})}{K_{2}(1/\epsilon_{\perp})}j_{I,\text{iso}}(\epsilon=\epsilon_{\perp}^{*},\nu/\nu_{c},\theta_{B}). \tag{24}\] The fact that \(j_{I}\) for the anisotropic relativistic Maxwellian can be expressed as Equation 22 in both the low and high frequency limits motivates our use of this expression as the proposed fit in Equation 11 of the main text. Physically, this corresponds to the total intensity emissivity just changing due to a different effective distribution function in the angle \(\theta_{B}\) towards the observer. 
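To make the scaling in Equations 22/24 concrete, the following minimal Python sketch (ours, for illustration only; it is not part of blacklight or symphony, and the function names are arbitrary) rescales any user-supplied isotropic thermal fit into the anisotropic Stokes-\(I\) emissivity; as noted below, the same factor also applies to \(j_{Q}\).

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind, K_n

def eps_perp_star(eps_perp, eta, theta_B):
    """Effective temperature at viewing angle theta_B:
    eps* = eps_perp / sqrt(1 + (eta - 1) cos^2(theta_B))."""
    return eps_perp / np.sqrt(1.0 + (eta - 1.0) * np.cos(theta_B) ** 2)

def j_I_aniso(eps_perp, nu_over_nuc, theta_B, eta, j_I_iso):
    """Anisotropic Stokes-I emissivity via Eq. (22)/(24):
    j_I = eta^(1/2) (eps*/eps) K_2(1/eps*)/K_2(1/eps) * j_I_iso(eps*, nu/nu_c, theta_B).
    `j_I_iso` is any isotropic thermal synchrotron fit supplied by the caller."""
    es = eps_perp_star(eps_perp, eta, theta_B)
    factor = np.sqrt(eta) * (es / eps_perp) * kv(2, 1.0 / es) / kv(2, 1.0 / eps_perp)
    return factor * j_I_iso(es, nu_over_nuc, theta_B)
```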
Note as well that although we derived Equation 22 for total intensity the same expression scaled to the isotropic distribution function emissivity holds for the intrinsic linear polarization emissivity, i.e., \(j_{Q}\). This is because \(K_{2/3}(X)\) has the same functional form as \(\int_{X}^{\infty}K_{5/3}(X)\) at both high and low frequencies. Circular polarization, however, has a different functional form: \[j_{V}(\epsilon_{\perp},\nu/\nu_{c},\theta_{B},\eta)\propto\eta^{0.5}\cot\theta _{B}\sin\theta_{B}\int_{1}^{\infty}d\gamma\gamma\frac{e^{-\gamma/\epsilon_{ \perp}^{*}}}{\epsilon_{\perp}K_{2}(1/\epsilon_{\perp})}\left[(g(\gamma, \theta_{B})+2)\int_{X}^{\infty}K_{1/3}(t)dt+2XK_{1/3}\left(X\right)\right]. \tag{25}\] Unlike in the case of total intensity and linear polarization, the circular polarization emissivity requires expanding the distribution function in a narrow cone around \(\theta_{B}\); the resulting \(j_{V}\) depends on the derivative of the distribution function, included in \(g(\gamma,\theta_{B})\). To understand the origin of our fit for \(j_{V}\) in Equations 11, we first consider the high-frequency limit when both \(\int_{X}^{\infty}K_{1/3}(t)dt\) and \(K_{1/3}\left(X\right)\) scale as \(e^{-X}/\sqrt{X}\) for \(X\gg 1\). The integrand in equation 25 can be written as \(h(\gamma,A)e^{S(\gamma,A)}\), where \[S(\gamma,A)=-\gamma/\epsilon_{\perp}^{*}-B/\gamma^{2}\quad\text{ and }\quad h(\gamma,A)=(A\gamma^{3}+2\gamma^{2}+2B)\approx(A\gamma^{3}+2B). \tag{26}\] The exponential term \(e^{S(\gamma)}\) is again maximum at \(\gamma_{0}\approx(2B\epsilon_{\perp}^{*})^{1/3}\), where \(B=(\nu/\nu_{cr})\gamma^{2}=2/3(\nu/\nu_{c})\sin^{-1}\theta_{B}\gg 1\). Equation 14 can then be integrated via the method of steepest descent as: \[\int d\gamma h(\gamma)e^{S(\gamma)}\approx\sqrt{\frac{2\pi}{-S^{\prime\prime}( \gamma_{0})}}e^{S(\gamma_{0})}h(\gamma_{0})=\sqrt{\frac{2\pi}{-S^{\prime\prime }(\gamma_{0})}}e^{S(\gamma_{0})}\!\times\!2B(A\epsilon_{\perp}^{*}\!+\!1)=\sqrt {\frac{2\pi}{-S^{\prime\prime}(\gamma_{0})}}e^{S(\gamma_{0})}\!\times\!2B\eta \left(\frac{\epsilon_{\perp}^{*}}{\epsilon_{\perp}}\right)^{2}. \tag{15}\] As with \(j_{I}\) and \(j_{Q}\) we choose to express \(j_{V}\) relative to the result for an isotropic Maxwellian with temperature \(\epsilon_{\perp}^{*}\). The latter can be derived in an identical manner to Equation 15. We find that ratio of \(j_{V}\) in the anisotropic case to \(j_{\text{v,iso}}\) at a temperature of \(\epsilon_{\perp}^{*}\) and \(A=0\) has two terms. One is the ratio of distribution function normalizations \(\eta^{1/2}(\epsilon_{\perp}^{*}/\epsilon_{\perp})(K_{2}(1/\epsilon_{\perp}^{* })/K_{2}(1/\epsilon_{\perp}))\) that appears in \(j_{I}\) and \(j_{Q}\). The other is the factor \(\eta(\epsilon_{\perp}^{*}/\epsilon_{\perp})^{2}\) in Equation 15 - present only in \(j_{V}\) and not \(j_{Q}\) and \(j_{I}\) - that is due to the presence of the distribution function derivative \(g(\theta_{B})\) in the circular polarization emissivity. 
The net result is \[\frac{j_{V}(\epsilon_{\perp},\nu/\nu_{c},\theta_{B},\eta)}{j_{V,\text{iso}}( \epsilon=\epsilon_{\perp}^{*},\nu/\nu_{c},\theta_{B})}=\eta\left(\frac{ \epsilon_{\perp}^{*}}{\epsilon_{\perp}}\right)^{2}\times\eta^{1/2}\left(\frac{ \epsilon_{\perp}^{*}}{\epsilon_{\perp}}\right)\left(\frac{K_{2}(1/\epsilon_{ \perp}^{*})}{K_{2}(1/\epsilon_{\perp})}\right)=\eta^{3/2}\left(\frac{\epsilon _{\perp}^{*}}{\epsilon_{\perp}}\right)^{3}\left(\frac{K_{2}(1/\epsilon_{ \perp}^{*})}{K_{2}(1/\epsilon_{\perp})}\right), \tag{16}\] which gives the analytical fit given by Equation 15 in the main text. The same result can be derived in the low frequency limit via suitable expansion of Equation 14. ### Comparison of Analytics and Numerics We now solve Equations 3-5 in the main text numerically and check the validity of the approximations used in the previous section for obtaining analytical fits for the polarized synchrotron emissivity and absorption coefficients. To do so, we use the publicly available code symphony to compare our theoretical fits (Eq.15) with a numerical solution. We implemented an anisotropic distribution function to calculate \(j_{S}\) and \(\alpha_{S}\). In particular, we added the possibility for the distribution to depend on harmonic number \(n\) as well as a non-zero \(\partial_{\mu}f\) term in the absorption coefficient calculation (Eq. 4 that shows up in Eq. 3 includes \(\partial_{\mu}f\)) - both were absent in symphony. The distribution function and analytical derivatives \(\partial_{\gamma}f\) and \(\partial_{\mu}f\) can now depend on \(\mu=\cos\xi\). However, as described in Section 2 and below, the term with \(\partial_{\mu}f\) in the absorption coefficient is negligible because it shows up proportional to a term that vanishes when the pitch angle is approximately \(\theta_{B}\). Figure 11: Integrands \(|K_{I}f(\gamma,\xi)|\) for \(j_{I}\) (a) and \(|K_{I}Df(\gamma,\xi)|\) for \(\alpha_{I}\) (b). Two parts of \(Df\) that include \(\partial_{\gamma}f(\gamma,\xi)\) (c) and \(\partial_{\mu}f(\gamma,\xi)\) (d). The approximate location of the peak that corresponds to \(\cos\theta_{B}=\beta\mu\) is shown by dotted lines. The free parameters are \(\nu/\nu_{c}=10^{3}\), \(\eta=10\), \(\epsilon_{\perp}=10\). While the location of the peak is still the same as in the \(\eta=1\) case, a non-zero term with \(\partial_{\mu}f(\gamma,\mu)\) appears which, however, goes through zero at the peak of the integrand. Note the saturated colorbar in (c) and (d). The integrands in Equations 3 are \(K_{a}f(\gamma,\xi)\) for \(j_{a}\) and \(K_{a}Df(\gamma,\xi)\) for \(\alpha_{a}\), where \(\xi\) can be substituted for \(n\) since at \(y_{n}=0\) (as required by \(\delta(y_{n})\)): \[\cos\xi=\frac{1-\frac{n}{\gamma}\frac{\nu_{c}}{\nu}}{\beta\cos\theta_{B}}.\] (A13) Thus, the integrands can be expressed as functions of \(\gamma\) and \(n\) and integrated in \(\gamma-n\) space. In Figure 11 we show the integrands for \(j_{I}\) (a) and \(\alpha_{I}\) (b) and the two terms from \(Df\) that include \(\partial_{\gamma}f(\gamma,\xi)\) and \(\partial_{\mu}f(\gamma,\xi)\) for \(\nu/\nu_{c}=10^{3}\), \(\epsilon_{\perp}=10\), and \(\eta=10\). The location of the sharp peak in the \(\gamma-n\) plane is where \(\xi\simeq\theta_{B}\) (as in the isotropic distribution function case). However, the exact harmonic at which the emission peaks moves along this line depending on \(\eta\). This is equivalent to the result in Figure 1 that different \(\theta_{B}\) dominate the emission as we vary \(\eta\). 
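The corresponding rescaling for circular polarization, derived at the start of this subsection, can be sketched in the same illustrative Python (again ours, not the production implementation): the \(j_{V}\) factor carries an extra \(\eta(\epsilon_{\perp}^{*}/\epsilon_{\perp})^{2}\) relative to the \(j_{I}\)/\(j_{Q}\) factor because of the distribution-function derivative \(g(\theta_{B})\).

```python
import numpy as np
from scipy.special import kv

def j_V_aniso(eps_perp, nu_over_nuc, theta_B, eta, j_V_iso):
    """Anisotropic circular-polarization emissivity:
    j_V / j_V_iso(eps*) = eta^(3/2) (eps*/eps)^3 K_2(1/eps*)/K_2(1/eps)."""
    es = eps_perp / np.sqrt(1.0 + (eta - 1.0) * np.cos(theta_B) ** 2)
    factor = eta ** 1.5 * (es / eps_perp) ** 3 * kv(2, 1.0 / es) / kv(2, 1.0 / eps_perp)
    return factor * j_V_iso(es, nu_over_nuc, theta_B)
```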
Panels (c) and (d) in Figure 11 show that, at the location in \(\gamma-n\) space where the absorption coefficient peaks (panel b), the first term in \(Df\) due to gradients in \(\gamma\) is much larger than the second term due to gradients in \(\mu\). This is because most of the emission and absorption is coming from pitch angles of \(\xi\approx\theta_{B}\). Thus, the propagation is almost parallel, and the term with \(\beta\cos\theta_{B}-\mu\) shown in panel (d) does not contribute significantly to \(Df\). This implies that in practice the total intensity emission and absorption coefficients for the anisotropic distribution function are equivalent to calculations for a thermal isotropic distribution function at a new temperature \(\epsilon_{\perp}^{*}\). This allows us to calculate \(\alpha_{a}\) from \(j_{a}\) via Kirchhoff's law even for our anisotropic distribution function (at least in the limit of high \(\gamma\) where \(\xi\approx\theta_{B}\) is justified). A number of fitting functions for \(j_{a}\) and \(\alpha_{a}\) are used in the literature [see, e.g., Pandya et al. (2016); Dexter (2016)]. Here we compare our results for the fits used by blacklight: \[j_{S,\text{iso}}(\epsilon,X,\theta_{B})=\frac{n_{e}e^{2}\nu_{c}}{c}e^{-X^{1/3}}\times\begin{cases}\frac{\sqrt{2}\pi}{27}\sin(\theta_{B})(X^{1/2}+2^{11/12}X^{1/6})^{2}\ [S=I],\\ -\frac{\sqrt{2}\pi}{27}\sin(\theta_{B})\Big{(}X^{1/2}+\Big{(}\frac{7\epsilon^{24/25}+35}{10\epsilon^{24/25}+75}\Big{)}2^{11/12}X^{1/6}\Big{)}^{2}\ [S=Q],\\ 0\ [S=U],\\ \frac{\cos\theta_{B}}{\epsilon}\Big{(}\frac{\pi}{3}+\frac{\pi}{3}X^{1/3}+\frac{2}{300}X^{1/2}+\frac{2}{19}\pi X^{2/3}\Big{)}\ [S=V],\end{cases}\] (A14) where \(X=\nu/\nu_{s}\). Absorption coefficients \(\alpha_{S}\) for a thermal distribution can be obtained via Kirchhoff's law. Figure 12 shows numerical integration results from symphony (solid lines) along with their respective theoretical fits (dotted lines) for \(j_{I}\) (a), \(\alpha_{I}\) (c), \(j_{Q}\) (e), and \(j_{V}\) (g) on the left. On the right, the respective relative errors are shown in panels (b, d, f, h). All results are shown as a function of observing angle \(\theta_{B}\) at different anisotropy parameters \(\eta\) represented by different colors, at \(\nu/\nu_{c}=10^{3}\) and for \(\epsilon_{\perp}=10\). These are typical parameters for application to Sgr A* and M87*. The agreement is excellent for all \(\eta\), with maximal errors \(\lesssim 10\%\). We show a more challenging case of low temperature \(\epsilon_{\perp}=3\) and low frequency \(\nu/\nu_{c}=10\) in Figure 13, which is organized identically to Fig. 12. This case is more challenging for our analytic fits than Figure 12 because the emission for \(\epsilon_{\perp}=3\) and low frequency \(\nu/\nu_{c}=10\) is dominated by much lower energy electrons. The errors in our fits in Figure 13 are, not surprisingly, larger. Generally, \(\eta<1\) has smaller relative errors than \(\eta>1\). This is because at a fixed observing angle \(\theta_{B}\) and \(\epsilon_{\perp}\), the effective temperature \(\epsilon_{\perp}^{*}\) is larger than \(\epsilon_{\perp}\) for \(\eta<1\). By contrast, \(\epsilon_{\perp}^{*}<\epsilon_{\perp}\) for \(\eta>1\), which can start to approach the non-relativistic cyclotron limit for which our fits do not apply. Figure 13 shows that the case of \(\eta<1\) has a relative error of \(<30\%\) across all considered angles \(\theta_{B}\in[5,85]^{\circ}\), and \(<10\%\) for most angles. 
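For completeness, a minimal Python transcription of the isotropic fits in Equation A14 is given below (ours; the emissivity is returned in units of \(n_{e}e^{2}\nu_{c}/c\), with \(\nu_{s}=\frac{2}{9}\nu_{c}\epsilon^{2}\sin\theta_{B}\), consistent with the definition of \(\nu_{s}^{*}\) above). The anisotropic coefficients of Equation 11 then follow by evaluating these fits at \(\epsilon=\epsilon_{\perp}^{*}\) and applying the rescalings sketched earlier, and the thermal absorption coefficients follow from Kirchhoff's law.

```python
import numpy as np

def j_iso_fit(stokes, eps, nu_over_nuc, theta_B):
    """Isotropic thermal synchrotron emissivity fits of Eq. (A14),
    in units of n_e e^2 nu_c / c; eps = k T_e / m_e c^2, X = nu/nu_s."""
    X = nu_over_nuc / ((2.0 / 9.0) * eps ** 2 * np.sin(theta_B))
    damp = np.exp(-X ** (1.0 / 3.0))
    if stokes == "I":
        return damp * (np.sqrt(2.0) * np.pi / 27.0) * np.sin(theta_B) * (
            X ** 0.5 + 2.0 ** (11.0 / 12.0) * X ** (1.0 / 6.0)) ** 2
    if stokes == "Q":
        cq = (7.0 * eps ** (24.0 / 25.0) + 35.0) / (10.0 * eps ** (24.0 / 25.0) + 75.0)
        return -damp * (np.sqrt(2.0) * np.pi / 27.0) * np.sin(theta_B) * (
            X ** 0.5 + cq * 2.0 ** (11.0 / 12.0) * X ** (1.0 / 6.0)) ** 2
    if stokes == "V":
        return damp * (np.cos(theta_B) / eps) * (
            np.pi / 3.0 + np.pi / 3.0 * X ** (1.0 / 3.0)
            + (2.0 / 300.0) * X ** 0.5 + (2.0 / 19.0) * np.pi * X ** (2.0 / 3.0))
    return 0.0  # Stokes U vanishes in this frame
```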
At the largest and smallest angles, however, the fits have a relative error larger than \(30\%\) for \(\eta\geq 1\). This is true for the isotropic case as well at small angles. We note, though, that the actual values of the emissivity and absorption coefficient are very small at small angles for \(\eta\gtrsim 1\) (Fig. 6), so that most of the emission and absorption will arise at larger angles where the fits are better. In addition, at this low frequency, the emission will in most practical cases of interest be self-absorbed and approximately a blackbody. Finally, we note that a significant cause of error here is that our fits in Equation 11 for the anisotropic emission and absorption coefficients are scaled to the isotropic emissivity and absorption fits in Equation A14, which become inaccurate at low temperatures, low frequencies, and small angles, as indicated by the large fractional errors for the isotropic case in Figure 13. In practice we advise caution in using the fits here if \(\epsilon_{\perp}^{*}\lesssim 3\) and the frequency is low, \(\nu\lesssim 10\nu_{c}\). The regime of most interest for our applications is much higher frequencies, where the analytic fits in Equation 11 are accurate.

Figure 12: Comparison of numerical results for \(j_{I}\) (a-b), \(\alpha_{I}\) (c-d), \(j_{Q}\) (e-f), and \(j_{V}\) (g,h) with the theoretical fits given by Equations 11 and 14. Numerical results and theoretical fits are shown on the left by solid and dotted lines respectively, and on the right the relative error is shown. The dashed gray line on the right shows a relative error of 30%. The free parameters are \(\epsilon_{\perp}=10\) and \(\nu/\nu_{c}=10^{3}\).

## Appendix B Anisotropy-driven instabilities in relativistic plasmas

### Mirror instability

To calculate the kinetic threshold for the relativistic mirror instability, we consider the Vlasov and Maxwell equations: \[\frac{\partial f_{s}}{\partial t}+\mathbf{v}_{s}\cdot\nabla f_{s}+q_{s}(\mathbf{E}+\frac{\mathbf{v}_{s}}{c}\times\mathbf{B})\cdot\frac{\partial f_{s}}{\partial\mathbf{p}}=0,\] (B15) \[\frac{1}{c}\frac{\partial\mathbf{E}}{\partial t}=\nabla\times\mathbf{B}-\frac{4\pi}{c}\mathbf{j},\] (B16) \[\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t}=-\nabla\times\mathbf{E},\] (B17) where \(s\) is the particle species (ions \(i\) or electrons \(e\)) with mass \(m_{s}\) and charge \(q_{s}\), \(\mathbf{v}_{s}=\mathbf{p}_{s}/m_{s}\gamma\), \(\mathbf{E}\) is the electric field (with \(\mathbf{E}_{0}=0\) initially), \(\mathbf{B}\) is the magnetic field, and the axes are chosen such that \(\mathbf{B}=B_{0}\hat{z}\). We now consider a small perturbation in the form of a displacement \(\propto e^{i\mathbf{k}\cdot\mathbf{r}-i\omega t}\), where we consider \(\mathbf{k}=k_{\perp}\hat{x}+k_{\parallel}\hat{z}\). We will initially consider electrons with an anisotropic distribution and ions with an isotropic distribution, \(\delta\mathbf{E}=\delta E_{y}\hat{y}\), and thus \(\delta\mathbf{B}=\delta B_{x}\hat{x}+\delta B_{z}\hat{z}\); the distribution function is perturbed as \(f_{s}+\delta f_{s}\). 
The linearized equations are then \[(-i\omega+i\mathbf{k}\cdot\mathbf{v}_{s})\delta f_{s}+q_{s}\frac{ \mathbf{v}_{s}}{c}\times\mathbf{B}_{0}\cdot\partial_{\mathbf{p}}\delta f_{s}+q _{s}\left(\delta\mathbf{E}+\frac{\mathbf{v}_{s}}{c}\times\delta\mathbf{B} \right)\cdot\partial_{\mathbf{p}}f_{s}=0, \tag{108}\] \[\frac{4\pi}{c^{2}}\omega\delta j_{y}=i\left(\frac{\omega^{2}}{c^ {2}}-k^{2}\right)\delta E_{y},\ \frac{4\pi}{c^{2}}\omega\delta j_{x}=i\left(\frac{\omega^{2}}{c^{2}}-k^{2} _{\parallel}\right)\delta E_{x}, \tag{109}\] where we used \(\delta\mathbf{B}=\frac{c}{\omega}\mathbf{k}\times\delta\mathbf{E}\). We seek the solution of the linearized Vlasov equation for \(\delta f_{s}\) and the corresponding current via the method of characteristics (e.g., Mikhailovsky 1976). The current response \(\delta j_{y}\) due to \(\delta E_{y}\) is: \[\delta j_{y,s}=-2\pi iq_{s}^{2}\int dpd\mu p^{2}\sum_{n=-\infty}^{+\infty} \frac{v^{2}\sin^{2}\xi}{\omega-k_{\parallel}v\cos\xi-n\Omega_{s}}\left[\frac{1 }{v}\frac{\partial f}{\partial p}-\frac{\cos\xi}{vp}\frac{\partial f}{ \partial\mu}+\frac{k_{\parallel}}{\omega}\frac{1}{p}\frac{\partial f}{ \partial\mu}\right]J_{n}^{{}^{\prime}2}\left(\frac{k_{\perp}v_{\perp}}{ \Omega_{s}}\right)\delta E_{y}, \tag{110}\] where \(\Omega_{s}\) is the relativistic cyclotron frequency of species \(s\). For \(\Omega_{s}\gg\omega\) and \(\Omega_{s}\gg k_{\parallel}v_{\parallel}\), keeping the leading terms \(n=0,\pm 1\) of order \(\Omega_{s}^{-2}\) and using \(J_{0}^{{}^{\prime}}(z)\approx-\frac{z}{2}\) and \(J_{\pm 1}^{{}^{\prime}}(z)\approx\pm\frac{1}{2}\): \[\begin{split} n=0:&-\frac{\pi iq_{s}^{2}}{2}\int dpd \mu p^{2}\frac{v^{4}\sin^{4}\xi}{\omega-k_{\parallel}v\cos\xi}\left[\frac{1} {v}\frac{\partial f_{s}}{\partial p}-\frac{\cos\xi}{vp}\frac{\partial f_{s}}{ \partial\mu}+\frac{k_{\parallel}}{\omega}\frac{1}{p}\frac{\partial f_{s}}{ \partial\mu}\right]\frac{k_{\perp}^{2}}{\Omega_{s}^{2}}\delta E_{y},\\ n=\pm 1:&\frac{\pi iq_{s}^{2}}{2}\int dpd\mu p^{2}v^{2} \sin^{2}\xi\frac{\omega-k_{\parallel}v\cos\xi}{\Omega_{s}^{2}}\left[\frac{1} {v}\frac{\partial f_{s}}{\partial p}-\frac{\cos\xi}{vp}\frac{\partial f_{s}}{ \partial\mu}+\frac{k_{\parallel}}{\omega}\frac{1}{p}\frac{\partial f_{s}}{ \partial\mu}\right]\delta E_{y}.\end{split} \tag{111}\] For isotropic ions the terms with \(\partial_{\mu}f\) can be dropped, resulting in the following current response \[\delta j_{y,i}=\frac{\pi ic}{2B^{2}}\delta E_{y}\int dpd\mu p^{3}\sqrt{m_{i}^{ 2}c^{2}+p^{2}}S(\mu)\frac{\partial f_{i}}{\partial p}\left[\omega-k_{\parallel }v\mu-\frac{v^{2}S(\mu)}{\omega-k_{\parallel}v\mu}k_{\perp}^{2}\right], \tag{112}\] where the second term in brackets equals zero due to the odd function \(S(\mu)=1-\mu^{2}=\sin^{2}\xi\). For the mirror mode, we are interested in the \(k_{\parallel}v_{\parallel}/\omega\gg 1\) limit, which leaves only the resonant term. Using \[\frac{v^{2}}{\omega}\int_{-1}^{1}d\mu\frac{S^{2}(\mu)}{1-\frac{k_{\parallel}v \mu}{\omega}}=-i\pi\frac{v}{k_{\parallel}}+\frac{16}{3}\frac{\omega}{k_{ \parallel}^{2}}+\mathcal{O}((k_{\parallel}v_{\parallel}/\omega)^{-3}), \tag{101}\] and considering non-relativistic ions, \(p/m_{i}c\ll 1\), with a Maxwellian distribution function \(f_{i}(p)\) and number density \(n\) \[f_{i}(p)=\frac{n}{(2\pi m_{i}^{2}c^{2}\epsilon_{i})^{3/2}}e^{-p^{2}/2m_{i}^{2}c ^{2}\epsilon_{i}}, \tag{102}\] integration by parts of the third resonant term in Eq. 
100 results in \[\delta j_{y,i}=\frac{2\pi^{2}c^{2}}{B_{0}^{2}}\delta E_{y}\frac{k_{\perp}^{2} }{k_{\parallel}}\int_{0}^{\infty}dpp^{3}f_{s}(p)=\frac{\delta E_{y}}{\sigma_{ i}}\frac{c\epsilon_{i}^{1/2}}{4(\pi/2)^{1/2}}\frac{k_{\perp}^{2}}{k_{ \parallel}}=\frac{\delta E_{y}}{4\pi\sigma_{i}}\pi^{1/2}v_{th,i}\frac{k_{ \perp}^{2}}{k_{\parallel}}, \tag{103}\] where \(v_{th,i}=\sqrt{2\epsilon_{i}}c\) and \(\sigma_{i}=B_{0}^{2}/4\pi nm_{i}c^{2}=v_{a}^{2}/c^{2}\), \(v_{a}^{2}=B_{0}^{2}/4\pi nm_{i}\) is the Alfven speed. We will now analyze the electron's current in Equation 101, splitting it by the three terms in the brackets \(j_{y,e,1}\), \(j_{y,e,2}\), and \(j_{y,e,3}\). The second term in Eq. 101 is \(\mu\omega/k_{\parallel}v\ll 1\) times smaller than the third term and thus \(j_{y,e,2}\) is negligible. As with the ions, considering the resonant term's residue of \(-i\pi\omega/k_{\parallel}v\) at \(\mu_{0}=\omega/k_{\parallel}v\) \[\delta j_{y,e,1}\approx-\frac{\pi^{2}c^{2}}{2B^{2}}\delta E_{y}\frac{k_{\perp} ^{2}}{k_{\parallel}}\int dpp^{4}\frac{\partial f_{e}(p,\mu_{0})}{\partial p}= \frac{\delta E_{y}}{4\pi\sigma_{i}}\frac{2\pi^{2}}{m_{i}n}\frac{k_{\perp}^{2}} {k_{\parallel}}\int dpp^{3}f_{e}(p,\mu_{0}). \tag{104}\] The dispersion relation in the limit of \(\omega\ll k_{\parallel}v\mu\) is therefore \[-\omega\frac{k_{\perp}^{2}}{k_{\parallel}}\mathcal{J}=ik^{2}v_{a}^{2}+4\pi k_{ \parallel}\varpropto\sigma_{i}\omega\frac{\delta j_{y,e,3}}{\delta B_{x}}, \tag{105}\] where \[\mathcal{J}=\frac{2\pi^{2}}{m_{i}n}\int dpp^{3}f_{e}(p,\mu_{0})+\pi^{1/2}v_{ th,i}>0. \tag{106}\] As in Osipov et al. (2017), the current response from anisotropic electrons, which drives the mirror instability, can be calculated in the same form: \[\delta j_{y,e,3}=-i\frac{\pi c\delta B_{x}}{2B_{0}^{2}}\frac{k_{\perp}^{2}}{k _{\parallel}}\int dpp^{3}v\int_{-1}^{1}d\mu\frac{(1-\mu^{2})^{2}}{\mu}\frac{ \partial f_{e}(p,\mu)}{\partial\mu}, \tag{107}\] which we will calculate in two parts \[\int_{-1}^{1}d\mu\frac{(1-\mu^{2})^{2}}{\mu}\frac{\partial f_{e}}{\partial\mu }=\underbrace{\int_{-1}^{1}d\mu(\mu^{3}-2\mu)\frac{\partial f_{e}}{\partial \mu}}_{I_{1}}+\underbrace{\int_{-1}^{1}d\mu\frac{1}{\mu}\frac{\partial f_{e}}{ \partial\mu}}_{I_{2}}. \tag{108}\] Integral \(I_{1}\) can be calculated by parts and expressed through parallel and perpendicular pressure \(P_{\parallel,e}\) and \(P_{\perp,e}\) since \[I_{1}=-2f_{e}(1)-\int_{-1}^{1}f_{e}(p,\mu)d(\mu^{3}-2\mu)=-2f_{e}(1)+2\int_{-1 }^{1}d\mu(1-\mu^{2})f_{e}(p,\mu)-\int_{-1}^{1}d\mu\mu^{2}f_{e}(p,\mu), \tag{109}\] and \[P_{\parallel,e}=2\pi\int_{0}^{\infty}dpp^{3}v\int_{-1}^{1}d\mu\mu^{2}f_{e}(p, \mu),\ P_{\perp,e}=\pi\int_{0}^{\infty}dpp^{3}v\int_{-1}^{1}d\mu(1-\mu^{2})f_{ e}(p,\mu). \tag{110}\] Therefore, the two integral terms in \(I_{1}\) in Eq. 109 lead to \[2\pi\times 2\int_{0}^{\infty}dpp^{3}v\int_{-1}^{1}d\mu(1-\mu^{2})f_{e}(p,\mu )-2\pi\int_{0}^{\infty}dpp^{3}v\int_{-1}^{1}d\mu\mu^{2}f_{e}(p,\mu)=4P_{\perp,e }-P_{\parallel,e}=P_{\parallel,e}\left(4\eta^{\lambda}-1\right). \tag{111}\] For the distribution function given by Equation 7 in the main text, \(f_{e}(p,\mu)\propto\exp{(-a\sqrt{1+b\mu^{2}})}\), where \(a=\gamma/\epsilon_{\perp}\) and \(b=(1-1/\gamma^{2})(\eta-1)\). Then, the boundary term in Eq. 
31 can be expressed as \[-2\times 2\pi\int_{0}^{\infty}dpp^{3}vf_{e}(p,\mu=1)=-\frac{nm_{e}c^{2}\eta^{1/2}} {\epsilon_{\perp}K_{2}(1/\epsilon_{\perp})}\int_{1}^{\infty}d\gamma(\gamma^{2 }-1)^{3/2}e^{-a\sqrt{1+b}}=-P_{\parallel,e}\frac{\eta^{1/2+\lambda}}{\epsilon_ {\perp}^{2}K_{2}(1/\epsilon_{\perp})}\int_{1}^{\infty}d\gamma(\gamma^{2}-1)^{3 /2}e^{-a\sqrt{1+b}}, \tag{34}\] resulting in the following contribution to \(\delta j_{y,e,3}\) from \(I_{1}\): \[2\pi\int_{0}^{\infty}dpp^{3}vI_{1}=P_{\parallel,e}\left[4\eta^{\lambda}-1- \frac{\eta^{1/2+\lambda}}{\epsilon_{\perp}^{2}K_{2}(1/\epsilon_{\perp})}\int_ {1}^{\infty}d\gamma(\gamma^{2}-1)^{3/2}e^{-a\sqrt{1+b}}\right]. \tag{35}\] For calculating \(I_{2}\), we find the derivative of the distribution function as \(\partial f_{e}(p,\mu)/\partial\mu=-ab\frac{\mu}{\sqrt{1+b\mu^{2}}}f_{e}(p,\mu)\). Since the integrand is an even function of \(\mu\), the contribution of \(I_{2}\) to the current is \[\begin{split} 2\pi\int_{0}^{\infty}dpp^{3}vI_{2}&=-\frac{ nm_{e}c^{2}\eta^{1/2}}{\epsilon_{\perp}K_{2}(1/\epsilon_{\perp})}\int_{1}^{ \infty}d\gamma(\gamma^{2}-1)^{3/2}ab\int_{0}^{1}d\mu\frac{e^{-a\sqrt{1+b\mu^{2} }}}{\sqrt{1+b\mu^{2}}}\\ &=-P_{\parallel,e}\frac{\eta^{1/2+\lambda}}{\epsilon_{\perp}^{2} K_{2}(1/\epsilon_{\perp})}\int_{1}^{\infty}d\gamma(\gamma^{2}-1)^{3/2}ab\int_{0}^{1}d \mu\frac{e^{-a\sqrt{1+b\mu^{2}}}}{\sqrt{1+b\mu^{2}}}.\end{split} \tag{36}\] Therefore, the relevant current can be expressed as \[\begin{split}\delta j_{y,e,3}&=-i\frac{c\delta B_{x }}{4B_{0}^{2}}\frac{k_{\perp}^{2}}{k_{\parallel}}P_{\parallel,e}\left[4\eta^{ \lambda}-1-\frac{\eta^{1/2+\lambda}}{\epsilon_{\perp}^{2}K_{2}(1/\epsilon_{ \perp})}\left(\int_{1}^{\infty}d\gamma(\gamma^{2}-1)^{3/2}e^{-a\sqrt{1+b}}+ \int_{1}^{\infty}d\gamma(\gamma^{2}-1)^{3/2}ab\int_{0}^{1}d\mu\frac{e^{-a \sqrt{1+b\mu^{2}}}}{\sqrt{1+b\mu^{2}}}\right)\right]\\ &\equiv-i\frac{c\delta B_{x}}{4B_{0}^{2}}\frac{k_{\perp}^{2}}{k_{ \parallel}}P_{\parallel,e}\left[4\eta^{\lambda}-1-\mathcal{I}\right]\end{split} \tag{37}\] which defines the integral \(\mathcal{I}\). Therefore, the final dispersion relation is \[-i\omega\frac{k_{\perp}^{2}}{k_{\parallel}}\mathcal{J}=k^{2}v_{a,i}^{2}-k_{ \perp}^{2}\frac{P_{\parallel,e}}{4\rho_{i}}(4\eta^{\lambda}-1-\mathcal{I}), \tag{38}\] or, for \(k_{\perp}\gg k_{\parallel}\) \[-i\frac{\omega}{k_{\parallel}}\rho_{i}\mathcal{J}=\frac{B_{0}^{2}}{4\pi}-P_{ \parallel,e}(4\eta^{\lambda}-1-\mathcal{I})/4. \tag{39}\] The growth rate of the instability is positive when \[2\beta_{\parallel,e}^{-1}-(4\eta^{\lambda}-1-\mathcal{I})/4>0. \tag{40}\] Therefore, the threshold can be expressed as \[\beta_{\perp,e}<\frac{8\eta^{\lambda}}{4\eta^{\lambda}-1-\mathcal{I}}, \tag{41}\] where \[\mathcal{I}=\frac{\eta^{1/2+\lambda}}{\epsilon_{\perp}^{2}K_{2}(1/\epsilon_{ \perp})}\left(\int_{1}^{\infty}d\gamma(\gamma^{2}-1)^{3/2}e^{-a\sqrt{1+b}}+ \int_{1}^{\infty}d\gamma(\gamma^{2}-1)^{3/2}ab\int_{0}^{1}d\mu\frac{e^{-a \sqrt{1+b\mu^{2}}}}{\sqrt{1+b\mu^{2}}}\right). 
\tag{42}\] In the ultra-relativistic limit, \(\sqrt{\gamma^{2}-1}\approx\gamma\) and \(b\approx\eta-1\), \(\mathcal{I}\) reduces to \[\begin{split}\mathcal{I}&=\frac{\eta^{1/2+\lambda}}{ \epsilon_{\perp}^{2}K_{2}(1/\epsilon_{\perp})}\left(\int_{1}^{\infty}d\gamma \gamma^{3}e^{-\frac{\gamma}{\epsilon_{\perp}}\sqrt{\eta}}+\frac{\eta-1}{ \epsilon_{\perp}}\int_{0}^{1}\frac{d\mu}{\sqrt{1+(\eta-1)\mu^{2}}}\int_{1}^{ \infty}d\gamma\gamma^{4}e^{-\frac{\gamma}{\epsilon_{\perp}}\sqrt{1+(\eta-1) \mu^{2}}}\right)\\ &=\frac{3\eta^{1/2+\lambda}\epsilon_{\perp}^{2}}{K_{2}(1/\epsilon_ {\perp})}\left(\frac{2}{\eta^{2}}+\left[3-\frac{1}{\eta}-\frac{2}{\eta^{2}}+3 \sqrt{\eta-1}\tan^{-1}\sqrt{\eta-1}\right]\right)\rightarrow\frac{9\pi\eta^{1 +\lambda}\epsilon_{\perp}^{2}}{2K_{2}(1/\epsilon_{\perp})},\ \eta\rightarrow\infty,\end{split} \tag{43}\] where the boundary term (first term in Eq. 107) cancels. Therefore, the mirror instability threshold in the ultra-relativistic limit is \[\beta_{\parallel,e}<\frac{8}{4\eta^{\lambda}-1-1.5\eta^{1/2+\lambda}\left[3+3 \sqrt{\eta-1}\tan^{-1}\sqrt{\eta-1}-1/\eta\right]}. \tag{108}\] In the non-relativistic limit, the instability threshold defined by Equations 102 and 107 reduces to \(\beta_{\perp,e}<1/(\eta-1)\), consistent with previous work. This is because for a non-relativistic distribution \(P_{\perp,e}/P_{\parallel,e}=\eta^{1}\), i.e., \(\lambda=1\). The contribution to \(\delta j_{y,e,3}\) from the boundary term in \(I_{1}\) (first term in Eq. 107) is \(3P_{\perp,e}/\eta^{2}\) and the contribution from \(I_{2}\) (second term in Eq. 107) is \(P_{\perp,e}(\eta-1)(8\eta^{2}+4\eta+3)/\eta^{2}\). Thus, \(4\eta-1-\mathcal{I}=8\eta(\eta-1)\) in a non-relativistic limit. In the case of non-relativistic anisotropic ions with anisotropy parameter \(\eta_{i}\), the same derivation leads to a threshold \[\beta_{\perp,e}<\frac{8\eta^{\lambda}}{4\eta^{\lambda}-1-\mathcal{I}+8\frac{ P_{\parallel,i}}{P_{\parallel,e}}\eta_{i}(\eta_{i}-1)}. \tag{109}\] Therefore, the threshold condition is defined by the ions as \(\beta_{\perp,i}<1/(\eta_{i}-1)\) when \(P_{\parallel,i}\gg P_{\parallel,e}\). The assumption of a zero parallel current \(j_{z}\) holds only when plasma-\(\beta\) of at least one species is \(\ll 1\). Consequently, when this assumption is invalid, a non-zero \(\delta E_{z}\) also leads to a more complex \(j_{y}\). In this case, both \(j_{y}\) and \(j_{z}\) contain terms proportional to \(\delta E_{y}\) and \(\delta E_{z}\) through Vlasov's equation. In a non-relativistic limit, the threshold is modified by an additional stabilizing term, which depends on plasma-\(\beta\) of all species [see, e.g., Hall (1979); Hellinger (2007)]. A similar stabilizing relation can be obtained in a relativistic limit. 
The dispersion relation in the long-wavelength limit with \(\Omega_{e}\gg\omega\) and \(\Omega_{e}\gg kv\) is then defined by the following two inseparable equations: \[\frac{4\pi}{c^{2}}\omega\delta j_{y}=i\left(\frac{\omega^{2}}{c^{2}}-k^{2} \right)\delta E_{y},\ \frac{4\pi}{c^{2}}\omega\delta j_{z}=i\left(\frac{\omega^{2}}{c^{2}}-k_{ \perp}^{2}\right)\delta E_{z}, \tag{110}\] which is equivalent to writing it in terms of plasma dielectric tensor \(\mathcal{E}_{\alpha\beta}\): \[\mathcal{E}_{22}-\frac{\mathcal{E}_{23}\mathcal{E}_{32}}{\mathcal{E}_{33}- \frac{k_{\perp}^{2}\,c^{2}}{\omega^{2}}}=\frac{k^{2}c^{2}}{\omega^{2}}, \tag{111}\] where \(\mathcal{E}_{22}\) has already been calculated as the current response along \(\hat{y}\) due to \(\delta E_{y}\): \[\mathcal{E}_{22}=1-\frac{ik_{\perp}^{2}}{\omega k_{\parallel}^{2}\sigma_{i}} \mathcal{J}+\frac{\pi c^{2}k_{\perp}^{2}}{B_{0}^{2}\omega^{2}}P_{\parallel,e}( 4\eta-1-\mathcal{I}). \tag{112}\] The general relations for relevant dielectric tensor components are: \[\mathcal{E}_{23}^{s} =-\mathcal{E}_{32}^{s}=\frac{4\pi iq_{s}^{2}}{\omega}\int p_{ \perp}v_{\perp}v_{\parallel}dp_{\perp}dp_{\parallel}\sum_{n=-\infty}^{+\infty }\frac{1}{\omega-k_{\parallel}v_{\parallel}-n\Omega_{s}}\left[\frac{1}{v_{ \perp}}\frac{\partial f_{s}}{\partial p_{\perp}}+\frac{k_{\parallel}}{\omega }\left(\frac{\partial f_{s}}{\partial p_{\parallel}}-\frac{v_{\parallel}}{v_{ \perp}}\frac{\partial f_{s}}{\partial p_{\perp}}\right)\right]J_{n}\left(\frac {k_{\perp}v_{\perp}}{\Omega_{s}}\right)J_{n}^{\prime}\left(\frac{k_{\perp}v_{ \perp}}{\Omega_{s}}\right)\] \[\mathcal{E}_{33}^{s} =\frac{4\pi q_{s}^{2}}{\omega}\int p_{\perp}v_{\parallel}dp_{ \perp}dp_{\parallel}\sum_{n=-\infty}^{+\infty}\frac{1}{\omega-k_{\parallel}v_ {\parallel}-n\Omega_{s}}\left[\frac{\partial f_{s}}{\partial p_{\parallel}}- \frac{n\Omega_{s}}{\omega}\left(\frac{\partial f_{s}}{\partial p_{\parallel}}- \frac{v_{\parallel}}{v_{\perp}}\frac{\partial f_{s}}{\partial p_{\perp}} \right)\right]J_{n}^{2}\left(\frac{k_{\perp}v_{\perp}}{\Omega_{s}}\right), \tag{113}\] where \(\mathcal{E}_{\alpha\beta}=\delta_{\alpha\beta}+\sum_{s}\mathcal{E}_{\alpha \beta}^{s}\). 
Keeping the leading terms of order \(\Omega_{s}^{-1}\) for \(\delta j_{z,\delta E_{y}}\) and \(j_{y,\delta E_{z}}\) and terms of order \(\Omega_{s}^{0}\) for \(\delta j_{z,\delta E_{z}}\) \[\mathcal{E}_{23}^{s} =-\mathcal{E}_{32}^{s}\] \[=-2i\pi\frac{k_{\perp}k_{\parallel}}{\omega^{2}}\frac{q_{s}cm_{s}} {B_{0}}\int dp_{\perp}dp_{\parallel}\frac{f_{y}p_{\perp}^{2}}{\omega-k_{ \parallel}v_{\parallel}}\left(\gamma^{2}-2\frac{p_{\parallel}^{2}}{m_{s}^{2}c^ {2}}+\frac{k_{\parallel}v_{\parallel}}{\omega}\frac{p_{\parallel}^{2}}{m_{s}^{2 }c^{2}}\right), \tag{114}\] \[\mathcal{E}_{33}^{s} =\frac{4\pi q_{s}^{2}}{\omega}\int dp_{\perp}dp_{\parallel} \frac{p_{\perp}v_{\parallel}}{\omega-k_{\parallel}v_{\parallel}}\frac{ \partial f_{s}}{\partial p_{\parallel}}=-\frac{4\pi e^{2}}{m_{s}}\int dp_{ \perp}dp_{\parallel}\frac{p_{\perp}f_{s}}{\gamma^{2}(\omega-k_{\parallel}v_{ \parallel})^{2}}\left(\gamma-p_{\parallel}\frac{\partial\gamma}{\partial p_{ \parallel}}\right)\] \[=-\frac{4\pi q_{s}^{2}}{k_{\parallel}^{2}m_{s}}\int dp_{\perp}dp_{ \parallel}\frac{p_{\perp}f_{s}}{\gamma^{3}(\omega/k_{\parallel}-v_{\parallel})^ {2}}\left(1+\frac{p_{\perp}^{2}}{m^{2}c^{2}}\right).\] In a non-relativistic limit for an isotropic distribution, this reduces to: \[\begin{split}&\mathcal{E}_{23}^{s}=-\frac{2\pi k_{\perp}}{\omega k_{ \parallel}}\frac{q_{s}c}{B_{0}}\int\frac{dp_{\perp}dp_{\parallel}p_{\perp}^{2} v_{\perp}f_{s}}{(\omega/k_{\parallel}-v_{\parallel})^{2}},\\ &\mathcal{E}_{33}^{s}=-\frac{4\pi e^{2}}{m_{s}k_{\parallel}^{2}} \int\frac{dp_{\perp}dp_{\parallel}p_{\perp}f_{s}}{(\omega/k_{\parallel}-v_{ \parallel})^{2}}\approx\frac{\omega_{p,s}^{2}}{v_{th,s}^{2}k_{\parallel}^{2}} \left(1+i\sqrt{\frac{\pi}{2}}\frac{\omega}{k_{\parallel}v_{th,s}}\right).\end{split}\] (B51) In the dispersion relation, the imaginary term in \(\mathcal{E}_{33}^{s}\) above will group with other imaginary terms in \(\mathcal{E}_{22}\) as a coefficient in front of the growth rate \(-i\omega\). Thus, the threshold condition, considering \(k_{\parallel}v_{\parallel}/\omega\gg 1\) and \(k_{\perp}\gg k_{\parallel}\), to leading order in \(\Omega_{s}\), is \[\beta_{e,\parallel}\frac{4\eta-1-\mathcal{I}}{8}+\beta_{\parallel,i}\eta_{i}( \eta_{i}-1)>1-\frac{\pi}{B_{0}^{2}}\frac{\left(\sum_{s}\frac{q_{s}}{m_{s}} \int p_{\perp}dp_{\parallel}\frac{f_{s}p_{\perp}^{2}(\gamma^{2}-2p_{\parallel }^{2}/m_{s}^{2}c^{2})}{\gamma^{4}(\omega/k_{\parallel}-v_{\parallel})^{2}} \right)^{2}}{\sum_{s}\frac{q_{s}^{2}}{m_{s}}\int p_{\perp}dp_{\parallel}\frac{f_ {s}(1+p_{\perp}^{2}/m_{s}^{2}c^{2})}{\gamma^{3}(\omega/k_{\parallel}-v_{ \parallel})^{2}}}.\] (B52) In Figure 14 we show a comparison of the non-relativistic mirror threshold \(\beta_{\parallel,e,\text{nr}}=1/(\eta-1)\) (thick black line) and the numerically calculated relativistic electron mirror threshold from Equations B41 and B42. In panel (a), \(\eta\) is shown as a function of \(\beta_{\parallel,e}\) at different temperatures. Panel (b) shows the ratio of the relativistic threshold value \(\beta_{\parallel,e}\) and the non-relativistic threshold \(\beta_{\parallel,e,\text{nr}}\) as a function of \(\eta\). Deviations are small for \(\epsilon_{\perp}\lesssim 0.1\) and small values of anisotropy parameter \(\eta\lesssim 10\). At high temperatures, the numerical solution is well approximated by the ultra-relativistic limit \(\beta_{\parallel,e,\text{nr}}\) given by Equation B44. We show their ratio, \(\beta_{\parallel,e,\text{nr}}/\beta_{\parallel,e}\), as a function of \(\eta\) in panel (c). 
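For readers who want to reproduce the comparison shown in Figure 14, a minimal numerical sketch of the integral \(\mathcal{I}\) entering the relativistic mirror threshold (Equation B42) is given below (illustrative Python assuming scipy is available; \(\lambda(\epsilon_{\perp})\) is taken from the fit quoted in Appendix C), together with the ultra-relativistic closed form quoted in the text, against which the quadrature can be checked.

```python
import numpy as np
from scipy.integrate import quad, dblquad
from scipy.special import kv

def lam_fit(eps_perp):
    # fit for lambda(eps_perp) in P_perp/P_par = eta^lambda (Appendix C, Eq. C58)
    return -0.08 * np.tanh(1.5 * (np.log10(eps_perp) + 0.5)) + 0.92

def mirror_integral(eta, eps_perp):
    """Direct quadrature of the integral I appearing in the mirror threshold (Eq. B42)."""
    a = lambda g: g / eps_perp
    b = lambda g: (1.0 - 1.0 / g ** 2) * (eta - 1.0)
    bnd, _ = quad(lambda g: (g * g - 1.0) ** 1.5
                  * np.exp(-a(g) * np.sqrt(1.0 + b(g))), 1.0, np.inf)
    dbl, _ = dblquad(lambda mu, g: (g * g - 1.0) ** 1.5 * a(g) * b(g)
                     * np.exp(-a(g) * np.sqrt(1.0 + b(g) * mu * mu))
                     / np.sqrt(1.0 + b(g) * mu * mu),
                     1.0, np.inf, 0.0, 1.0)
    pref = eta ** (0.5 + lam_fit(eps_perp)) / (eps_perp ** 2 * kv(2, 1.0 / eps_perp))
    return pref * (bnd + dbl)

def mirror_integral_ur(eta, eps_perp):
    """Ultra-relativistic closed form of I (boundary term 2/eta^2 plus the bracket)."""
    pref = 3.0 * eta ** (0.5 + lam_fit(eps_perp)) * eps_perp ** 2 / kv(2, 1.0 / eps_perp)
    q = np.sqrt(eta - 1.0)
    return pref * (2.0 / eta ** 2 + 3.0 - 1.0 / eta - 2.0 / eta ** 2
                   + 3.0 * q * np.arctan(q))

# cross-check at a relativistic temperature: the two evaluations should agree closely
print(mirror_integral(2.0, 10.0), mirror_integral_ur(2.0, 10.0))
```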
Due to the large uncertainties in the electron anisotropy in accretion flows, and since we limit the anisotropy \(T_{\perp,e}/T_{\parallel,e}\) to be \(\leq 10\), where the non-relativistic and relativistic mirror thresholds are similar, we chose to use the analytically simple non-relativistic mirror threshold for our application to BH images in Section 3.

Figure 14: (a): Relativistic mirror instability thresholds (anisotropy \(\eta\) as a function of \(\beta_{\parallel,e}\)) defined by Equations B41-B42 calculated for different \(\epsilon_{\perp}\) represented by different colors from darkest (\(\epsilon_{\perp}=0.01\)) to brightest (\(\epsilon_{\perp}=100\)). Dotted black line represents a non-relativistic limit. (b): Ratio of relativistic mirror instability threshold value \(\beta_{\parallel,e}\) and its non-relativistic limit \(\beta_{\parallel,e,\text{nr}}\) as a function of \(\eta\). (c): Ratio of relativistic mirror instability threshold value \(\beta_{\parallel,e}\) and its ultra-relativistic limit \(\beta_{\parallel,e,\text{nr}}\), defined by Equation B44, as a function of \(\eta\).

### Parallel firehose instability

To calculate the relativistic firehose threshold, we linearize Equations B15-B17 for \(\mathbf{k}=k\hat{z}\), \(\delta\mathbf{B}=\delta B_{y}\hat{y}\), and \(\delta\mathbf{E}=\delta E_{x}\hat{x}\) [see, e.g., Barnes & Scargle (1973)]. The resulting equations for the ion and relativistic electron currents are: \[\begin{split}\delta j_{x,e}&=-\frac{i\pi\delta E_{x}}{B_{0}^{2}c^{2}}\int dpp^{3}vd\mu S(\mu)(kc\mu-\omega)\left(p\frac{\partial f_{e}}{\partial p}-\mu\frac{\partial f_{e}}{\partial\mu}+\frac{kc}{\omega}\frac{\partial f_{e}}{\partial\mu}\right),\\ \delta j_{x,i}&=-\frac{i\pi\delta E_{x}}{B_{0}^{2}}\int dpd\mu p^{3}\sqrt{m_{i}^{2}+(p/c)^{2}}S(\mu)(kv\mu-\omega)\frac{\partial f_{i}}{\partial p}=-4i\omega\pi m_{i}c^{2}\frac{\delta E_{x}}{B_{0}^{2}}\int dpp^{2}\frac{4(p/m_{i}c)^{2}/3+1}{\sqrt{(p/m_{i}c)^{2}+1}}f_{i}(p).\end{split}\] (B53) Solving for sub-relativistic isotropic ions with the distribution function B24 and dropping the terms with odd \(\mu\)-integrands in \(\delta j_{x,e}\), we find: \[\delta j_{x}=-\frac{i\omega\delta E_{x}}{4\pi\sigma_{i}}\left[1+\frac{5}{2}\epsilon_{i}+\frac{\pi}{nm_{i}c}\int dpd\mu p^{3}(3-\mu^{2})f_{e}(p,\mu)+\frac{1}{nm_{i}c^{2}}(P_{\parallel,e}-P_{\perp,e})\frac{c^{2}k^{2}}{\omega^{2}}\right].\] (B54) This results in the following dispersion relation \[\begin{split}\omega^{2}=k^{2}c^{2}\frac{\sigma_{i}-(P_{\parallel,e}-P_{\perp,e})/(nm_{i}c^{2})}{\mathcal{F}},\\ \mathcal{F}=\sigma_{i}+1+\frac{5}{2}\epsilon_{i}+\frac{2\pi}{nm_{i}c}\int dpd\mu p^{3}f_{e}(p,\mu)+\frac{P_{\perp,e}}{nm_{i}c^{2}}>0,\end{split}\] (B55) which gives the usual non-relativistic firehose threshold \[\frac{P_{\perp,e}}{P_{\parallel,e}}<1-\frac{2}{\beta_{\parallel,e}},\] (B56) where \(\beta_{\parallel,e}=8\pi P_{\parallel,e}/B_{0}^{2}\). Note that if the ions are anisotropic as well, the relevant firehose threshold becomes \(P_{\perp,e}+P_{\perp,i}<P_{\parallel,e}+P_{\parallel,i}-B^{2}/4\pi\). Thus if \(P_{\perp,i}>P_{\perp,e}\) the ion anisotropy will in general be more important than the electron anisotropy in setting stability to the fluid firehose instability. The calculation presented here focuses on the fluid parallel firehose instability. There are also resonant parallel and oblique firehose instabilities: Larmor-scale resonant instabilities destabilized by cyclotron interaction. 
The resonant instabilities typically have faster growth rates and somewhat lower anisotropy thresholds than the fluid firehose instability (Gary et al., 1998; Hellinger & Matsumoto, 2000). Calculations of electron-scale resonant firehose instabilities for relativistically hot electrons with \(T_{p}\gtrsim T_{e}\) would be valuable but we leave this to future work.

### Whistler instability

The electron whistler instability, first noted in Sudan (1963) and followed by a relativistic derivation (Sudan, 1965), is an instability of circularly polarized electron waves propagating along the magnetic field direction \(B_{0}\hat{z}\). Considering a wavevector \(k\) and fluctuating electric fields \(\delta E_{x}\) and \(\delta E_{y}\), the dispersion relation can be written as (Gladd, 1983) \[\frac{\epsilon_{\parallel}k^{2}c^{2}}{\beta_{\parallel,e}}-\frac{\epsilon_{\perp}\omega^{2}}{\beta_{\perp,e}}+\pi m_{e}^{2}c^{4}\Omega_{e,0}^{2}\int\frac{p_{\perp}^{2}v_{\perp}dp_{\perp}dp_{\parallel}}{kv_{\parallel}-(\omega-\Omega_{e})}\left[\frac{\partial f_{e}}{\partial p_{\perp}^{2}}(\omega-kv_{\parallel})+\frac{\partial f_{e}}{\partial p_{\parallel}^{2}}kv_{\parallel}\right]=0,\] (B57) where \(\epsilon_{\parallel}=T_{\parallel,e}/m_{e}c^{2}\), \(f_{e}\) is defined by Equation 7, \(p_{\parallel}\) and \(p_{\perp}\) are the relativistic parallel and perpendicular momenta, respectively, and \(\Omega_{e}\) and \(\Omega_{e,0}\) are the relativistic and non-relativistic electron cyclotron frequencies. The whistler instability, like the ion cyclotron instability and unlike the mirror and firehose instabilities considered in the previous section, typically does not have a formal threshold, but the growth rate becomes negligible for decreasing anisotropy. This dispersion relation can thus be solved numerically, at fixed \(\beta_{\perp,e}\) and varying \(\eta\), for the anisotropy at which a target growth rate is reached. The threshold for the relativistic whistler instability can be parameterized as (Lynn, 2014) \(P_{\perp,e}/P_{\parallel,e}=1+S(\epsilon_{\perp})/\beta_{\perp,e}^{\alpha}\), where \(S(\epsilon_{\perp})=0.265-0.165(1+\epsilon_{\perp}^{-1})^{-1}\) and \(\alpha=0.58-0.043\log_{10}\Gamma\), where \(\Gamma\sim 10^{-6}|eB/m_{e}c|\) is the assumed growth rate. Since \(S(\epsilon_{\perp})\) is a slowly varying, monotonic function of temperature, with \(S(\epsilon_{\perp})\approx 0.1-0.25\) for \(\epsilon_{\perp}=10^{-2}-10^{2}\), we choose to use \(S(\epsilon_{\perp}=1)=0.183\).

## Appendix C Anisotropy Model for GR Radiative Transfer

For our choice of the distribution function (Eq. 7), the ratio of perpendicular and parallel temperatures \(T_{\perp,e}/T_{\parallel,e}=\eta^{\lambda}\). The value of \(\lambda\) is in turn a function of temperature \(\epsilon_{\perp}\), which we show in Figure 15. The function \(\lambda(\epsilon_{\perp})\) can be well-approximated by \[\lambda=-0.08\tanh\left(1.5(\log_{10}\epsilon_{\perp}+0.5)\right)+0.92.\] (C58) In the non-relativistic limit, when \(\epsilon_{\perp}\ll 1\), this gives \(T_{\perp,e}/T_{\parallel,e}\approx\eta\), while in the ultra-relativistic limit, \(T_{\perp,e}/T_{\parallel,e}\approx\eta^{0.8}\). 
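A small Python sketch of the two fitting functions just quoted is given below (ours, for illustration; the form of \(S(\epsilon_{\perp})\) is written so that it reproduces the quoted value \(S(1)=0.183\)).

```python
import numpy as np

def lam_fit(eps_perp):
    """lambda(eps_perp) in T_perp/T_par = eta^lambda, Eq. (C58)."""
    return -0.08 * np.tanh(1.5 * (np.log10(eps_perp) + 0.5)) + 0.92

def whistler_anisotropy(beta_perp_e, eps_perp=1.0, growth_rate=1e-6):
    """Whistler marginal anisotropy T_perp/T_par = 1 + S/beta_perp^alpha,
    in the parameterization used here; growth_rate is in units of |eB/m_e c|."""
    S = 0.265 - 0.165 / (1.0 + 1.0 / eps_perp)     # gives S(1) = 0.183
    alpha = 0.58 - 0.043 * np.log10(growth_rate)   # = 0.838 for growth_rate = 1e-6
    return 1.0 + S / beta_perp_e ** alpha

print(lam_fit(0.01), lam_fit(100.0))   # ~1.0 (non-relativistic) vs ~0.84 (ultra-relativistic)
print(whistler_anisotropy(1.0))        # 1 + S at beta_perp,e = 1
```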
In our modeling of black hole accretion images we consider three limiting cases for the anisotropy of the distribution function \(T_{\perp,e}/T_{\parallel,e}\), intended to bracket the magnitude of the effect that an anisotropic distribution function can introduce: \[\begin{split}&\left(T_{\perp,e}/T_{\parallel,e}\right)_{\rm mirror}=1+1/\beta_{\perp,e},\\ &\left(T_{\perp,e}/T_{\parallel,e}\right)_{\rm whistler}=1+S/\beta_{\perp,e}^{\alpha},\\ &\left(T_{\perp,e}/T_{\parallel,e}\right)_{\rm isotropic}\equiv 1,\\ &\left(T_{\perp,e}/T_{\parallel,e}\right)_{\rm firehose}=1-2/\beta_{\parallel,e},\end{split}\] (C59) where we take \(S=0.183\) and \(\alpha=0.838\) as in Appendix B.3. Since the firehose threshold becomes negative at small electron \(\beta_{\parallel,e}\), we choose to set the threshold to a constant value of \(T_{\perp,e}/T_{\parallel,e}=0.1\) at low \(\beta_{\parallel,e}\). This is motivated by local simulations (Riquelme et al., 2015). Likewise, for the mirror instability, we limit \(T_{\perp,e}/T_{\parallel,e}<10\). In reality, the temperature anisotropy at low \(\beta\) will depend on the heating, expansion and contraction of the plasma, which is what drives the temperature anisotropy in the first place. It is useful to re-express Equations C59 in terms of the total electron temperature \[T_{e}=\frac{1}{3}(2T_{\perp,e}+T_{\parallel,e}).\] (C60) Using Equation C60, Equations C59 for the instability thresholds can be rewritten as \[\begin{split}\beta_{\perp,e,\text{mirror}}&=\frac{\beta_{e}}{2}-\frac{1}{3}+\frac{1}{2}\sqrt{\frac{4}{9}+\frac{8}{3}\beta_{e}+\beta_{e}^{2}},\\ \beta_{\parallel,e,\text{firehose}}&=\beta_{e}+\frac{4}{3},\\ \beta_{\perp,e,\text{whistler}}&=\begin{cases}0.141\beta_{e}^{3}-0.26\beta_{e}^{2}+1.171\beta_{e}+0.005,\ \beta_{e}<1\\ 1.054\beta_{e}+0.012,\ \beta_{e}>1\end{cases}\quad,\end{split}\] (C61) where \(\beta_{\parallel,e,\text{firehose}}\) and \(\beta_{\perp,e,\text{mirror}}\) are exact solutions and \(\beta_{\perp,e,\text{whistler}}\) is a polynomial fit to a numerical solution with growth rate \(\Gamma\sim 10^{-6}|eB/m_{e}c|\) and \(\epsilon_{\perp}=1\). The thresholds in Equation C61 can then be used in Equation C59, thus providing expressions for the threshold temperature anisotropy in terms of \(\beta_{e}\). This is a variable accessible to a simulation that does not evolve temperature anisotropy, such as those that we used in Section 3. The threshold conditions are shown in Fig. 16 as a function of electron \(\beta_{e}\), which is extracted from the MHD plasma-\(\beta_{\text{th}}\) via \(\beta_{e}=2\beta_{\text{th}}/(R+1)\).

## Appendix D GRMHD Simulations

The GRMHD simulations used in Section 3 were performed using the publicly available code Athena++ in spherical Kerr-Schild coordinates with a logarithmically stretched grid in the radial direction \(r\). The setup is identical to White et al. (2019), with an outer radius of \(1000r_{g}\) and the inner radius inside the horizon. The grid is refined, with the level 0 grid being \(N_{r}\times N_{\theta}\times N_{\phi}=64\times 32\times 64\). A total of 3 refinement levels are concentrated around the midplane, \(\theta=\pi/2\), resulting in an effective resolution of \(512\times 256\times 512\) in \(r\), \(\theta\), and \(\phi\). We initialize a Fishbone & Moncrief (1976) torus with a purely poloidal magnetic field and a mean plasma-\(\beta\) of 100. We study two different spin values of the BH: \(a=0.98\) and 0.5. 
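The anisotropy prescription of Appendix C is applied to these simulations in post-processing. A minimal Python sketch of Equations C59 and C61, including the caps at 10 and 0.1 discussed above (ours; names are illustrative), reads:

```python
import numpy as np

def anisotropy_limits(beta_e, S=0.183, alpha=0.838):
    """T_perp,e/T_par,e at the mirror, whistler and firehose thresholds as a
    function of the total electron beta_e (Eqs. C59 and C61)."""
    # species betas at marginal stability (Eq. C61)
    b_perp_mirror = beta_e / 2.0 - 1.0 / 3.0 + 0.5 * np.sqrt(
        4.0 / 9.0 + 8.0 / 3.0 * beta_e + beta_e ** 2)
    b_par_firehose = beta_e + 4.0 / 3.0
    b_perp_whistler = np.where(beta_e < 1.0,
                               0.141 * beta_e ** 3 - 0.26 * beta_e ** 2
                               + 1.171 * beta_e + 0.005,
                               1.054 * beta_e + 0.012)
    mirror = np.minimum(1.0 + 1.0 / b_perp_mirror, 10.0)    # cap at 10
    whistler = 1.0 + S / b_perp_whistler ** alpha
    firehose = np.maximum(1.0 - 2.0 / b_par_firehose, 0.1)  # floor at 0.1
    return mirror, whistler, firehose

# beta_e follows from the simulation plasma beta and R = T_i/T_e: beta_e = 2*beta_th/(R+1)
print(anisotropy_limits(np.array([0.1, 1.0, 10.0])))
```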
Each of the two simulations is run to a steady state and through several eruption events, for a total simulation time of more than \(15000r_{g}/c\). Figure 17 shows the time evolution of the accretion rate \(\dot{M}\) in code units (a), the magnetic flux \(\Phi=0.5\int d\theta d\phi\sqrt{-4\pi g}|B^{r}|\) through a hemisphere (b), and the dimensionless magnetic flux \(\phi_{\text{BH}}=\Phi/\sqrt{\dot{M}r_{g}^{2}c}\) (c), measured at \(2r_{g}\) as functions of time, starting from \(8000r_{g}/c\). Here \(g\) is the determinant of the spherical Kerr-Schild metric. Spins of 0.5 and 0.98 are shown by blue and black lines respectively. The time periods chosen for the GR radiative transfer in Section 3 (shown by shaded blue and grey regions for \(a=0.5\) and \(a=0.98\) respectively) are such that the accretion rate is almost constant and no magnetic flux eruptions occur. We have also performed the same analysis for different quiescent time periods and found no qualitative difference in the obtained results. The time interval we use to calculate average images is relatively short, but we do not analyze the time variability properties of our results, so this modest time interval is sufficient for our purposes.

Figure 15: Numerically calculated \(\lambda\) as a function of \(\epsilon_{\perp}\) for \(T_{\perp,e}/T_{\parallel,e}=\eta^{\lambda}\) for an anisotropic relativistic bi-Maxwellian distribution function. A simple fit for \(\lambda\) is given in Equation C58.
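As a final illustration of the diagnostics quoted above, a short Python sketch of the magnetic flux and its dimensionless version is given below (ours; the 2D array layout of the shell data and the accretion rate passed in as `mdot` are assumptions, not part of the Athena++ setup).

```python
import numpy as np

def dimensionless_flux(Br, sqrt_neg_g, dtheta, dphi, mdot, r_g=1.0, c=1.0):
    """Phi = 0.5 * int dtheta dphi sqrt(-4 pi g) |B^r| and
    phi_BH = Phi / sqrt(mdot r_g^2 c), evaluated on a 2D (theta, phi) shell
    (e.g. at r = 2 r_g); the factor 0.5 counts the flux of one polarity,
    i.e. the flux through one hemisphere."""
    Phi = 0.5 * np.sum(np.sqrt(4.0 * np.pi) * sqrt_neg_g * np.abs(Br)) * dtheta * dphi
    return Phi, Phi / np.sqrt(mdot * r_g ** 2 * c)
```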
2302.00148
Detecting entanglement of unknown states by violating the Clauser-Horne-Shimony-Holt inequality
Entangled states play a fundamental role in Quantum Mechanics and are at the core of many contemporary applications, such as quantum communication and quantum computing. Therefore, determining whether a state is entangled or not is an important task. Here, we propose a method to detect the entanglement of unknown two-qubit quantum states. Our method is based on the violation of the Clauser-Horne-Shimony-Holt inequality. This maximizes the value of the inequality even when it contains an unknown quantum state. The method iteratively generates local measurement settings that lead to increasing values of the inequality. We show by numerical simulations for pure and mixed states that our algorithm exceeds the classical limit of 2 after a few iterations.
J. Cortés-Vega, J. F. Barra, L. Pereira, A. Delgado
2023-01-31T23:49:55Z
http://arxiv.org/abs/2302.00148v1
# Detecting entanglement of unknown states by violating the Clauser-Horne-Shimony-Holt inequality ###### Abstract Entangled states play a fundamental role in Quantum Mechanics and are at the core of many contemporary applications, such as quantum communication and quantum computing. Therefore, determining whether a state is entangled or not is an important task. Here, we propose a method to detect the entanglement of unknown two-qubit quantum states. Our method is based on the violation of the Clauser-Horne-Shimony-Holt inequality. This maximizes the value of the inequality even when it contains an unknown quantum state. The method iteratively generates local measurement settings that lead to increasing values of the inequality. We show by numerical simulations for pure and mixed states that our algorithm exceeds the classical limit of 2 after a few iterations. pacs: 03.67.-a, 03.65.-w, 02.60.Pn ## I Introduction Quantum mechanics predicts the existence of quantum states of composite systems that cannot be written as products of states of their individual components [1]. These are the so-called entangled states. Today, these states play a central role in quantum information theory [2; 3] and in many applications, such as, for instance, quantum cryptography [4], quantum teleportation [5; 6], frequency standards improvement [7; 8; 9], one-way quantum computing [10], clock synchronization [11], and entanglement-assisted orientation in space [12], among many others. Interestingly, entangled states play a key role in the argument put forward by Einstein, Podolsky, and Rosen [13]. This was aimed at ascribing objective values to measurable quantities, that is, values that exist prior to and independently of measurements. Bell's inequality [14] shows that precisely the existence of entangled states precludes such a conception of reality. In view of the foundational significance of entangled states and their many applications, the theoretical and experimental characterization and detection of entangled states are important research subjects. One of the first criteria employed to study the entanglement of quantum states is the violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality [15; 16], which is the generalization of Bell's inequality to two observers, each having the choice of two measurement settings with two outcomes. In this scenario, the violation of the CHSH inequality indicates the presence of entanglement. This approach has also been studied in the context of the theory of entanglement witnesses [17; 18]. These are observables with positive expectation values with respect to the complete set of separable states that for at least one entangled state provide a negative expectation value. Thus, a negative expectation value signals the presence of entanglement. It has been shown that the CHSH inequality can be related to an entanglement witness [18; 19]. Here, we study the detection of entanglement of unknown states via the violation of the CHSH inequality. Since the majority of entanglement measures and entanglement detectors are based on knowledge of the quantum state, the unknown character of the state increases the difficulty of the problem. The presence of unknown quantum states is common in quantum communication [20; 21; 22] and quantum computing [23; 24; 25], where a target entangled state is prepared, but it is modified by the action of the environment.
Entanglement detection of unknown quantum states has been previously studied from the point of view of quantum tomography [26; 27] by means of an adaptive scheme [28; 29], employing a succession of measurements of witness operators [30; 31], via the measurement of the energy observable [32], via local parity measurements on two-fold copies of the unknown state [33], series of local random measurements from which entanglement witnesses are constructed [34], and variational determination of geometrical entanglement [35], among many others. We follow a different approach. For a given known state, the maximal violation of the CHSH inequality is obtained by maximizing the inequality onto the set of 4-tuples of dichotomic observables. This procedure is typically carried out by means of semidefinite programming (SDP) techniques. If the state is unknown, then the function to be optimized, that is, the target function, contains unknown fix parameters and SDP cannot be employed to find the measurements leading to the maximal violation. Analogously, the use of an entanglement witness also requires the knowledge about the state. To overcome this problem we employ a recently developed optimization algorithm [36], the Complex simultaneous perturbation stochastic approximation (CSPSA), which can handle functions with unknown parameters. CSPSA works natively within the field of the complex numbers. Thereby, no parameteri zation of the complex arguments onto the real numbers is necessary. Also, this algorithm has exhibited an improved convergence rate in certain applications such as, for instance, the estimation of unknown quantum pure states [37]. CSPSA uses a stochastic approximation of the complex Wirtinger gradient of the target function, that is, the function to be optimized, which requires the value of the target function at two different points in the optimization space. In the case at hand, these two values can be obtained experimentally, regardless of whether the state remains unknown. CSPSA iteratively generates a sequence of sets with four local measurement settings with increasing values of the CHSH function until reaching the highest possible violation of the inequality. We first study via numerical simulations the performance of the method here proposed when applied to unknown pure 2-qubit states. In this case, the maximal value achieved by the CHSH function depends on the Schmidt coefficient of the state. Thereby, the performance of the method can be compared with an analytical bound. We show that for the set formed by states that have the same set of local Schmidt bases, the method leads in tens of iterations to a value close to the maximum of the CHSH function for each value of the Schmidt coefficient. We also consider sets of states that have the same concurrence value but different local Schmidt bases. In this case, the method also approaches the corresponding maximum value of the CHSH inequality in tens of iterations. However, the higher the concurrence value, the fewer iterations are required for a violation of the CHSH inequality. Also, all states with the same concurrence value exhibit a very similar behavior of the CHSH function as a function of the number of iterations, that is, CSPSA produces results that are nearly independent of the particular set of local Schmidt bases. We also consider the average behavior of the method on the Hilbert space of two qubits. 
In this case, the method reaches a CHSH function value greater than 2 after 17 iterations for an ensemble size of \(10^{2}\). After 25 iterations the interquartile range is also above 2, which indicates that for 75% of the simulated states the method reached a violation of the CHSH inequality. A further increase of the ensemble size leads to a reduction in the number of iterations required to achieve a violation of the CHSH inequality. In order to study the accuracy achieved by our method we employ the squared error. We show that the mean and median squared error on the 2-qubit Hilbert space are nearly indistinguishable. After 25 iterations the mean square error achieves a value in the order of \(10^{-1}\) for an ensemble size of \(10^{2}\). A further increase of the ensemble size to \(10^{3}\) leads to a decrease in mean square error in the order of half order of magnitude. Thereafter, we study the case of two-qubit mixed states. Unlike the case of pure states, there is no known analytical formula for the maximum value of the CHSH function for an arbitrary mixed state. However, in the particular case of Werner states, that is, a maximally entangled state affected by white noise, it is possible to obtain the maximum value of the CHSH function in terms of the mixing parameter. We show that CSPSA is capable of achieving a value close to the maximum violation of the CHSH inequality for all Werner states. As the ensemble size increases the value of the function provided by CSPSA becomes closer to the maximal violation. Finally, we analyze the results achieved by CSPSA for unknown mixed states. For these states there is no analytical expression for maximal violation, so we calculate this value via SDP. After generating \(10^{6}\) density matrices, a subset of \(8\times 10^{3}\) density matrices that violate the CHSH inequality is identified. These states have a small value of the negativity, a well-known entanglement measure. Within this subset, the mean and median values of the CHSH function provided by CSPSA achieve a value close to the theoretical maximal violation after approximately 75 iterations. Our results show that the maximization of the CHSH function via the CSPSA method allows detecting the entanglement of unknown states, pure or mixed, with a high degree of accuracy. Furthermore, the highest value of the CHSH function can also be achieved. Our approach requires the ability to adapt local measurements, which are carried out on single copies of the unknown state. This can be implemented in various experimental platforms [38; 39; 40; 16; 41; 42]. We stress the fact that no _a priori_ information about the unknown state, such as purity, Schmidt coefficient, or Schmidt bases, has been employed to optimize the performance of CSPSA. ## II CHSH inequality and CSPSA optimization algorithm The target function to be optimized is the Clauser-Horne-Shimony-Holt function \(S\) defined by the expression [15] \[S(\mathbf{z},\mathbf{z}^{*}) = E(\mathbf{z}_{a},\mathbf{z}_{b})+E(\mathbf{z}_{a},\mathbf{z}_{b}^{\prime})+E( \mathbf{z}_{a}^{\prime},\mathbf{z}_{b}) \tag{1}\] \[- E(\mathbf{z}_{a}^{\prime},\mathbf{z}_{b}^{\prime}),\] where the expectation value \(E(\mathbf{z}_{a},\mathbf{z}_{b})\) is given by the average of the products of the outcomes of two locally performed dichotomic measurements \(A(\mathbf{z}_{a})\) and \(B(\mathbf{z}_{b})\) defined by the settings \(\mathbf{z}_{a}\) and \(\mathbf{z}_{b}\), respectively. 
The vector \(\mathbf{z}\) contains the settings of the four local measurements, that is, \(\mathbf{z}=(\mathbf{z}_{a},\mathbf{z}_{a}^{\prime},\mathbf{z}_{b},\mathbf{z}_{b}^{\prime})\). The CHSH inequality adopts the form \(|S|\leq 2\). A quantum mechanical dichotomic observable \(A(\mathbf{z}_{a})\) is defined as the one having \(\pm 1\) eigenvalues, that is, an observable with the spectral decomposition \[A(\mathbf{z}_{a})=|\psi(\mathbf{z}_{a})\rangle\langle\psi(\mathbf{z}_{a})|-|\psi^{\perp}( \mathbf{z}_{a})\rangle\langle\psi^{\perp}(\mathbf{z}_{a})|, \tag{2}\] where \(|\psi(\mathbf{z}_{a})\rangle\) is an arbitrary two-dimensional quantum state \[|\psi(\mathbf{z}_{a})\rangle=\frac{z_{a,1}|0\rangle+z_{a,2}|1\rangle}{\sqrt{|z_{a, 1}|^{2}+|z_{a,2}|^{2}}}. \tag{3}\] The state \(|\psi^{\perp}(\mathbf{z}_{a})\rangle\) is orthogonal to \(|\psi(\mathbf{z}_{a})\rangle\) and the components \(z_{a,1}\) and \(z_{a,2}\) of the vector \(\mathbf{z}_{a}\) are complex numbers. Thereby, the expectation value \(E(\mathbf{z}_{a},\mathbf{z}_{b})\) is given by the expression \[E(\mathbf{z}_{a},\mathbf{z}_{b}) = Tr(\rho|\psi(\mathbf{z}_{a})\rangle\langle\psi(\mathbf{z}_{a})|\otimes| \psi(\mathbf{z}_{b})\rangle\langle\psi(\mathbf{z}_{b})|) \tag{4}\] \[+ Tr(\rho|\psi^{\perp}(\mathbf{z}_{a})\rangle\langle\psi^{\perp}(\mathbf{z }_{a})|\otimes|\psi^{\perp}(\mathbf{z}_{b})\rangle\langle\psi^{\perp}(\mathbf{z}_{b})|)\] \[- Tr(\rho|\psi(\mathbf{z}_{a})\rangle\langle\psi(\mathbf{z}_{a})|\otimes| \psi^{\perp}(\mathbf{z}_{b})\rangle\langle\psi^{\perp}(\mathbf{z}_{b})|)\] \[- Tr(\rho|\psi^{\perp}(\mathbf{z}_{a})\rangle\langle\psi^{\perp}(\mathbf{z }_{a})|\otimes|\psi(\mathbf{z}_{b})\rangle\langle\psi(\mathbf{z}_{b})|),\] where \(\rho\) is a fixed known two-qubit state. The problem of violating the CHSH inequality consists in finding a complex vector \(\mathbf{z}\) such that for a given known state \(\rho\) leads to a maximal value of \(|S(\mathbf{z},\mathbf{z}^{*})|\) larger than the classical bound of 2. This optimization problem can be solved by means of semidefinite programing or other numerical optimization techniques. However, when the state \(\rho\) entering in the function \(S\) is unknown, the standard approaches to the problem cannot be employed. The reason for this is that the function \(S\) and its derivatives cannot be evaluated. ``` Consider a state \(\rho\). This plays the role of the unknown state. Set initial guess \(\hat{\mathbf{z}}_{0}\), and gain coefficients \(a\), \(A\), \(s\), \(b\), and \(r\). for\(k=1,\ldots,k_{max}\)do Set \[a_{k}=\frac{a}{(k+1+A)^{s}},\quad c_{k}=\frac{b}{(k+1)^{r}}.\] Choose \(\Delta_{k,i}\) randomly in the set \(\{\pm 1,\pm i\}\). Calculate \(\hat{\mathbf{z}}_{k\pm}=\hat{\mathbf{z}}_{k}\pm c_{k}\mathbf{\Delta}_{k}\). Estimate from experimentally acquired data or numerically simulate the values \(S(\rho,\hat{\mathbf{z}}_{k\pm})\) considering an ensemble of \(N\) equally prepared pairs of qubits in the state \(\rho\). Estimate the gradient as \[\hat{g}_{k,i}=\frac{S(\rho,\hat{\mathbf{z}}_{k+})-S(\rho,\hat{\mathbf{z}}_{k-})}{2c_{k }\Delta_{k,i}}.\] Actualize the guess \(\hat{\mathbf{z}}_{k+1}=\hat{\mathbf{z}}_{k}+a_{k}\hat{\mathbf{g}}_{k}\). Normalize coefficients \(\hat{\mathbf{z}}_{k+1}\) ``` **Algorithm 1** CSPSA optimization of \(\text{S}(\rho,\mathbf{z})\) In order to overcome this problem, we resort to the recently introduced CSPSA [36] optimization algorithm for real-valued functions of complex arguments. 
This algorithm works natively on the field of the complex numbers, which make unnecessary the use of real parameterizations of the complex arguments. For a target function \(f(\mathbf{z},\mathbf{z}^{*}):\mathbb{C}^{n}\times\mathbb{C}^{n}\rightarrow\mathbb{R}\), CSPSA is defined by the iterative rule \[\hat{\mathbf{z}}_{k+1}=\hat{\mathbf{z}}_{k}+a_{k}\hat{\mathbf{g}}_{k}(\hat{\mathbf{z}}_{k},\hat {\mathbf{z}}_{k}^{*}), \tag{5}\] where \(a_{k}\) is a positive gain coefficient and \(\hat{\mathbf{z}}_{k}\) is the estimate of the maximizer \(\tilde{\mathbf{z}}\) of \(f(\mathbf{z},\mathbf{z}^{*})\) at the k-th iteration. The iteration starts from an initial guess \(\hat{\mathbf{z}}_{0}\), which is randomly chosen. The function \(\hat{\mathbf{g}}_{k}(\hat{\mathbf{z}}_{k},\hat{\mathbf{z}}_{k}^{*})\) is an estimator for the Wirtinger gradient [43] of \(f(\mathbf{z},\mathbf{z}^{*})\) whose components are defined by \[\hat{g}_{k,i}=\frac{f(\hat{\mathbf{z}}_{k+},\hat{\mathbf{z}}_{k+}^{*})+\epsilon_{k,+}-( f(\hat{\mathbf{z}}_{k-},\hat{\mathbf{z}}_{k-}^{*})+\epsilon_{k,-})}{2c_{k}\Delta_{k,i}^{*}}, \tag{6}\] with \[\hat{\mathbf{z}}_{k\pm}=\hat{\mathbf{z}}_{k}\pm c_{k}\mathbf{\Delta}_{k}, \tag{7}\] where \(c_{k}\) is a positive gain coefficient and \(\epsilon_{k,\pm}\) describes the presence of noise in the values of \(f(\hat{\mathbf{z}}_{k\pm},\hat{\mathbf{z}}_{k\pm}^{*})\). The components of the vector \(\mathbf{\Delta}_{k}\in\mathbb{C}^{n}\) are identically and independently distributed random variables in the set \(\{\pm 1,\pm i\}\). The gain coefficients \(a_{k}\) and \(c_{k}\) control the convergence of CSPSA and are chosen as \[a_{k}=\frac{a}{(k+1+A)^{s}},\;\;c_{k}=\frac{b}{(k+1)^{r}}. \tag{8}\] The values of \(a,A,s,b\) and \(r\) are adjusted to optimize the rate of convergence depending on the target function. We use the values: \(a=1.0\), \(b=0.25\), \(s=1.0\), \(r=1/6\), and \(A=0\). Two main properties of CSPSA are: (i) it converges asymptotically in mean to the maximizer \(\tilde{\mathbf{z}}\) of \(f(\mathbf{z},\mathbf{z}^{*})\) and (ii) \(\hat{\mathbf{g}}_{k}\) is an asymptotically unbiased estimator of the Wirtinger gradient. With proper conditions, these properties are maintained even in the presence of the noise terms \(\epsilon_{k,\pm}\) entering in Eq. (6). CSPSA is the generalization of the Simultaneous perturbation stochastic approach (SPSA) [44; 45] from the field of real numbers to the field of complex numbers. SPSA has been applied to the problem of estimating pure states [46; 47] and experimentally realized [48]. Thus, the application of CSPSA to the maximization of the CHSH function proceeds as follows: an initial guess \(\hat{\mathbf{z}}_{0}\) for the vector containing the measurement settings and a vector \(\mathbf{\Delta}_{0}\) are randomly generated. These two vectors are employed to calculate the vectors \(\hat{\mathbf{z}}_{0\pm}\) according to Eq. (7). Thereafter, the values \(S(\hat{\mathbf{z}}_{0\pm},\hat{\mathbf{z}}_{0\pm}^{*})\) of the CHSH function are obtained, which involves the realization of measurements on a finite ensemble of \(N\) copies of the unknown state \(\rho\). The values \(S(\hat{\mathbf{z}}_{0\pm},\hat{\mathbf{z}}_{0\pm}^{*})\) are then employed to calculate the estimator for the Wirtinger gradient \(\hat{\mathbf{g}}_{0}(\hat{\mathbf{z}}_{0},\hat{\mathbf{z}}_{0}^{*})\) using Eq. (6). Finally, a new estimate \(\hat{\mathbf{z}}_{1}\) for the vector of settings is obtained by means of Eq. (5). 
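To make the update rule concrete, the following is a minimal, noiseless sketch of Eqs. (1)-(8) in NumPy. It is our own illustration rather than the authors' implementation: here the CHSH function is evaluated exactly from \(\rho\), whereas in the actual protocol \(S(\hat{\mathbf{z}}_{k\pm},\hat{\mathbf{z}}_{k\pm}^{*})\) would be estimated from measurements on \(N\) copies of the unknown state. Function names are placeholders; the gain-coefficient values are those quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def dichotomic(z2):
    """A(z) = |psi(z)><psi(z)| - |psi_perp(z)><psi_perp(z)| of Eqs. (2)-(3)."""
    psi = z2 / np.linalg.norm(z2)
    proj = np.outer(psi, psi.conj())
    return proj - (np.eye(2) - proj)

def chsh(rho, z):
    """CHSH function S of Eq. (1); z stacks (z_a, z_a', z_b, z_b') as 8 complex numbers."""
    A, Ap, B, Bp = (dichotomic(z[2 * i:2 * i + 2]) for i in range(4))
    E = lambda X, Y: np.real(np.trace(rho @ np.kron(X, Y)))  # Eq. (4)
    return E(A, B) + E(A, Bp) + E(Ap, B) - E(Ap, Bp)

def cspsa_step(rho, z, k, a=1.0, A=0.0, s=1.0, b=0.25, r=1 / 6):
    """One CSPSA iteration, Eqs. (5)-(8), in the noiseless limit epsilon_{k,+-} = 0."""
    ak, ck = a / (k + 1 + A)**s, b / (k + 1)**r               # Eq. (8)
    Delta = rng.choice(np.array([1, -1, 1j, -1j]), size=z.size)
    g = (chsh(rho, z + ck * Delta) - chsh(rho, z - ck * Delta)) / (2 * ck * Delta.conj())
    z_next = z + ak * g                                       # Eq. (5)
    # Algorithm 1 also renormalizes z at each step; this is optional here since
    # S(z) is invariant under rescaling of each setting pair.
    return z_next

# toy run on a maximally entangled state, treated as unknown by the optimizer
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi.conj())
z = rng.standard_normal(8) + 1j * rng.standard_normal(8)      # random initial guess
for k in range(150):
    z = cspsa_step(rho, z, k)
print(chsh(rho, z))  # approaches the Tsirelson bound 2*sqrt(2)
```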
This process is iterated until a violation of the CHSH inequality is achieved or a predefined number of iterations is reached. Algorithm 1 shows a pseudocode for the optimization of the CHSH function via CSPSA. For polarization-encoded photonic qubits, for instance, each local measurement can be implemented through the interaction of the photon with a sequence of half- and quarter-wave plates followed by a polarizing beam splitter and single-photon detectors. In this case a setting vector is given by the rotation angles of the wave plates. Thereby, it is possible to implement any local measurement up to the angular resolution of the wave plates. It is possible to achieve a high degree of control in other experimental platforms, for instance in time-bin or energy-time encoded qubits, where local measurements can be implemented by introducing electronically controlled phase shifts. Thus, we will assume that the CHSH function can be measured for any value of the setting vector \(\mathbf{z}\). ## III Results A single run of CSPSA starts with the choice of an initial guess \(\mathbf{z}_{0}\) of the four local measurement bases and proceeds through the choice of the vector \(\mathbf{\Delta}_{k}\) at every iteration. Since there is no a priori information about the initial state, the initial guess for each of the local measurements, which are defined by Eqs. (2) and (3), is randomly chosen according to a Haar-uniform distribution. The choice of \(\mathbf{\Delta}_{k}\) is equally random. Thereby, CSPSA is an intrinsically stochastic optimization algorithm. A third source of randomness is the value of the CHSH function. This is obtained by means of probabilities that are inferred from local measurements made on a set of equally prepared copies of the unknown state. Since the size \(N\) of the ensemble is finite, the inferred probabilities are affected by finite-statistics noise. Thereby, CSPSA exhibits three different sources of randomness and, consequently, each run of CSPSA will follow a different trajectory in the optimization space, that is, the space of all four setting vectors. Here, we report the results of numerical experiments for the cases of pure and mixed states considering the sources of randomness affecting the performance of the proposed method. To study the violation of the CHSH inequality with an unknown state \(\rho\), pure or mixed, we compute the expected value \(\bar{S}(\rho)\) by sampling a sufficiently large number of independent trajectories, each obtained through the optimization of \(S\) by CSPSA for \(\rho\), as \[\bar{S}(\rho)=\frac{1}{K}\sum_{\mathbf{z}_{0},\{\mathbf{\Delta}_{1},\dots,\mathbf{\Delta} _{k}\}}S(\rho,\mathbf{z}_{0},\{\mathbf{\Delta}_{1},\dots,\mathbf{\Delta}_{k}\}), \tag{9}\] where \(S(\rho,\mathbf{z}_{0},\{\mathbf{\Delta}_{1},\dots,\mathbf{\Delta}_{k}\})\) is the value of the CHSH function evaluated on a particular trajectory generated by a single run of CSPSA and \(K\) is the total number of simulated trajectories. \(S(\rho,\mathbf{z}_{0},\{\mathbf{\Delta}_{1},\dots,\mathbf{\Delta}_{k}\})\) depends on the unknown state \(\rho\), the set \(\mathbf{z}_{0}\) of complex numbers that defines the initial guess for the four local measurements, and the particular sequence of choices \(\{\mathbf{\Delta}_{1},\dots,\mathbf{\Delta}_{k}\}\). The mean \(\bar{S}(\rho)\) will be studied as a function of the number \(k\) of iterations for a fixed ensemble size \(N\).
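The finite-statistics noise mentioned above can be modeled along the following lines, again a schematic of ours rather than the authors' code, assuming ideal projective measurements and illustrative names: each correlator of Eq. (4) is replaced by a multinomial sample of the four joint outcomes over \(N\) copies.

```python
import numpy as np

def estimate_E(rho, za, zb, N, rng=np.random.default_rng(1)):
    """Finite-ensemble estimate of E(z_a, z_b), Eq. (4), from N simulated copies."""
    def basis(z2):
        psi = z2 / np.linalg.norm(z2)
        return psi, np.array([-np.conj(psi[1]), np.conj(psi[0])])
    (ua, va), (ub, vb) = basis(za), basis(zb)
    # product states of the four joint outcomes and the corresponding outcome products
    joint = [np.kron(ua, ub), np.kron(va, vb), np.kron(ua, vb), np.kron(va, ub)]
    signs = np.array([+1, +1, -1, -1])
    probs = np.array([np.real(np.vdot(w, rho @ w)) for w in joint])
    counts = rng.multinomial(N, probs / probs.sum())  # simulated measurement record
    return np.sum(signs * counts) / N
```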
Since we are interested in the overall behavior of the algorithm for unknown states, we calculate the mean \(\bar{S}_{C}\) of \(\bar{S}(\rho)\) in a set \(\Omega_{C}\), that is, \[\bar{S}_{C}=\frac{1}{M}\sum_{\rho\in\Omega_{C}}\bar{S}(\rho), \tag{10}\] where \(M\) is the number of states in \(\Omega_{C}\) and \(C\) is a parameter that characterizes the states in the set. Alternatively, we calculate the median \(\bar{S}_{C}\) of \(\bar{S}(\rho)\) in the set \(\Omega_{C}\) and the interquartile range. This is done to determine whether the distribution of \(\bar{S}(\rho)\) in \(\Omega_{C}\) exhibits a symmetric distribution or not and the possible existence of outliers. ### Unknown pure states We start our analysis of the proposed algorithm by considering the violation of the CHSH inequality for the set \(\Omega_{\lambda}\) of two-qubit pure states defined by the Schmidt decomposition \[|\psi(\lambda)\rangle=\sqrt{\lambda}|0\rangle_{1}|0\rangle_{2}+\sqrt{1-\lambda} |1\rangle_{1}|1\rangle_{2}, \tag{11}\] where \(\lambda\in[0,1/2]\) is the Schmidt coefficient and \(\{|0\rangle_{1},|1\rangle_{1}\}\) and \(\{|0\rangle_{2},|1\rangle_{2}\}\) are fixed local Schmidt bases of each qubit. States in \(\Omega_{\lambda}\) lead to a value of the function \(S\) given by \[S(\lambda)=2\sqrt{1+4\lambda(1-\lambda)}. \tag{12}\] In Fig. 1 we show \(\bar{S}(\rho_{\lambda})\) for \(\rho_{\lambda}=|\psi_{\lambda}\rangle\langle\psi_{\lambda}|\) as a function of \(\lambda\) for \(N=10^{2}\) after 200 iterations and \(K=10^{4}\). Initial guesses for the set of four local observables are randomly chosen. In particular, information about the fixed bases in \(|\psi_{\lambda}\rangle\) has not been used to improve the performance Figure 1: CHSH function \(S(|\psi_{\lambda}\rangle)\) as a function of the Schmidt coefficient \(\lambda\) for two-qubit states with fixed local Schmidt bases. Continuos green line represents the theoretical prediction given by Eq. (12). Solid red circles (blue x’s) represent the mean \(\bar{S}(|\psi_{\lambda}\rangle)\) (median \(\tilde{S}(|\psi_{\lambda}\rangle)\)) of \(S(|\psi_{\lambda}\rangle)\) obtained via CSPSA considering \(10^{4}\) initial guesses for each state \(|\psi_{\lambda}\rangle\), 200 iterations, and an ensemble size \(N=10^{2}\). of CSPSA. As is apparent from Fig. 1, CSPSA provides mean and median of \(S(|\psi_{\lambda}\rangle)\) that closely resemble the theoretical prediction of Eq. (12) for any value of \(\lambda\). A much better agreement can be obtained by increasing the ensemble from \(N=10^{2}\) to \(N=10^{4}\), which is illustrated in Fig. 2. Next we analyze the case of pure states with a known value of the concurrence \(C\), which is given by the expression \[C(\lambda)=2\sqrt{\lambda}\sqrt{1-\lambda}. \tag{13}\] The local Schmidt bases of the state are unknown. In the simulations we choose a fixed value \(C\) of the concurrence, which in turn fixes the value of the Schmidt coefficient. The local Schmidt bases are randomly chosen. As in the previous simulations, the knowledge about the value of the concurrence is not employed to improve the performance of CSPSA. Figure 3 shows the behavior of \(\bar{S}_{C}\), which is the mean of \(\bar{S}(\rho)\) calculated on a set \(\Omega_{C}\) of pure states with a fixed value \(C\) of the concurrence, as a function of the number \(k\) of iterations for several values of \(C\). 
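Note that combining Eqs. (12) and (13), the maximal CHSH value of a pure two-qubit state can be written directly in terms of its concurrence, \[S=2\sqrt{1+C^{2}},\] so that all states in a given set \(\Omega_{C}\) share the same theoretical maximum, ranging from \(S=2\) for product states (\(C=0\)) to the Tsirelson bound \(2\sqrt{2}\) for maximally entangled states (\(C=1\)).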
Each set \(\Omega_{C}\) contains 100 states chosen according to a Haar-uniform distribution and \(\bar{S}(\rho)\) is calculated with \(10^{4}\) trajectories. Each one of the four local measurements is simulated considering an ensemble size of \(N=10^{2}\). According to Fig. 3, the quantity \(\bar{S}_{C}\) exhibits a fast increase of the value of the CHSH function within the first tens of iterations followed by a linear behavior, which asymptotically approaches the maximal value of the function \(S\) for the value \(C\) of the concurrence. The overall behavior of \(\bar{S}_{C}\) does not depend on the value of \(C\). Figure 4 displays the median \(\tilde{S}_{C}\) of \(\bar{S}(\rho)\) in \(\Omega_{C}\) as a function of the number of iterations for several values of the concurrence \(C\). Shaded areas represent the interquartile range. Monte Carlo experiments are carried out as in Fig. 3. As is apparent from this figure, the median exhibits the same overall behavior as the mean \(\bar{S}_{C}\). Mean and median reach after a few tens interations values that are nearly indistinguishable and contained within the interquartile range. This indicates that the stochasticity of CSPSA does not lead to outliers in the histogram of \(\bar{S}(\rho)\) for all simulated sets \(\Omega_{C}\). The interquartile range, which is a quartile-based measure of variability, decreases rapidly with the number of iteratio Figure 2: CHSH function \(S(|\psi_{\lambda}\rangle)\) as a function of the Schmidt coefficient \(\lambda\) for two-qubit states with fixed local Schmidt bases. Continues green line represents the theoretical prediction given by Eq. (12). Solid red circles (blue x’s) represent the mean \(\bar{S}(|\psi_{\lambda}\rangle)\) (median \(\bar{S}(|\psi_{\lambda}\rangle)\)) of \(S(|\psi_{\lambda}\rangle)\) obtained via CSPSA considering \(10^{4}\) initial guesses for each state \(|\psi_{\lambda}\rangle\), 200 iterations, and an ensemble size \(N=10^{4}\). Figure 3: Mean \(\bar{S}_{C}\) of \(\bar{S}(\rho)\) in \(\Omega_{C}\) as a function of the number of iterations for several values of the concurrence \(C\) in the interval \([0.1,1.0]\), from bottom to top. The mean \(\bar{S}(\rho)\) is calculated with \(10^{4}\) independent trajectories and each local measurement is simulated with an ensemble size \(N=10^{2}\). Upper and lower straight lines represent the values \(2\sqrt{2}\) and 2, correspondingly. Figure 4: Median of \(\bar{S}(\rho)\) in \(\Omega_{C}\) as a function of the number of iterations for several values of the concurrence \(C\) in the interval \([0.1,1.0]\), from bottom to top. The mean \(\bar{S}(\rho)\) is calculated with \(10^{4}\) independent trajectories and each local measurement is simulated with an ensemble size \(N=10^{2}\). Upper and lower straight lines represent the values \(2\sqrt{2}\) and 2, correspondingly. narrow fringe. This is an indication that the histogram of \(\tilde{S}(\rho)\) for a particular \(\Omega_{C}\) after a few tens iterations is highly concentrated around the mean. Thus, Figs. 3 and 4 clearly indicate that CSPSA can be employed to iteratively increase the value of the CHSH function for unknown pure states and detect entanglement. The greater the entanglement of the unknown state, the fewer iterations will be required to obtain a violation of the CHSH inequality. Furthermore, approximately 70 iterations are necessary to reach a value of the CHSH function close to the maximal violation allowed by quantum mechanics. Figs. 
5 and 6 depicts the mean \(\bar{S}_{C}\) and the median \(\tilde{S}_{C}\) of \(\tilde{S}(\rho)\) in \(\Omega_{C}\), correspondingly. In this case local measurements are simulated with an ensemble size of \(N=10^{4}\), that is, a quadratic increase with respect to previous simulations. As is apparent from Figs. 5 and 6, the overall behavior remains unchanged with respect to Figs. 3 and 4. In particular, both values of ensemble size, \(N=10^{2}\) and \(N=10^{4}\), show small differences in the asymptotic linear regime. For instance, for weakly entangled states, that is, \(C=0.1\), after the total of iterations, in Fig. 4 CSPSA is close to 2 but below. In Fig. 6, CSPSA is slightly above 2. Similar differences can be observed for other values of \(C\). Furthermore, a small reduction in the number of iterations required to violated the CHSH inequality can be observed. This reduction depends on the initial amount of entanglement of the unknown state. Also, the increase in \(N\) leads to narrower interquartile ranges. This is more clearly illustrated in Fig. 7, which shows the median \(\tilde{S}_{C}\) of \(S(\rho)\) in \(\Omega_{C}\) for \(C=0.5\) and \(C=0.9\) for three values of ensemble size \(N=10^{2},10^{3},10^{4}\). The interquartile range is also depicted. As is apparent from Fig. 7, CSPSA provides very similar values of \(\tilde{S}_{C}\) almost independently of the size of the ensemble employed. However, in the regime of a few tens of iterations, \(N=10^{2}\) leads to lower values of \(\tilde{S}_{C}\), while \(N=10^{3}\) and \(10^{4}\) lead to very similar values of \(\tilde{S}_{C}\), which are higher than in the case \(N=10^{2}\). This has for consequence that higher values of \(N\) lead to a decrease in the number of iterations required to observe a violation of the CHSH inequality, but this improvement is saturated for an enough large sample size. This later effect is analyzed with the help of Fig. 8 that displays the number of iterations \(k_{S>2}\) required to obtain Figure 5: Mean \(\tilde{S}_{C}\) of \(\tilde{S}(\rho)\) in \(\Omega_{C}\) as a function of the number of iterations for several values of the concurrence \(C\) in the interval \([0.1,1.0]\), from bottom to top. The mean \(\bar{S}(\rho)\) is calculated with \(10^{4}\) independent trajectories and each local measurement is simulated with an ensemble size \(N=10^{4}\). Upper and lower straight lines represent the values \(2\sqrt{2}\) and \(2\), correspondingly. Figure 6: Median of \(\bar{S}(\rho)\) in \(\Omega_{C}\) as a function of the number of iterations for several values of the concurrence \(C\) in the interval \([0.1,1.0]\), from bottom to top. The mean \(\bar{S}(\rho)\) is calculated with \(10^{4}\) independent trajectories and each local measurement is simulated with an ensemble size \(N=10^{4}\). Upper and lower straight lines represent the values \(2\sqrt{2}\) and \(2\), correspondingly. Figure 7: Median \(\tilde{S}_{C}\) of \(\tilde{S}(\rho)\) in \(\Omega_{C}\) as a function of the number of iterations for \(C=0.5\) and \(C=0.9\). Each local measurement is simulated with an ensemble size \(N=10^{4},10^{3},10^{2}\). The median \(\tilde{S}(\rho)\) for each value of \(C\) is calculated with \(10^{4}\) independent trajectories. Upper and lower straight lines represent the values \(2\sqrt{2}\) and \(2\), correspondingly. a violation of the inequality with \(75\%\) of the states generated for a given value of \(C\) and with \(N=10^{2},10^{3},10^{4}\). 
Here, we observe that \(N=10^{4}\) and \(N=10^{3}\) lead to a very similar behavior while \(N=10^{2}\) requires the largest number of iterations to reach a violation of the CHSH inequality. Also, the lower the concurrency value, the greater the number of iterations required for the violation. In fact, Fig. 8 suggests that \(k_{S>2}\) decreases exponentially with \(C\). This figure also illustrates the interplay between \(k_{S>2}\) and the total ensemble size \(N_{S>2}\) required for violating the CHSH inequality. For example, in the case of \(C=0.1\) and \(N=10^{2}\), we have that approximately \(k_{S>2}=100\), which leads to \(N_{S>2}=8\times 10^{4}\). For \(N=10^{4}\) we have that approximately \(k_{S>2}=35\) and thus \(N_{S>2}=280\times 10^{4}\). Clearly, the reduction in the value of \(k_{S>2}\) comes at the expense of using a much larger total ensemble \(N_{S>2}\). For states with a high value of concurrence \(C\), the reduction in the value of \(k_{S>2}\) by increasing the value of \(N\) is marginal. So far, our study of the violation of CHSH inequality through CSPSA has been done considering that the initial amount of entanglement is known. This was done to show that CSPSA drives the value of the CHSH function \(S\) close to the maximum value regardless of the amount of entanglement. We now lift this assumption and consider unknown pure states. In order to do this, we generate a set \(\Omega_{\mathcal{H}}\) with \(100\) pure states in the Hilbert space \(\mathcal{H}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) of two qubits according to a Haar-uniform distribution and calculate the mean \(\bar{S}_{\mathcal{H}}\) and the median \(\bar{S}_{\mathcal{H}}\) of \(\bar{S}(|\psi\rangle\langle\psi|)\) in \(\Omega_{\mathcal{H}}\), together with the corresponding interquartile range. These quantities are depicted in Fig. 9 as a function of the number of iterations. The behavior exhibited by the mean and media is very similar and characterized by a fast increase within the first tens of iterations followed by an asymptotic linear regime. Fig. 9 also shows the mean and media of the maximal theoretical values of \(S\) for each state in \(\Omega_{\mathcal{H}}\), which are indicated as two superposed straight lines. As can be seen from Fig. 9, CSPSA produces a mean and a median that are very closely to the theoretical values. Also, the expected number of iterations \(k_{S>2}\) such that \(75\%\) of the simulated states violates the CHSH inequality is about \(25\). Fig. 10 shows the same information as Fig. 9 but with \(N=10^{4}\). In this case, we see that the quadratic increase in the ensemble size allows CSPSA to reach mean and media values that are even closer to the theoretical values. Furthermore, there is a small reduction in the number of iterations required to obtain a value of \(S\) greater than two from \(25\) to \(20\). Our previous simulations seem to indicate that the optimization of the CHSH function for an unknown state through the CSPSA method provides maximum values of the CHSH functional close to the theoretical maximum values. In order to analyze this we employ the mean square error. For a given state \(\rho=|\psi\rangle\langle\psi|\) and a single realization of CSPSA we calculate the square error \(SE(\rho)\) as \[SE(\rho)=|S(\rho,\mathbf{z}_{0},\{\mathbf{\Delta}_{1},\dots,\mathbf{\Delta}_{k}\}-S_{max} (\rho)|^{2}. 
\tag{14}\] The mean square error \(MSE(\rho)\) for a fixed unknown state \(\rho\) with respect to a large set of realizations is given Figure 8: Number of iterations \(k_{S>2}\) such that the interquartile range is above \(S=2\) as a function of the concurrence \(C\) for \(N=10^{2},10^{3}\), and \(10^{4}\), from top to bottom. by \[MSE(\rho)=\frac{1}{K}\sum_{\mathbf{z}_{0},\{\mathbf{\Delta}_{1},\ldots,\mathbf{\Delta}_{k}\}} SE(\rho), \tag{15}\] which corresponds to an estimation accuracy metric. This is then used to calculate the average of the mean square error \(\overline{MSE}\) on the total Hilbert space \(\mathcal{H}\) as \[\overline{MSE}=\frac{1}{M}\sum_{\rho\in\Omega_{\mathcal{H}}}MSE(\rho). \tag{16}\] Figure 11 shows the mean \(\overline{MSE}\) of the square error on the Hilbert space as a function of the number of iterations for \(N=10^{2},10^{3},10^{4}\). For each value of ensemble size, \(\overline{MSE}\) displays a fast decrease followed by an approximately asymptotic lineal behavior. \(N=10^{3}\) and \(N=10^{4}\) produce very similar values of the mean square error while an ensemble size of \(N^{2}\) produces a value that is almost half order of magnitude higher. After 25 iterations the difference between the maximal theoretical value and the value achieved by CSPSA is between \(10^{-1}\) and \(10^{-2}\). Adding 50 more iterations this difference is approximately between \(10^{-2}\) and \(10^{-3}\). Let us recall that after 75 iterations the lower bound of the interquartile range of \(\bar{S}(|\psi\rangle\langle\psi|)\) has an approximate value of 2.12, so that for 75% of states in the bipartite Hilbert space we can ascertain its entangled nature and assign an accurate value of the CHSH function. A further improvement in the accuracy achieved by CSPSA can be obtained at the expense of a large increase in the number of iterations, after adding 150 iterations we obtain a new decrease by one order of magnitude, that is, the mean \(\overline{MSE}\) of the square error on the Hilbert space is approximately in the interval between \(10^{-3}\) and \(10^{-4}\). ### Unknown mixed states In the previous section, we have studied the violation of the CHSH inequality for unknown pure states by means of a CSPSA-driven sequence of local measurements. Here, we study the case of mixed bipartite states. We start by reproducing the value of the CHSH function on the set of the Werner states, which are given by the expression \[\rho_{\lambda}=\lambda|\psi_{s}\rangle\langle\psi_{s}|+\frac{(1-\lambda)}{d}I, \tag{17}\] where \(|\psi_{s}\rangle\) is the the maximally entangled singlet state defined as \[|\psi_{s}\rangle=\frac{1}{\sqrt{2}}(|0\rangle|1\rangle-|1\rangle|0\rangle) \tag{18}\] and \(I\) is a 4-dimensional identity operator. This mixture of the singlet state with white noise is separable if and only if \(\lambda\leq 1/3\) and violates the CHSH inequality if and only if \(\lambda>1/\sqrt{2}\). The maximal value of the CHSH function for a Werner state \(\rho_{\lambda}\) is given by \[S(\rho_{\lambda})=2\sqrt{2}\lambda. \tag{19}\] Figure 12 displays the mean \(\bar{S}(\rho_{\lambda})\) and median \(\tilde{S}(\rho_{\lambda})\) as a function of \(\lambda\) obtained via CSPSA for an ensemble size \(N=10^{2}\) after 75 iterations. With the exception of the first 5 points, Figure 12 shows a very good agreement between the maximal value of the CHSH function of Eq. (19) and the value achieved with the help of CSPSA. 
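As a simple consistency check of Eq. (19), note that measurement settings which are optimal for the singlet give \(S=2\sqrt{2}\) on \(|\psi_{s}\rangle\), while the white-noise part of \(\rho_{\lambda}\) contributes zero to every correlator, so at fixed settings \(S(\rho_{\lambda})=2\sqrt{2}\lambda\). A minimal NumPy illustration of this (our own sketch, with placeholder names) is:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
psi_s = np.outer(singlet, singlet.conj())

# measurement settings that are optimal for the singlet state
A, Ap = Z, X
B, Bp = -(Z + X) / np.sqrt(2), (X - Z) / np.sqrt(2)

def chsh_value(rho):
    E = lambda M, Np: np.real(np.trace(rho @ np.kron(M, Np)))
    return E(A, B) + E(A, Bp) + E(Ap, B) - E(Ap, Bp)

for lam in (0.5, 1 / np.sqrt(2), 0.9, 1.0):
    rho = lam * psi_s + (1 - lam) * np.eye(4) / 4
    print(f"lambda = {lam:.3f}: S = {chsh_value(rho):.3f}, 2*sqrt(2)*lambda = {2 * np.sqrt(2) * lam:.3f}")
```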
Furthermore, mean and median exhibit values that also are very close and the interquartile (not depicted) range is very narrow. Thus, within the family of Werner states CSPSA drives the sequence of local measurement bases Figure 11: Mean square error \(\overline{MSE}\) as a function of the number \(k\) of iterations for \(N=10^{2},10^{3}\), and \(10^{4}\), from top to bottom. Shaded areas represent interquartile range. Figure 12: Mean \(\bar{S}(\rho_{\lambda})\) (solid red dots) and median \(\tilde{S}(\rho_{\lambda})\) (blue x’s) as a function of \(\lambda\) for Werner states. Continuous black line depicts the maximal value of the CHSH function of Eq. (19). Local measurements are simulated with an ensemble size \(N=10^{2}\) and 75 iterations are realized. very close to the optimal set. An increase in the ensemble size leads to even better results. This is illustrated in Fig. 13, where local measurements are simulated with an ensemble \(N=10^{4}\). In this case all points are closer to the maximal value of the CHSH inequality. Next we proceed with the case of unknown mixed states. We randomly generated a set of \(10^{6}\) two-qubit mixed states. In order to determine whether a mixed state violates or not the CHSH inequality we employ the \(M\) quantity criterion [49]. A mixed state \(\rho\) acting on a Hilbert space \(\mathcal{H}=\mathcal{H}_{2}\otimes\mathcal{H}_{2}\) can be represented in the form \[\rho = \frac{1}{4}\left(I\otimes I+\sum_{i=1}^{3}r_{i}\sigma_{i}\otimes I +I\otimes\sum_{i=1}^{3}s_{i}\sigma_{i}\right. \tag{20}\] \[+ \left.\sum_{n,m=1}^{3}t_{nm}\sigma_{n}\otimes\sigma_{m}\right),\] where \(I\) represents the 2-dimensional identity operator, \(\{\sigma_{n}\}_{n=1}^{3}\) are the standard Pauli matrices, and the real coefficients \(r_{i},s_{i}\) and \(t_{n,m}\) define the mixed state. The quantity \(M\) is defined by \(M(\rho)=u+\tilde{u}\), where \(u\) and \(\tilde{u}\) denote the greater positive eigenvalues of the matrix \(U_{\rho}:=T_{\rho}^{T}T_{\rho}\) being the coefficients of the matrix \(T(\rho)\) given by \(t_{nm}=\mathrm{Tr}(\rho\sigma_{n}\otimes\sigma_{m})\). A state \(\rho\) violates the CHSH inequality if and only if the condition \(M(\rho)\;>\;1\) holds [49]. Employing this criterium, the initial set of \(10^{6}\) mixed states was reduced to a set \(\Omega\) containing \(8\times 10^{3}\) mixed states with \(M(\rho)>1\) that violate the CHSH inequality. To analyze the values of the CHSH function obtained through CSPSA we use those obtained through SDP. In the SDP case we need to fix the state that is used in the maximization. However, let us recall that even when the states are fixed, the maximization of \(S\) remains to be a nonlinear problem. Therefore, to find the maximum value of \(S\) for each state in \(\Omega\) we use the see-saw method [50; 51] to iterate a SDP test [52; 53] where either observable A or B remain fixed while optimizing in the other variable. The SDP that we solve is the following \[\mathrm{given}\;\rho_{\Omega},A(z_{a}),A(z_{a}^{\prime}), \tag{21}\] \[\max_{B(z_{b}),B(z_{b}^{\prime})}S(\rho_{\Omega},A(z_{a}),A(z_{a}^ {\prime}),B(z_{b}),B(z_{b}^{\prime})), \tag{22}\] with the conditions \[|\Psi(z_{b})\rangle\langle\Psi(z_{b})|,|\Psi^{\perp}(z_{b})\rangle \langle\Psi^{\perp}(z_{b})|\geq 0\quad\forall\;z_{b},z_{b}^{\prime}, \tag{23}\] \[|\Psi(z_{b})\rangle\langle\Psi(z_{b})|+|\Psi^{\perp}(z_{b}) \rangle\langle\Psi^{\perp}(z_{b})|=I\quad\forall\;z_{b},z_{b}^{\prime}. 
\tag{24}\] Notice that this SDP takes Alice's observables \(A(z_{a})\) and \(A(z_{a}^{\prime})\) as inputs and for a given mixed state from the \(\Omega\) set, it finds Bob's observables \(B(z_{b})\) and \(B(z_{b}^{\prime})\) that maximally violate \(S\). Then, we take the observables \(B\) outputted by this SDP as inputs in a new iteration to obtain optimal observables \(A\). This procedure is iterated until some suitable convergence condition is satisfied. We performed this optimization for every mixed bipartite state in the set \(\Omega\), which allows us to find better lower bounds on \(S\), together with the optimal observables \(A\) and \(B\). Figure 14 displays the behavior of the mean \(\bar{S}_{\Omega}\), median \(\bar{S}_{\Omega}\), and interquartile range as functions of the number of iterations. This figure also displays the values of these quantities obtained via SDP. As is apparent from this figure, the values of the mean and median provided via CSPSA are very close and tend to agree with the values delivered by SDP after tens of iterations. Also, the interquartile ranges tend to overlap. However, in the case of mixed states the number of iterations needed to obtain a violation of the CHSH inequal Figure 14: Mean \(\bar{S}_{\Omega}\) (red solid line) and median \(\tilde{S}_{\Omega}\) (blue solid line) obtained via CSPSA on the set \(\Omega\) of randomly generated mixed entangled states as a function of the number \(k\) of iterations. Mean \(\bar{S}_{\Omega}\) (yellow solid line) and median \(\tilde{S}_{\Omega}\) (green solid line) obtained via SDP on the set \(\Omega\) of randomly generated mixed entangled states as a function of the number \(k\) of iterations. Shaded areas correspond to interquartile range. CSPSA simulations consider ensemble size \(N=10^{4}\). Figure 13: Mean \(\bar{S}(\rho_{\lambda})\) (solid red dots) and median \(\tilde{S}(\rho_{\lambda})\) (blue x’s) as a function of \(\lambda\) for Werner states. Continuous black line depicts the maximal value of the CHSH function of Eq. (19). Local measurements are simulated with an ensemble size \(N=10^{4}\) and 75 iterations are realized. in the case of pure states. This is due to the fact that the mixed states in \(\Omega\) typically have small values of the negativity, a well-known measure of entanglement, and thus, as in the case of weakly-entangled pure states, need more iterations to reach a violation of the CHSH inequality. ## IV Conclusions We have studied the problem of detecting the entanglement of unknown two-qubit states, mixed or pure, by violating the Clauser-Horne-Shimony-Holt inequality. Our approach to this problem is based on the maximization of the CHSH function by means of a stochastic optimization method, the Complex simultaneous perturbation stochastic approximation. This allows optimizing functions with unknown parameters, which in our case correspond to the unknown quantum state. CSPSA employs an iterative rule which requires at each iteration the value of the target function, that is, the CHSH function, at two different points in the optimization space. This is formed by vectors on the field of the complex numbers containing the measurement settings of four observables. The values of the CHSH function can be experimentally obtained even if the two-qubit state remains unknown. Thereby, CSPSA generates a sequence of measurement settings that in mean lead to increasing values of the CHSH inequality. 
To analyze the characteristics of the proposed method, we carried out several numerical experiments. In particular, due to the stochastic nature of CSPSA, we employ random sampling to obtain estimates of the mean, median, and interquartile range of the quantities of interest. We first note that for a fixed unknown state, CSPSA provides very similar values of the mean and median of the CHSH function and a very narrow interquartile range. This indicates that CSPSA does not generates outliers, that is, for a given unknown state different realizations of our method provide very close results. This feature has been observed for each state in a universe of \(5\times 10^{4}\) randomly generated pure two-qubit states. The typical behavior of the mean of the CHSH function, as a function of the number of iterations, corresponds to a rapid increase followed by an approximately linear asymptotic behavior, which approaches the maximal value of the CHSH function. Unknown states characterized by the same concurrence value exhibit a very similar behavior of the CHSH function. However, the rate of convergence towards the maximum depends on the initial value of the concurrence. The higher the concurrence value, the fewer iterations are required to obtain a violation of the CHSH inequality and, consequently, detect entanglement. For example, states with maximum concurrence need 13 iterations while states with a concurrence of 0.1 need approximately 75 iterations to reach a violation. The number of iterations required to detect entanglement can be decreased by increasing the size of the ensemble of identically prepared copies that is employed to estimate the expectation values entering in the CHSH function. In our simulations, however, the effect of increasing the ensemble size is more notorious in the case of highly entangled states. We have studied the mean of the CHSH function on the 2-qubit Hilbert space. In this case, for an ensemble size of \(10^{2}\) the entanglement of the randomly generated states is detected in mean by violating the CHSH inequality after 17 iterations, while after 25 iterations 75% of the randomly generated states violate the CHSH inequality. These figures can be reduced by increasing the ensemble size. We have also studied the accuracy provided by our method in the estimation of the maximum value of the CHSH function. As accuracy metric we have used the mean squared error, which shows that after 25 iterations the difference between the maximal theoretical value and the value achieved by CSPSA is between \(10^{-1}\) and \(10^{-2}\). After 75 iterations the accuracy is approximately between \(10^{-2}\) and \(10^{-3}\). We have also considered the case of mixed states. The proposed method is capable of reproducing the maximal value of the CHSH function for Werner states and for randomly chosen mixed states. Therefore, the numerical simulations indicate that the maximization of the CHSH function through CSPSA leads to the detection of the entanglement of unknown states, pure or mixed. In mean, 25 iterations detect the entanglement of 75% of the generated states. Also, it is possible to reach an accurate value of the maximal violation. There are some variations of the method here proposed that could reduce the number of iterations used to detect entanglement. We implement CSPSA considering the standard choice for the gain coefficients. However, these can be optimized. This is in general a difficult problem. 
Nevertheless, some simple heuristic prescriptions have been discussed in the study of various proposals of variational quantum eigensolvers [54]. These are based on SPSA, a version of CSPSA that works on the field of the real numbers. It seems possible that the SPSA performance-enhancing prescriptions could also be used to improve the CSPSA convergence rate, which would reduce the number of iterations required to detect entanglement. The usage of second-order methods or quantum natural gradient could also speed up the protocol [55; 56; 57; 58; 59]. These employ additional measurements of the objective function to estimate its Hessian matrix, or fidelity to estimate the metric tensor. Thereafter, these matrices are used to precondition the gradient in order to improve the convergence rate, avoiding the need for tuning of some gain coefficients. Another possibility arises when considering the large amount of information generated by our method. At each iteration 4 local observables are measured, which after several iterations provide a considerable amount of information about the unknown state. Thus, we can obtain an estimate of the unknown states by means of maximum likelihood [37]. This, together with the estimate of the optimal measurement settings provided by CSPSA, can be used as initial guesses in a SDP problem to optimize the CHSH function. The solution of this problem can be used as the initial guess of the optimal measurement settings in the next iteration of CSPSA. This procedure does not increases the amount of measurements to be carried out but the computational cost. Besides, the use of _a priori_ information can be employed to further increase the CSPSA convergence rate and achieve entanglement detection with a reduced number of iterations. We would like to remark that our approach based on CSPSA can be employed in other interesting problems. The construction of entanglement witnesses is a demanding computational task [30; 31], especially if the state is unknown, but it could be done efficiently with our method. The search for the optimal measurement settings to violate a multiqubit Bell inequality is challenging [60; 61; 62]. This is because the dimension scales exponentially with the number of qubits, so finding the optimal with quantum tomography and SPD is unfeasible. Our approach could provide an advantage in this problem since its resource scales with the number of iterations and not with the number of qubits. ###### Acknowledgements. This work was supported by ANID - Millennium Science Initiative Program - ICN17\({}_{-}\)012 and by Fondo Nacional de Desarrollo Cientifico y Tecnologico (FONDECYT) Grant No 1180558. J. C.-V. was supported by CONICYT- PCHA/DoctoradoNacional/2018-21181692. J. F. B. acknowledges support from FONDECYT Grant No 317030. L.P. was supported by ANID-PFCHA/DOCTORADO-BECAS-CHILE/2019-772200275, the CSIC Interdisciplinary Thematic Platform (PTI+) on Quantum Technologies (PTI-QTEP+), the CAM/FEDER Project No. S2018/TCS-4342 (QUITEMAD-CM), and the Proyecto Sinergico CAM 2020 Y2020/TCS-6545 (NanoQuCo-CM). ## Declarations The code and the data simulated to generate the figures are available on reasonable request.
2309.15151
Small-scale signatures of primordial non-Gaussianity in k-Nearest Neighbour cumulative distribution functions
Searches for primordial non-Gaussianity in cosmological perturbations are a key means of revealing novel primordial physics. However, robustly extracting signatures of primordial non-Gaussianity from non-linear scales of the late-time Universe is an open problem. In this paper, we apply k-Nearest Neighbor cumulative distribution functions, kNN-CDFs, to the \textsc{quijote-png} simulations to explore the sensitivity of kNN-CDFs to primordial non-Gaussianity. An interesting result is that for halo samples with $M_h<10^{14}$ M$_\odot$/h, the kNN-CDFs respond to \textit{equilateral} PNG in a manner distinct from the other parameters. This persists in the galaxy catalogs in redshift space and can be differentiated from the impact of galaxy modelling, at least within the halo occupation distribution (HOD) framework considered here. kNN-CDFs are related to counts-in-cells and, through mapping a subset of the kNN-CDF measurements into the count-in-cells picture, we show that our results can be modeled analytically. A caveat of the analysis is that we only consider the HOD framework, including assembly bias. It will be interesting to validate these results with other techniques for modeling the galaxy--halo connection, e.g., (hybrid) effective field theory or semi-analytical methods.
William R. Coulton, Tom Abel, Arka Banerjee
2023-09-26T18:00:03Z
http://arxiv.org/abs/2309.15151v1
Small-scale signatures of primordial non-Gaussianity in k-Nearest Neighbour cumulative distribution functions ###### Abstract Searches for primordial non-Gaussianity in cosmological perturbations are a key means of revealing novel primordial physics. However, robustly extracting signatures of primordial non-Gaussianity from non-linear scales of the late-time Universe is an open problem. In this paper, we apply k-Nearest Neighbor cumulative distribution functions, kNN-CDFs, to the quijote-png simulations to explore the sensitivity of kNN-CDFs to primordial non-Gaussianity. An interesting result is that for halo samples with \(M_{h}<10^{14}\) M\({}_{\odot}\)/h, the kNN-CDFs respond to _equilateral_ PNG in a manner distinct from the other parameters. This persists in the galaxy catalogs in redshift space and can be differentiated from the impact of galaxy modelling, at least within the halo occupation distribution (HOD) framework considered here. kNN-CDFs are related to counts-in-cells and, through mapping a subset of the kNN-CDF measurements into the count-in-cells picture, we show that our results can be modeled analytically. A caveat of the analysis is that we only consider the HOD framework, including assembly bias. It will be interesting to validate these results with other techniques for modeling the galaxy-halo connection, e.g., (hybrid) effective field theory or semi-analytical methods. ## 1 Introduction Many theories of the early Universe predict that the statistical distribution of primordial potential perturbations is close to Gaussian, but with small deviations (see e.g., Chen, 2010; Achicarro et al., 2022; Meerburg et al., 2019, for recent reviews). The structure of these deviations, known as primordial non-Gaussianity (PNG), encodes the details of the physical processes governing the evolution of the Universe at that epoch. Primordial properties ranging from the number of particles present, the masses and spins of these particles, the strength of interactions, and primordial symmetries all leave distinct non-Gaussian signatures (e.g. Maldacena, 2003; Creminelli & Zaldarriaga, 2004; Alishahiha et al., 2004; Chen et al., 2007; Meerburg et al., 2009; Arkani-Hamed & Maldacena, 2015; Cabass et al., 2023d). Thus, characterizing the statistical distribution of primordial perturbations is a powerful way to reveal new information on the early Universel and probe energy scales far beyond the reach of terrestrial experiments. In this work, we focus on three templates of non-Gaussianity that are most relevant for Large Scale Structure (LSS) - _local_, _equilateral_, and _orthogonal_(Komatsu & Spergel, 2001; Senatore et al., 2010). Each of these generates a unique signature in the primordial bispectrum, which characterize the skewness of the primordial perturbations as a function of scale. To date, observational studies of primordial non-Gaussianity have been driven by measurements of the bispectrum of the cosmic microwave background (CMB) anisotropies (e.g., Komatsu et al., 2003; Planck Collaboration, 2020) and the large-scale distribution of galaxies (D'Amico et al., 2022a, b; Cabass et al., 2022a, b, 2023b). Whilst no signatures of primordial non-Gaussianity have yet been detected, large regions of primordial model space have already been ruled out. 
New experiments, such as the Dark Energy Spectroscopic Instrument, SPHEREX, the Simons Observatory, and CMB-S4, will provide dramatically expanded and more precise data sets that will significantly improve upon current bounds (Dore et al., 2014; DESI Collaboration et al., 2016; Abazajian et al., 2016; Ade et al., 2019). However, these constraints have not yet reached the regions of greatest theoretical interest that divide qualitatively different regions, such as strong and weakly coupled physics (Cabass et al., 2023c). The bispectrum has been used extensively for two reasons: first, it is the optimal statistic to constrain _local_, _equilateral_ and _orthogonal_ non-Gaussianity in the CMB and in the very large-scale distribution of galaxies (Babich, 2005; Philcox, 2021). Second, analytical tools have been developed that can accurately model these observations (e.g. Baumann et al., 2012; Carrasco et al., 2012; Cabass et al., 2023a, with the latter for a review). However, for measurements of the small-scale distribution of galaxies, where the signal-to-noise ratio (SNR) is high, and where the relation to the primordial anisotropies is non-linear, these statements break down. The non-linear evolution redistributes information from the primordial bispectrum to not only the late-time bispectrum, but also to the trispectrum, pentaspectrum, and beyond (where the trispectrum and pentaspectrum are the kurtosis and 5th moment as a function scale). This means that analyses based purely on the late-time bispectrum are not accessing all of the available information. One approach is to include these higher-order correlation functions in the analysis, however it is highly challenging to compute these statistics (Philcox et al., 2021). Further, the ability to model the bispectrum relies on a perturbative analysis which is typically only valid on large scales. A final challenge of this approach is that the non-linear processes governing structure evolution generate late-time non-Gaussianities, even in the absence of PNG. These non-Gaussianities can mimic the bispectrum signatures of PNG and thereby bias inferences. When removing these biases, we need to marginalize over the uncertainties in our understanding of these processes, for example in how galaxies occupy halos. This significantly degrades the resulting PNG constraints, especially for _equilateral_ non-Gaussianity (Baldauf et al., 2016; Lazanu et al., 2017; Baumann and Green, 2022; Cabass et al., 2023). In this work we explore the efficacy of k-nearest neighbour cumulative distributions functions (kNN-CDFs), an alternative summary statistic to the hierarchy of \(N\)-point correlations, to constrain PNG. kNN-CDFs describe the volume-averaged probability of finding at least k objects, in our case dark matter halos or galaxies, within a sphere of radius \(R\). Recent work (Banerjee and Abel, 2021, 2021, 2022; Banerjee and Abel, 2023) have shown that kNN-CDFs are a powerful way of analyzing large scale structure data sets. In particular, kNN-CDFs can break parameter degeneracies that are found with other statistical probes Banerjee et al. (2022). We investigate whether the response of kNN-CDFs to PNG is distinct from other processes and therefore whether kNN-CDFs can separate PNG from late-time non-Gaussianities generated by nonlinear gravitational evolution and galaxy formation. kNN-CDFs are closely related to the counts-in-cell (CiC) summary statistic. 
CiCs have been extensively studied (Bernardeau and Valageas, 2000; Valageas, 2002; Bernardeau et al., 2015; Bernardeau and Reimberg, 2016; Uhlemann et al., 2016), including for constraining PNG from the dark matter field (Uhlemann et al., 2018; Friedrich et al., 2020), and accurate analytical models have been developed for them (see e.g., Uhlemann et al., 2020, and references therein). By exploiting the relationship between CiCs and kNN-CDFs, we can obtain analytical models that describe our results and thereby replicate one desirable feature of bispectrum analyses. To examine the impact of primordial non-Gaussianity on kNNs we first use the quijote-png suite of simulations (Coulton et al., 2022). These simulations were designed to test PNG analysis methods and have been used to study bispectrum statistics of the matter (Coulton et al., 2022; Jung et al., 2022) and halo fields (Coulton et al., 2022; Jung et al., 2022), the halo mass function (Jung et al., 2023) and machine learning statistics (Jung et al., 2023; Floss and Meerburg, 2023). Combined with the original quijote suite of simulations (Villaescusa-Navarro et al., 2020), we can explore how kNNs respond to cosmological parameters jointly with primordial non-Gaussianity. This paper is structured as follows: in Section 2 we briefly review kNN-CDFs and in Section 3 we describe the simulations used in this work. In Section 4 we apply kNN-CDFs to catalogs of dark matter halos and characterize the key features induced by PNG and their similarity to features arising from other key parameters. In Section 5 we repeat this analysis on a set of mock galaxy catalogs, compare the simulated catalogs to the CiC model and perform a Fisher forecast of the constraining power. We present our conclusions and outlook in Section 6. In Appendix A we discuss how different choices in the definition of our sample impact the results and in Appendix B we discuss the convergence of our numerical Fisher forecasts. ## 2 Overview of k-nearest neighbour cumulative distribution functions k-Nearest Neighbour cumulative distribution functions simply measure the volume-averaged probability of finding at least \(k\) objects within a sphere of radius \(R\). In this work we denote these statistics as kNN(\(R\)). They provide an alternative means of accessing the information contained within all the orientation averaged \(N\)-point correlation functions. There are several useful features of kNN-CDFs: first, they can be computed in a very efficient manner (see e.g., Banerjee and Abel, 2021, for details). Second, they can naturally be applied to catalogs of objects, as are obtained from observations, without needing the data to be gridded (see e.g., Jing, 2005; Sefusatti et al., 2016, for issues arising from gridding data sets). Third, the kNN-CDFs sample all regions of the volume equally rather than focusing on over-dense regions, yielding sensitivity to underdense regions in the volume. In fact, the 1NN-CDF is directly related to the Void Probability Function (VPF) (White, 1979). The analysis here largely follows the methods described in Banerjee and Abel (2021), to which we refer the reader for more details. To measure the CDFs we use the following procedure: 1. We generate a set of volume-filling points distributed on a grid. We call these the query points. To ensure dense sampling, we typically use 10 times as many query points as there are data points. 2. 
We build a kd-tree using the data points in a chosen simulation (halo positions or galaxy positions). For each query point, we use this tree structure to compute the distances from the query points to the \(k\)-th nearest neighbour data points. In this terminology \(k=1\) refers to the closest data point to a query point, \(k=2\) refers to the second nearest data point from a query point, and so on. 3. For a particular \(k\), we sort the list of distances to generate the empirical \(k\)NN-CDF for that \(k\). We repeat the same step for different values of \(k\). 4. We then repeat the measurements for each simulation, and compute the average at a given cosmology and set of galaxy parameters. To compute the response of the kNN-CDFs to different parameters we use finite differences as \[\frac{\partial\overline{kNN}}{\partial\theta}=\left\langle\frac{kNN|_{\theta= \theta+\delta\theta}-kNN|_{\theta=\theta-\delta\theta}}{2\delta\theta}\right\rangle, \tag{1}\] where \(\delta\theta\) is a small step in the parameter of interest and the expectation is computed as the average over the different simulation realizations. The most significant difference between this work and Banerjee and Abel (2021) is that we consider all objects in the catalog, rather than using a fixed number of objects. This choice allows easier comparison to previous quijote-png analyses (e.g., Coulton et al., 2022b). In Appendix A, we present the results for samples with fixed number density and find a qualitatively similar picture. 
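As a concrete illustration of the measurement procedure described in Section 2, the following is a minimal Python sketch (ours, not the analysis pipeline used in the paper); the grid of query points, the use of `scipy.spatial.cKDTree`, and the function names are assumptions consistent with the steps listed above.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_cdfs(data, box_size, k_values, n_query_per_dim=64):
    """Empirical kNN-CDFs of a periodic point set (minimal sketch).

    data     : (N, 3) array of tracer positions (halos or galaxies)
    box_size : side length of the periodic box, e.g. 1000.0 Mpc/h
    k_values : nearest-neighbour orders, e.g. (1, 2, 4, 8)
    Returns {k: (sorted_radii, cdf_values)}.
    """
    # Step 1: volume-filling query points on a regular grid.
    centres = (np.arange(n_query_per_dim) + 0.5) * box_size / n_query_per_dim
    qx, qy, qz = np.meshgrid(centres, centres, centres, indexing="ij")
    queries = np.column_stack([qx.ravel(), qy.ravel(), qz.ravel()])

    # Step 2: kd-tree on the data; distance from every query point to its
    # k-th nearest data point, with periodic boundary conditions.
    k_max = max(k_values)
    tree = cKDTree(data, boxsize=box_size)
    dist, _ = tree.query(queries, k=list(range(1, k_max + 1)))

    # Step 3: sorting the k-th distances gives the empirical CDF for that k.
    cdfs = {}
    for k in k_values:
        radii = np.sort(dist[:, k - 1])
        cdfs[k] = (radii, np.arange(1, radii.size + 1) / radii.size)
    return cdfs

def finite_difference_response(mean_cdf_plus, mean_cdf_minus, delta_theta):
    """Eq. (1): two-sided finite difference of matched-seed kNN-CDF means."""
    return (mean_cdf_plus - mean_cdf_minus) / (2.0 * delta_theta)
```

In practice the CDFs for different \(k\) would be interpolated onto a common set of radii before forming a data vector; that step is omitted here for brevity.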
## 3 Simulations We use the quijote and quijote-png simulations; for a detailed description we refer the reader to Villaescusa-Navarro et al. (2020); Coulton et al. (2022a). These are a suite of gadget-3 (Springel, 2005) dark-matter, N-body simulations run at a cosmology consistent with _Planck_ (Planck Collaboration VI, 2020): \(\Omega_{m}=0.3175\), \(\Omega_{\Lambda}=0.6825\), \(\Omega_{b}=0.049\), \(\sigma_{8}=0.834\), \(h=0.6711\) and \(n_{s}=0.9624\). The initial conditions were modified to include three shapes of primordial bispectrum: _local_, _equilateral_ and _orthogonal_. The primordial bispectrum, \(B_{\Phi}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3})\), is defined as \[\langle\Phi(\mathbf{k}_{1})\Phi(\mathbf{k}_{2})\Phi(\mathbf{k}_{3})\rangle= \delta^{(3)}(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3})B_{\Phi}(\mathbf{k }_{1},\mathbf{k}_{2},\mathbf{k}_{3}) \tag{2}\] where \(\Phi(\mathbf{k})\) is the primordial potential at wavenumber \(\mathbf{k}\). The three primordial bispectra considered here probe a range of different physical processes (see e.g., Chen, 2010; Meerburg et al., 2019; Achucarro et al., 2022, for overviews) and have the forms: \[B_{\Phi}^{\rm local}(k_{1},k_{2},k_{3})= 2f_{\rm NL}^{\rm local}P_{\Phi}(k_{1})P_{\Phi}(k_{2})+\ 2\ {\rm perm.}, \tag{3}\] \[B_{\Phi}^{\rm equil.}(k_{1},k_{2},k_{3})=6f_{\rm NL}^{\rm equil.} \Big{[}-P_{\Phi}(k_{1})P_{\Phi}(k_{2})+\ 2\ {\rm perm.}\] \[-2\left(P_{\Phi}(k_{1})P_{\Phi}(k_{2})P_{\Phi}(k_{3})\right)^{ \frac{2}{3}}+P_{\Phi}(k_{1})^{\frac{1}{3}}P_{\Phi}(k_{2})^{\frac{2}{3}}P_{\Phi }(k_{3})\] \[+5\ {\rm perm.}\Big{]}, \tag{4}\] and \[B_{\Phi}^{\rm ortho-LSS}(k_{1},k_{2},k_{3})=\] \[6f_{\rm NL}^{\rm ortho-LSS}\left(P_{\Phi}(k_{1})P_{\Phi}(k_{2})P _{\Phi}(k_{3})\right)^{\frac{2}{3}}\Bigg{[}\] \[-\left(1+\frac{9p}{27}\right)\frac{k_{3}^{2}}{k_{1}k_{2}}+2\ {\rm perms}+ \left(1+\frac{15p}{27}\right)\frac{k_{1}}{k_{3}}\] \[+5\ {\rm perms}-\left(2+\frac{60p}{27}\right)\] \[+\frac{p}{27}\frac{k_{1}^{4}}{k_{2}^{2}k_{3}^{2}}+2\ {\rm perms}- \frac{20p}{27}\frac{k_{1}k_{2}}{k_{3}^{2}}+2\ {\rm perms}\] \[-\frac{6p}{27}\frac{k_{1}^{3}}{k_{2}k_{3}^{2}}+5\ {\rm perms}+ \frac{15p}{27}\frac{k_{1}^{2}}{k_{3}^{2}}+5\ {\rm perms}\Bigg{]}, \tag{5}\] where \(P_{\Phi}(k)\) is the primordial potential power spectrum, \(f_{\rm NL}^{\rm X}\) is the amplitude of each bispectrum and \[p=\frac{27}{-21+\frac{73}{7(20\pi^{2}-193)}}\,. \tag{6}\] We refer the reader to Coulton et al. (2022a) for a detailed description of the implementation of these bispectra. For each shape 500 simulations are run with an amplitude of the primordial bispectrum of \(f_{\rm NL}^{\rm X}=100\), where \(X\) denotes the shape, and 500 with \(f_{\rm NL}^{\rm X}=-100\). The seeds of the \(f_{\rm NL}^{\rm X}=100\) and \(f_{\rm NL}^{\rm X}=-100\) simulations are matched to reduce cosmic variance. The quijote simulations varied a set of cosmological parameters above and below the fiducial value, for use in Fisher forecasts. Figure 1: The normalized response of five dark matter halo kNN-CDFs to primordial non-Gaussianity, cosmological and bias parameters. These responses are measured from the quijote-png simulations for all halos with \(M_{h}\geq 3.2\times 10^{13}\ {\rm M_{\odot}/h}\) at \(z=0\) and, for ease of comparison, each is normalized by its largest value. The response of the kNN-CDFs to _equilateral_ non-Gaussianity is different from the other parameters. We used simulations that varied the amplitude of the linear matter fluctuations smoothed on 8 Mpc/\(h\) scales, \(\sigma_{8}\), the Hubble constant, \(h\), the fractional density of matter, \(\Omega_{m}\), and the primordial spectral tilt, \(n_{s}\). For each parameter there are 500 simulations with the parameter perturbed above and 500 perturbed below the fiducial value, again with matched seeds. We also use the 15,000 simulations run at the fiducial cosmology to compute covariance matrices. For the analysis of the dark matter halos, we analyze the same samples as used in Coulton et al. (2022b) and Jung et al. (2022a). Specifically, we use the friends-of-friends (FoF, Davis et al. 1985) halo catalog at redshifts \(z=0.0\) and \(z=0.5\), and only include halos with mass \(M_{h}\geq 3.2\times 10^{13}\) M\({}_{\odot}\)/h. We work in redshift space by displacing the halos along the line of sight (\(\hat{\bf z}\) axis) according to their velocity. We use a halo occupation distribution (HOD) to generate mock galaxy catalogs from the simulations. 
Within the HOD framework used here, galaxies are assigned to halos probabilistically based solely on the halo mass, i.e. \(P(N_{\rm gal}|M_{h})\). In this work we use the Zheng et al. (2007) formulation that decomposes the total number of galaxies in a halo into central and satellite contributions as \(N_{\rm gal}=N_{\rm central}+N_{\rm satellite}\). The central galaxies follow a Bernoulli distribution with mean \[\langle N_{\rm central}\rangle=\frac{1}{2}\left[1+{\rm erf}\left(\frac{\log M _{h}-\log M_{\rm min}}{\sigma_{\log M}}\right)\right] \tag{7}\] and the satellite galaxies follow a Poisson distribution with mean \[\langle N_{\rm satellite}\rangle=\langle N_{\rm central}\rangle\left(\frac{M_{h }-M_{0}}{M_{1}}\right)^{\alpha}. \tag{8}\] The parameters \(M_{\rm min}\) and \(\sigma_{\log M}\) set the minimum mass of halos that host galaxies and the width of the transition to hosting a central galaxy. The parameters \(M_{0}\), \(M_{1}\) and \(\alpha\) control the power law distribution of the satellite galaxies. The central galaxies are placed at the center of the halo, with the halo's velocity, whilst the satellite galaxies are distributed according to an NFW profile with velocities set according to the isotropic Jeans equations (Navarro et al. 1996; Lokas & Mamon 2001). We use these velocities to displace the galaxies along the line of sight to produce catalogs in redshift space. Biagetti et al. (in prep.) derived a set of best-fit HOD parameters for the quijote-png simulations, such that the galaxy catalogs matched the CMASS BOSS galaxy survey at \(z=0.5\). We used a set of parameters motivated by those fits: \(M_{\rm min}=2.2\times 10^{13}\) M\({}_{\odot}\)/h, \(M_{0}=2.8\times 10^{13}\) M\({}_{\odot}\)/h, \(M_{1}=1.78\times 10^{14}\) M\({}_{\odot}\)/h, \(\sigma_{\log M}=0.15\) and \(\alpha=0.5\). We use these parameters to generate catalogs at \(z=0.0\). The purpose of this choice is to demonstrate properties of the kNN-CDFs and provide a comparison to previous works at \(z=0.0\). Figure 2: The signature of primordial non-Gaussianity in the kNN-CDFs displays a strong mass dependence as is demonstrated by examining three halo mass samples: a high mass sample (\(M_{h}\geq 1\times 10^{14}\) M\({}_{\odot}\)/h, green), an intermediate mass sample (\(6\times 10^{13}\)M\({}_{\odot}\)/h \(\leq M_{h}<1\times 10^{14}\) M\({}_{\odot}\)/h, orange) and a lower mass sample (\(3.2\times 10^{13}\)M\({}_{\odot}\)/h \(\leq M_{h}<6\times 10^{13}\) M\({}_{\odot}\)/h, blue). For comparison we plot the total halo sample as a black dotted line. Thus this galaxy catalog is not designed to match any specific experiment. Note that the minimum dark matter halo mass used for the galaxy catalogs is \(M_{h}=1.3\times 10^{13}\,M_{\odot}/h\), lower than that used in analyses of the dark matter halos. ## 4 The impact of PNG on dark matter halo K-nearest neighbour cumulative distribution functions In Fig. 1 we show how kNN-CDFs for five different numbers of neighbours respond to PNG, variations in cosmological parameters and a simple bias parameter (\(M_{\rm min}\)). Interestingly, the kNN-CDF statistics, when compared across different numbers of nearest neighbours, respond differently to _equilateral_ non-Gaussianity than to all other parameters. 
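To make the occupation model of Eqs. (7)-(8) concrete, the following is a minimal sketch (ours, not the catalog-generation code used in the paper) with the fiducial parameter values quoted above; the assumption that base-10 logarithms are used and that halos below \(M_{0}\) host no satellites are ours, and drawing galaxy positions and velocities (NFW profile, Jeans equations) is omitted.

```python
import numpy as np
from scipy.special import erf

def mean_occupation(M_h, log10_M_min=np.log10(2.2e13), sigma_logM=0.15,
                    M_0=2.8e13, M_1=1.78e14, alpha=0.5):
    """Mean central (Eq. 7) and satellite (Eq. 8) occupations,
    Zheng et al. (2007) form. Masses in M_sun/h; log10 assumed."""
    n_cen = 0.5 * (1.0 + erf((np.log10(M_h) - log10_M_min) / sigma_logM))
    # Halos below M_0 host no satellites (a standard convention, assumed here).
    n_sat = n_cen * (np.clip(M_h - M_0, 0.0, None) / M_1) ** alpha
    return n_cen, n_sat

def sample_galaxy_counts(M_h, rng):
    """Draw galaxy numbers per halo: Bernoulli centrals plus Poisson satellites."""
    n_cen, n_sat = mean_occupation(M_h)
    centrals = (rng.random(np.shape(M_h)) < n_cen).astype(int)
    satellites = rng.poisson(n_sat)
    return centrals + satellites

# Example: occupation of a 10^14 M_sun/h halo.
rng = np.random.default_rng(0)
print(sample_galaxy_counts(np.array([1e14]), rng))
```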
To understand this further we break the halo catalog into three subsets: a high mass sample, \(M_{h}\geq 1\times 10^{14}\) M\({}_{\odot}\)/h, an intermediate sample, \(6\times 10^{13}\)M\({}_{\odot}\)/h \(\leq M_{h}<1\times 10^{14}\) M\({}_{\odot}\)/h, and a low mass sample \(3.2\times 10^{13}\)M\({}_{\odot}\)/h \(\leq M_{h}<6\times 10^{13}\) M\({}_{\odot}\)/h. The results are shown in Fig. 2. There is a complex mass dependence of these responses and for some mass bins the _local_ PNG shows a similar response. These results show similarities to effects seen in the halo mass function (see e.g., LoVerde et al., 2008; Wagner et al., 2010; Jung et al., 2023); the response of the halo mass function to _equilateral_ and _local_ PNG changes sign at \(\sim 7\times 10^{13}M_{\odot}\)/h and \(1\times 10^{14}M_{\odot}\)/h. These similarities suggest a common underlying cause. Changes in the number density will impact the kNN-CDFs; the simplest case of a rescaling of the mass function leads to a horizontal shift of the kNN-CDF. The response of the halo mass function to PNG is complex (see e.g., Fig. 1 of Jung et al., 2023) with the number of high mass halos being boosted, whilst the number of low mass halos is reduced. As different mass halos are clustered to differing extents we expect a complex signature in the kNN-CDFs. A second suggestive piece of evidence to support this hypothesis is that the redshift evolution of the kNN signature mirrors the effects seen in the mass function. This can be seen in Fig. 3 where we show the response of the kNN-CDFs at redshift \(z=0.5\), again split into four mass samples. The distinctive feature moves to lower mass, as does the signature in the halo mass function. A key challenge in constraining _equilateral_ non-Gaussianity is disentangling it from non-Gaussianity introduced by the non-linear evolution of the LSS. The distinct impact of PNG on the kNN-CDFs, for certain samples, means that the degeneracy with these late time effects will be significantly reduced. To explore this we perform a simple Fisher forecast for constraints using kNN-CDF measurements. In this forecast we use kNN-CDFs with the following numbers of neighbours: 1, 2, 4, 8, 16, 32, 64 and 128. Figure 3: An examination of the kNN-CDFs obtained from halos at \(z=0.5\) for four different halo mass samples. The configuration is otherwise the same as Fig. 2. We cut the kNN-CDFs at a minimum scale of 10 Mpc/h and cut the tails of the distributions, where kNN-CDF\(<0.005\) or kNN-CDF\(>0.995\). We use the halo catalog with \(M_{h}>3.2\times 10^{13}\)M\({}_{\odot}\)/h at z=0.0. An interesting question is what likelihood describes the kNN-CDFs. Characterizing the distribution of the kNN-CDFs is complex and in this work we consider an alternative avenue. We choose to compress the statistics and then assume a Gaussian likelihood for the compressed statistics. From other studies (Anbajagane et al., 2023), we know that the likelihoods of the CDFs are very close to Gaussian, as long as we stay away from the tails and do not sample the CDF too densely. If the likelihood of the kNN-CDFs were known, the data could be compressed losslessly into a set of summary statistics that, as quasi-maximum-likelihood estimators, are Gaussian distributed (see e.g., Lehmann and Casella, 2006; Alsing and Wandelt, 2018). 
Here we compress the kNN-CDF distribution functions using MOPED compression (Heavens et al., 2000) as \[\hat{\theta}_{i}=\frac{\partial\overline{kNN}}{\partial\theta_{i}}\mathcal{C} ^{-1}\left(kNN-\overline{kNN}\right), \tag{9}\] where \(\hat{\theta}_{i}\) are the compressed statistics and \(\overline{kNN}\) and \(\mathcal{C}\) are the mean and covariance of the kNN-CDF measurements. As the kNN-CDFs are not Gaussian, this compression loses information. However, the compressed statistics are well approximated by a Gaussian distribution - this can be understood through the central limit theorem. We then compute forecast parameter constraints as \[\sigma(\theta_{i})^{2}=F_{ii}^{-1} \tag{10}\] where the Fisher information is given by \[F_{ij}=\frac{\partial\hat{\theta}_{I}}{\partial\theta_{i}}\,\Sigma^{-1}_{IJ}\,\frac{\partial\hat{\theta}_{J}}{\partial\theta_{j}}, \tag{11}\] and \(\Sigma_{IJ}\) is the covariance of the summary statistics. We compute the derivatives for compression and the Fisher forecast numerically as described in Coulton et al. (2022). We split the simulations into two disjoint sets: the first set is used for the compression and the second for the Fisher forecast. A more detailed discussion of Fisher forecasts using compressed statistics is given in Coulton and Wandelt (2023). As seen in Fig. 4, marginalizing over a simple bias parameter dramatically degrades the power spectrum and bispectrum constraints, whilst leaving the kNN-CDF result largely unchanged. It is well known that the impact of _equilateral_ non-Gaussianity on the galaxy power spectrum and bispectrum is highly degenerate with galaxy bias (Baldauf et al., 2016; Lazanu et al., 2017; Baumann and Green, 2022); our results, even with a simple bias model, reproduce this. This degeneracy significantly degrades our constraints and limits the scales that can be used. Effective field theory (EFT) approaches provide a systematic way to marginalize over these effects (Baumann et al., 2012); however, no systematic method exists for scales beyond the validity of EFT, and the large degeneracy means obtaining a robust result from non-linear scales will be difficult. On the other hand, this result demonstrates that kNN-CDFs can effectively disentangle this bias parameter and PNG. This suggests that they may provide a path to robust, small-scale PNG measurements and in the next section we test this hypothesis more stringently with an extended, and more realistic, bias model. It is also interesting that the unmarginalized bispectrum and kNN constraints are very similar. ## 5 Extracting signatures of PNG from galaxy K-nearest neighbour distributions Next we apply the kNN-CDFs to the mock galaxy sample described in Section 3. The choice of this galaxy sample is motivated by the results of Section 4: it is interesting to see whether, for a galaxy sample not dominated by contributions from halos with \(M_{h}>10^{14}\) M\({}_{\odot}\)/h, the _equilateral_ PNG signature remains. In this section, we primarily focus on exploring what can be learnt about _equilateral_ non-Gaussianity with kNN-CDFs. ### The impact of PNG on galaxy kNN-CDFs Using the HOD galaxy sample we compare how cosmological parameters, HOD parameters, and primordial non-Gaussianity impact the properties of the galaxy sample. As seen in Fig. 5, the impact of _equilateral_ PNG in this sample remains distinct from all other contributions. Note that several of the HOD parameters are highly degenerate. 
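A minimal sketch of the compression and forecast of Eqs. (9)-(11) is given below (ours, under the assumptions stated in the comments); here the mean, covariance, and derivatives are estimated from the fiducial and matched-seed simulation sets as described above, and all variable names are illustrative.

```python
import numpy as np

def moped_compress(data_vectors, mean, cov, derivs):
    """Eq. (9): project each kNN-CDF data vector onto d(mean)/d(theta) C^{-1}.

    data_vectors : (n_sims, n_bins) measurements at the fiducial parameters
    mean         : (n_bins,) fiducial mean data vector
    cov          : (n_bins, n_bins) covariance of the data vector
    derivs       : (n_params, n_bins) derivatives of the mean data vector
    Returns (n_sims, n_params) compressed statistics theta_hat.
    """
    cinv = np.linalg.inv(cov)
    return (data_vectors - mean) @ cinv @ derivs.T

def fisher_forecast(compressed_fid, compressed_derivs):
    """Eqs. (10)-(11): Gaussian Fisher matrix in the compressed space.

    compressed_fid    : (n_sims, n_params) compressed fiducial statistics
    compressed_derivs : (n_params, n_params) derivatives of the compressed means
    Returns the marginalized 1-sigma errors sigma(theta_i).
    """
    sigma = np.cov(compressed_fid, rowvar=False)   # Sigma_IJ
    fisher = compressed_derivs @ np.linalg.inv(sigma) @ compressed_derivs.T
    return np.sqrt(np.diag(np.linalg.inv(fisher)))
```

As in the text, the simulations used to estimate the compression vectors and those used to estimate the Fisher matrix should be kept disjoint to avoid spurious correlations.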
In particular \(\sigma_{\log M}\) and \(\log M_{\rm min}\) are essentially perfectly degenerate. The fiducial HOD used in this work, as described in Section 3, assumes that the number of galaxies in each dark matter halo is only a function of the halo's mass. However, numerous studies of hydrodynamical simulations and semi-analytic models have shown that other properties, such as concentration and environment, are important (e.g., Gao et al., 2005; Gao and White, 2007; Wechsler et al., 2006; Croton et al., 2007; Li et al., 2008; Bose et al., 2019; Hadzhiyska et al., 2020; Xu et al., 2021; Hadzhiyska et al., 2021). Figure 4: A comparison of the constraints on the amplitude of _equilateral_ non-Gaussianity, \(f_{\rm NL}^{\rm equil.}\), from the bispectrum and power spectrum up to \(k=0.5\)/Mpc (as reported in Coulton et al., 2022), with those from the kNN-CDFs. This analysis uses the \(M_{h}>3.2\times 10^{13}\) M\({}_{\odot}\)/h halo sample from the 1 Gpc\({}^{3}\) box at \(z=0.0\) and considers two cases: constraining only \(f_{\rm NL}^{\rm equil.}\), and constraining \(f_{\rm NL}^{\rm equil.}\) whilst marginalizing over an effective bias parameter, the minimum halo mass of the catalog \(M_{\rm min}\). The distinct impact of primordial non-Gaussianity on the kNN-CDFs for this sample means that, unlike the bispectrum and power spectrum, the signature of primordial non-Gaussianity is not strongly degenerate with the bias parameter. The dependence on additional halo properties is called 'assembly bias'. Observational evidence that assembly bias is important for current two-point analyses is mixed: some works find assembly bias to be important (Zentner et al., 2019; Yuan et al., 2021, 2022; Contreras et al., 2023) and others find no strong evidence (Lin et al., 2016; Niemiec et al., 2018; Salcedo et al., 2022; Yuan et al., 2023; Rocher et al., 2023). The reconciliation of these mixed results lies in the scales used in the analyses and the properties of the samples. Given the increasing sensitivity of upcoming surveys, the importance of assembly bias in simulations and small scale observations, and the hints that assembly bias may be more important for beyond two-point statistics (Yuan et al., 2023c), we explore how one model of assembly bias impacts our kNN-CDF analysis. We explore assembly bias using the Luminous Red Galaxy (LRG) extended HOD from Yuan et al. (2022, 2023c). This model uses the local density around dark matter halos as the secondary halo parameter that modulates the number of galaxies within a dark matter halo. This is implemented by the following modifications \[\log M_{\rm min}^{\rm new}=\log M_{\rm min}+B_{\rm cent}\left( \delta_{\rm m}^{\rm rank}-0.5\right),\] \[M_{0}^{\rm new}=\frac{M_{0}}{M_{\rm min}}M_{\rm min}^{\rm new}, \ {\rm and}\] \[\log M_{1}^{\rm new}=\log M_{1}+B_{\rm sat}\left(\delta_{\rm m}^ {\rm rank}-0.5\right), \tag{12}\] where \(B_{\rm cent}\) and \(B_{\rm sat}\) characterize the level of assembly bias and \(\delta_{\rm m}^{\rm rank}\) is the rank of the local over-density, smoothed on a scale of 5 Mpc/h. In Fig. 5, we show the response of the kNN-CDF measurements to variations of the two assembly bias parameters, around the fiducial values of 0. Figure 5: The signature of primordial non-Gaussianity, cosmological and HOD parameters in the kNN-CDFs of the galaxy sample. The response of this galaxy sample to primordial non-Gaussianity is distinct from the response to cosmological and HOD parameters. 
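A minimal sketch of the environment-based modification in Eq. (12) is given below (ours, for illustration only); the density-rank input \(\delta_{\rm m}^{\rm rank}\in[0,1]\) and the function name are assumptions.

```python
import numpy as np

def assembly_bias_params(log_M_min, M_0, log_M_1, delta_rank,
                         B_cent=0.0, B_sat=0.0):
    """Eq. (12): shift the HOD mass scales according to the rank of the
    local matter overdensity (smoothed on 5 Mpc/h), delta_rank in [0, 1].
    Logarithms follow the convention of Eq. (7); base 10 assumed."""
    log_M_min_new = log_M_min + B_cent * (delta_rank - 0.5)
    M_min_new = 10.0 ** log_M_min_new
    M_0_new = (M_0 / 10.0 ** log_M_min) * M_min_new
    log_M_1_new = log_M_1 + B_sat * (delta_rank - 0.5)
    return log_M_min_new, M_0_new, log_M_1_new

# Example: a halo in the densest environments with mild central assembly bias.
print(assembly_bias_params(np.log10(2.2e13), 2.8e13, np.log10(1.78e14),
                           delta_rank=0.9, B_cent=0.1, B_sat=0.0))
```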
We see that, for this galaxy sample, the impact on the kNN-CDFs measurements is not similar to the signature of _equilateral_ non-Gaussianity. However, further work is required to test whether this holds for other galaxy samples and other secondary parameters (such as concentration or local tidal field). ### Analytical Modeling kNN-CDFs are closely related to counts-in-cell statistics, which characterize the distribution of number counts, or the over density, smoothed over some scale. More precisely, measuring all the kNN-CDFs for all numbers of nearest neighbours at a single radius is exactly equivalent to the pdf of counts-in-cell (CiC) statistic, smoothed with a spherical top hat, evaluated a single smoothing scale. Thus, a measurement of all nearest neighbours of the kNN-CDFs is exactly equivalent to measuring the CiC pdf at all smoothing scales. However, typical kNN-CDFs analyses only consider a subset of all the kNN-CDFs nearest neighbours distributions (here we have used nearest neighbours 1-8,16,32,64 and 128), whilst typical CiC analyses consider only a handful of smoothing scales. From this view, the two statistics are highly complementary. There has been extensive work on CiC statistics (e.g., Fossalba and Gaztanaga, 1998; Valageas, 2001, 2002; Bernardeau et al., 2014; Uhlemann et al., 2016; Ivanov et al., 2019; Uhlemann et al., 2020) including on using CiCs measurements for constraining PNG (e.g, Gaztanaga and Fosalba, 1998; Valageas, 2002; Uhlemann et al., 2018; Friedrich et al., 2020). Most recently Friedrich et al. (2020) found that the counts-in-cell matter field pdf is a powerful probe of primordial non-Gaussianity that can be modelled analytically. This model for the matter field can be combined with work on the galaxy-matter connection to obtain predictions for the galaxy CiC pdf. In this work we use the cosmomentum package developed in Friedrich et al. (2020) with the galaxy-matter connections from Friedrich et al. (2018, 2022). These CiC analytical advancements can be equally applied to understand kNN-CDFs measurements, thanks to the intimate connection between the two statistics. As analytical modeling of the kNN-CDFs has been considered in past work (Banerjee et al., 2022), here we simply use the CiC models to validate our results. To do so we map the kNN-CDF measurements onto the CiC framework and compare our measurements to the CiC predictions. We perform this comparison, rather than mapping the analytical predictions on the kNN-CDF frame, primarily as the model requires a rescaling of the non-linear variance at each smoothing scale, see Friedrich et al. (2020) for a detailed discussion of this. This rescaling is performed once for the CiC pdf, as it is a single smoothing scale, but must be performed for each radius of the kNN-CDFs. As this equates to a rescaling of each point for a single kNN-CDFs, it is only through consistency across kNN-CDFs, with many different numbers of nearest neighbours, can the theory model be rigorously tested. Tests across different numbers of nearest neighbours of kNN-CDFs effectively equates to the CiC picture! A more minor reason is that the analytical method is only valid for the bulk on the CiC pdf and these cuts have a simpler expression in the CiC framework. To generate our analytical predictions we use the Lagrangian bias method from Friedrich et al. (2022), matching the galaxy linear bias, \(b_{1}\), and number density, \(\bar{n}\) to that of our simulated galaxy catalogs. 
Further, we use our simulations to compute the non-linear variance for each smoothing scale and use this to calibrate our analytical predictions as described in Friedrich et al. (2020). We refit the parameters of the non-Poissonian shot noise (cf. Eq. 24 of Friedrich et al., 2022) to match our simulations. We find the following parameterization of the non-Poissonian shot noise describes the bulk-pdf behavior of our simulations \[\frac{\langle N^{2}|\delta_{m}\rangle-\langle N|\delta_{m}\rangle^{2}}{\langle N |\delta_{m}\rangle}=\begin{cases}\alpha_{0}+\alpha_{1}\delta_{m},&\text{if }\delta_{m}<0\\ \alpha_{0},&\text{otherwise},\end{cases} \tag{13}\] where \(\alpha_{0}=0.95\) and \(\alpha_{1}=-0.35\) control the deviation from Poissonian noise. An alternative could be a quadratic bias model; however, for simplicity we use a piece-wise linear model. In Fig. 6 we compare measurements of the derivatives of the first 250 kNN-CDFs of our galaxy sample to the analytical predictions. For both the smoothing scales shown, \(15\,\mathrm{Mpc/h}\) and \(28.25\,\mathrm{Mpc/h}\), and for all three parameters (\(\sigma_{8}\), \(\Omega_{m}\) and \(f_{\mathrm{NL}}^{\mathrm{equil.}}\)) we find generally good agreement between the simulations and the theoretical model. This agreement provides strong evidence that the signatures of PNG in the kNN-CDFs are physical, and not an artifact of our simulation methods. This eases any potential concerns on the limitations of the simulations, such as on the impact of resolution, finite volumes, the approximate generation of primordial non-Gaussianity or missing components (such as baryons). The analytical method also helps develop intuition behind our observed features. The theoretical model from Friedrich et al. (2020) and Friedrich et al. (2022) focused on modelling the bulk of the pdf and thus the model is not expected to work in the tails of the distribution. Further, Friedrich et al. (2020) noted that resolution effects can lead to small discrepancies between the model and simulations. Combined, these effects are thought to explain the observed differences seen in our measurements. ### Parameter Constraints Using these results we consider a Fisher forecast using the galaxy sample. As in Section 4 we perform the Fisher forecast in the compressed data space. As \(\sigma_{\log M}\) and \(\log M_{\mathrm{min}}\) are perfectly degenerate for our sample, we only consider one of these as a free parameter, \(\log M_{\mathrm{min}}\). The results are shown in Fig. 7. From a \(1\,\mathrm{Gpc^{3}}\) volume the kNN-CDFs would be able to constrain _equilateral_ non-Gaussianity to \(\sigma(f_{\mathrm{NL}}^{\mathrm{equil.}})=299\). We note that the resolution limitations of the simulations mean that a realistic sample, with much higher number densities, would be able to obtain tighter constraints. The value of this approach can be seen by comparing to results in (Coulton et al., 2022; Jung et al., 2022, 2023), where the same halo sample is analyzed with bispectrum and halo mass function statistics. Whilst \(f_{\mathrm{NL}}^{\mathrm{equil.}}\) is not degenerate with any single parameter, see Fig. 5, there are some mild degeneracies in the 9 dimensional space. Quantitatively, the marginalized constraint is 2.4 times larger than the case when the eight other parameters are fixed. Given the large degree of flexibility within the 9-parameter space, it is not surprising that degeneracies appear. 
The degradation seen here is similar to that found when marginalizing over the same cosmological parameters and only one, simple bias parameter in a bispectrum halo analysis (Coulton et al., 2022; Jung et al., 2022), implying that the size of the kNN degeneracies is significantly smaller. Given the large parameter space, it is unsurprising that parameter degeneracies would develop. The information in the kNN-CDFs primarily comes from small-scale clustering. Therefore, a kNN-CDF analysis could be combined with other cosmological probes such as baryon acoustic oscillation measurements, supernovae, CMB power spectra measurements or even large-scale galaxy power spectrum and bispectrum measurements. These probes are all highly complementary and could be used to break degeneracies with the kNN-CDFs. In Fig. 7 we show two examples of how this could be beneficial. In the first case we add a prior on the Hubble parameter from Riess et al. (2022), and in the second case we include independent priors on the cosmological parameters based on the _Planck_ 2018 cosmology (Planck Collaboration et al., 2020). In both of these cases we see significantly reduced degeneracies with \(f_{\rm NL}^{\rm equil.}\). With the _Planck_ priors the \(f_{\rm NL}^{\rm equil.}\) constraint is only 40% larger than the case where all other parameters are fixed. This demonstrates that kNN-CDFs can separate late-time physics, such as how galaxies occupy halos, from the primordial signatures. Note that these priors are purely demonstrative; future experiments would offer significantly improved priors. ## 6 Discussions In this work we have simulated the impact of three types of primordial non-Gaussianity on dark matter halo and galaxy kNN-CDFs from the quijote-png simulation suite. The signatures of primordial non-Gaussianity in kNN-CDFs are distinct from changes in cosmological parameters and also from bias parameters (a simple \(M_{\rm min}\) for dark matter halos and five halo occupation distribution parameters for the galaxies). The signature of PNG was characterized by examining multiple mass samples and two redshifts. The mass and redshift evolution show features similar to the halo mass function and therefore suggest the kNN-CDFs are accessing related physical effects. The scales used in this analysis (\(10-70\) Mpc/h) are smaller than those used in typical primordial non-Gaussianity analyses (Nishimichi et al., 2020; D'Amico et al., 2022; Philcox et al., 2022) and so present a highly complementary analysis approach. An interesting future topic would be to combine these approaches. This would utilize both the optimal analysis of the large scales with the bispectrum and the small-scale information probed by the kNN measurements. Further, kNN-CDFs have already been shown to powerfully constrain bias parameters (Yuan et al., 2023) and this could further enhance large-scale bispectrum measurements. To further validate and build intuition for our results, we compared them to theoretical predictions. This was done by exploiting the close relationship between kNN-CDFs and counts-in-cells. The impact of PNG on counts in cells for dark matter has been studied before (Uhlemann et al., 2018; Friedrich et al., 2020) and accurate analytic tools have been developed for those results. Mapping our kNN-CDF results into the counts-in-cells (CiC) frame, we found reasonably good agreement with the theoretical prediction. 
This provides a stringent validation of the simulations and helps demonstrate that spurious artifacts of the initial conditions, which can arise from higher order effects of the IC generation method, are negligible. Further, this provides a theoretical framework to further understand the signatures found in the simulations. The close relationship between kNN-CDFs and CiCs means that our results can equally be thought of as both an extension of the results of Friedrich et al. (2020) to dark matter halos and galaxies and an examination of the CiC space (counts at fixed scales) from a different perspective (thresholded counts as a function of scale). In the primordial space, the primordial non-Gaussianities are bispectra as a function of both scale and configuration - the relative magnitudes of the three wavevectors. The theoretical model that accurately describes our kNN-CDFs is only a function of the skewness at different scales of the linear density field, as demonstrated in Eq. 52 of Friedrich et al. (2020). This has two consequences for the kNN-CDFs: first, it states that the distinct signature of _equilateral_ PNG, shown in Fig. 1 and Fig. 5, arises due to the scale dependence of the _equilateral_ PNG skewness. This is connected to the halo mass function, which shows similar mass dependence and is discussed in Section 4, as the skewness on different scales controls the response of the halo mass function to PNG (Chongchitnan and Silk, 2010; LoVerde and Smith, 2011). Second, this suggests that kNN-CDFs are not able to fully access all the information contained in the primordial bispectrum, as the skewness averages over the configuration of the bispectrum and may "wash out" some of the signal. An interesting question is how much of the primordial signal is lost. Figure 6: A comparison between the analytical model presented in Section 5.2, solid lines, and the kNN-CDF measurements converted to counts-in-cells for two smoothing scales, dotted lines. The error bars denote the error on the mean. To answer that we compare the information in the skewness of the linear density field to the bispectrum of the same field. The results, shown in Table 1, indicate that for _equilateral_ non-Gaussianity almost all of the information can be accessed via measurements of the skewness. For the other types of non-Gaussianity, especially _orthogonal_, more information is lost. There are several interesting future directions. First, the galaxy sample considered here does not represent an observational sample. It was constructed to explore the HOD degeneracies for a sample populating the 'low mass' halos in our simulation. Our simulation mass resolution leads to a minimum halo mass that is very close to the mean mass of observational samples (e.g., BOSS, unWISE, DESI LRGs; More et al., 2015; Krolewski et al., 2020; Yuan et al., 2023). Considering a more realistic sample, which requires higher resolution simulations, would be a valuable next step. Figure 7: Constraints on _equilateral_ non-Gaussianity and a set of cosmological and HOD parameters. Whilst the impact of _equilateral_ non-Gaussianity is not degenerate with any single parameter, there are more complex degeneracies in the large 9 dimensional parameter space. As the kNN-CDF information primarily comes from small-scale, \(<50\) Mpc/h, clustering, these measurements can easily be combined with probes of the background expansion, large-scale clustering and CMB measurements. Thus, we compare constraints from only the kNN-CDFs (blue) with constraints including a prior on the Hubble constant consistent with local distance measurements (Riess et al., 2022) or a _Planck_ 2018 prior on the cosmological parameters (\(\sigma_{8}\), \(\Omega_{m}\), \(h\) and \(n_{s}\)). Combining just with the Hubble prior allows most of the _equilateral_ degeneracies to be removed. These constraints are from our galaxy sample at \(z=0.0\) with a volume of 1 Gpc\({}^{3}\). 
Second, whilst the catalogs used in this analysis were in redshift space, the kNN-CDFs used here were isotropic. Recent work by Yuan et al. (2023c) has shown that using 2-dimensional kNN-CDFs allows more information to be extracted from cosmological data sets. A third interesting extension, also proposed in Yuan et al. (2023c), is to compute the kNN-CDFs not from a set of random points, but from the data points themselves. This statistic was shown to be highly effective in constraining HOD parameters and thus could be effective for further reducing degeneracies between PNG and late-time physics. The results in the main text used all the halos/galaxies in the catalog. This induced a dependence on the total number of objects in the sample. In Appendix A we explore one means of obviating this: using a fixed number density. However, for a realistic survey we would desire to use all observed objects (rather than downsampling to a desired number density). An alternative would be to normalize the radius by the mean separation. Similarly, this analysis worked in the simplified geometry of a periodic box at a single redshift. Exploring the impacts of non-trivial geometries, light-cone effects (e.g. Yuan et al., 2023b), observational masks and combining multiple redshifts would set the stage for an analysis of observations. A final direction would be to consider in more detail the effects of assembly bias - for example including velocity bias or investigating alternative secondary halo parameters. These topics will be the subject of future work. In conclusion, these results suggest that kNN-CDFs could be a powerful statistic for separating out primordial non-Gaussianity and late-time physical processes. ## Acknowledgements The authors are very grateful to Sandy Yuan, Oliver Philcox, Francisco Villaescusa-Navarro, David Spergel, Oliver Friedrich, Cora Uhlemann and the Quijote-PNG collaboration for useful discussions. This work was supported by collaborative visits funded by the Cosmology and Astroparticle Student and Postdoc Exchange Network (CASPEN). This work was also supported by U.S. Department of Energy grant DE-AC02-76SF00515 to SLAC National Accelerator Laboratory managed by Stanford University. ## Appendix A Fixed Number Density In this appendix we investigate how a different analysis choice, computing the kNN-CDFs from a fixed number of data points, alters our conclusions. Using a fixed number of objects, rather than all objects, removes the sensitivity to the total number of objects. In Fig. 11 we recompute the dark matter halo kNN-CDFs using samples of 150,000 randomly chosen dark matter halos at \(z=0.0\) with \(M_{h}>3.2\times 10^{13}\,M_{\odot}/h\). The response of all kNN-CDFs to different parameters is different to the case where all the halos are used. The smooth bell curves are no longer seen as the shifts in the number of halos have been removed. Despite the difference in shape, the response to _equilateral_ non-Gaussianity across the different numbers of neighbours is still distinct in shape from the other cosmological parameters. 
In Section 4, we discussed that the kNN-CDFs may be responding to changes in the halo mass function. Using a fixed total number of halos does not remove this sensitivity. The hypothesis was that the kNN-CDF signature is related to the different effects on the number of high and low mass halos, rather than the total number, and the different clustering of high and low mass halos. This differential effect would not be removed by using a fixed number of objects, as considered in this appendix. Thus, while the quantitative details are different, the results of this appendix qualitatively match the case where we use all the objects in the catalog. ## Appendix B Convergence Tests For simulation-based Fisher forecasts it is vital to test that the results are converged. Unconverged results arise from the noise associated with using a finite number of Monte Carlo simulations. In Fig. 12 we test the convergence of our galaxy Fisher forecast (Section 5.3). The slow change in the forecast constraints with the number of simulations used implies that the forecast is sufficiently converged and so reliable.
2309.13520
The prime-counting Copeland-Erdős constant
Let $(a(n) : n \in \mathbb{N})$ denote a sequence of nonnegative integers. Let $0.a(1)a(2)...$ denote the real number obtained by concatenating the digit expansions, in a fixed base, of consecutive entries of $(a(n) : n \in \mathbb{N})$. Research on digit expansions of this form has mainly to do with the normality of $0.a(1)a(2)...$ for a given base. Famously, the Copeland-Erd\H{o}s constant $0.2357111317...$, for the case whereby $a(n)$ equals the $n^{\text{th}}$ prime number $p_{n}$, is normal in base 10. However, it seems that the ``inverse'' construction given by concatenating the decimal digits of $(\pi(n) : n \in \mathbb{N})$, where $\pi$ denotes the prime-counting function, has not previously been considered. Exploring the distribution of sequences of digits in this new constant $0.0122...9101011...$ would be comparatively difficult, since the number of times a fixed $m \in \mathbb{N}$ appears in $(\pi(n) : n \in \mathbb{N})$ is equal to the prime gap $g_{m} = p_{m+1} - p_{m}$, with the behaviour of prime gaps notoriously elusive. Using a combinatorial method due to Sz\"usz and Volkmann, we prove that Cram\'er's conjecture on prime gaps implies the normality of $0.a(1)a(2)...$ in a given base $g \geq 2$, for $a(n) = \pi(n)$.
John M. Campbell
2023-09-24T01:24:48Z
http://arxiv.org/abs/2309.13520v1
# The prime-counting Copeland-Erdos constant ###### Abstract Let \((a(n):n\in\mathbb{N})\) denote a sequence of nonnegative integers. Let \(0.a(1)a(2)...\) denote the real number obtained by concatenating the digit expansions, in a fixed base, of consecutive entries of \((a(n):n\in\mathbb{N})\). Research on digit expansions of this form has mainly to do with the normality of \(0.a(1)a(2)...\) for a given base. Famously, the Copeland-Erdos constant \(0.2357111317...\), for the case whereby \(a(n)\) equals the \(n^{\text{th}}\) prime number \(p_{n}\), is normal in base \(10\). However, it seems that the "inverse" construction given by concatenating the decimal digits of \((\pi(n):n\in\mathbb{N})\), where \(\pi\) denotes the prime-counting function, has not previously been considered. Exploring the distribution of sequences of digits in this new constant \(0.0122...9101011...\) would be comparatively difficult, since the number of times a fixed \(m\in\mathbb{N}\) appears in \((\pi(n):n\in\mathbb{N})\) is equal to the prime gap \(g_{m}=p_{m+1}-p_{m}\), with the behaviour of prime gaps notoriously elusive. Using a combinatorial method due to Szusz and Volkmann, we prove that Cramer's conjecture on prime gaps implies the normality of \(0.a(1)a(2)...\) in a given base \(g\geq 2\), for \(a(n)=\pi(n)\). _Keywords:_ Normal number, prime-counting function, decimal expansion, prime gap _MSC:_ 11K16, 11A63 ## 1 Introduction The study of normal numbers forms a large and important area in probabilistic number theory. Much of our notation concerning normal numbers is based on the work of Szusz and Volkmann in [19]. Following [19], we let \(g\geq 2\) be a fixed parameter throughout our article, letting it be understood that we are working with digits in base \(g\), unless otherwise specified. For a real value \(\alpha\), and for a block \(E\) of digits, let \(A_{E}(\alpha,n)\) denote the number of copies of \(E\) within the first \(n\) digits of \(\alpha\). The real number \(\alpha\) is said to be _normal of order \(k\)_ if \[\lim_{n\rightarrow\infty}\frac{A_{E}(\alpha,n)}{n}=\frac{1}{g^{\ell(E)}} \tag{1}\] for all \(E\) such that \(\ell(E)=k\), where \(\ell(E)\) denotes the _length_ of \(E\), or the number of digits in \(E\), counting multiplicities. For \(k=1\), the specified property concerning (1) is referred to as _simple normality_. A real number \(\alpha\) is said to be _normal_ if it is normal for all orders \(k\in\mathbb{N}\). In this article, we introduce a constant related to the prime-counting function that may be seen as something of an inverse relative to the construction of the famous Copeland-Erdos constant [6], and we prove, under the assumption of Cramer's conjecture, that this new constant is normal. This is inspired by past work related to the normality of real numbers defined via concatenations of number-theoretic functions, as in [4, 9, 10, 16, 17, 20]. ### Background For a sequence \((a(n):n\in\mathbb{N})\) of nonnegative integers, we let \[0.a(1)a(2)\ldots \tag{2}\] denote the real value given by concatenating the digit expansions of consecutive entries of the aforementioned sequence. The first real value proved to be normal is famously due to Champernowne [5] and is given by the case whereby \(a(n)=n\) for all \(n\in\mathbb{N}\) in (2), in base 10. Shortly afterwards, Besicovitch [3] proved a result that may be applied to obtain the normality of the corresponding constant for the \(a(n)=n^{2}\) case [16]. 
This normality result was then generalized by Davenport and Erdos [8] for the case whereby \(a(n)\) is a polynomial satisfying certain conditions. The polynomial cases we have covered lead us to consider the behaviour of the digits in (2) for number-theoretic sequences. A famous 1946 result due to Copeland and Erdos [6] provides the base-10 normality of (2) for the case whereby \(a(n)\) is equal to the \(n^{\rm th}\) prime number \(p_{n}\). In this case, the constant \[0.235711131719232... \tag{3}\] of the form indicated in (2) is referred to as the _Copeland-Erdos constant_. Copeland and Erdos' 1946 article [6] is seminal within areas of number theory concerning normal numbers, and this inspires the exploration of variants of (3), with the use of number-theoretic sequences in place of \((p_{n}:n\in\mathbb{N})\). As a natural variant of the Copeland-Erdos constant, we consider the constant \[0.012233444455666677888899999910101111... \tag{4}\] obtained from (2) by setting the sequence \((a(n):n\in\mathbb{N})\) to be equal to the sequence \((\pi(n):n\in\mathbb{N})\) given by the prime-counting function. Copeland and Erdos' proof [6] of the normality of (3) relied on the property whereby the sequence \((p_{n}:n\in\mathbb{N})\) is strictly increasing and the property given by \(p_{n}=n^{1+o(1)}\), but, since \((\pi(n):n\in\mathbb{N})\) is not strictly increasing, the techniques from [6] cannot be translated so as to be applicable to the constant in (4). ## 2 Main construction It seems that references on normal numbers related to Copeland and Erdos' work in [6], including references such as [1, 2, 8, 13, 15] that have inspired our work, have not involved the "inverse" constant in (4). Moreover, integer sequences involving \[(0,1,2,2,3,3,4,4,4,4,5,5,6,6,6,6,7,7,8,8,8,8,9,9,9,9,9,9,1,0,\ldots) \tag{5}\] are not currently included in the On-Line Encyclopedia of Integer Sequences, where the tuple in (5) is given by the consecutive digits of the constant in (4), which we refer to as the _prime-counting Copeland-Erdos (PCCE) constant_. In relation to this constant, we are to apply the remarkable result that was originally formulated by Szusz and Volkmann in 1994 [19] and that was later corrected by Pollack and Vandehey [16] and that is reproduced below, and we are to later explain the notation/terminology given in the following Theorem. **Theorem 1**.: _(Szusz \(\&\) Volkmann, 1994) Suppose that \(f\) is a differentiable function, and that \(f\) is monotonically increasing and positive for all \(x\geq n_{0}(f)\) and that both \(\eta(f)\) and \(\eta(f^{\prime})\) exist and that \(0<\eta(f)\leq 1\). It follows that \(f\) is a Champernowne function [19] (cf. [16])._ **Definition 1**.: For a real-valued function \(f\) defined on a domain containing \(\mathbb{N}\) such that \(f(n)>0\), we let \(\alpha(f)=\alpha_{g}(f)\) denote the real number such that the base-\(g\) expansion of this real number is of the form \(0.b_{1}b_{2}...\), where \(b_{n}\) denotes the base-\(g\) expansion of \(\lfloor f(n)\rfloor\) [19]. **Example 1**.: The PCCE constant in (4) is equal to \(\alpha_{10}(\pi)\), writing \(\pi\) in place of the prime-counting function. **Definition 2**.: A function \(f\) such that \(\lim_{x\to\infty}f(x)=\infty\) is a _Champernowne function_ if: For all \(g\geq 2\), the value \(\alpha_{g}(f)\) is normal in base \(g\) [19]. **Example 2**.: The identity function \(f\) mapping \(n\) to \(n\) is a Champernowne function (cf. [14]). 
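As a concrete illustration of Definition 1 and Example 1, the following is a minimal Python sketch (ours, not part of the source material) that generates the leading digits of \(\alpha_{g}(\pi)\) by concatenating the base-\(g\) expansions of \(\lfloor\pi(n)\rfloor=\pi(n)\); `sympy.primepi` is used for the prime-counting function, and the function name is illustrative.

```python
from sympy import primepi

def pcce_digits(n_terms, base=10):
    """Concatenate the base-`base` digits of pi(1), pi(2), ..., pi(n_terms),
    i.e. the digits of the PCCE constant alpha_base(pi) after the point."""
    digits = []
    for n in range(1, n_terms + 1):
        m = int(primepi(n))          # prime-counting function pi(n)
        if m == 0:
            digits.append(0)         # pi(1) = 0 contributes the single digit 0
        else:
            rep = []
            while m > 0:
                rep.append(m % base)
                m //= base
            digits.extend(reversed(rep))
    return digits

# First digits of the PCCE constant in base 10, matching (4) and (5):
# [0, 1, 2, 2, 3, 3, 4, 4, 4, 4, 5, 5, 6, 6, 6, 6, 7, 7, 8, 8, 8, 8, ...]
print(pcce_digits(30))
```

Digit and block frequencies, counted with overlaps as in the convention of [19] discussed in Section 3, can then be read off directly from this list.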
**Definition 3**.: For a real-valued, positive function \(f\), we set \[\eta(f)=\lim_{x\to\infty}\frac{\log f(x)}{\log x}, \tag{6}\] under the assumption that this limit exists. **Example 3**.: As noted in [16, 19], the constant \(0.1112222233333334...\) given by setting \(f(n)=\sqrt{n}\) in \(\alpha(f)\) is normal, with \(\eta(f)=\frac{1}{2}\). A difficulty associated with the explicit construction of a function \(f\) that satisfies all of the required properties in Theorem 1 and that can be used, via Szusz and Volkmann's combinatorial method, to prove the normality of (4) has to do with the limit associated with \(\eta(f^{\prime})\), since, for example, \(f^{\prime}\) cannot vanish infinitely often, in view of the definition in (6). A more serious problem has to do with how the distribution of groupings of digits in the PCCE constant would depend on the behaviour of the sequence (\(g_{n}=p_{n+1}-p_{n}:n\in\mathbb{N}\)) of prime gaps, since the number of times \(m\in\mathbb{N}\) appears in (\(\pi(n):n\in\mathbb{N}\)) is equal to \(g_{m}\). In this regard, we are to make use of a purported formula, under the assumption of Cramer's conjecture, concerning the size of prime gaps. _Cramer's conjecture_ (cf. [7]) often refers to the purported estimate \[g_{n}=O(\log^{2}p_{n}). \tag{7}\] The weaker formula \[g_{n}=O(\sqrt{p_{n}}\log p_{n}), \tag{8}\] which Cramer proved [7] under the assumption of the Riemann Hypothesis (RH), is not sufficient for our purposes, as we later discuss. The problem, here, has to do with how we would want \(\lim_{n\to\infty}\frac{\log h_{n}}{\log n}\) to vanish for a function \(h_{n}\) approximating \(g_{n}\). Our construction of a function \(f\) that satisfies the conditions in Theorem 1 and that may be used to prove the normality of the PCCE constant may be viewed as being analogous to how the \(\Gamma\)-function provides a differentiable analogue of the factorial function defined on natural numbers. As indicated above, \(\eta(f^{\prime})\) could not be vanishing infinitely often, if we were to apply Szusz and Volkmann's combinatorial method [19], which leads us to consider how \(f^{\prime}\) could be bounded, in such a way to guarantee the existence of the limit associated with \(\eta(f^{\prime})\). Informally, if we consider the graph of the prime-counting function as a function defined on \(\mathbb{R}\), and if we consider a function \(f\) that projects onto the graph of \(\pi\) under \(\lfloor\cdot\rfloor\) and that has a derivative that is reasonably well behaved, we would expect the derivative function \(f^{\prime}(x)\), for values \(x\) such that \(p_{m}\leq x<p_{m+1}\), to be "reasonably close" to \(\frac{1}{g_{m}}\). Formalizing this notion in a way that would allow us to apply Theorem 1 is nontrivial, as below. **Theorem 2**.: _Under the assumption of Cramer's conjecture, the PCCE constant is normal in base 10 and, more generally, the constant \(0.a(1)a(2)\cdots\) is normal in base \(g\) for \(a(n)=\pi(n)\)._ Proof.: First suppose that the natural number \(m\in\mathbb{N}_{\geq 2}\) is such that \(\frac{1}{g_{m-1}}>\frac{1}{g_{m}}\). Let \(\varepsilon^{(m)}>0\). We set \(q_{1}^{(m)}=p_{m}+\varepsilon^{(m)}\) for \(\varepsilon^{(m)}<1\), and we set \[q_{2}^{(m)}=\frac{q_{1}^{(m)}(g_{m-1}-g_{m})}{g_{m-1}g_{m}^{2}}+\frac{g_{m}p_ {m}-g_{m-1}p_{m}+g_{m-1}g_{m}}{g_{m-1}g_{m}^{2}}. 
\tag{9}\] We may verify that (9) reduces in such a way so that \[q_{2}^{(m)}=\frac{\varepsilon^{(m)}\left(\frac{1}{g_{m}}-\frac{1}{g_{m-1}} \right)}{g_{m}}+\frac{1}{g_{m}}, \tag{10}\] which gives us that \(q_{2}^{(m)}<\frac{1}{g_{m}}\). By letting \(\varepsilon^{(m)}\) be sufficiently small, with \[0<\varepsilon^{(m)}<\frac{1}{\frac{1}{g_{m-1}}-\frac{1}{g_{m}}}, \tag{11}\] the condition in (11) gives us that \(0<q_{2}^{(m)}<\frac{1}{g_{m}}\). We define the function \(h_{m}^{\varepsilon}(x)\) on the interval \([p_{m},p_{m+1}]\) so that: \[h_{\varepsilon}^{(m)}(x)=\begin{cases}\frac{1-q_{2}^{(m)}g_{m-1}}{g_{m-1} \left(p_{m}-q_{1}^{(m)}\right)}x+\frac{q_{2}^{(m)}g_{m-1}p_{m}-q_{1}^{(m)}}{g _{m-1}\left(p_{m}-q_{1}^{(m)}\right)}&\text{if }p_{m}\leq x\leq p_{m}+ \varepsilon^{(m)},\\ \frac{1-q_{2}^{(m)}g_{m}}{g_{m}\left(p_{m+1}-q_{1}^{(m)}\right)}x+\frac{q_{2} ^{(m)}g_{m}p_{m+1}-q_{1}^{(m)}}{g_{m}\left(p_{m+1}-q_{1}^{(m)}\right)}&\text{ if }p_{m}+\varepsilon^{(m)}\leq x\leq p_{m+1}.\end{cases}\] Again for a natural number \(m\in\mathbb{N}_{\geq 2}\), we proceed to suppose that \(\frac{1}{g_{m-1}}<\frac{1}{g_{m}}\). We write \(\delta^{(m)}>0\). We set \(r_{1}^{(m)}=p_{m}+\delta^{(m)}\), with \(\delta^{(m)}<1\). We set \[r_{2}^{(m)}=\frac{r_{1}^{(m)}(g_{m-1}-g_{m})}{g_{m-1}g_{m}^{2}}+\frac{g_{m}p_{m }-g_{m-1}p_{m+1}+2g_{m-1}g_{m}}{g_{m-1}g_{m}^{2}}. \tag{12}\] We may verify that (12) reduces so that \[r_{2}^{(m)}=\frac{\delta^{(m)}\left(\frac{1}{g_{m}}-\frac{1}{g_{m-1}}\right)} {g_{m}}+\frac{1}{g_{m}}, \tag{13}\] so that \(r_{2}^{(m)}>\frac{1}{g_{m}}\). We define the function \(h_{\delta}^{(m)}(x)\) on the interval \([p_{m},p_{m+1}]\) so that: \[h_{\delta}^{(m)}(x)=\begin{cases}\frac{1-r_{2}^{(m)}g_{m-1}}{g_{m-1}\big{(}p_ {m}-r_{1}^{(m)}\big{)}}x+\frac{r_{2}^{(m)}g_{m-1}p_{m}-r_{1}^{(m)}}{g_{m-1} \big{(}p_{m}-r_{1}^{(m)}\big{)}}&\text{if $p_{m}\leq x\leq p_{m}+\delta^{(m)}$},\\ \frac{1-r_{2}^{(m)}g_{m}}{g_{m}\big{(}p_{m+1}-r_{1}^{(m)}\big{)}}x+\frac{r_{2}^ {(m)}g_{m}p_{m+1}-r_{1}^{(m)}}{g_{m}\big{(}p_{m+1}-r_{1}^{(m)}\big{)}}&\text{ if $p_{m}+\delta^{(m)}\leq x\leq p_{m+1}$}.\end{cases}\] Finally, if \(\frac{1}{g_{m-1}}=\frac{1}{g_{m}}\) for \(m\in\mathbb{N}_{\geq 2}\), we define \(h_{\text{null}}^{(m)}(x)\) on the interval \([p_{m},p_{m+1}]\) so that \(h_{\text{null}}^{(m)}(x)=\frac{1}{g_{m}}\). Now, set \(f(x)=\frac{1}{96}\left(4x^{2}+12x+39\right)\) if \(0\leq x\leq\frac{3}{2}\), and set \(f(x)=\frac{1}{4}\left(3x^{2}-8x+8\right)\) if \(\frac{3}{2}\leq x\leq 2\) and set \(f(x)=x-1\) if \(2\leq x\leq 3\). Now, for \(\pi(x)\in\mathbb{N}_{\geq 2}\), we set \(f(x)\) so that \[f(x)=\begin{cases}\pi(x)+\int_{p_{\pi(x)}}^{x}h_{\varepsilon}^{(\pi(x))}(k)\, dk&\text{if $\frac{1}{g_{\pi(x)-1}}>\frac{1}{g_{\pi(x)}}$},\\ \pi(x)+\int_{p_{\pi(x)}}^{x}h_{\delta}^{(\pi(x))}(k)\,dk&\text{if $\frac{1}{g_{\pi(x)-1}}< \frac{1}{g_{\pi(x)}}$},\\ \pi(x)+\int_{p_{\pi(x)}}^{x}h_{\text{null}}^{(\pi(x))}(k)\,dk&\text{if $\frac{1}{g_{\pi(x)-1}}= \frac{1}{g_{\pi(x)}}$}.\end{cases}\] By construction, we have that \(f\) is a differentiable function and that \(f\) is monotonically increasing and positive for \(x\geq 0\). Moreover, the function \(f\) is constructed so that \(\lfloor f(x)\rfloor=\pi(x)\) for all \(x\geq 0\). 
We have that \(\frac{1}{8}\leq f^{\prime}(x)\leq 1\) for \(0\leq x\leq 3\), and, for \(x\geq 3\), we have that \(f^{\prime}(x)\) is either equal to \(h_{\varepsilon}^{(m)}(x)\) or \(h_{\delta}^{(m)}(x)\) or \(h_{\text{null}}^{(m)}(x)\), according, respectively, to the possibilities whereby \(\frac{1}{g_{m-1}}>\frac{1}{g_{m}}\) and \(\frac{1}{g_{m-1}}<\frac{1}{g_{m}}\) and \(\frac{1}{g_{m-1}}=\frac{1}{g_{m}}\). If \(\frac{1}{g_{m-1}}>\frac{1}{g_{m}}\), then, by construction, we have that \[0<\frac{\varepsilon^{(m)}\left(\frac{1}{g_{m}}-\frac{1}{g_{m-1}}\right)}{g_{m}} +\frac{1}{g_{m}}\leq f^{\prime}(x)\leq\frac{1}{g_{m-1}} \tag{14}\] for \(p_{m}\leq x\leq p_{m+1}\), in view of (10), where, for fixed \(m\), the expression \(\varepsilon^{(m)}>0\) may be arbitrary, subject to (11). If \(\frac{1}{g_{m-1}}<\frac{1}{g_{m}}\), then, by construction, we have that \[0<\frac{1}{g_{m-1}}\leq f^{\prime}(x)\leq\frac{\delta^{(m)}\left(\frac{1}{g_{m }}-\frac{1}{g_{m-1}}\right)}{g_{m}}+\frac{1}{g_{m}}, \tag{15}\] in view of (13), and, for \(p_{m}\leq x\leq p_{m+1}\), the expression \(\delta^{(m)}>0\) is arbitrary on the specified interval. Finally, by construction, if \(\frac{1}{g_{m-1}}=\frac{1}{g_{m}}\), then \[0<\frac{1}{g_{m-1}}=\frac{1}{g_{m}}\leq f^{\prime}(x)\leq\frac{1}{g_{m-1}}= \frac{1}{g_{m}}.\] So, in any case, we find that \[\min\left\{\overline{\varepsilon^{(\pi(x))}}+\frac{1}{g_{\pi(x)}},\frac{1}{g_ {\pi(x)-1}}\right\}\leq f^{\prime}(x)\leq\max\left\{\overline{\delta^{(\pi(x) )}}+\frac{1}{g_{\pi(x)}},\frac{1}{g_{\pi(x)-1}}\right\},\] where the "overlined" expressions given above are, respectively, given by the positive terms involving \(\varepsilon^{(m)}\) and \(\delta^{(m)}\) in (14) and (15), writing \(m=\pi(x)\). Under the assumption of Cramer's conjecture, as formulated in (7), we find that there exists \(M>0\) and \(x_{0}\in\mathbb{R}\) such that \[\forall y\geq x_{0}\ \frac{1}{M\log^{2}p_{y}}\leq\frac{1}{g_{y}}\leq\frac{1}{2}. \tag{16}\] So, for suitable natural numbers \(m\in\mathbb{N}_{\geq 2}\), by taking \(\overline{\varepsilon^{(m)}}\) to be sufficiently small throughout a given interval \([p_{m},p_{m+1}]\), and similarly for \(\overline{\delta^{(m)}}\), from the consequence of Cramer's conjecture given in (16), we may deduce that \[\frac{1}{M\log^{2}p_{\pi(x)}}\leq f^{\prime}(x)\leq 0.51, \tag{17}\] for sufficiently large \(x\). Using explicit bounds as in [11, 12, 18] for the prime-counting function and for the sequence of primes, we may obtain, from (17), that \[\frac{1}{M\log^{2}\left(\frac{x\log\left(\frac{x\log\left(\frac{x}{\log(x)-1.1}\right)}{\log(x)-1.1}\right)}{\log(x)-1.1}\right)}\leq f^{\prime}(x)\leq 0.51, \tag{18}\] for sufficiently large \(x\). By taking the natural logarithm of \(f^{\prime}(x)\) and the bounds in (18), and then dividing by \(\log(x)\), it is a matter of routine to verify that the limit as \(x\to\infty\) of the resultant bounds vanishes, so that \(\eta(f^{\prime})\) exists, as desired, with \(\eta(f^{\prime})=0\). By construction, we have that \(\pi(x)\leq f(x)\leq\pi(x)+1\) for all \(x\). So, we find that \[\frac{\log\left(\frac{x}{\log(x)-1}\right)}{\log(x)}\leq\frac{\log(f(x))}{\log (x)}\leq\frac{\log\left(\frac{x}{\log(x)-1.1}+1\right)}{\log(x)}. \tag{19}\] By taking the limit as \(x\to\infty\) of the upper and lower bounds given in (19), this gives us the same value of \(1\), so that \(\eta(f)=1\), as desired. 
So, we have that \(f\) is a differentiable function and is monotonically increasing and positive, and we have that \(\eta(f)\) and \(\eta(f^{\prime})\) exist and that \(0<\eta(f)\leq 1\), and we have that \(\lim_{x\to\infty}f(x)=\infty\). So, from Theorem 1, we have that \(f\) is a Champernowne function. ## 3 Discussion Following the work of Szusz and Volkmann, as in the main article that has inspired our work [19], we adopt the convention whereby strings of digits are counted "with overlaps allowed", in the sense that two occurrences of the same substring in a larger string are counted separately whether or not they overlap. For example, Szusz and Volkmann [19] provide the illustration whereby the substring \(131\) is counted four times within \(713131051310131\), the substring \(131\) overlapping with itself within \(13131\). This convention concerning normal numbers does not agree with the Mathematica commands StringCount or SequenceCount; to avoid this issue, we may instead use the Wolfram StringPosition command. For example, inputting StringPosition["713131051310131", "131"] into the Mathematica Computer Algebra System, we obtain a list of four elements, which agrees with Szusz and Volkmann's convention for enumerating subsequences of digits. Taking the first \(10\) million entries of the integer sequence \((\pi(n):n\in\mathbb{N}_{0})\), converting this subsequence into a string without commas, spaces, or brackets, counting the number of occurrences of each digit among 1, 2, \(\ldots\), 9, 0 using the Wolfram StringPosition function, and then computing each frequency by dividing by the length of the string corresponding to \((\pi(n):n\in\mathbb{N}_{\leq 10,000,000})\), we obtain the frequencies listed in Table 1. \begin{table} \begin{tabular}{|c|c|} \hline Digit & Frequency corresponding to \((\pi(n))_{n\in\mathbb{N}_{<10^{7}}}\) \\ \hline 1 & 0.110875 \\ \hline 2 & 0.111823 \\ \hline 3 & 0.112635 \\ \hline 4 & 0.113276 \\ \hline 5 & 0.113596 \\ \hline 6 & 0.102678 \\ \hline 7 & 0.0835609 \\ \hline 8 & 0.0836607 \\ \hline 9 & 0.0839711 \\ \hline 0 & 0.0839241 \\ \hline \end{tabular} \end{table} Table 1: Numerical evidence of the simple normality of the PCCE constant. For each of the frequencies \(f\) shown in Table 1, we have that \(\left|f-\frac{1}{10}\right|<0.0165\). The data in Table 1 may suggest that lower-valued digits appear more frequently within the PCCE constant, in the sense that the frequencies for lower-valued digits within the strings corresponding to \((\pi(n):n\in\mathbb{N}_{0})\) are, in general, higher than the frequencies for higher-valued digits. This recalls the statistical principle known as _Benford's law_, and we encourage the number-theoretic exploration of this phenomenon with the use of the PCCE constant.
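For reproducibility, the following Python sketch (ours, not part of the original computation, which used Wolfram's StringPosition) counts substrings with the same overlap-allowing convention and computes the digit frequencies of the concatenated values of \(\pi(n)\). Table 1 corresponds to \(N=10^{7}\); the bound \(N\) below is kept small so that the script runs quickly.

```python
# Overlap-allowing substring count (the Szusz-Volkmann convention, mirroring
# Wolfram's StringPosition), plus digit frequencies of the string obtained by
# concatenating pi(0), pi(1), ..., pi(N-1).
def count_with_overlaps(s, sub):
    return sum(1 for i in range(len(s) - len(sub) + 1) if s.startswith(sub, i))

assert count_with_overlaps("713131051310131", "131") == 4  # the example from [19]

def digit_frequencies(N):
    # sieve of Eratosthenes up to N, accumulated into pi(n) for 0 <= n < N
    sieve = bytearray([1]) * N
    sieve[0:2] = b"\x00\x00"
    for q in range(2, int(N ** 0.5) + 1):
        if sieve[q]:
            sieve[q * q::q] = bytearray(len(sieve[q * q::q]))
    pieces, count = [], 0
    for n in range(N):
        count += sieve[n]          # count is now pi(n)
        pieces.append(str(count))
    s = "".join(pieces)
    return {d: count_with_overlaps(s, d) / len(s) for d in "1234567890"}

for digit, freq in digit_frequencies(10 ** 5).items():  # Table 1 used N = 10**7
    print(digit, round(freq, 6))
```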
One may wonder why it would be appropriate, for the purposes of our application of the combinatorial method due to Szusz and Volkmann [19], to use the formulation of Cramer's conjecture in (7), as opposed to the estimate for prime gaps that Cramer proved under the assumption of the RH. The problem here has to do with the lower bound that would correspond to (16), if estimates other than Cramer's conjecture were to be used. If one were to attempt to make use of a lower bound as in \[\frac{1}{M\sqrt{p_{\pi(x)}}\log p_{\pi(x)}}\leq f^{\prime}(x)\leq 0.51, \tag{20}\] under the assumption of the RH, in the hope of applying the estimate for prime gaps shown in (8), we encounter a problem concerning the behaviour of prime gaps, in the sense described as follows. To apply Theorem 1, the limit corresponding to \(\eta(f^{\prime})\) would have to exist. However, by manipulating the inequalities in (20) so that \[\frac{\log\left(\frac{1}{M\sqrt{p_{\pi(x)}}\log p_{\pi(x)}}\right)}{\log(x)} \leq\frac{\log\left(f^{\prime}(x)\right)}{\log(x)}\leq\frac{\log\left(0.51 \right)}{\log(x)}, \tag{21}\] and taking the limit as \(x\to\infty\) of the bounds in (21), the Prime Number Theorem gives us that the limit corresponding to the lower bound reduces to \(-\frac{1}{2}\), whereas the limit corresponding to the upper bound vanishes, so the values of these limits would not imply that \(\eta(f^{\prime})\) exists. How could the RH be used, in place of the formulation of Cramer's conjecture in (7), to prove the normality of the PCCE constant? The data in Table 1 may be regarded as offering strong evidence for the normality of the PCCE constant if we compare these data to the corresponding frequencies for the Szusz-Volkmann constant (cf. [16, 19]) given by concatenating the digit expansions of consecutive integer parts of the square root function, as in Example 3. Given the notoriously unpredictable behaviour of prime gaps, one might think that the frequencies of digits in the PCCE constant would be similarly unpredictable, compared to the Szusz-Volkmann constant, but the data in Tables 1 and 2 suggest that the PCCE constant is actually much better behaved than the \(a(n)=\lfloor\sqrt{n}\rfloor\) case of (2). For example, the largest frequency in Table 2 is much farther away from the desired mean of 0.1, relative to the PCCE constant, and similarly for the smallest frequency in Table 2. \begin{table} \begin{tabular}{|c|c|} \hline Digit & Frequency corresponding to \(\left(\left\lfloor\sqrt{n}\right\rfloor\right)_{n\in\mathbb{N}_{<10^{7}}}\) \\ \hline 1 & 0.156047 \\ \hline 2 & 0.198942 \\ \hline 3 & 0.0980257 \\ \hline 4 & 0.0740972 \\ \hline 5 & 0.0758164 \\ \hline 6 & 0.0762815 \\ \hline 7 & 0.0776263 \\ \hline 8 & 0.0793403 \\ \hline 9 & 0.0810544 \\ \hline 0 & 0.0827685 \\ \hline \end{tabular} \end{table} Table 2: Frequencies associated with the appearance of digits in the Szusz-Volkmann constant (cf. [16, 19]) indicated in Example 3. ### Acknowledgements The author was supported through a Killam Postdoctoral Fellowship from the Killam Trusts. The author is thankful to Joel E. Cohen for useful comments concerning the subject of this article, and the author is thankful to Karl Dilcher for useful discussions concerning this article.
2308.16765
Twisted Mahler discrete residues
Recently we constructed Mahler discrete residues for rational functions and showed they comprise a complete obstruction to the Mahler summability problem of deciding whether a given rational function $f(x)$ is of the form $g(x^p)-g(x)$ for some rational function $g(x)$ and an integer $p > 1$. Here we develop a notion of $\lambda$-twisted Mahler discrete residues for $\lambda\in\mathbb{Z}$, and show that they similarly comprise a complete obstruction to the twisted Mahler summability problem of deciding whether a given rational function $f(x)$ is of the form $p^\lambda g(x^p)-g(x)$ for some rational function $g(x)$ and an integer $p>1$. We provide some initial applications of twisted Mahler discrete residues to differential creative telescoping problems for Mahler functions and to the differential Galois theory of linear Mahler equations.
Carlos E. Arreche, Yi Zhang
2023-08-31T14:38:36Z
http://arxiv.org/abs/2308.16765v1
# Twisted Mahler discrete residues ###### Abstract. Recently we constructed Mahler discrete residues for rational functions and showed they comprise a complete obstruction to the Mahler summability problem of deciding whether a given rational function \(f(x)\) is of the form \(g(x^{p})-g(x)\) for some rational function \(g(x)\) and an integer \(p>1\). Here we develop a notion of \(\lambda\)-twisted Mahler discrete residues for \(\lambda\in\mathbb{Z}\), and show that they similarly comprise a complete obstruction to the twisted Mahler summability problem of deciding whether a given rational function \(f(x)\) is of the form \(p^{\lambda}g(x^{p})-g(x)\) for some rational function \(g(x)\) and an integer \(p>1\). We provide some initial applications of twisted Mahler discrete residues to differential creative telescoping problems for Mahler functions and to the differential Galois theory of linear Mahler equations. Key words and phrases: Mahler operator, difference fields, difference equations, partial fractions, discrete residues, summability, creative telescoping The work of C.E. Arreche was partially supported by NSF grant CCF-1815108. The work of Y. Zhang was supported by the NSFC Young Scientist Fund No. 12101506, the Natural Science Foundation of the Jiangsu Higher Education Institutions of China No. 21KJB110032, and XJTLU Research Development Fund No. RDF-20-01-12. ## 1. Introduction Continuous residues are fundamental and crucial tools in complex analysis, and have extensive and compelling applications in combinatorics [11]. In the last decade, a theory of discrete and \(q\)-discrete residues was proposed in [12] for the study of telescoping problems for bivariate rational functions, and subsequently found applications in the computation of differential Galois groups of second-order linear difference [1] and \(q\)-difference equations [1] and other closely-related problems [13, 14]. More recently, the authors of [15, 16] developed a theory of residues for skew rational functions, which has important applications in duals of linearized Reed-Solomon codes [1]. In [17] the authors introduce a notion of elliptic orbit residues which, in analogy with [12], similarly serves as a complete obstruction to summability in the context of elliptic shift difference operators. In [1] we initiated a theory of Mahler discrete residues aimed at helping bring to the Mahler case the successes of these earlier notions of residues. Let \(\mathbb{K}\) be an algebraically closed field of characteristic zero and \(\mathbb{K}(x)\) be the field of rational functions in an indeterminate \(x\) over \(\mathbb{K}\). Fix an integer \(p\geq 2\). For a given \(f(x)\in\mathbb{K}(x)\), we considered in [1] the _Mahler summability problem_ of deciding effectively whether \(f(x)=g(x^{p})-g(x)\) for some \(g(x)\in\mathbb{K}(x)\); if so, we say \(f(x)\) is _Mahler summable_. We defined in [1] a collection of \(\mathbb{K}\)-vectors, called _Mahler discrete residues_ of \(f(x)\) and defined purely in terms of its partial fraction decomposition, having the property that they are all zero if and only if \(f(x)\) is Mahler summable. More generally, a (linear) _Mahler equation_ is any equation of the form \[y(x^{p^{n}})+a_{n-1}(x)y(x^{p^{n-1}})+\cdots+a_{1}(x)y(x^{p})+a_{0}(x)y(x)=0, \tag{1.1}\] where the \(a_{i}(x)\in\mathbb{K}(x)\) and \(y(x)\) is an unknown "function" (or possibly some more general entity, e.g., the generating series of a combinatorial object, a Puiseux series, etc.).
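To make the Mahler summability notion concrete, here is a small sympy sketch (ours, not from the paper) for \(p=2\): starting from a certificate \(g\), the function \(f=g(x^{p})-g(x)\) is Mahler summable by construction, and its partial fraction decomposition shows how the pole of \(g\) at \(2\) propagates to the \(p\)-th roots \(\pm\sqrt{2}\) in \(f\). (Deciding summability for a general \(f\), without a certificate in hand, is of course the hard direction addressed by the theory below.)

```python
import sympy as sp

x = sp.symbols('x')
p = 2
sigma = lambda h: h.subs(x, x ** p)      # the Mahler operator h(x) |-> h(x^p)

g = 1 / (x - 2)                          # a certificate with a single pole at 2
f = sp.cancel(sigma(g) - g)              # Mahler summable by construction

# partial fractions over Q(sqrt(2)): poles at 2 (from -g) and +/- sqrt(2) (from g(x^2))
print(sp.apart(f, x, extension=sp.sqrt(2)))
```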
The motivation to study Mahler equations in general comes from several directions. They first arose in [15] in connection with transcendence results on values of special functions at algebraic numbers, and have since found other applications to automata theory and automatic sequences since the work of [12]. We refer to [1, 13, 14, 15] and the references therein for more details. We also mention that a different (and, for some purposes, better) approach to the Mahler summability problem is contained in [1], where the authors develop efficient algorithms to find, in particular, all the rational solutions to a linear Mahler equation. Thus [1] decides efficiently whether any _given_\(f(x)\in\mathbb{K}(x)\) is Mahler summable: namely, by either actually finding the corresponding certificate \(g(x)\in\mathbb{K}(x)\) such that \(f(x)=g(x^{p})-g(x)\) if it exists or else deciding that there is no such \(g(x)\in\mathbb{K}(x)\). We emphasize that, in contrast, the approach undertaken in [1] is obstruction-theoretic, with the upshot that it spells out (theoretically) exactly what it takes for any \(f(x)\in\mathbb{K}(x)\) whatsoever to be Mahler summable or not, but with the drawback that it is likely to be infeasible in practice for all but the simplest/smallest choices of \(f(x)\). All the same, the approach initiated in [1], and continued in the present work, is a worthwhile and useful complement to that of [1] -- not only because of the theoretical questions that it answers for the first time, but moreover also because of its practical implications. A particularly fruitful approach over the last few decades to study difference equations in general, and Mahler equations such as (1.1) in particular, is through the Galois theory for linear difference equations developed in [10], and the differential (also sometimes called parameterized) Galois theory for difference equations developed in [14]. Both theories associate a geometric object to a given difference equation such as (1.1), called the _Galois group_, that encodes the sought (differential-)algebraic properties of the solutions to the equation. There are now several algorithms and theoretical results (see in particular [15, 16, 17, 18]) addressing qualitative questions about solutions of Mahler equations (1.1), in particular whether they must be (differentially) transcendental, which rely on procedures to compute "enough" information about the corresponding Galois group (i.e., whether it is "sufficiently large"). These Galois-theoretic arguments very often involve, as a sub-problem, deciding whether a certain auxiliary object (often but not always a rational solution to some Riccati-type equation) is Mahler summable (possibly after applying some linear differential operator to it, i.e., a telescoper). Rather than being able to answer the Mahler summability question for any one individual rational function, the systematic obstructions to the Mahler summability problems developed here serve as essential building blocks for other results and algorithms that rely on determining Mahler summability as an intermediate step. 
An immediate application of the technology developed here is Proposition 6.2, which has the following concrete consequence (when paired with the results of [1, Theorem 1.3]): if \(y_{1}(x),\ldots,y_{t}(x)\in\mathbb{K}((x))\) are Laurent series solutions to Mahler equations of the form \[y_{i}(x^{p})=a_{i}(x)y_{i}(x)\] for some non-zero \(a_{i}(x)\in\mathbb{K}(x)\), then either the \(y_{1}(x),\ldots,y_{t}(x)\) are differentially independent over \(\mathbb{K}(x)\) or else they are multiplicatively dependent over \(\mathbb{K}(x)^{\times}\), i.e., there exist integers \(k_{1},\ldots,k_{t}\in\mathbb{Z}\), not all zero, such that \(\prod_{i=1}^{t}y_{i}(x)^{k_{i}}\in\mathbb{K}(x)\). Let us explain in more detail the technology that we develop. For arbitrary \(\lambda\in\mathbb{Z}\) and \(f(x)\in\mathbb{K}(x)\), we say that \(f(x)\) is \(\lambda\)_-Mahler summable_ if there exists \(g(x)\in\mathbb{K}(x)\) such that \(f(x)=p^{\lambda}g(x^{p})-g(x)\). We shall construct certain \(\mathbb{K}\)-vectors from the partial fraction decomposition of \(f(x)\), which we call the _(twisted) \(\lambda\)-Mahler discrete residues_ of \(f(x)\), and prove our main result in Section 5.4: **Theorem 1.1**.: _For \(\lambda\in\mathbb{Z}\), \(f(x)\in\mathbb{K}(x)\) is \(\lambda\)-Mahler summable if and only if every \(\lambda\)-Mahler discrete residue of \(f\) is zero._ Our desire to develop an obstruction theory for such a "twisted" \(\lambda\)-Mahler summability problem, beyond the "un-twisted" \(0\)-Mahler summability problem considered in [1], is motivated by our desire to apply this obstruction theory to the following kind of _Mahler creative telescoping problem_. Given \(f_{1},\ldots,f_{n}\in\mathbb{K}(x)\) decide whether there exist linear differential operators \(\mathcal{L}_{1},\ldots,\mathcal{L}_{n}\in\mathbb{K}[\delta]\), for \(\delta\) some suitable derivation, such that \(\mathcal{L}_{1}(f_{1})+\cdots+\mathcal{L}_{n}(f_{n})\) is suitably Mahler summable. The unfortunately vague (but deliberate) double-usage of "suitable" above is due to the fact that there are in the Mahler case two traditional and respectable ways to adjoin a "Mahler-compatible" derivation in order to study differential-algebraic properties of solutions of Mahler equations, as we next explain and recall. A \(\sigma\delta\)-field is a field equipped with an endomorphism \(\sigma\) and a derivation \(\delta\) such that \(\sigma\circ\delta=\delta\circ\sigma\). Such are the basefields considered in the \(\delta\)-Galois theory for linear \(\sigma\)-equations developed in [10]. Denoting by \(\sigma:\mathbb{K}(x)\to\mathbb{K}(x):f(x)\mapsto f(x^{p})\) the _Mahler endomorphism_, one can show there is no non-trivial derivation \(\delta\) on \(\mathbb{K}(x)\) that commutes with this \(\sigma\). In the literature one finds the following two approaches (often used in combination; see e.g. [1, 1]): (1) take \(\delta=x\frac{d}{dx}\), and find a systematic way to deal with the fact that \(\sigma\) and \(\delta\) do not quite commute (but almost do), in the sense that \(\delta\circ\sigma=p\,\sigma\circ\delta\); or (2) work over the larger field \(\mathbb{K}(x,\log x)\), where \(\sigma(\log x)=p\log x\), and set \(\delta=x\log x\frac{d}{dx}\), and find a systematic way to deal with this new element \(\log x\) at the cost of having \(\sigma\circ\delta=\delta\circ\sigma\) on the nose. There is, to be sure, a dictionary of sorts between these two approaches.
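As a quick sanity check on these two options (ours, not from the paper, with the element \(\log x\) modeled by an independent symbol \(\ell\) satisfying \(\sigma(\ell)=p\ell\) and \(\delta(\ell)=\ell\)), the following sympy snippet verifies the twisted relation \(\delta\circ\sigma=p\,\sigma\circ\delta\) for \(\delta=x\frac{d}{dx}\), and the exact commutation for \(\delta=x\log x\frac{d}{dx}\).

```python
import sympy as sp

x, ell = sp.symbols('x ell')
p = 2
q = (3 * x ** 2 + 1) / (x ** 3 - x + 5)          # any test rational function

# Option (1): delta = x*d/dx on K(x); sigma and delta commute only up to a factor p.
sigma  = lambda u: u.subs(x, x ** p)
delta1 = lambda u: x * sp.diff(u, x)
assert sp.cancel(delta1(sigma(q)) - p * sigma(delta1(q))) == 0

# Option (2): on K(x, log x), with log x modeled by the symbol ell, sigma(ell) = p*ell,
# and delta = x*log(x)*d/dx acting as x*ell*d/dx + ell*d/d(ell); now they commute exactly.
sigma2 = lambda u: u.subs({x: x ** p, ell: p * ell})
delta2 = lambda u: x * ell * sp.diff(u, x) + ell * sp.diff(u, ell)
h = q + x * ell ** 2                             # a test element of K(x, log x)
assert sp.simplify(delta2(sigma2(h)) - sigma2(delta2(h))) == 0
print("checked: delta(sigma(f)) = p*sigma(delta(f)) in option (1); exact commutation in option (2)")
```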
We postpone a more careful discussion of these issues until it becomes absolutely necessary in Section 6, except to adopt the latter approach in this introduction to briefly motivate the centrality of the \(\lambda\)-Mahler summability problems for arbitrary \(\lambda\in\mathbb{Z}\) in the differential study of Mahler functions. Let us consider the \(\sigma\delta\)-field \(L:=\mathbb{K}(x,\log x)\), and given \(F\in L\), let us write the log-Laurent series expansion \[F=\sum_{\lambda\geq N}f_{\lambda}(x)\log^{\lambda}x\in\mathbb{K}(x)((\log x)),\] where \(f_{\lambda}(x)\in\mathbb{K}(x)\) for each \(\lambda\in\mathbb{Z}\), and \(\log^{\lambda}x:=[\log x]^{\lambda}\). Let us suppose that there exists \(G\in\hat{L}:=\mathbb{K}(x)((\log x))\) such that \(F=\sigma(G)-G\) (where \(\sigma\) is applied term-by-term). Writing such a putative \(G=\sum_{\lambda\geq N}g_{\lambda}(x)\log^{\lambda}x\in\hat{L}\), for some \(g_{\lambda}(x)\in\mathbb{K}(x)\) for \(\lambda\in\mathbb{Z}\), we find that \(F\) is Mahler summable within \(\hat{L}\) if and only if \(f_{\lambda}(x)=p^{\lambda}g_{\lambda}(x^{p})-g_{\lambda}(x)\) for each \(\lambda\in\mathbb{Z}\). Our strategy expands upon that of [1], which in turn was inspired by that of [13]: for \(\lambda\in\mathbb{Z}\), we utilize the coefficients occurring in the partial fraction decomposition of \(f(x)\) to construct in Section 5.5 a _\(\lambda\)-Mahler reduction_\(\bar{f}_{\lambda}(x)\in\mathbb{K}(x)\) such that \[\bar{f}_{\lambda}(x)=f(x)+\big{(}p^{\lambda}g_{\lambda}(x^{p})-g_{\lambda}(x) \big{)} \tag{1.2}\] for some \(g_{\lambda}(x)\in\mathbb{K}(x)\) (whose explicit computation it is our purpose to avoid!), with the structure of this \(\bar{f}_{\lambda}(x)\) being such that it cannot possibly be \(\lambda\)-Mahler summable unless \(\bar{f}_{\lambda}(x)=0\). The \(\lambda\)-Mahler discrete residues of \(f(x)\) are (vectors whose components are) the coefficients occurring in the partial fraction decomposition of \(\bar{f}_{\lambda}(x)\). This \(\bar{f}_{\lambda}(x)\) plays the role of a "\(\lambda\)-Mahler remainder" of \(f(x)\), analogous to the remainder of Hermite reduction in the context of integration. ## 2. Preliminaries In this section we recall and expand upon some conventions, notions, and ancillary results from [1] that we shall use systematically throughout this work. ### Notation and conventions We fix once and for all an algebraically closed field \(\mathbb{K}\) of characteristic zero and an integer \(p\geq 2\) (not necessarily prime). We denote by \(\mathbb{K}(x)\) the field of rational functions in the indeterminate \(x\) with coefficients in \(\mathbb{K}\). We denote by \(\sigma:\mathbb{K}(x)\to\mathbb{K}(x)\) the \(\mathbb{K}\)-linear endomorphism defined by \(\sigma(x)=x^{p}\), called the _Mahler operator_, so that \(\sigma(f(x))=f(x^{p})\) for \(f(x)\in\mathbb{K}(x)\). For \(\lambda\in\mathbb{Z}\), we write \(\Delta_{\lambda}:=p^{\lambda}\sigma-\mathrm{id}\), so that \(\Delta_{\lambda}(f(x))=p^{\lambda}f(x^{p})-f(x)\) for \(f(x)\in\mathbb{K}(x)\). We often suppress the functional notation and write simply \(f\in\mathbb{K}(x)\) instead of \(f(x)\) whenever no confusion is likely to arise. We say that \(f\in\mathbb{K}(x)\) is _\(\lambda\)-Mahler summable_ if there exists \(g\in\mathbb{K}(x)\) such that \(f=\Delta_{\lambda}(g)\). Let \(\mathbb{K}^{\times}=\mathbb{K}\backslash\{0\}\) denote the multiplicative group of \(\mathbb{K}\).
Let \(\mathbb{K}^{\times}_{t}\) denote the torsion subgroup of \(\mathbb{K}^{\times}\), _i.e._, the group of roots of unity in \(\mathbb{K}^{\times}\). For \(\zeta\in\mathbb{K}^{\times}_{t}\), the _order_ of \(\zeta\) is the smallest \(r\in\mathbb{N}\) such that \(\zeta^{r}=1\). We fix once and for all a compatible system of \(p\)-power roots of unity \((\zeta_{p^{n}})_{n\geq 0}\subset\mathbb{K}^{\times}_{t}\), that is, each \(\zeta_{p^{n}}\) has order \(p^{n}\) and \(\zeta_{p^{n}}^{p^{\ell}}=\zeta_{p^{n-\ell}}\) for \(0\leq\ell\leq n\). Each \(f\in\mathbb{K}(x)\) decomposes uniquely as \[f=f_{\infty}+f_{\mathcal{T}}, \tag{2.1}\] where \(f_{\infty}\in\mathbb{K}[x,x^{-1}]\) is a Laurent polynomial and \(f_{\mathcal{T}}=\frac{a}{b}\) for polynomials \(a,b\in\mathbb{K}[x]\) such that either \(a=0\) or else \(\deg(a)<\deg(b)\) and \(\gcd(a,b)=1=\gcd(x,b)\). The reasoning behind our choice of subscripts \(\infty\) and \(\mathcal{T}\) for the Laurent polynomial component of \(f\) and its complement will become apparent in the sequel. **Lemma 2.1**.: _The \(\mathbb{K}\)-linear decomposition \(\mathbb{K}(x)\simeq\mathbb{K}[x,x^{-1}]\oplus\mathbb{K}(x)_{\mathcal{T}}\) given by \(f\leftrightarrow f_{\infty}\oplus f_{\mathcal{T}}\) as in (2.1) is \(\sigma\)-stable. For \(f,g\in\mathbb{K}(x)\) and for \(\lambda\in\mathbb{Z}\), \(f=\Delta_{\lambda}(g)\) if and only if \(f_{\infty}=\Delta_{\lambda}(g_{\infty})\) and \(f_{\mathcal{T}}=\Delta_{\lambda}(g_{\mathcal{T}})\)._ ### Mahler trajectories, Mahler trees, and Mahler cycles We let \[\mathcal{P}:=\{p^{n}\ |\ n\in\mathbb{Z}_{\geq 0}\}\] denote the multiplicative monoid of non-negative powers of \(p\). Then \(\mathcal{P}\) acts on \(\mathbb{Z}\) by multiplication, and the set of _maximal trajectories_ for this action is \[\mathbb{Z}/\mathcal{P}:=\big{\{}\{0\}\big{\}}\cup\big{\{}\{ip^{n}\ |\ n\in\mathbb{Z}_{\geq 0}\}\ \big{|}\ i\in\mathbb{Z}\text{ such that }p\nmid i\big{\}}.\] **Definition 2.2**.: For a maximal trajectory \(\theta\in\mathbb{Z}/\mathcal{P}\), we let \[\mathbb{K}[x,x^{-1}]_{\theta}:=\left\{\sum_{j}c_{j}x^{j}\in\mathbb{K}[x,x^{-1} ]\ \Big{|}\ c_{j}=0\text{ for all }j\notin\theta\right\}, \tag{2.2}\] and call it the _\(\theta\)-subspace_. The _\(\theta\)-component_\(f_{\theta}\) of \(f\in\mathbb{K}(x)\) is the projection of the component \(f_{\infty}\) of \(f\) in (2.1) to \(\mathbb{K}[x,x^{-1}]_{\theta}\) as in (2.2). We obtain similarly as in [1, Lem. 2.3] the following result. **Lemma 2.3**.: _For \(f,g\in\mathbb{K}(x)\) and for \(\lambda\in\mathbb{Z}\), \(f_{\infty}=\Delta_{\lambda}(g_{\infty})\) if and only if \(f_{\theta}=\Delta_{\lambda}(g_{\theta})\) for every maximal trajectory \(\theta\in\mathbb{Z}/\mathcal{P}\)._ **Definition 2.4**.: We denote by \(\mathcal{T}\) the set of equivalence classes in \(\mathbb{K}^{\times}\) for the equivalence relation \(\alpha\sim\gamma\Leftrightarrow\alpha^{p^{r}}=\gamma^{p^{s}}\) for some \(r,s\in\mathbb{Z}_{\geq 0}\). For \(\alpha\in\mathbb{K}^{\times}\), we denote by \(\tau(\alpha)\in\mathcal{T}\) the equivalence class of \(\alpha\) under \(\sim\). The elements \(\tau\in\mathcal{T}\) are called _Mahler trees_. We refer to [1, Remark 2.7] for a brief discussion on our choice of nomenclature in Definition 2.4. **Definition 2.5**.: For a Mahler tree \(\tau\in\mathcal{T}\), the _\(\tau\)-subspace_ is \[\mathbb{K}(x)_{\tau}:=\big{\{}f_{\mathcal{T}}\in\mathbb{K}(x)_{\mathcal{T}}\ \big{|}\text{ every pole of }f_{\mathcal{T}}\text{ is contained in }\tau\big{\}}. 
\tag{2.3}\] For \(f\in\mathbb{K}(x)\), the _\(\tau\)-component_\(f_{\tau}\) of \(f\) is the projection of the component \(f_{\mathcal{T}}\) of \(f\) in (2.1) to the \(\tau\)-subspace \(\mathbb{K}(x)_{\tau}\) in (2.3). The following result is proved similarly as in [1, Lem. 2.12]. **Lemma 2.6**.: _For \(f,g\in\mathbb{K}(x)\) and for \(\lambda\in\mathbb{Z}\), \(f_{\mathcal{T}}=\Delta_{\lambda}(g_{\mathcal{T}})\) if and only if \(f_{\tau}=\Delta_{\lambda}(g_{\tau})\) for every Mahler tree \(\tau\in\mathcal{T}\)._ **Definition 2.7**.: For a Mahler tree \(\tau\in\mathcal{T}\), the (possibly empty) _Mahler cycle_ of \(\tau\) is \[\mathcal{C}(\tau):=\{\gamma\in\tau\ |\ \gamma\text{ is a root of unity of order coprime to }p\}.\] The (possibly zero) _cycle length_ of \(\tau\) is defined to be \(\varepsilon(\tau):=|\mathcal{C}(\tau)|\). For \(e\in\mathbb{Z}_{\geq 0}\), we write \(\mathcal{T}_{e}:=\{\tau\in\mathcal{T}\ |\ \varepsilon(\tau)=e\}\). We refer to \(\mathcal{T}_{0}\) as the set of _non-torsion Mahler trees_, and to \(\mathcal{T}_{+}:=\mathcal{T}-\mathcal{T}_{0}\) as the set of _torsion Mahler trees_. _Remark 2.8_.: Let us collect as in [1, Rem. 2.10] some immediate observations about Mahler cycles that we shall use, and refer to, throughout the sequel. For \(\tau\in\mathcal{T}\) it follows from the Definition 2.4 that either \(\tau\subset\mathbb{K}_{t}^{\times}\) or else \(\tau\cap\mathbb{K}_{t}^{\times}=\emptyset\) (that is, either \(\tau\) consists entirely of roots of unity or else \(\tau\) contains no roots of unity at all). In particular, \(\tau\cap\mathbb{K}_{t}^{\times}=\emptyset\Rightarrow\mathcal{C}(\tau)= \emptyset\Leftrightarrow\varepsilon(\tau)=0\Leftrightarrow\tau\in\mathcal{T}_{0}\) (the _non-torsion case_). On the other hand, \(\mathbb{K}_{t}^{\times}\) consists of the pre-periodic points for the action of the monoid \(\mathcal{P}\) on \(\mathbb{K}^{\times}\) given by \(\alpha\mapsto\alpha^{p^{n}}\) for \(n\in\mathbb{Z}_{\geq 0}\). For \(\tau\subset\mathbb{K}_{t}^{\times}\) (the _torsion case_), the Mahler cycle \(\mathcal{C}(\tau)\) is a non-empty set endowed with a simply transitive action of the quotient monoid \(\mathcal{P}/\mathcal{P}^{e}\simeq\mathbb{Z}/e\mathbb{Z}\), where \(\mathcal{P}^{e}:=\{p^{ne}\ |\ n\in\mathbb{Z}\}\), and \(e:=\varepsilon(\tau)\). We emphasize that in general \(\mathcal{C}(\tau)\) is only a set, and not a group. The Mahler tree \(\tau(1)\) consists precisely of the roots of unity \(\zeta\in\mathbb{K}_{t}^{\times}\) whose order \(r\) is such that \(\gcd(r,p^{n})=r\) for some \(p^{n}\in\mathcal{P}\), or equivalently such that every prime factor of \(r\) divides \(p\). When \(\tau\subset\mathbb{K}_{t}^{\times}\) but \(\tau\neq\tau(1)\), the cycle length \(\varepsilon(\tau)=e\) is the order of \(p\) in the group of units \((\mathbb{Z}/r\mathbb{Z})^{\times}\), where \(r>1\) is the common order of the roots of unity \(\gamma\in\mathcal{C}(\tau)\), and \(\mathcal{C}(\tau)=\{\gamma^{p^{\ell}}\ |\ 0\leq\ell\leq e-1\}\) for any given \(\gamma\in\mathcal{C}(\tau)\). We shall often abusively write \(\mathcal{C}(\tau)=\{\gamma^{p^{\ell}}\ |\ \ell\in\mathbb{Z}/e\mathbb{Z}\}\). ### Mahler supports and singular supports in Mahler trees Mahler trees allow us to define the following bespoke variants of the singular support \(\operatorname{sing}(f)\) of a rational function \(f\) (_i.e._, its set of poles) and the order \(\operatorname{ord}_{\alpha}(f)\) of a pole of \(f\) at \(\alpha\in\mathbb{K}\), which are particularly well-suited to the Mahler context. 
**Definition 2.9**.: For \(f\in\mathbb{K}(x)\), we define \(\operatorname{supp}(f)\subset\mathcal{T}\cup\{\infty\}\), called the _Mahler support_ of \(f\), as follows: * \(\infty\in\operatorname{supp}(f)\) if and only if \(f_{\infty}\neq 0\); and * for \(\tau\in\mathcal{T}\), \(\tau\in\operatorname{supp}(f)\) if and only if \(\tau\) contains a pole of \(f\). For \(\tau\in\mathcal{T}\), the _singular support_ of \(f\) in \(\tau\), denoted by \(\operatorname{sing}(f,\tau)\), is the (possibly empty) set of poles of \(f\) contained in \(\tau\), and the _order_ of \(f\) at \(\tau\) is \[\operatorname{ord}(f,\tau):=\max\bigl{(}\{0\}\cup\{\operatorname{ord}_{\alpha }(f)\ |\ \alpha\in\operatorname{sing}(f,\tau)\}\bigr{)}.\] For the sake of completeness, we include the straightforward proof of the following lemma, which was omitted from [1, Section 2.2] for lack of space. **Lemma 2.10**.: _For \(f,g\in\mathbb{K}(x)\), \(\tau\in\mathcal{T}\), \(\lambda\in\mathbb{Z}\), and \(0\neq c\in\mathbb{K}\), we have the following:_ 1. \(\operatorname{supp}(f)=\emptyset\Longleftrightarrow f=0\)_;_ 2. \(\operatorname{supp}(\sigma(f))=\operatorname{supp}(f)=\operatorname{supp}(c \cdot f)\)_; and_ _;_ 3. \(\operatorname{supp}(f+g)\subseteq\operatorname{supp}(f)\cup\operatorname{supp}(g)\)_._ 4. \(\tau\in\operatorname{supp}(\Delta_{\lambda}(g))\Longleftrightarrow\tau\in \operatorname{supp}(g)\)_;_ 5. \(\operatorname{ord}(\sigma(f),\tau)=\operatorname{ord}(f,\tau)=\operatorname{ ord}(c\cdot f,\tau)\)_;_ 6. \(\operatorname{ord}(f+g,\tau)\leq\max(\operatorname{ord}(f,\tau),\operatorname{ ord}(g,\tau)\big{)}\)_; and_ 7. \(\operatorname{ord}(\Delta_{\lambda}(g),\tau)=\operatorname{ord}(g,\tau)\)_._ Proof.: (1). \(f=0\Longleftrightarrow f_{\infty}=0\) and \(f_{\mathcal{T}}=0\), and \(f_{\mathcal{T}}=0\Longleftrightarrow f\) has no poles in \(\mathbb{K}^{\times}\). (2) and (5). For \(0\neq c\in\mathbb{K}\), \(cf_{\infty}\neq 0\) if and only if \(f_{\infty}\neq 0\), and \(f\) and \(cf\) have the same poles and the orders of these poles are the same, and therefore \(\operatorname{supp}(f)=\operatorname{supp}(cf)\) and \(\operatorname{ord}(f,\tau)=\operatorname{ord}(cf,\tau)\) for every \(\tau\in\mathcal{T}\). Moreover, \(\sigma(f_{\infty})\neq 0\) if and only if \(f_{\infty}\neq 0\), since \(\sigma\) is an injective endomorphism of \(\mathbb{K}(x)\), and \(\alpha\in\mathbb{K}^{\times}\) is a pole of \(\sigma(f)\) if and only if \(\alpha^{p}\) is a pole of \(f\), whence \(\tau\) contains a pole of \(f\) if and only if \(\tau\) contains a pole of \(\sigma(f)\). In this case, it is clear that \(\operatorname{ord}(\sigma(f),\tau)\leq\operatorname{ord}(f,\tau)\). Moreover, since \(f\) has only finitely many poles in \(\tau\) of maximal order \(m:=\operatorname{ord}(f,\tau)\), there exists \(\alpha\in\operatorname{sing}(\sigma(f),\tau)\) such that \(\operatorname{ord}_{\alpha^{p}}(f)=m>\operatorname{ord}_{\alpha}(f)\), and it follows that \(\operatorname{ord}_{\alpha}(\sigma(f))=m=\operatorname{ord}(\sigma(f),\tau)\). (3) and (6). If \(f_{\infty}+g_{\infty}\neq 0\) then at least one of \(f_{\infty}\neq 0\) or \(g_{\infty}\neq 0\). The set of poles of \(f+g\) is contained in the union of the set of poles of \(f\) and the set of poles of \(g\), and therefore if \(\tau\) contains a pole of \(f+g\) then \(\tau\) must contain a pole of \(f\) or a pole of \(g\). This shows that \(\operatorname{supp}(f+g)\subseteq\operatorname{supp}(f)\cup\operatorname{ supp}(g)\). 
For \(m\) the maximal order of a pole of \(f+g\) in \(\tau\) we see that at least one of \(f\) or \(g\) must contain a pole of order \(m\) in \(\tau\). This shows that \(\operatorname{ord}(f+g,\tau)\leq\max(\operatorname{ord}(f,\tau),\operatorname{ ord}(g,\tau))\). (4) and (7). By (2) and (3), \(\operatorname{supp}(\Delta_{\lambda}(g))\subseteq\operatorname{supp}(g)\), and by (5) and (6), \(\operatorname{ord}(\Delta_{\lambda}(g),\tau)\leq\operatorname{ord}(g,\tau)\). Suppose \(\tau\in\operatorname{supp}(g)\), and let \(\alpha_{1},\ldots,\alpha_{s}\in\operatorname{sing}(g,\tau)\) be all the elements, pairwise distinct, with \(\operatorname{ord}_{\alpha_{j}}(g)=\operatorname{ord}(g,\tau)=:m\geq 1\), and choose \(\gamma_{j}\in\tau\) such that \(\gamma_{j}^{p}=\alpha_{j}\), we find as in the proof of (5) that \(\operatorname{ord}_{\zeta_{p}^{i}\gamma_{j}}(\sigma(g))=m\) and the elements \(\zeta_{p}^{i}\gamma_{j}\) are pairwise distinct for \(0\leq i\leq p-1\) and \(1\leq j\leq s\), whence at least one of the \(\zeta_{p}^{i}\gamma_{j}\) is different from every \(\alpha_{j^{\prime}}\) for \(1\leq j^{\prime}\leq s\), and therefore \(\operatorname{ord}(\Delta_{\lambda}(g),\tau)=m\), which implies in particular that \(\tau\in\operatorname{supp}(\Delta_{\lambda}(g))\). ### Mahler dispersion We now recall from [1] the following Mahler variant of the notion of (polar) dispersion used in [15], following the original definitions in [1, 1]. **Definition 2.11**.: For \(f\in\mathbb{K}(x)\) and \(\tau\in\operatorname{supp}(f)\), the _Mahler dispersion_ of \(f\) at \(\tau\), denoted by \(\operatorname{disp}(f,\tau)\), is defined as follows. If \(\tau\in\mathcal{T}\), \(\operatorname{disp}(f,\tau)\) is the largest \(d\in\mathbb{Z}_{\geq 0}\) (if it exists) for which there exists \(\alpha\in\operatorname{sing}(f,\tau)\) such that \(\alpha^{p^{d}}\in\operatorname{sing}(f,\tau)\). If there is no such \(d\in\mathbb{Z}_{\geq 0}\), then we set \(\operatorname{disp}(f,\tau)=\infty\). If \(\tau=\infty\), let us write \(f_{\infty}=\sum_{i=n}^{N}c_{i}x^{i}\in\mathbb{K}[x,x^{-1}]\) with \(c_{n}c_{N}\neq 0\). * If \(f_{\infty}=c_{0}\neq 0\) then we set \(\operatorname{disp}(f,\infty)=0\); otherwise * \(\operatorname{disp}(f,\infty)\) is the largest \(d\in\mathbb{Z}_{\geq 0}\) for which there exists an index \(i\neq 0\) such that \(c_{i}\neq 0\) and \(c_{ip^{d}}\neq 0\). For \(f\in\mathbb{K}(x)\) and \(\tau\in\mathcal{T}\cup\{\infty\}\) such that \(\tau\notin\operatorname{supp}(f)\), we do not define \(\operatorname{disp}(f,\tau)\) at all (cf. [1, 1, 1]). Similarly as in the shift and \(q\)-difference cases (cf. [12, Lemma 6.3] and [13, Lemma 2.4 and Lemma 2.9]), Mahler dispersions will play a crucial role in what follows. As we prove in Theorem 4.2, they already provide a partial obstruction to summability: if \(f\in\mathbb{K}(x)\) is \(\lambda\)-Mahler summable then almost every Mahler dispersion of \(f\) is non-zero. Moreover, Mahler dispersions also detect whether \(f\) has any "bad" poles (_i.e._, at roots of unity of order coprime to \(p\)) according to the following result proved in [1, Lem. 2.16]. **Lemma 2.12** ([1, Lem. 2.16]).: _Let \(f\in\mathbb{K}(x)\) and \(\tau\in\operatorname{supp}(f)\). Then \(\operatorname{disp}(f,\tau)=\infty\) if and only if \(\operatorname{sing}(f,\tau)\cap\mathcal{C}(\tau)\neq\emptyset\)._ ### Mahler coefficients Here we extend our study of the effect of the Mahler operator \(\sigma\) on partial fraction decompositions initiated in [1, SS2.4]. 
For \(\alpha\in\mathbb{K}^{\times}\) and \(m,k,n\in\mathbb{Z}\) with \(n\geq 0\) and \(1\leq k\leq m\), we define the _Mahler coefficients_\(V_{k,n}^{m}(\alpha)\in\mathbb{K}\) implicitly by \[\sigma^{n}\left(\frac{1}{(x-\alpha^{p^{n}})^{m}}\right)=\frac{1}{(x^{p^{n}}- \alpha^{p^{n}})^{m}}=\sum_{k=1}^{m}\sum_{i=0}^{p^{n}-1}\frac{V_{k,n}^{m}(\zeta _{p^{n}}^{i}\alpha)}{(x-\zeta_{p^{n}}^{i}\alpha)^{k}}. \tag{2.4}\] These Mahler coefficients are computed explicitly with the following result, proved analogously to the similar [1, Lem. 2.17] in case \(n=1\). **Lemma 2.13**.: _For every \(\alpha\in\mathbb{K}^{\times}\), the Mahler coefficients_ \[V_{k,n}^{m}(\alpha)=\mathbb{V}_{k,n}^{m}\cdot\alpha^{k-mp^{n}},\] _where the universal coefficients \(\mathbb{V}_{k,n}^{m}\in\mathbb{Q}\) are the first \(m\) Taylor coefficients at \(x=1\) of_ \[(x^{p^{n}-1}+\cdots+x+1)^{-m}=\sum_{k=1}^{m}\mathbb{V}_{k,n}^{m}\cdot(x-1)^{m- k}+O((x-1)^{m}). \tag{2.5}\] Although Lemma 2.13 serves to compute the \(V_{k,n}^{m}(\alpha)\) for \(\alpha\in\mathbb{K}^{\times}\), \(n\in\mathbb{Z}_{\geq 0}\), and \(1\leq k\leq m\) efficiently in practice1, the following result provides an explicit symbolic expression for these Mahler coefficients as sums over certain integer partitions. Footnote 1: That is, by computing successive derivatives of the left-hand side and evaluating at \(x=1\). **Definition 2.14**.: For \(k,n\in\mathbb{Z}_{\geq 0}\), let \(\Pi_{n}(k)\) be the set of integer partitions \(\mu=(\mu_{1},\ldots,\mu_{\ell})\) of \(k\) with greatest part \(\mu_{1}<p^{n}\), and denote by \(\ell(\mu):=\ell\) the length of \(\mu\) and by \(\ell_{i}(\mu)\) the multiplicity of \(i\) in \(\mu\) for \(1\leq i\leq p^{n}-1\). We adopt the conventions that \(\Pi_{n}(0)=\{\emptyset\}\) for every \(n\geq 0\) and \(\Pi_{0}(k)=\emptyset\) for every \(k\geq 1\). The empty partition \(\mu=\emptyset\) has length \(\ell(\emptyset)=0\) and multiplicity \(\ell_{i}(\emptyset)=0\) for every \(1\leq i\leq p^{n}-1\) (vacuously so when \(n=0\)). **Proposition 2.15**.: _For \(n\geq 0\) and \(1\leq k\leq m\),_ \[\mathbb{V}_{k,n}^{m}=p^{-nm}\cdot\sum_{\mu\in\Pi_{n}(m-k)}(-p^{n})^{-\ell(\mu )}\begin{pmatrix}m-1+\ell(\mu)\\ m-1,\ell_{1}(\mu),\ldots,\ell_{p^{n}-1}(\mu)\end{pmatrix}\prod_{i=1}^{p^{n}-1 }\begin{pmatrix}p^{n}\\ i+1\end{pmatrix}^{\ell_{i}(\mu)}.\] Proof.: By Lemma 2.13, \(V_{k,n}^{m}(\alpha)=\mathbb{V}_{k,n}^{m}\cdot\alpha^{k-mp^{n}}\), where the \(\mathbb{V}_{k,n}^{m}\in\mathbb{Q}\) are given by (2.5). Writing \(f_{m}(x)=x^{-m}\) and \(g_{n}(x)=x^{p^{n}-1}+\cdots+x+1\), and letting \(W_{k,n}^{m}\in\mathbb{Q}\) be the coefficient of \((x-1)^{k}\) in the Taylor expansion of \((f_{m}\circ g_{n})(x)\) at \(x=1\) as in Lemma 2.13, we have that \(\mathbb{V}_{k,n}^{m}=W_{m-k,n}^{m}\) for every \(1\leq k\leq m\). By Faa di Bruno's formula [14], we have \[W_{k,n}^{m}=\frac{(f_{m}\circ g_{n})^{(k)}(1)}{k!}=\frac{1}{k!}\cdot\sum_{\mu\in \Pi(k)}\frac{k!}{\ell_{1}(\mu)!\cdots\ell_{k}(\mu)!}f_{m}^{(\ell(\mu))}(g_{n}( 1))\prod_{i=1}^{k}\left(\frac{g_{n}^{(i)}(1)}{i!}\right)^{\ell_{i}(\mu)}\] for every \(k\geq 0\), where \(\Pi(k)\) denotes the set of _all_ partitions of \(k\), and \(\ell(\mu)\) and \(\ell_{i}(\mu)\) are as in Definition 2.14. 
For every \(\ell,i\in\mathbb{Z}_{\geq 0}\), we compute \[f_{m}^{(\ell)}(g_{n}(1))=(-1)^{\ell}p^{-n(m+\ell)}\frac{(m-1+\ell)!}{(m-1)!} \qquad\text{and}\qquad g_{n}^{(i)}(1)=i!\left(\begin{array}{c}p^{n}\\ i+1\end{array}\right),\] where we adopt the usual convention that \(\left(\begin{smallmatrix}p^{n}\\ i+1\end{smallmatrix}\right)=0\) whenever \(i\geq p^{n}\). Therefore the partitions \(\mu\in\Pi(k)\backslash\Pi_{n}(k)\) with greatest part \(\mu_{1}\geq p^{n}\) do not contribute to the sum. We isolate the following special case for ease of reference (cf. [1, Cor. 2.18]), since it arises often. **Corollary 2.16**.: _Let \(\alpha\in\mathbb{K}^{\times}\), \(m\in\mathbb{N}\), and \(n\in\mathbb{Z}_{\geq 0}\). Then \(V_{m,n}^{m}(\alpha)=p^{-nm}\alpha^{m-p^{n}m}\)._ Proof.: In the special case where \(k=m\) in Proposition 2.15, the sum is over \(\mu\in\Pi(0)=\{\emptyset\}\), and \(\ell(\emptyset)=0=\ell_{i}(\emptyset)\) for every \(i\in\mathbb{N}\), whence \(V_{m,n}^{m}(\alpha)=p^{-nm}\alpha^{m-p^{n}m}\) by Lemma 2.13. The Mahler coefficients \(V_{k,n}^{m}(\alpha)\) defined above are the main ingredients in our definition of twisted Mahler discrete residues. Our proofs that these residues comprise a complete obstruction to \(\lambda\)-Mahler summability will rely on the following elementary computations, which we record here once and for all for future reference. **Lemma 2.17**.: _Let \(n\in\mathbb{Z}_{\geq 0}\), \(\alpha\in\mathbb{K}^{\times}\), and \(d_{1},\ldots,d_{m}\in\mathbb{K}\) for some \(m\in\mathbb{N}\). Then_ \[\sigma^{n}\left(\sum_{k=1}^{m}\frac{d_{k}}{(x-\alpha^{p^{n}})^{k}}\right)= \sum_{k=1}^{m}\sum_{i=0}^{p^{n}-1}\frac{\sum_{s=k}^{m}V_{k,n}^{s}(\zeta_{p^{n} }^{i}\alpha)d_{s}}{(x-\zeta_{p^{n}}^{i}\alpha)^{k}}.\] _For \(\lambda\in\mathbb{Z}\) and \(g\in\mathbb{K}(x)\), the element \(\Delta_{\lambda}^{(n)}(g):=p^{\lambda n}\sigma^{n}(g)-g\) is \(\lambda\)-Mahler summable._ Proof.: The claims are trivial if \(n=0\): \(\zeta_{1}=1\), \(V_{k,0}^{s}(\alpha)=\delta_{s,k}\) (Kronecker's \(\delta\)) for \(k\leq s\leq m\), and \(\Delta_{\lambda}^{(0)}(g)=0\) is \(\lambda\)-Mahler summable. Suppose that \(n\geq 1\). For \(1\leq s\leq m\) we have \[\sigma^{n}\left(\frac{d_{s}}{(x-\alpha^{p^{n}})^{s}}\right)=\sum_{k=1}^{s} \sum_{i=0}^{p^{n}-1}\frac{V_{k,n}^{s}(\zeta_{p^{n}}^{i}\alpha)d_{s}}{(x-\zeta_{ p^{n}}^{i}\alpha)^{k}}\] by definition (cf. (2.4)), and it follows that \[\sigma^{n}\left(\sum_{s=1}^{m}\frac{d_{s}}{(x-\alpha^{p^{n}})^{s}}\right)= \sum_{s=1}^{m}\sum_{k=1}^{s}\sum_{i=0}^{p^{n}-1}\frac{V_{k,n}^{s}(\zeta_{p^{n} }^{i}\alpha)d_{s}}{(x-\zeta_{p^{n}}^{i}\alpha)^{k}}=\sum_{k=1}^{m}\sum_{i=0}^{p ^{n}-1}\frac{\sum_{s=k}^{m}V_{k,n}^{s}(\zeta_{p^{n}}^{i}\alpha)d_{s}}{(x-\zeta_ {p^{n}}^{i}\alpha)^{k}}.\] Finally, since \[\Delta_{\lambda}^{(n)}(g)=p^{\lambda n}\sigma^{n}(g)-g=p^{\lambda}\sigma\left( \sum_{j=0}^{n-1}p^{\lambda j}\sigma^{j}(g)\right)-\left(\sum_{j=0}^{n-1}p^{ \lambda j}\sigma^{j}(g)\right)=\Delta_{\lambda}\left(\sum_{j=0}^{n-1}p^{ \lambda j}\sigma^{j}(g)\right),\] \(\Delta_{\lambda}^{(n)}(g)\) is \(\lambda\)-Mahler summable. ## 3. Cycle maps and their \(\omega\)-sections The goal of this section is to define and study the properties of two auxiliary maps \(\mathcal{D}_{\lambda,\tau}\) and \(\mathcal{I}_{\lambda,\tau}^{(\omega)}\) that will help us retain some control over the perverse periodic behavior of the roots of unity \(\gamma\in\mathcal{C}(\tau)\) under the \(p\)-power map \(\gamma\mapsto\gamma^{p}\). 
The following definitions and results are relevant only for torsion Mahler trees \(\tau\in\mathcal{T}_{+}\). **Definition 3.1**.: With notation as in Definition 2.7, let \(\tau\in\mathcal{T}_{+}\) be a torsion Mahler tree, let \(g\in\mathbb{K}(x)\), and let us write the \(\tau\)-component \(g_{\tau}\) of \(g\) from Definition 2.5 as \[g_{\tau}=\sum_{k\in\mathbb{N}}\sum_{\alpha\in\tau}\frac{d_{k}(\alpha)}{(x- \alpha)^{k}}.\] We define the _cyclic component_ of \(g_{\tau}\) by \[\mathcal{C}(g_{\tau}):=\sum_{k\in\mathbb{N}}\sum_{\gamma\in\mathcal{C}(\tau) }\frac{d_{k}(\gamma)}{(x-\gamma)^{k}}.\] **Definition 3.2**.: Let \(\mathcal{S}:=\bigoplus_{k\in\mathbb{N}}\mathbb{K}\) denote the \(\mathbb{K}\)-vector space of finitely supported sequences in \(\mathbb{K}\). For \(\tau\in\mathcal{T}_{+}\), we let \(\mathcal{S}^{\mathcal{C}(\tau)}:=\bigoplus_{\gamma\in\mathcal{C}(\tau)} \mathcal{S}\). For \(\lambda\in\mathbb{Z}\), we define _cycle map_\(\mathcal{D}_{\lambda,\tau}\) to be the \(\mathbb{K}\)-linear endomorphism \[\mathcal{D}_{\lambda,\tau}:\mathcal{S}^{\mathcal{C}(\tau)}\to\mathcal{S}^{ \mathcal{C}(\tau)}:(d_{k}(\gamma))_{\begin{subarray}{c}k\in\mathbb{N}\\ \gamma\in\mathcal{C}(\tau)\end{subarray}}\mapsto\left(-d_{k}(\gamma)+p^{ \lambda}\sum_{s\geq k}V_{k,1}^{s}(\gamma)\cdot d_{s}(\gamma^{p})\right)_{ \begin{subarray}{c}k\in\mathbb{N}\\ \gamma\in\mathcal{C}(\tau)\end{subarray}}, \tag{3.1}\] where the Mahler coefficients \(V_{k,1}^{s}(\gamma)\) are defined as in (2.4). We treat the \(\mathbb{K}\)-vector space \(\mathcal{S}^{\mathcal{C}(\tau)}\) introduced in the preceding Definition 3.2 as an abstract receptacle for the coefficients occurring in the partial fraction decomposition of \(\mathcal{C}(g_{\tau})\) for \(\tau\in\mathcal{T}_{+}\) and arbitrary elements \(g\in\mathbb{K}(x)\). Note that the infinite summation in (3.1) is harmless, since \(d_{s}(\gamma^{p})=0\) for every \(\gamma\in\mathcal{C}(\gamma)\) for large enough \(s\in\mathbb{N}\). The cycle map \(\mathcal{D}_{\lambda,\tau}\) for \(\lambda=0\) is the negative of the (truncated) linear map introduced in [1, Lemma 4.14]. The relevance of \(\mathcal{D}_{\lambda,\tau}\) to our study of \(\lambda\)-Mahler summability is captured by the following immediate computational result. **Lemma 3.3**.: _Let \(\lambda\in\mathbb{Z}\), \(g\in\mathbb{K}(x)\), and \(\tau\in\mathcal{T}_{+}\). Let us write the cyclic components_ \[\mathcal{C}(g_{\tau})=\sum_{k\in\mathbb{N}}\sum_{\gamma\in\mathcal{C}(\tau)} \frac{d_{k}(\gamma)}{(x-\gamma)^{k}}\qquad\text{and}\qquad\mathcal{C}\left( \Delta_{\lambda}(g_{\tau})\right)=\sum_{k\in\mathbb{N}}\sum_{\gamma\in\mathcal{ C}(\tau)}\frac{c_{k}(\gamma)}{(x-\gamma)^{k}}\] _as in Definition 3.1. Writing \(\mathbf{d}:=(d_{k}(\gamma))_{k,\gamma}\) and \(\mathbf{c}:=(c_{k}(\gamma))_{k,\gamma}\) as vectors in \(\mathcal{S}^{\mathcal{C}(\tau)}\) as in Definition 3.2, we have \(\mathbf{c}=\mathcal{D}_{\lambda,\tau}(\mathbf{d})\)._ Proof.: It follows from Lemma 2.17 that \[\mathcal{C}(\sigma(g_{\tau}))=\sum_{k\in\mathbb{N}}\sum_{\gamma\in\mathcal{C}( \tau)}\frac{\sum_{s\geq k}V_{k,1}^{s}(\gamma)d_{s}(\gamma^{p})}{(x-\gamma)^{k }},\] and therefore, for every \(k\in\mathbb{N}\) and \(\gamma\in\mathcal{C}(\tau)\), \[c_{k}(\gamma)=-d_{k}(\gamma)+p^{\lambda}\sum_{s\geq k}V_{k,1}^{s}(\gamma)d_{s}( \gamma^{p}).\qed\] The following fundamental Lemma is essential to our study of \(\lambda\)-Mahler summability at torsion Mahler trees \(\tau\in\mathcal{T}_{+}\). 
**Lemma 3.4**.: _Let \(\lambda\in\mathbb{Z}\), \(\tau\in\mathcal{T}_{+}\), and set \(e:=|\mathcal{C}(\tau)|\) as in Definition 2.7. Let \(\mathcal{D}_{\lambda,\tau}\) be as in Definition 3.2._ 1. _If_ \(\lambda\leq 0\) _then_ \(\mathcal{D}_{\lambda,\tau}\) _is an isomorphism._ 2. _If_ \(\lambda\geq 1\) _then_ \(\operatorname{im}(\mathcal{D}_{\lambda,\tau})\) _has codimension_ \(1\) _in_ \(\mathcal{S}^{\mathcal{C}(\tau)}\) _and_ \(\ker(\mathcal{D}_{\lambda,\tau})=\mathbb{K}\cdot\mathbf{w}^{(\lambda)}\)_, where the vector_ \((w_{k}^{(\lambda)}(\gamma))=\mathbf{w}^{(\lambda)}\in\mathcal{S}^{\mathcal{C}( \tau)}\) _is recursively determined by the conditions_ \[w_{k}^{(\lambda)}(\gamma):=\begin{cases}0&\text{for }k>\lambda;\\ \gamma^{\lambda}&\text{for }k=\lambda;\\ \dfrac{p^{\lambda}\gamma^{k}}{1-p^{(\lambda-k)e}}\sum_{j=0}^{e-1}\sum_{s=k+1}^ {\lambda}p^{(\lambda-k)j}\mathbb{V}_{k,1}^{s}\gamma^{-sp^{j+1}}w_{s}^{(\lambda )}\big{(}\gamma^{p^{j+1}}\big{)}&\text{for any remaining }k<\lambda;\\ &\text{for each }\gamma\in\mathcal{C}(\tau)\text{, where the universal Mahler coefficients }\mathbb{V}_{k,1}^{s}\in\mathbb{Q}\text{ are as in Proposition \ref{prop:1}.}\end{cases}\] (3.2) Proof.: Let \((d_{k}(\gamma))=\mathbf{d}\in\mathcal{S}^{\mathcal{C}(\tau)}-\{\mathbf{0}\}\), let \(m\in\mathbb{N}\) be as large as possible such that \(d_{m}(\gamma)\neq 0\) for some \(\gamma\in\mathcal{C}(\tau)\), and let us write \((c_{k}(\gamma))=\mathbf{c}:=\mathcal{D}_{\lambda,\tau}(\mathbf{d})\). Let us first assume that \(\mathbf{d}\in\ker(\mathcal{D}_{\lambda,\tau})\Leftrightarrow\mathbf{c}= \mathbf{0}\). Then by the Definition 3.2 and our choice of \(m\), for each \(\gamma\in\mathcal{C}(\tau)\), \[0=c_{m}(\gamma)=p^{\lambda}V_{m,1}^{m}(\gamma)d_{m}(\gamma^{p})-d_{m}(\gamma) =p^{\lambda-m}\gamma^{m-pm}d_{m}(\gamma^{p})-d_{m}(\gamma), \tag{3.3}\] where the second equality results from Corollary 2.16. Since (3.3) holds for every \(\gamma\in\mathcal{C}(\tau)\) simultaneously, it follows that \(d_{m}(\gamma^{p^{j+1}})=p^{m-\lambda}\gamma^{(p^{j+1}-p^{j})m}d_{m}(\gamma^{p ^{j}})\) for every \(j\geq 0\) and for each \(\gamma\in\mathcal{C}(\gamma)\), whence none of the \(d_{m}(\gamma^{p^{j}})\) can be zero. Since \(\gamma^{p^{e}}=\gamma\), we find that \[1=\dfrac{d_{m}(\gamma^{p^{e}})}{d_{m}(\gamma)}=\prod_{j=0}^{e-1}\dfrac{d_{m}( \gamma^{p^{j+1}})}{d_{m}(\gamma^{p^{j}})}=\prod_{j=0}^{e-1}p^{m-\lambda} \gamma^{(p^{j+1}-p^{j})m}=p^{(m-\lambda)e}\gamma^{(p^{e}-1)m}=p^{(m-\lambda) e},\] which is only possible if \(m=\lambda\). Therefore \(d_{k}(\gamma)=0\) for every \(k>\lambda\), whence \(\mathcal{D}_{\lambda,\tau}\) is injective in case \(\lambda\leq 0\). In case \(\lambda\geq 1\), it also follows from (3.3) with \(m=\lambda\) that \(\gamma^{-p\lambda}d_{\lambda}(\gamma^{p})=\gamma^{-\lambda}d_{\lambda}(\gamma)=\omega\) must be a constant that does not depend on \(\gamma\in\mathcal{C}(\gamma)\). We claim that if we further impose that this \(\omega=1\), then the remaining componenets of our vector \(\mathbf{d}\) are uniquely determined by the recursion (3.2). 
Indeed, if \(\lambda=1\) then there are no more components to determine, whereas if \(\lambda\geq 2\) then we must have, for \(1\leq k\leq\lambda-1\), \[0=-d_{k}(\gamma)+p^{\lambda}\sum_{s=k}^{\lambda}V_{k,1}^{s}(\gamma)d_{s}(\gamma^{ p})\qquad\Longleftrightarrow\] \[d_{k}(\gamma)-p^{\lambda-k}\gamma^{k-pk}d_{k}(\gamma^{p})=d_{k}(\gamma)-p^{ \lambda}V_{k,1}^{k}(\gamma)d_{k}(\gamma^{p})=p^{\lambda}\sum_{s=k+1}^{\lambda }V_{k,1}^{s}(\gamma)d_{s}(\gamma^{p}),\] where the first equality is obtained from Corollary 2.16 and the second is just a rearrangement. Replacing the arbitrary \(\gamma\) in the above equation with \(\gamma^{p^{j}}\) for \(j=0,\ldots,e-1\), we find that the telescoping sum \[\gamma^{-k}\big{(}1-p^{(\lambda-k)e}\big{)}d_{k}(\gamma)=\sum_{j =0}^{e-1}p^{(\lambda-k)j}\gamma^{-kp^{j}}\cdot\Big{(}d_{k}\big{(}\gamma^{p^{j} }\big{)}-p^{\lambda-k}\gamma^{kp^{j}-kp^{j+1}}d_{k}\big{(}\gamma^{p^{j+1}} \big{)}\Big{)}\\ =\sum_{j=0}^{e-1}p^{(\lambda-k)j}\gamma^{-kp^{j}}\cdot p^{ \lambda}\sum_{s=k+1}^{\lambda}V_{k,1}^{s}\big{(}\gamma^{p^{j}}\big{)}d_{s} \big{(}\gamma^{p^{j+1}}\big{)}=p^{\lambda}\sum_{j=0}^{e-1}\sum_{s=k+1}^{ \lambda}p^{(\lambda-k)j}\mathbb{V}_{k,1}^{s}\gamma^{-sp^{j+1}}d_{s}\big{(} \gamma^{p^{j+1}}\big{)},\] which is clearly equivalent to the expression defining the components \(w_{k}^{(\lambda)}(\gamma)\) for \(k<\lambda\) in (3.2), and where we have once again used Lemma 2.13 to obtain the last equality, since \(V_{k,1}^{s}(\gamma^{p^{j}})=\mathbb{V}_{k,1}^{s}\gamma^{kp^{j}-sp^{j+1}}\). This concludes the proof of the statements concerning \(\ker(\mathcal{D}_{\lambda,\tau})\). Let us now prove the statements concerning \(\operatorname{im}(\mathcal{D}_{\lambda,\tau})\). We see from Definition 3.2 that \(\mathcal{D}_{\lambda,\tau}\) preserves the increasing filtration of \(\mathcal{S}^{\mathcal{C}(\tau)}\) by the finite-dimensional subspaces \[\mathcal{S}^{\mathcal{C}(\tau)}_{<m}:=\left\{(d_{k}(\gamma))\in\mathcal{S}^{ \mathcal{C}(\tau)}\ \big{|}\ d_{k}(\gamma)=0\text{ for }k\geq m\text{ and every }\gamma\in\mathcal{C}(\tau)\right\}. \tag{3.4}\] In case \(\lambda\leq 0\), since \(\mathcal{D}_{\lambda,\tau}\) is injective, it must restrict to an automorphism of \(\mathcal{S}^{\mathcal{C}(\tau)}_{<m}\) for each \(m\in\mathbb{N}\), concluding the proof of (1). In case \(\lambda\geq 1\), since \(\ker(\mathcal{D}_{\lambda,\tau})\) is one dimensional, it follows that \(\operatorname{im}(\mathcal{D}_{\lambda,\tau})\cap\mathcal{S}^{\mathcal{C}(\tau)} _{<m}\) has codimension \(1\) in \(\mathcal{S}^{\mathcal{C}(\tau)}_{<m}\) for every \(m\geq\lambda+1\), and therefore \(\operatorname{im}(\mathcal{D}_{\lambda,\tau})\) has codimension \(1\) in all of \(\mathcal{S}^{\mathcal{C}(\tau)}\). This concludes the proof. **Definition 3.5**.: Let \(\lambda\in\mathbb{Z}\), \(\tau\in\mathcal{T}_{+}\), and set \(e:=|\mathcal{C}(\tau)|\) as in Definition 2.7. We define the \(0\)-_section_\(\mathcal{I}^{(0)}_{\lambda,\tau}\) (of the map \(\mathcal{D}_{\lambda,\tau}\) of Definition 3.2) as follows. For \((c_{k}(\gamma))=\mathbf{c}\in\mathcal{S}^{\mathcal{C}(\tau)}\), let us write \((d_{k}(\gamma))=\mathbf{d}=\mathcal{I}^{(0)}_{\lambda,\tau}(\mathbf{c})\in \mathcal{S}^{\mathcal{C}(\tau)}\). We set each \(d_{k}(\gamma)=0\) whenever \(k\in\mathbb{N}\) is such that \(c_{k}(\gamma)=0\) for every \(\gamma\in\mathcal{C}(\tau)\). 
For any remaining \(k\in\mathbb{N}\), we define recursively \[d_{k}(\gamma):=\frac{\gamma^{k}}{p^{(\lambda-k)e}-1}\sum_{j=0}^{e-1}p^{( \lambda-k)j}\gamma^{-kp^{j}}\left[c_{k}\big{(}\gamma^{p^{j}}\big{)}-p^{\lambda }\sum_{s\geq k+1}V_{k,1}^{s}\big{(}\gamma^{p^{j}}\big{)}d_{s}\big{(}\gamma^{p ^{j+1}}\big{)}\right]\qquad\text{for }k\neq\lambda; \tag{3.5}\] and, if \(\lambda\geq 1\), we set \[d_{\lambda}(\gamma):=\frac{\gamma^{\lambda}}{e}\sum_{j=0}^{e-1}(j+1-e)\gamma^{ -\lambda p^{j}}\left[c_{\lambda}\big{(}\gamma^{p^{j}}\big{)}-p^{\lambda}\sum _{s\geq\lambda+1}V_{\lambda,1}^{s}\big{(}\gamma^{p^{j}}\big{)}d_{s}\big{(} \gamma^{p^{j+1}}\big{)}\right]. \tag{3.6}\] More generally, for any \(\omega\in\mathbb{K}\) the _\(\omega\)-section_\(\mathcal{I}^{(\omega)}_{\lambda,\tau}\) (of \(\mathcal{D}_{\lambda,\tau}\)) is defined by setting \[\mathcal{I}^{(\omega)}_{\lambda,\tau}(\mathbf{c}):=\begin{cases}\mathcal{I}^{(0 )}_{\lambda,\tau}(\mathbf{c})&\text{if $\lambda\leq 0$};\\ \mathcal{I}^{(0)}_{\lambda,\tau}(\mathbf{c})+\omega\mathbf{w}^{(\lambda)}& \text{if $\lambda\geq 1$};\end{cases} \tag{3.7}\] for every \(\mathbf{c}\in\mathcal{S}^{\mathcal{C}(\tau)}\), where \(\mathbf{w}^{(\lambda)}\) is the vector defined in (3.2) for \(\lambda\geq 1\). **Proposition 3.6**.: _Let \(\lambda\in\mathbb{Z}\), \(\tau\in\mathcal{T}_{+}\), and set \(e:=|\mathcal{C}(\tau)|\) as in Definition 2.7. Let \(\omega\in\mathbb{K}\) and consider the \(\omega\)-section \(\mathcal{I}^{(\omega)}_{\lambda,\tau}\) of Definition 3.5. Let \(\mathbf{c}\in\mathcal{S}^{\mathcal{C}(\tau)}\), and let us write \(\mathbf{d}:=\mathcal{I}^{(\omega)}_{\lambda,\tau}(\mathbf{c})\) and \(\tilde{\mathbf{c}}:=\mathcal{D}_{\lambda,\tau}(\mathbf{d})\) as in Definition 3.2. Then_ \[c_{k}(\gamma)=\tilde{c}_{k}(\gamma)\qquad\text{whenever $k\neq\lambda$},\text{ for every $\gamma\in\mathcal{C}(\gamma)$}; \tag{3.8}\] _and, in case \(\lambda\geq 1\),_ \[c_{\lambda}(\gamma)-\tilde{c}_{\lambda}(\gamma)=\frac{\gamma^{\lambda}}{e} \sum_{j=1}^{e}\gamma^{-\lambda p^{j}}\left(c_{\lambda}\Big{(}\gamma^{p^{j}} \Big{)}-p^{\lambda}\sum_{s\geq\lambda+1}V^{s}_{\lambda,1}\Big{(}\gamma^{p^{j}} \Big{)}d_{s}\Big{(}\gamma^{p^{j+1}}\Big{)}\right). \tag{3.9}\] _Moreover, \(\mathbf{c}\in\operatorname{im}(\mathcal{D}_{\lambda,\tau})\) if and only if \(\mathbf{c}=\tilde{\mathbf{c}}\)._ Proof.: The expression (3.5) arises from a similar computation as in the proof of Lemma 3.4. Let \(\mathbf{c}\in\mathcal{S}^{\mathcal{C}(\tau)}\) be arbitrary, and let us try (and maybe fail), to construct \(\mathbf{d}\in\mathcal{S}^{\mathcal{C}(\tau)}\) such that \(\mathcal{D}_{\lambda,\tau}(\mathbf{d})=\mathbf{c}\), that is, with \[c_{k}(\gamma)=-d_{k}(\gamma)+p^{\lambda}\sum_{s\geq k}V^{s}_{k,1}(\gamma)d_{s }(\gamma)\qquad\Longleftrightarrow \tag{3.10}\] \[p^{\lambda-k}\gamma^{k-pk}d_{k}(\gamma^{p})-d_{k}(\gamma)=c_{k}(\gamma)-p^{ \lambda}\sum_{s\geq k+1}V^{s}(\gamma)d_{s}(\gamma^{p}). \tag{3.11}\] Then we again have the telescoping sum \[\big{(}p^{(\lambda-k)e}-1\big{)}\gamma^{-k}d_{k}(\gamma)=\sum_{j =0}^{e-1}p^{(\lambda-k)j}\gamma^{-kp^{j}}\cdot\Big{(}p^{\lambda-k}\gamma^{kp^ {j}-kp^{j+1}}d_{k}\big{(}\gamma^{p^{j+1}}\big{)}-d_{k}\big{(}\gamma^{p^{j}} \big{)}\Big{)}\] \[=\sum_{j=0}^{e-1}p^{(\lambda-k)j}\gamma^{-kp^{j}}\cdot\left(c_{k }\big{(}\gamma^{p^{j}}\big{)}-p^{\lambda}\sum_{s\geq k+1}V^{s}_{k,1}\big{(} \gamma^{p^{j}}\big{)}d_{s}\big{(}\gamma^{p}\big{)}\right),\] which is equivalent to (3.5) provided precisely that \(k\neq\lambda\). 
Thus we see that (3.5) is a _necessary_ condition on the \(d_{k}(\gamma)\) in order to satisfy (3.8). In case \(\lambda\leq 0\), we know that \(\mathcal{D}_{\lambda,\tau}\) is an isomorphism by Lemma 3.4, in which case this condition must also be sufficient and we have nothing more to show. Let us assume from now on that \(\lambda\geq 1\). Since by Lemma 3.4 the restriction of \(\mathcal{D}_{\lambda,\tau}\) to \[\mathcal{S}^{\mathcal{C}(\tau)}_{>\lambda}:=\big{\{}\mathbf{d}\in\mathcal{S}^ {\mathcal{C}(\tau)}\ \big{|}\ d_{k}(\gamma)=0\text{ for every $k\leq\lambda$ and $\gamma\in\mathcal{C}(\gamma)$}\big{\}}\] is injective, and since it preserves the induced filtration (3.4), it follows that \(\operatorname{pr}_{\lambda}\circ\mathcal{D}_{\lambda,\tau}\) restricts to an automorphism of \(\mathcal{S}^{\mathcal{C}(\tau)}_{>\lambda}\), where \(\operatorname{pr}_{\lambda}:\mathcal{S}^{\mathcal{C}(\tau)}\twoheadrightarrow \mathcal{S}^{\mathcal{C}(\tau)}_{\lambda}\) denotes the obvious projection map. Therefore the necessary condition (3.5) must also be sufficient in order to satisfy (3.8) for \(k>\lambda\). Since \(\mathcal{D}_{\lambda,\tau}\) also restricts to an automorphism of \(\mathcal{S}^{\mathcal{C}(\tau)}_{<\lambda}\) (trivially so in case \(\lambda=1\), since \(\mathcal{S}^{\mathcal{C}(\tau)}_{<1}=\{\mathbf{0}\}\)), it similarly follows that the necessary condition (3.7) must also be sufficient in order to satisfy (3.8) for any \(k<\lambda\) also, regardless of how the \(d_{\lambda}(\gamma)\) are chosen. Now for the prescribed choice of \(d_{\lambda}(\gamma)\) in (3.6), we compute \[\tilde{c}_{\lambda}(\gamma)-p^{\lambda}\sum_{s\geq\lambda+1}V_{\lambda,1}^{s}( \gamma)d_{s}(\gamma^{p})=p^{\lambda}V_{\lambda,1}^{\lambda}(\gamma)d_{\lambda} (\gamma^{p})-d_{\lambda}(\gamma)=\gamma^{\lambda-p\lambda}d_{\lambda}(\gamma^{ p})-d_{\lambda}(\gamma), \tag{3.12}\] where the first equality follows from the definition of \(\tilde{\mathbf{c}}=\mathcal{D}_{\lambda,\tau}(\mathbf{d})\), and the second equality from Corollary 2.16. On the other hand, after re-indexing the sum in (3.6), evaluated at \(\gamma^{p}\) instead of \(\gamma\), we find that and after subtracting \(d_{\lambda}(\gamma)\) exactly as given in (3.6) we find that \[\gamma^{\lambda-p\lambda}d_{\lambda}(\gamma^{p})-d_{\lambda}( \gamma)=-\frac{\gamma^{\lambda}}{e}\sum_{j=1}^{e-1}\gamma^{-\lambda p^{j}}\left[ c_{\lambda}\big{(}\gamma^{p^{j}}\big{)}-p^{\lambda}\sum_{s\geq\lambda+1}V_{ \lambda,1}^{s}\big{(}\gamma^{p^{j}}\big{)}d_{s}\big{(}\gamma^{p^{j+1}}\big{)}\right] \\ -\frac{\gamma^{\lambda}}{e}(1-e)\gamma^{-\lambda}\left[c_{\lambda }\big{(}\gamma\big{)}-p^{\lambda}\sum_{s\geq\lambda+1}V_{\lambda,1}^{s}(\gamma )d_{s}\big{(}\gamma^{p}\big{)}\right]\\ =-\frac{\gamma^{\lambda}}{e}\sum_{j=0}^{e-1}\gamma^{-\lambda p^{j} }\left[c_{\lambda}\big{(}\gamma^{p^{j}}\big{)}-p^{\lambda}\sum_{s\geq\lambda+ 1}V_{\lambda,1}^{s}\big{(}\gamma^{p^{j}}\big{)}d_{s}\big{(}\gamma^{p^{j+1}} \big{)}\right]+c_{\lambda}(\gamma)-p^{\lambda}\sum_{s\geq\lambda+1}V_{\lambda, 1}^{s}(\gamma)d_{s}(\gamma^{p}), \tag{3.13}\] with the convention that the sum \(\sum_{j=1}^{e-1}\) is empty in case \(e=1\). Putting (3.12) and (3.13) together establishes (3.9). Since \(\mathbf{c}=\tilde{\mathbf{c}}\) is a non-trivial sufficient for \(\mathbf{c}\in\operatorname{im}(\mathcal{D}_{\lambda,\tau})\), by Lemma 3.4 it must also be necessary, since \(\operatorname{im}(\mathcal{D}_{\lambda,\tau})\) has codimension \(1\) in \(\mathcal{S}^{\mathcal{C}(\tau)}\). This concludes the proof. ## 4. 
Our goal in this section is to prove Theorem 4.2: if \(f\in\mathbb{K}(x)\) is \(\lambda\)-Mahler summable for some \(\lambda\in\mathbb{Z}\), then it has non-zero dispersion almost everywhere, generalizing to arbitrary \(\lambda\in\mathbb{Z}\) the analogous result for \(\lambda=0\) obtained in [2, Corollary 3.2]. In spite of the exceptions that occur for \(\lambda\geq 1\), this will be an essential tool in our proofs that twisted Mahler discrete residues comprise a complete obstruction to \(\lambda\)-Mahler summability. In the following preliminary result, which generalizes [2, Proposition 3.1] from the special case \(\lambda=0\) to arbitrary \(\lambda\in\mathbb{Z}\), we relate the Mahler dispersions of a \(\lambda\)-Mahler summable \(f\in\mathbb{K}(x)\) to those of a certificate \(g\in\mathbb{K}(x)\) such that \(f=\Delta_{\lambda}(g)\). **Proposition 4.1**.: _Let \(f,g\in\mathbb{K}(x)\) and \(\lambda\in\mathbb{Z}\) such that \(f=\Delta_{\lambda}(g)\)._ 1. _If_ \(\infty\in\operatorname{supp}(f)\)_, then_ \(\operatorname{disp}(f,\infty)=\operatorname{disp}(g,\infty)+1\)_, except in case_ \(\lambda\neq 0\) _and the Laurent polynomial component_ \(f_{\infty}=c_{0}\in\mathbb{K}^{\times}\)_, in which case we must have_ \(g_{\infty}=c_{0}/(p^{\lambda}-1)\)_._ 2. _If_ \(\infty\neq\tau\in\operatorname{supp}(f)\)_, then_ \(\operatorname{disp}(f,\tau)=\operatorname{disp}(g,\tau)+1\)_, with the convention that_ \(\infty+1=\infty\)_, except possibly in case that:_ \(\mathcal{C}(\tau)\) _is non-empty; and_ \(\lambda\geq 1\)_; and the order of every pole of_ \(g\) _in_ \(\mathcal{C}(\tau)\) _is exactly_ \(\lambda\)_._ Proof.: (1). First suppose that \(\{0\}\neq\theta\in\mathbb{Z}/\mathcal{P}\) is such that \(g_{\theta}\neq 0\), and let us write \[g_{\theta}=\sum_{j=0}^{d}c_{ip^{j}}x^{ip^{j}},\] where we assume that \(c_{i}c_{ip^{d}}\neq 0\), _i.e._, that \(\operatorname{disp}(g_{\theta},\infty)=d\). Then \[\Delta_{\lambda}(g_{\theta})=p^{\lambda}c_{ip^{d}}x^{ip^{d+1}}-c_{i}x^{i}+\sum_{j=1}^{d}(p^{\lambda}c_{ip^{j-1}}-c_{ip^{j}})x^{ip^{j}},\] from which it follows that \(0\neq f_{\theta}=\Delta_{\lambda}(g_{\theta})\) and \(\operatorname{disp}(f_{\theta},\infty)=\operatorname{disp}(\Delta_{\lambda}(g_{\theta}),\infty)=d+1\). Since in this case Definition 2.11 gives that \[\operatorname{disp}(f,\infty)=\max\left\{\operatorname{disp}\left(f_{\theta},\infty\right)\ |\ \{0\}\neq\theta\in\mathbb{Z}/\mathcal{P},f_{\theta}\neq 0\right\},\] and similarly for \(\operatorname{disp}(g,\infty)\), we find that \(\operatorname{disp}(f,\infty)=\operatorname{disp}(g,\infty)+1\) provided that the Laurent component \(g_{\infty}\in\mathbb{K}[x,x^{-1}]\) is not constant. In any case, by Lemma 2.10, if \(\infty\in\operatorname{supp}(f)\) then \(\infty\in\operatorname{supp}(g)\). In this case, we have \(0\neq f_{\infty}=\Delta_{\lambda}(g_{\infty})\), since \(\infty\in\operatorname{supp}(f)\), and if \(\lambda=0\) it follows in particular that \(g_{\infty}\notin\mathbb{K}\). In case \(\lambda\neq 0\) and \(f_{\infty}=c_{0}\in\mathbb{K}^{\times}\), the computation above shows that \(g_{\theta}=0\) for every \(\{0\}\neq\theta\in\mathbb{Z}/\mathcal{P}\), and we see that \(g_{\infty}=g_{\{0\}}=c_{0}/(p^{\lambda}-1)\). (2). Suppose \(\tau\in\operatorname{supp}(f)\), and therefore \(\tau\in\operatorname{supp}(g)\) by Lemma 2.10. We consider two cases, depending on whether \(\operatorname{disp}(g,\tau)\) is finite or not.
If \(\operatorname{disp}(g,\tau)=:d<\infty\), let \(\alpha\in\tau\) be such that \(\alpha\) and \(\alpha^{p^{d}}\) are poles of \(g\). Choose \(\gamma\in\tau\) such that \(\gamma^{p}=\alpha\). Then \(\gamma\) is a pole of \(\sigma(g)\) but not of \(g\) (by the maximality of \(d\)), and therefore \(\gamma\) is a pole of \(f\). On the other hand, \(\gamma^{p^{d+1}}=\alpha^{p^{d}}\) is a pole of \(g\) but not of \(\sigma(g)\), for if \(\alpha^{p^{d}}\) were a pole of \(\sigma(g)\) then \(\alpha^{p^{d+1}}\) would be a pole of \(g\), contradicting the maximality of \(d\). Therefore \(\gamma^{p^{d+1}}\) is a pole of \(f\). It follows that \(\operatorname{disp}(f,\tau)\geq d+1\). One can show equality by contradiction: if \(\alpha\in\tau\) is a pole of \(f\) such that \(\alpha^{p^{s}}\) is also a pole of \(f\) for some \(s>d+1\), then each of \(\alpha\) and \(\alpha^{p^{s}}\) is either a pole of \(g\) or a pole of \(\sigma(g)\). If \(\alpha^{p^{s}}\) is a pole of \(g\), then \(\alpha\) cannot also be a pole of \(g\), for this would contradict the maximality of \(d\), whence \(\alpha\) must be a pole of \(\sigma(g)\), but then \(\alpha^{p}\) would have to be a pole of \(g\), still contradicting the maximality of \(d\). Hence \(\alpha^{p^{s}}\) must be a pole of \(\sigma(g)\). But then \(\alpha^{p^{s+1}}\) is a pole of \(g\), which again contradicts the maximality of \(d\) whether \(\alpha\) is a pole of \(\sigma(g)\) or of \(g\). This concludes the proof that \(\operatorname{disp}(f,\tau)=\operatorname{disp}(g,\tau)+1\) in this case where \(\operatorname{disp}(g,\tau)<\infty\). If \(\operatorname{disp}(g,\tau)=\infty\) then \(g\) has a pole in \(\mathcal{C}(\tau)\) by Lemma 2.12. If \(f\) also has a pole in \(\mathcal{C}(\tau)\) then \(\operatorname{disp}(f,\tau)=\infty=\operatorname{disp}(g,\tau)+1\) and we are done. So let us suppose \(\operatorname{disp}(f,\tau)<\infty\) and conclude that \(g\) has a pole of order exactly \(\lambda\) at every \(\gamma\in\mathcal{C}(\tau)\). In this case, writing \[0\neq\mathcal{C}(g_{\tau})=\sum_{k\in\mathbb{N}}\sum_{\gamma\in\mathcal{C}(\tau)}\frac{d_{k}(\gamma)}{(x-\gamma)^{k}}\qquad\text{and}\qquad 0=\mathcal{C}(f_{\tau})=\sum_{k\in\mathbb{N}}\sum_{\gamma\in\mathcal{C}(\tau)}\frac{c_{k}(\gamma)}{(x-\gamma)^{k}}\] as in Definition 3.1, it follows from Lemma 3.3 that \(\mathcal{D}_{\lambda,\tau}(\mathbf{d})=\mathbf{c}\), where \(\mathbf{d}:=(d_{k}(\gamma))\) and \(\mathbf{c}:=(c_{k}(\gamma))\). By Lemma 3.4, \(\mathbf{d}=\omega\mathbf{w}^{(\lambda)}\) for some \(0\neq\omega\in\mathbb{K}\), where \(\mathbf{w}^{(\lambda)}=(w_{k}^{(\lambda)}(\gamma))\) is the unique vector specified in Lemma 3.4, which has every component \(w_{k}^{(\lambda)}(\gamma)=0\) for \(k>\lambda\) and each component \(w_{\lambda}^{(\lambda)}(\gamma)=\gamma^{\lambda}\neq 0\) for \(\gamma\in\mathcal{C}(\tau)\). In particular, \(d_{\lambda}(\gamma)=\omega\gamma^{\lambda}\neq 0\) and \(d_{k}(\gamma)=0\) for every \(k>\lambda\) and \(\gamma\in\mathcal{C}(\tau)\), so \(g\) has a pole of order exactly \(\lambda\) at every \(\gamma\in\mathcal{C}(\tau)\), as claimed. In the next result we deduce from Proposition 4.1 that if \(f\in\mathbb{K}(x)\) is \(\lambda\)-Mahler summable then \(f\) has non-zero dispersion almost everywhere. For the applications in the sequel, it will be essential for us to have these restrictions defined intrinsically in terms of \(f\), with no regard to any particular choice of certificate \(g\in\mathbb{K}(x)\) such that \(f=\Delta_{\lambda}(g)\). **Theorem 4.2**.: _Let \(\lambda\in\mathbb{Z}\) and suppose that \(f\in\mathbb{K}(x)\) is \(\lambda\)-Mahler summable._ 1.
_If_ \(\infty\in\operatorname{supp}(f)\) _and either_ \(\lambda=0\) _or_ \(f_{\infty}\notin\mathbb{K}\) _then_ \(\operatorname{disp}(f,\infty)>0\)_._ 2. _If_ \(\lambda\leq 0\) _then_ \(\operatorname{disp}(f,\tau)>0\) _for every_ \(\infty\neq\tau\in\operatorname{supp}(f)\)_._ 3. _If_ \(\lambda\geq 1\) _and_ \(\infty\neq\tau\in\operatorname{supp}(f)\) _is such that either_ \(\tau\in\mathcal{T}_{0}\) _or_ \(\operatorname{ord}(f,\tau)\neq\lambda\) _then_ \(\operatorname{disp}(f,\tau)>0\)_._ Proof.: Suppose \(f\in\mathbb{K}(x)\) is \(\lambda\)-Mahler summable and let \(g\in\mathbb{K}(x)\) such that \(f=\Delta_{\lambda}(g)\). (1) and (2). If \(\infty\in\operatorname{supp}(f)\) then by Proposition 4.1\(\operatorname{disp}(f,\infty)=\operatorname{disp}(g,\infty)+1>0\) provided that either \(\lambda=0\) or \(f_{\infty}\notin\mathbb{K}\). If \(\lambda\leq 0\) then \(\operatorname{disp}(f,\tau)=\operatorname{disp}(g,\tau)+1>0\) for all \(\infty\neq\tau\in\operatorname{supp}(f)\) by Proposition 4.1. (3). Assuming that \(\lambda\geq 1\), we know by Proposition 4.1 that \(\operatorname{disp}(f,\tau)=\operatorname{disp}(g,\tau)+1>0\) for every \(\infty\neq\tau\in\operatorname{supp}(f)\), except possibly in case \(\tau\in\mathcal{T}_{+}\) and every pole of \(g\) in \(\mathcal{C}(\tau)\) has order exactly \(\lambda\). Thus our claim is already proved for \(\tau\in\mathcal{T}_{0}\). So from now on we suppose \(\tau\in\mathcal{T}_{+}\). By Lemma 2.10(7), \(\operatorname{ord}(f,\tau)=\operatorname{ord}(g,\tau)\), and therefore if \(\operatorname{ord}(f,\tau)<\lambda\), there are no poles of \(g\) of order \(\lambda\) anywhere in \(\tau\), let alone in \(\mathcal{C}(\tau)\), so \(\operatorname{disp}(f,\tau)=\operatorname{disp}(g,\tau)+1>0\) by Proposition 4.1 in this case also. Moreover, if \(f\) has a pole of any order in \(\mathcal{C}(\tau)\), then \(\operatorname{disp}(f,\tau)=\infty>0\) by Lemma 2.12. It remains to show that if \(m:=\operatorname{ord}(f,\tau)>\lambda\) then \(\operatorname{disp}(f,\tau)>0\). In this case, even though \(\operatorname{ord}(g,\tau)=m>\lambda\) by Lemma 2.10 it may still happen that \(g\) has a pole of order exactly \(\lambda\) at every \(\gamma\in\mathcal{C}(\tau)\) and yet the higher-order poles of \(g\) lie in the complement \(\tau-\mathcal{C}(\tau)\), in which case Proposition 4.1 remains silent. So let \(\alpha_{1},\dots,\alpha_{s}\in\operatorname{sing}(g,\tau)\) be all the pairwise-distinct elements at which \(g\) has a pole of order \(m>\lambda\). Choose \(\beta_{j}\in\tau\) such that \(\beta_{j}^{p}=\alpha_{j}\) for each \(j=1,\dots,s\), and let us write \[g_{\tau}=\sum_{j=1}^{s}\frac{d_{j}}{(x-\alpha_{j})^{m}}+(\text{lower-order terms}),\quad\text{so that}\] \[f_{\tau}=\sum_{j=1}^{s}\left(\sum_{i=0}^{p-1}\frac{p^{\lambda}V_{m,1}^{m}(\zeta_ {p}^{i}\beta_{j})\cdot d_{j}}{(x-\zeta_{p}^{i}\beta_{j})^{m}}-\frac{d_{j}}{(x- \alpha_{j})^{m}}\right)+(\text{lower-order-terms})\] by Lemma 2.17. If any \(\alpha_{j}\in\mathcal{C}(\tau)\), then we already have \(\operatorname{disp}(f,\tau)=\operatorname{disp}(g,\tau)+1>0\) by Proposition 4.1. So we can assume without loss of generality that no \(\alpha_{j}\) belongs to \(\mathcal{C}(\tau)\) which implies that the \((p+1)\cdot s\) apparent poles \(\zeta_{p}^{i}\beta_{j}\) and \(\alpha_{j}\) of \(f_{\tau}\) of order \(m\) are pairwise distinct, and in particular no cancellations occur and these are all true poles of \(f\) of order \(m\). 
Hence \(\operatorname{disp}(f,\tau)\geq 1\) also in this last case where \(\operatorname{ord}(f,\tau)=m>\lambda\). _Remark 4.3_.: The exceptions in Theorem 4.2 cannot be omitted. If \(\lambda\neq 0\) then every \(\Delta_{\lambda}(\frac{c}{p^{\lambda}-1})=c\in\mathbb{K}\) is \(\lambda\)-Mahler summable and has \(\operatorname{disp}(c,\infty)=0\) whenever \(c\neq 0\). If \(\lambda\geq 1\) then for any \(\gamma\in\mathcal{C}(\tau)\) with \(\varepsilon(\tau)=:e\geq 1\) one can construct (cf. Section 5.3) \(g=\sum_{k=1}^{\lambda}\sum_{\ell=0}^{e-1}c_{k,\ell}\cdot(x-\gamma^{p^{\ell}}) ^{-k}\) such that \(\operatorname{disp}(\Delta_{\lambda}(g),\tau)=0\). The simplest such example is with \(\lambda,\gamma,e=1\) (and \(p\in\mathbb{Z}_{\geq 2}\) still arbitrary): \[f:=\Delta_{1}\left(\frac{1}{x-1}\right)=\frac{p}{x^{p}-1}-\frac{1}{x-1}=\frac {pV_{1,1}^{1}(1)-1}{x-1}+\sum_{i=1}^{p-1}\frac{pV_{1,1}^{1}(\zeta_{p}^{i})}{x- \zeta_{p}^{i}}=\sum_{i=1}^{p-1}\frac{\zeta_{p}^{i}}{x-\zeta_{p}^{i}},\] which is \(1\)-Mahler summable but has \(\operatorname{disp}(f,\tau(1))=0\). More generally, all other such examples for arbitrary \(\lambda\geq 1\) and \(\tau\in\mathcal{T}_{+}\), of \(f\in\mathbb{K}(x)\) such that \(f_{\tau}\) is \(\lambda\)-Mahler summable but \(\operatorname{disp}(f,\tau)=0\), arise essentially from the basic construction \(f_{\tau}:=\Delta_{\lambda}(g_{\tau})\) with \[g_{\tau}=\sum_{k=1}^{\lambda}\sum_{\gamma\in\mathcal{C}(\tau)}\frac{\omega \cdot w_{k}^{(\lambda)}(\gamma)}{(x-\gamma)^{k}}\] for an arbitrary constant \(0\neq\omega\in\mathbb{K}\) and the vector \(\mathbf{w}^{(\lambda)}=(w_{k}^{(\lambda)}(\gamma))\) defined in Lemma 3.4. ## 5. Twisted Mahler discrete residues Our goal in this section is to define the \(\lambda\)-Mahler discrete residues of \(f(x)\in\mathbb{K}(x)\) for \(\lambda\in\mathbb{Z}\) and prove our Main Theorem in Section 5.4, that these \(\lambda\)-Mahler discrete residues comprise a complete obstruction to \(\lambda\)-Mahler summability. We begin with the relatively simple construction of \(\lambda\)-Mahler discrete residues at \(\infty\) (for Laurent polynomials), followed by the construction of \(\lambda\)-Mahler discrete residues at Mahler trees \(\tau\in\mathcal{T}=\mathcal{T}_{0}\cup\mathcal{T}_{+}\) (see Definition 2.7), first for non-torsion \(\tau\in\mathcal{T}_{0}\), and finally for torsion \(\tau\in\mathcal{T}_{+}\), in increasing order of complexity, and prove separately in each case that these \(\lambda\)-Mahler discrete residues comprise a complete obstruction to the \(\lambda\)-Mahler summability of the corresponding components of \(f\). ### Twisted Mahler discrete residues at infinity We now define the \(\lambda\)-Mahler discrete residue of \(f\in\mathbb{K}(x)\) at \(\infty\) in terms of the Laurent polynomial component \(f_{\infty}\in\mathbb{K}[x,x^{-1}]\) of \(f\) in (2.1), and show that it forms a complete obstruction to the \(\lambda\)-Mahler summability of \(f_{\infty}\). The definition and proof in this case are both straightforward, but they provide helpful moral guidance for the analogous definitions and proofs in the case of \(\lambda\)-Mahler discrete residues at Mahler trees \(\tau\in\mathcal{T}\). 
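Before turning to the definitions, we record a quick computational sanity check of the example in Remark 4.3 above. The following is a minimal illustrative sketch and not part of the formal development: it assumes Python with SymPy, takes \(p=2\) for concreteness, and the helper names `mahler` and `delta` are ours. It verifies that \(f=\Delta_{1}(1/(x-1))\) is indeed given by the sum over primitive \(p\)-th roots of unity displayed in Remark 4.3, so that \(f\) is \(1\)-Mahler summable even though its only pole lies in the cycle \(\mathcal{C}(\tau(1))\), whence \(\operatorname{disp}(f,\tau(1))=0\).

```python
# Minimal sanity check of the example in Remark 4.3, assuming p = 2.
# The helper names below (mahler, delta) are ours, not notation from the text.
import sympy as sp

x = sp.symbols('x')
p, lam = 2, 1                                  # Mahler base p and twist lambda

def mahler(g):
    """The Mahler endomorphism sigma: g(x) -> g(x^p)."""
    return g.subs(x, x**p)

def delta(g, lam):
    """The twisted difference operator Delta_lambda = p^lambda * sigma - id."""
    return p**lam * mahler(g) - g

f = sp.cancel(delta(1/(x - 1), lam))           # f = Delta_1(1/(x-1)), summable by construction
zeta = -1                                      # the primitive p-th root of unity for p = 2
expected = zeta/(x - zeta)                     # sum of zeta_p^i/(x - zeta_p^i) over i = 1,...,p-1
assert sp.simplify(f - expected) == 0

# The only pole of f is at zeta_p, which already lies in C(tau(1)),
# so disp(f, tau(1)) = 0 even though f is 1-Mahler summable.
print(sp.factor(f))
```

For larger \(p\) the same check goes through with `zeta = sp.exp(2*sp.pi*sp.I/p)` and `expected` replaced by the full sum over \(i=1,\dots,p-1\); the point of the sketch is only to make the exceptional behaviour of Remark 4.3 concrete.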
**Definition 5.1**.: For \(f\in\mathbb{K}(x)\) and \(\lambda\in\mathbb{Z}\), the \(\lambda\)-_Mahler discrete residue_ of \(f\) at \(\infty\) is the vector \[\operatorname{dres}_{\lambda}(f,\infty)=\Bigl{(}\operatorname{dres}_{\lambda}( f,\infty)_{\theta}\Bigr{)}_{\theta\,\in\,\mathbb{Z}/\mathcal{P}}\in\bigoplus_{ \theta\,\in\,\mathbb{Z}/\mathcal{P}}\mathbb{K}\] defined as follows. Write \(f_{\infty}=\sum_{\theta\,\in\,\mathbb{Z}/\mathcal{P}}f_{\theta}\) as in Definition 2.2, and write each component \(f_{\theta}=\sum_{j=0}^{h_{\theta}}c_{ip^{j}}{x^{ip^{j}}}\) with \(p\nmid i\) whenever \(i\neq 0\) (that is, with each \(i\) initial in its maximal \(\mathcal{P}\)-trajectory \(\theta\)), and where \(h_{\theta}=0\) if \(f_{\theta}=0\) and otherwise \(h_{\theta}\in\mathbb{Z}_{\geq 0}\) is as large as possible such that \(c_{ip^{h_{\theta}}}\neq 0\). Then we set \[\operatorname{dres}_{\lambda}(f,\infty)_{\theta}:=p^{\lambda h_{\theta}}\sum_{j=0 }^{h_{\theta}}p^{-\lambda j}c_{ip^{j}}\quad\text{for }\theta\neq\{0\};\qquad\text{and}\qquad \operatorname{dres}_{\lambda}(f,\infty)_{\{0\}}:=\begin{cases}c_{0}&\text{if } \lambda=0;\\ 0&\text{if }\lambda\neq 0.\end{cases}\] **Proposition 5.2**.: _For \(f\in\mathbb{K}(x)\) and \(\lambda\in\mathbb{Z}\), the component \(f_{\infty}\in\mathbb{K}[x,x^{-1}]\) in (2.1) is \(\lambda\)-Mahler summable if and only if \(\operatorname{dres}_{\lambda}(f,\infty)=\mathbf{0}\)._ Proof.: By Lemma 2.3, \(f_{\infty}\) is \(\lambda\)-Mahler summable if and only if \(f_{\theta}\) is \(\lambda\)-Mahler summable for all \(\theta\in\mathbb{Z}/\mathcal{P}\). We shall show that \(f_{\theta}\) is \(\lambda\)-Mahler summable if and only if \(\operatorname{dres}_{\lambda}(f,\infty)_{\theta}=0\). If \(\lambda\neq 0\) then \(f_{\{0\}}=\Delta_{\lambda}(\frac{c_{0}}{p^{\lambda}-1})\) is always \(\lambda\)-Mahler summable, whilst we have defined \(\operatorname{dres}_{\lambda}(f,\infty)_{\{0\}}=0\) in this case. On the other hand, for \(\lambda=0\), \(f_{\{0\}}=\operatorname{dres}_{0}(f,\infty)_{\{0\}}\), and \(\operatorname{disp}(f_{\{0\}},\infty)=0\) if \(f_{\{0\}}\neq 0\), whilst if \(f_{\{0\}}=0\) then it is clearly \(\lambda\)-Mahler summable. By Theorem 4.2 in case \(\lambda=0\), and trivially in case \(\lambda\neq 0\), we conclude that \(f_{\{0\}}\) is \(\lambda\)-Mahler summable if and only if \(\operatorname{dres}_{\lambda}(f,\infty)_{\{0\}}=0\). Now let us assume \(\{0\}\neq\theta\in\mathbb{Z}/\mathcal{P}\) and let us write \(f_{\theta}=\sum_{j\geq 0}c_{ip^{j}}x^{ip^{j}}\in\mathbb{K}[x,x^{-1}]_{\theta}\), for the unique minimal \(i\in\theta\) such that \(p\nmid i\). If \(f_{\theta}=0\) then we have nothing to show, so suppose \(f_{\theta}\neq 0\) and let \(h_{\theta}\in\mathbb{Z}_{\geq 0}\) be maximal such that \(c_{ip^{h_{\theta}}}\neq 0\). Letting \(\Delta_{\lambda}^{(n)}:=p^{\lambda n}\sigma^{n}-\operatorname{id}\) as in Lemma 2.17, we find that \[\bar{f}_{\lambda,\theta}:=f_{\theta}+\sum_{j=0}^{h_{\theta}}\Delta_{\lambda}^{ (h_{\theta}-j)}(c_{ip^{j}}x^{ip^{j}})=\sum_{j=0}^{h_{\theta}}p^{\lambda(h_{ \theta}-j)}c_{ip^{j}}x^{ip^{h_{\theta}}}+0=\operatorname{dres}_{\lambda}(f, \infty)_{\theta}\cdot x^{ip^{h_{\theta}}}.\] By Lemma 2.17, we see that \(f_{\theta}\) is \(\lambda\)-Mahler summable if and only if \(\bar{f}_{\lambda,\theta}\) is \(\lambda\)-Mahler summable. Clearly, \(\bar{f}_{\lambda,\theta}=0\) if and only if \(\operatorname{dres}(f,\infty)_{\theta}=0\). 
We also see that \(\operatorname{disp}(\bar{f}_{\lambda,\theta},\infty)=0\) if \(\operatorname{dres}_{\lambda}(f,\infty)_{\theta}\neq 0\), in which case \(\bar{f}_{\lambda,\theta}\) cannot be \(\lambda\)-Mahler summable by Theorem 4.2, and so \(f_{\theta}\) cannot be \(\lambda\)-Mahler summable either. On the other hand, if \(\bar{f}_{\lambda,\theta}=0\) then \(f_{\theta}\) is \(\lambda\)-Mahler summable by Lemma 2.17. _Remark 5.3_.: The factor of \(p^{\lambda h_{\theta}}\) in the Definition 5.1 of \(\operatorname{dres}_{\lambda}(f,\infty)_{\theta}\) for \(\{0\}\neq\theta\in\mathbb{Z}/\mathcal{P}\) plays no role in deciding whether \(f_{\infty}\) is \(\lambda\)-Mahler summable, but this normalization allows us to define uniformly the \(\bar{f}_{\lambda,\theta}=\operatorname{dres}_{\lambda}(f,\infty)_{\theta}\cdot x ^{ip^{h_{\theta}}}\) as the \(\theta\)-component of the \(\bar{f}_{\lambda}\in\mathbb{K}(x)\) in the \(\lambda\)-_Mahler reduction_ (1.2). For every \(\{0\}\neq\theta\in\mathbb{Z}/\mathcal{P}\), we set \(h_{\theta}(f)\) to be the \(h_{\theta}\) defined in the course of the proof of Proposition 5.2 in case \(f_{\theta}\neq 0\), and in all other cases we set \(h_{\theta}(f):=0\). ### Twisted Mahler discrete residues at Mahler trees: the non-torsion case We now define the \(\lambda\)-Mahler discrete residues of \(f\in\mathbb{K}(x)\) at non-torsion Mahler trees \(\tau\in\mathcal{T}_{0}\) in terms of the partial fraction decomposition of the component \(f_{\tau}\in\mathbb{K}(x)_{\tau}\) in Definition 2.5, and show that it forms a complete obstruction to the \(\lambda\)-Mahler summability of \(f_{\tau}\). We begin by introducing some auxiliary notion, which already appeared in [1], but with an unfortunately different choice of notation. **Definition 5.4**.: Let \(\tau\in\mathcal{T}_{0}\), \(\gamma\in\tau\), and \(h\in\mathbb{Z}_{\geq 0}\). The _bouquet_ of height \(h\) rooted at \(\gamma\) is \[\beta_{h}(\gamma):=\left\{\alpha\in\tau\ |\ \alpha^{p^{n}}=\gamma\text{ for some }0\leq n\leq h\right\}.\] **Lemma 5.5** (cf. [1, Lem. 4.4]).: _Let \(\tau\in\mathcal{T}_{0}\) and \(S\subset\tau\) be a finite non-empty subset. Then there exists a unique \(\gamma\in\tau\) such that \(S\subseteq\beta_{h}(\gamma)\) with \(h\) as small as possible._ Proof.: This is an immediate consequence of the proof of [1, Lem. 4.4], whose focus and notation was rather different from the one adopted here, so let us complement it here with an alternative and more conceptual argument. As explained in [1, Remark 2.7 and Example 2.9], we can introduce a digraph structure on \(\tau\) in which we have a directed edge \(\alpha\to\xi\) whenever \(\alpha^{p}=\xi\), resulting in an infinite (directed) tree. The "meet" of the elements of \(S\) is the unique \(\gamma\in\tau\) such that \(S\subseteq\beta_{h}(\gamma)\) with \(h\) as small as possible. **Definition 5.6** (cf. [1, Def. 4.6]).: For \(f\in\mathbb{K}(x)\) and \(\tau\in\operatorname{supp}(f)\cap\mathcal{T}_{0}\), the _height_ of \(f\) at \(\tau\), denoted by \(\operatorname{ht}(f,\tau)\), is the smallest \(h\in\mathbb{Z}_{\geq 0}\) such that \(\operatorname{sing}(f,\tau)\subseteq\beta_{h}(\gamma)\) for the unique \(\gamma\in\tau\) identified in Lemma 5.5 with \(S=\operatorname{sing}(f,\tau)\subset\tau\). We write \(\beta(f,\tau):=\beta_{h}(\gamma)\), the _bouquet_ of \(f\) in \(\tau\). For \(\alpha\in\beta(f,\tau)\), the _height_ of \(\alpha\) in \(f\), denoted by \(\eta(\alpha|f)\), is the unique \(0\leq n\leq h\) such that \(\alpha^{p^{n}}=\gamma\). In [1, Def. 
4.10] we gave a recursive definition in the \(\lambda=0\) case of Mahler discrete residues for non-torsion \(\tau\in\mathcal{T}_{0}\). Here we provide a non-recursive definition for \(\lambda\in\mathbb{Z}\) arbitrary, which can be shown to agree with the one from [1] in the special case \(\lambda=0\). **Definition 5.7**.: For \(f\in\mathbb{K}(x)\), \(\lambda\in\mathbb{Z}\), and \(\tau\in\mathcal{T}_{0}\), the \(\lambda\)-_Mahler discrete residue_ of \(f\) at \(\tau\) of degree \(k\in\mathbb{N}\) is the vector \[\operatorname{dres}_{\lambda}(f,\tau,k)=\Big{(}\operatorname{dres}_{\lambda}(f,\tau,k)_{\alpha}\Big{)}_{\alpha\in\tau}\in\bigoplus_{\alpha\in\tau}\mathbb{K}\] defined as follows. We set \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) if either \(\tau\notin\operatorname{supp}(f)\) or \(k>\operatorname{ord}(f,\tau)\) as in Definition 2.9. For \(\tau\in\operatorname{supp}(f)\), let us write \[f_{\tau}=\sum_{k\in\mathbb{N}}\ \sum_{\alpha\in\tau}\frac{c_{k}(\alpha)}{(x-\alpha)^{k}}. \tag{5.1}\] We set \(\operatorname{dres}_{\lambda}(f,\tau,k)_{\alpha}=0\) for every \(k\in\mathbb{N}\) whenever \(\alpha\in\tau\) is such that either \(\alpha\notin\beta(f,\tau)\) or, for \(\alpha\in\beta(f,\tau)\), such that \(\eta(\alpha|f)\neq h\), where \(h:=\operatorname{ht}(f,\tau)\) and \(\beta(f,\tau)\) are as in Definition 5.6. Finally, for the remaining \(\alpha\in\beta(f,\tau)\) with \(\eta(\alpha|f)=h\) and \(1\leq k\leq\operatorname{ord}(f,\tau)=:m\), we define \[\operatorname{dres}_{\lambda}(f,\tau,k)_{\alpha}:=\sum_{s=k}^{m}\sum_{n=0}^{h}p^{\lambda n}V_{k,n}^{s}(\alpha)c_{s}(\alpha^{p^{n}}), \tag{5.2}\] where the Mahler coefficients \(V_{k,n}^{s}(\alpha)\) are as in Proposition 2.15. **Proposition 5.8**.: _For \(f\in\mathbb{K}(x)\), \(\lambda\in\mathbb{Z}\), and \(\tau\in\mathcal{T}_{0}\), the component \(f_{\tau}\) is \(\lambda\)-Mahler summable if and only if \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k\in\mathbb{N}\)._ Proof.: The statement is trivial for \(\tau\notin\operatorname{supp}(f)\Leftrightarrow f_{\tau}=0\). So let us suppose \(\tau\in\operatorname{supp}(f)\), and let \(h:=\operatorname{ht}(f,\tau)\), \(m:=\operatorname{ord}(f,\tau)\), and \(\eta(\alpha):=\eta(\alpha|f)\) for each \(\alpha\in\beta(f,\tau)\). Writing \(f_{\tau}\) as in (5.1), let us also write, for \(0\leq n\leq h\), \[f_{\tau}^{(n)}:=\sum_{k=1}^{m}\sum_{\begin{subarray}{c}\alpha\in\beta(f,\tau)\\ \eta(\alpha)=n\end{subarray}}\frac{c_{k}(\alpha)}{(x-\alpha)^{k}}\qquad\text{ so that }\qquad f_{\tau}=\sum_{n=0}^{h}f_{\tau}^{(n)}.\] By Lemma 2.17, for each \(0\leq n\leq h\) we have \[\sigma^{n}\left(f_{\tau}^{(h-n)}\right)=\sum_{k=1}^{m}\sum_{\begin{subarray}{c}\alpha\in\beta(f,\tau)\\ \eta(\alpha)=h\end{subarray}}\frac{\sum_{s=k}^{m}V_{k,n}^{s}(\alpha)c_{s}(\alpha^{p^{n}})}{(x-\alpha)^{k}},\] and therefore \[\Delta_{\lambda}^{(n)}\left(f_{\tau}^{(h-n)}\right)=-f_{\tau}^{(h-n)}+\sum_{k=1}^{m}\sum_{\begin{subarray}{c}\alpha\in\beta(f,\tau)\\ \eta(\alpha)=h\end{subarray}}\frac{p^{\lambda n}\sum_{s=k}^{m}V_{k,n}^{s}(\alpha)c_{s}(\alpha^{p^{n}})}{(x-\alpha)^{k}}.\] It follows from Definition 5.7 that \[\bar{f}_{\lambda,\tau}:=f_{\tau}+\sum_{n=0}^{h}\Delta_{\lambda}^{(n)}\left(f_{\tau}^{(h-n)}\right)=\sum_{k=1}^{m}\sum_{\alpha\in\tau}\frac{\operatorname{dres}_{\lambda}(f,\tau,k)_{\alpha}}{(x-\alpha)^{k}}.
\tag{5.3}\] By Lemma 2.17, \(\bar{f}_{\lambda,\tau}-f_{\tau}\) is \(\lambda\)-Mahler summable, and therefore \(f_{\tau}\) is \(\lambda\)-Mahler summable if and only if \(\bar{f}_{\lambda,\tau}\) is \(\lambda\)-Mahler summable. If \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(1\leq k\leq m\), then \(\bar{f}_{\lambda,\tau}=0\) and therefore \(f_{\tau}\) is \(\lambda\)-Mahler summable. On the other hand, if some \(\operatorname{dres}_{\lambda}(f,\tau,k)\neq\mathbf{0}\), then \(0\neq\bar{f}_{\lambda,\tau}\) has \(\operatorname{disp}(\bar{f}_{\lambda,\tau},\tau)=0\) (see Definition 2.11), whence by Theorem 4.2 \(\bar{f}_{\lambda,\tau}\) could not possibly be \(\lambda\)-Mahler summable, and therefore neither could \(f_{\tau}\). This concludes the proof that \(f_{\tau}\) is \(\lambda\)-Mahler summable if and only if \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k\in\mathbb{N}\). _Remark 5.9_.: For \(f\in\mathbb{K}(x)\) and \(\tau\in\operatorname{supp}(f)\cap\mathcal{T}_{0}\), the element \(\bar{f}_{\lambda,\tau}\) in (5.3) is the \(\tau\)-component of the \(\bar{f}_{\lambda}\in\mathbb{K}(x)\) in the \(\lambda\)-_Mahler reduction_ (1.2). ### Twisted Mahler discrete residues at Mahler trees: the torsion case We now define the \(\lambda\)-Mahler discrete residues of \(f\in\mathbb{K}(x)\) at torsion trees \(\tau\in\mathcal{T}_{+}\) (see Definition 2.7) in terms of the partial fraction decomposition of the component \(f_{\tau}\in\mathbb{K}(x)_{\tau}\) in Definition 2.5, and show that it forms a complete obstruction to the \(\lambda\)-Mahler summability of \(f_{\tau}\). The definitions and proofs in this case are more technical than in the non-torsion case, involving the cycle map \(\mathcal{D}_{\lambda,\tau}\) of Definition 3.2 and its \(\omega\)-section \(\mathcal{I}_{\lambda,\tau}^{(\omega)}\) from Definition 3.5, for a particular choice of constant \(\omega\in\mathbb{K}\) associated to \(f\), which we construct in Definition 5.11. We begin by recalling the following definition from [1], which is the torsion analogue of Definition 5.6. **Definition 5.10** (cf. [1, Def. 4.6]).: For \(\tau\in\mathcal{T}_{+}\) and \(\alpha\in\tau\), the _height_ of \(\alpha\), denoted by \(\eta(\alpha)\), is the smallest \(n\in\mathbb{Z}_{\geq 0}\) such that \(\alpha^{p^{n}}\in\mathcal{C}(\tau)\) (cf. Definition 2.7). For \(f\in\mathbb{K}(x)\) and \(\tau\in\operatorname{supp}(f)\cap\mathcal{T}_{+}\), the _height_ of \(f\) at \(\tau\) is \[\operatorname{ht}(f,\tau):=\max\{\eta(\alpha)\ |\ \alpha\in\operatorname{sing}(f,\tau)\},\] or equivalently, the smallest \(h\in\mathbb{Z}_{\geq 0}\) such that \(\alpha^{p^{h}}\in\mathcal{C}(\tau)\) for every pole \(\alpha\) of \(f\) in \(\tau\). The following definition will allow us to use the correct \(\omega\)-section \(\mathcal{I}^{(\omega)}_{\lambda,\tau}\) from Definition 3.5 in our construction of \(\lambda\)-Mahler discrete residues in the torsion case. **Definition 5.11**.: For \(f\in\mathbb{K}(x)\) and \(\tau\in\operatorname{supp}(f)\cap\mathcal{T}_{+}\), let us write \[f_{\tau}=\sum_{k\in\mathbb{N}}\sum_{\alpha\in\tau}\frac{c_{k}(\alpha)}{(x-\alpha)^{k}}.\] For \(\lambda\in\mathbb{Z}\), we define the _residual average_ \(\omega_{\lambda,\tau}(f)\in\mathbb{K}\) of \(f\) (relative to \(\lambda\) and \(\tau\)) as follows. If \(\lambda\leq 0\) or if \(h:=\operatorname{ht}(f,\tau)=0\) (cf. Definition 5.10), we simply set \(\omega_{\lambda,\tau}(f)=0\).
In case both \(\lambda,h\geq 1\), let \(\tau_{h}:=\{\alpha\in\tau\mid\eta(\alpha)=h\}\) be the set of elements of \(\tau\) of height \(h\). Let us write \(\mathbf{c}=(c_{k}(\gamma))\), for \(\gamma\) ranging over \(\mathcal{C}(\tau)\) only, and let \((d^{(0)}_{k}(\gamma))=\mathbf{d}^{(0)}:=\mathcal{I}^{(0)}_{\lambda,\tau}( \mathbf{c})\) as in Definition 3.5 and \((\tilde{c}_{k}(\gamma))=\tilde{\mathbf{c}}=\mathcal{D}_{\lambda,\tau}( \mathbf{d}^{(0)})\), as in Definition 3.5. Then we define \[\omega_{\lambda,\tau}(f):=\frac{1}{(p^{h}-p^{h-1})e}\sum_{\alpha \in\tau_{h}}\sum_{s\geq\lambda}\sum_{n=0}^{h-1}p^{\lambda n}\mathbb{V}_{ \lambda,n}^{s}\alpha^{-sp^{n}}c_{s}(\alpha^{p^{n}})\\ -\frac{p^{\lambda(h-1)}}{e}\sum_{\gamma\in\mathcal{C}(\tau)}\sum_ {s\geq\lambda}\mathbb{V}_{\lambda,h-1}^{s}\gamma^{-s}(\tilde{c}_{s}(\gamma)+d ^{(0)}_{s}(\gamma)), \tag{5.4}\] where the universal Mahler coefficients \(\mathbb{V}_{\lambda,n}^{s}\in\mathbb{Q}\) are defined as in Section 2.5. The significance of this definition of the residual average \(\omega_{\lambda,\tau}(f)\) and our choice of nomenclature is explained in the proof of Proposition 5.17 below (with the aid of Lemma 5.16). We are now ready to define the \(\lambda\)-Mahler discrete residues at torsion Mahler trees. In [1, Def. 4.16] we gave a _recursive_ definition of Mahler discrete residues for torsion \(\tau\in\mathcal{T}_{+}\) in the \(\lambda=0\) case. Here we provide a less recursive definition for \(\lambda\in\mathbb{Z}\) arbitrary, which can be shown to agree with the one from [1] in the special case \(\lambda=0\). This new definition is only _less_ recursive than that of [1] because of the intervention of the map \(\mathcal{I}^{(\omega)}_{\lambda,\tau}\), for which we have not found a closed form and whose definition is still essentially recursive. **Definition 5.12**.: For \(f\in\mathbb{K}(x)\), \(\lambda\in\mathbb{Z}\), and \(\tau\in\mathcal{T}\) with \(\tau\subset\mathbb{K}_{t}^{\times}\), the \(\lambda\)-_Mahler discrete residue_ of \(f\) at \(\tau\) of degree \(k\in\mathbb{N}\) is the vector \[\operatorname{dres}_{\lambda}(f,\tau,k)=\Big{(}\operatorname{dres}_{\lambda}( f,\tau,k)_{\alpha}\Big{)}_{\alpha\in\tau}\in\bigoplus_{\alpha\in\tau} \mathbb{K}\] defined as follows. We set \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) if either \(\tau\notin\operatorname{supp}(f)\) or \(k>\operatorname{ord}(f,\tau)\) as in Definition 2.9. For \(\tau\in\operatorname{supp}(f)\), let us write \[f_{\tau}=\sum_{k\in\mathbb{N}}\sum_{\alpha\in\tau}\frac{c_{k}(\alpha)}{(x- \alpha)^{k}}. \tag{5.5}\] We set \(\operatorname{dres}_{\lambda}(f,\tau,k)_{\alpha}=0\) for every \(k\in\mathbb{N}-\{\lambda\}\) whenever \(\alpha\in\tau\) is such that \(\eta(\alpha)\neq h\), where \(h:=\operatorname{ht}(f,\tau)\) and \(\eta(\alpha)\) are as in Definition 5.10. In case \(\lambda\geq 1\), we set \(\operatorname{dres}_{\lambda}(f,\tau,\lambda)_{\alpha}=0\) also whenever \(\eta(\alpha)\notin\{0,h\}\). In case \(h=0\), so that \(\operatorname{sing}(f,\tau)\subseteq\mathcal{C}(\tau)\), we simply set \[\operatorname{dres}_{\lambda}(f,\tau,k)_{\gamma}:=c_{k}(\gamma)\] for every \(1\leq k\leq\operatorname{ord}(f,\tau)\) and \(\gamma\in\mathcal{C}(\tau)\). In case \(h\geq 1\), let us write \(\mathbf{c}=(c_{k}(\gamma))\) for \(\gamma\) ranging over \(\mathcal{C}(\tau)\) only, and let \((d_{k}(\gamma))=\mathbf{d}:=\mathcal{I}_{\lambda,\tau}^{(\omega)}(\mathbf{c})\) as in Definition 3.5, where \(\omega:=\omega_{\lambda,\tau}(f)\) (cf. 
Definition 5.11), and \((\tilde{c}_{k}(\gamma))=\tilde{\mathbf{c}}:=\mathcal{D}_{\lambda,\tau}( \mathbf{d})\) as in Definition 3.2. For \(\alpha\in\tau\) such that \(\eta(\alpha)=h\) and for \(1\leq k\leq\operatorname{ord}(f,\tau)=:m\), we define \[\operatorname{dres}_{\lambda}(f,\tau,k)_{\alpha}:=\sum_{s=k}^{m} \sum_{n=0}^{h-1}p^{\lambda n}V_{k,n}^{s}(\alpha)c_{s}(\alpha^{p^{n}})\\ -p^{\lambda(h-1)}\sum_{s=k}^{m}\mathbb{V}_{k,h-1}^{s}\alpha^{k- sp^{h+e-1}}\left(\tilde{c}_{s}\left(\alpha^{p^{h+e-1}}\right)+d_{s}\left( \alpha^{p^{h+e-1}}\right)\right). \tag{5.6}\] In case \(\lambda\geq 1\), for \(\gamma\in\mathcal{C}(\tau)\) we set \[\operatorname{dres}_{\lambda}(f,\tau,\lambda)_{\gamma}:=c_{\lambda}(\gamma)- \tilde{c}_{\lambda}(\gamma)=\frac{\gamma^{\lambda}}{e}\sum_{j=1}^{e}\gamma^{- \lambda p^{j}}\left(c_{\lambda}(\gamma^{p^{j}})-p^{\lambda}\sum_{s\geq\lambda+ 1}V_{\lambda,1}^{s}(\gamma^{p^{j}})d_{s}(\gamma^{p^{j+1}})\right). \tag{5.7}\] _Remark 5.13_.: The Definition 5.12 can be expressed equivalently in ways that are easier to compute, but which require a lot of hedging. We cannot improve on the definition in case \(h=0\); so let us address the case \(h\geq 1\). The different ingredients used in Definition 5.12 are best computed in the following order. In every case, one should first compute the vector \(\mathbf{d}^{(0)}:=\mathcal{I}_{\lambda,\tau}^{(0)}(\mathbf{c})\) of Proposition 3.6. Every instance of \(\tilde{c}_{s}\) in (5.4) and in (5.6) can (and should) be replaced with \(c_{s}\), with the single exception of \(\tilde{c}_{\lambda}\) (if it happens to occur), which should be rewritten in terms of the \(c_{s}\) and \(d_{s}^{(0)}\) using (3.6). There is no need to find \(\tilde{\mathbf{c}}\) by applying \(\mathcal{D}_{\lambda,\tau}\) to anything. Having made these replacements, and only then, one should then compute the residual average \(\omega\) from Definition 5.11. If this \(\omega\) happens to be \(0\) then we already have all the required ingredients to compute our discrete residues. Only in case \(\omega\neq 0\), we then proceed to compute the vector \(\mathbf{w}^{(\lambda)}\) of Lemma 3.4, and by Definition 3.5 we can replace the \(d_{s}\) in (5.6) with \(d_{s}^{(0)}+\omega\cdot w_{s}^{(\lambda)}\), all of which have already been computed, and now we are once again in possession of all the required ingredients. We next present several preparatory Lemmas that will aid us in streamlining our proof of Proposition 5.17 below that the \(\lambda\)-Mahler discrete residues just defined comprise a complete obstruction to the \(\lambda\)-Mahler summability of \(f_{\tau}\) for \(\tau\in\mathcal{T}_{+}\). We hope that the reader who, like us, finds the above Definition 5.12 painfully complicated, especially in comparison with the relatively simpler Definition 5.7 in the non-torsion case, can begin to glimpse in the statements of the following preliminary results the reasons for the emergence of the additional ingredients in Definition 5.12 that are absent from Definition 5.7. This is why we have chosen to present them first, and postpone their proofs until after their usefulness has become apparent in the proof of Proposition 5.17. **Lemma 5.14**.: _Let \(f\in\mathbb{K}(x)\), \(\lambda\in\mathbb{Z}\), and \(\tau\in\operatorname{supp}(f,\tau)\cap\mathcal{T}_{+}\). 
If \(\operatorname{ht}(f,\tau)=0\) then \(f_{\tau}\) is not \(\lambda\)-Mahler summable._ **Lemma 5.15**.: _Let \(\lambda\in\mathbb{Z}\) and \(\tau\in\mathcal{T}_{+}\), and set \(e:=|\mathcal{C}(\tau)|\) as in Definition 2.7. Let \(f\in\mathbb{K}(x)\), and let us write the cyclic component_ \[\mathcal{C}(f_{\tau})=\sum_{k\in\mathbb{N}}\sum_{\gamma\in\mathcal{C}(\tau)} \frac{c_{k}(\gamma)}{(x-\gamma)^{k}},\] _as in Definition 3.1, and let us write \(\mathbf{c}=(c_{k}(\gamma))\in\mathcal{S}^{\mathcal{C}(\tau)}\). Let \(\omega\in\mathbb{K}\) be arbitrary, and let us write \(\mathbf{d}=(d_{k}(\gamma))=\mathcal{I}_{\lambda,\tau}^{(\omega)}(\mathbf{c})\) as in Definition 3.5 and \(\tilde{\mathbf{c}}=\mathcal{D}_{\lambda,\tau}(\mathbf{d})\) as in Definition 3.2. Set_ \[g_{0}:=\sum_{k\in\mathbb{N}}\sum_{\gamma\in\mathcal{C}(\tau)}\frac{d_{k}( \gamma)}{(x-\gamma)^{k}}\qquad\text{and}\qquad g_{1}:=-\sum_{k\in\mathbb{N}} \sum_{\gamma\in\mathcal{C}(\tau)}\sum_{i=1}^{p-1}\frac{\zeta_{p}^{ki}(\tilde{ c}_{k}(\gamma)+d_{k}(\gamma))}{(x-\zeta_{p}^{i}\gamma)^{k}}. \tag{5.8}\] _Then_ \[\mathcal{C}(f_{\tau})-\Delta_{\lambda}(g_{0})=\begin{cases}g_{1}&\text{if } \lambda\leq 0;\\ g_{1}+\sum_{\gamma\in\mathcal{C}(\tau)}\frac{c_{\lambda}(\gamma)-\tilde{c}( \gamma)}{(x-\gamma)^{\lambda}}&\text{if }\lambda\geq 1.\end{cases} \tag{5.9}\] _Moreover, for any \(h\geq 1\), writing \(\tau_{h}:=\{\alpha\in\tau\ |\ \eta(\alpha)=h\}\), we have_ \[\sigma^{h-1}(g_{1})=-\sum_{k\in\mathbb{N}}\sum_{\alpha\in\tau_{h}}\frac{\sum_ {s\geq k}\mathbb{V}_{k,h-1}^{s}\alpha^{k-sp^{h+e-1}}\left(\tilde{c}_{s}\left( \alpha^{p^{h+e-1}}\right)+d_{s}\left(\alpha^{p^{h+e-1}}\right)\right)}{(x- \alpha)^{k}}. \tag{5.10}\] **Lemma 5.16**.: _Let \(\lambda\geq 1\), \(h\geq 1\), \(\bar{f}_{\tau}\in\mathbb{K}(x)_{\tau}\), and \(\tau\in\operatorname{supp}(\bar{f})\cap\mathcal{T}_{+}\) such that \(\operatorname{ord}(\bar{f},\tau)=\lambda\) and \(\operatorname{sing}(f,\tau)\subseteq\tau_{h}=\{\alpha\in\tau\ |\ \eta(\alpha)=h\}\), so that we can write_ \[\bar{f}_{\tau}=\sum_{k=1}^{\lambda}\sum_{\alpha\in\tau_{h}}\frac{\bar{c}_{k}( \alpha)}{(x-\alpha)^{k}}.\] _If \(\bar{f}_{\tau}\) is \(\lambda\)-Mahler summable then all the elements \(\alpha^{-\lambda}\bar{c}_{\lambda}(\alpha)\) are equal to the constant_ \[\bar{\omega}=\frac{1}{|\tau_{h}|}\sum_{\alpha\in\tau_{h}}\alpha^{-\lambda} \bar{c}_{\lambda}(\alpha),\] _which is their arithmetic average. Letting \(e:=|\mathcal{C}(\tau)|\), we have \(|\tau_{h}|=(p^{h}-p^{h-1})e\)._ **Proposition 5.17**.: _For \(f\in\mathbb{K}(x)\), \(\lambda\in\mathbb{Z}\), and \(\tau\in\mathcal{T}_{+}\), the component \(f_{\tau}\) is \(\lambda\)-Mahler summable if and only if \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k\in\mathbb{N}\)._ Proof.: The statement is trivial for \(\tau\notin\operatorname{supp}(f)\Leftrightarrow f_{\tau}=0\). If \(\operatorname{ht}(f,\tau)=0\) then \(0\neq f_{\tau}\) cannot be \(\lambda\)-Mahler summable by Lemma 5.14, whereas in this case we defined \(\operatorname{dres}(f,\tau,k)_{\gamma}=c_{k}(\gamma)\) in Definition 5.12, and we obtain our conclusion vacuously in this case. From now on we assume \(\tau\in\operatorname{supp}(f)\), and let \(h:=\operatorname{ht}(f,\tau)\geq 1\), \(m:=\operatorname{ord}(f,\tau)\), and \(\omega:=\omega_{\lambda,\tau}(f)\). 
Writing \(f_{\tau}\) as in (5.5), let \(\tau_{n}:=\{\alpha\in\tau\ |\ \eta(\alpha)=n\}\) for \(n\in\mathbb{Z}_{\geq 0}\) and let us also write \[f_{\tau}^{(n)}:=\sum_{k=1}^{m}\sum_{\alpha\in\tau_{n}}\frac{c_{k}(\alpha)}{(x-\alpha)^{k}}\qquad\text{so that}\qquad f_{\tau}=\sum_{n=0}^{h}f_{\tau}^{(n)}.\] The same computation as in the proof of Proposition 5.8 yields \[\tilde{f}_{\lambda,\tau}:=f_{\tau}+\sum_{n=0}^{h-1}\Delta_{\lambda}^{(n)}(f_{\tau}^{(h-n)})=\sum_{k=1}^{m}\sum_{\alpha\in\tau_{h}}\frac{\sum_{s\geq k}\sum_{n=0}^{h-1}p^{\lambda n}V_{k,n}^{s}(\alpha)c_{s}(\alpha^{p^{n}})}{(x-\alpha)^{k}}+\sum_{k=1}^{m}\sum_{\gamma\in\mathcal{C}(\tau)}\frac{c_{k}(\gamma)}{(x-\gamma)^{k}}. \tag{5.11}\] Let us now write, as in Definition 5.12, \(\mathbf{c}=(c_{k}(\gamma))\) for \(\gamma\) ranging over \(\mathcal{C}(\tau)=\tau_{0}\) only, \((d_{k}(\gamma))=\mathbf{d}:=\mathcal{I}_{\lambda,\tau}^{(\omega)}(\mathbf{c})\), and \((\tilde{c}_{k}(\gamma))=\tilde{\mathbf{c}}:=\mathcal{D}_{\lambda,\tau}(\mathbf{d})\). Writing \(g_{0}\) and \(g_{1}\) as in (5.8), it follows from Lemma 5.15 and Definition 5.12 that \[\bar{f}_{\lambda,\tau}:=\tilde{f}_{\lambda,\tau}-\Delta_{\lambda}(g_{0})+\Delta_{\lambda}^{(h-1)}(g_{1})=\sum_{k=1}^{m}\sum_{\alpha\in\tau}\frac{\operatorname{dres}_{\lambda}(f,\tau,k)_{\alpha}}{(x-\alpha)^{k}}. \tag{5.12}\] By a twofold application of Lemma 2.17, to (5.11) and to (5.12), we find that \(f_{\tau}\) is \(\lambda\)-Mahler summable if and only if \(\bar{f}_{\lambda,\tau}\) is \(\lambda\)-Mahler summable. On the other hand, we see from (5.12) that \(\bar{f}_{\lambda,\tau}=0\) if and only if \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k\in\mathbb{N}\). Therefore we immediately conclude that if \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k\in\mathbb{N}\) then \(f_{\tau}\) is \(\lambda\)-Mahler summable. Moreover, in case \(\lambda\leq 0\), if \(f_{\tau}\) is \(\lambda\)-Mahler summable, so that \(\bar{f}_{\lambda,\tau}\) is also \(\lambda\)-Mahler summable, then we must have \(\bar{f}_{\lambda,\tau}=0\), for otherwise we would have \(\operatorname{disp}(\bar{f}_{\lambda,\tau},\tau)=0\), contradicting Theorem 4.2(2). This concludes the proof of the Proposition in case \(\lambda\leq 0\). It remains to prove the converse in the case where \(\lambda\geq 1\): assuming \(f_{\tau}\) is \(\lambda\)-Mahler summable, we must have \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k\in\mathbb{N}\). By Proposition 3.6, we must have \(\mathbf{c}=\tilde{\mathbf{c}}\), and therefore \(\operatorname{dres}_{\lambda}(f,\tau,\lambda)_{\gamma}=c_{\lambda}(\gamma)-\tilde{c}_{\lambda}(\gamma)=0\) for every \(\gamma\in\mathcal{C}(\tau)\), whence \(\operatorname{sing}(\bar{f}_{\lambda,\tau},\tau)\subseteq\tau_{h}\) by Definition 5.12 of \(\operatorname{dres}_{\lambda}(f,\tau,k)\). Moreover, if we had \(\bar{f}_{\lambda,\tau}\neq 0\), contrary to our contention, then we would have \(\operatorname{disp}(\bar{f}_{\lambda,\tau},\tau)=0\), and by Theorem 4.2(3) this can only happen in case \(\operatorname{ord}(\bar{f}_{\lambda,\tau},\tau)=\lambda\). So we already conclude that \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k>\lambda\) if \(f_{\tau}\) is \(\lambda\)-Mahler summable.
If we can further show that \(\operatorname{dres}_{\lambda}(f,\tau,\lambda)=\mathbf{0}\) also, then this will force \(\operatorname{ord}(\bar{f}_{\lambda,\tau},\tau)\neq\lambda\) and we will be able to conclude that actually \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k\in\mathbb{N}\), as we contend, by another application of Theorem 4.2. Thus it remains to show that if \(f_{\tau}\) is \(\lambda\)-Mahler summable then \(\operatorname{dres}_{\lambda}(f,\tau,\lambda)=\mathbf{0}\), which task will occupy us for the rest of the proof. We already know that \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k>\lambda\) and \(\operatorname{dres}_{\lambda}(f,\tau,\lambda)_{\gamma}=0\) for every \(\gamma\in\mathcal{C}(\tau)\), and therefore \(\bar{f}_{\lambda,\tau}\) satisfies the hypotheses of Lemma 5.16 by (5.12) and the Definition 5.12. So let us write \(\bar{c}_{k}(\alpha):=\operatorname{dres}_{\lambda}(f,\tau,k)_{\alpha}\) as in Lemma 5.16, so that \[\bar{f}_{\lambda,\tau}=\sum_{k=1}^{\lambda}\sum_{\alpha\in\tau_{h}}\frac{\bar {c}_{k}(\alpha)}{(x-\alpha)^{k}},\] and compute the arithmetic average \(\bar{\omega}\) of the elements \(\alpha^{-\lambda}\bar{c}_{\lambda}(\alpha)\) for \(\alpha\) ranging over \(\tau_{h}\), which must be equal to \(\alpha^{-\lambda}\bar{c}_{\lambda}(\alpha)\) for each \(\alpha\in\tau_{h}\) by Lemma 5.16. Firstly, we see that \[\frac{1}{|\tau_{h}|}\sum_{\alpha\in\tau_{h}}\alpha^{-\lambda}\left(\sum_{s\geq \lambda}\sum_{n=0}^{h-1}p^{\lambda n}V_{\lambda,n}^{s}(\alpha)c_{s}(\alpha^{p^{n }})\right)=\frac{1}{(p^{h}-p^{h-1})e}\sum_{\alpha\in\tau_{h}}\sum_{s\geq \lambda}\sum_{n=0}^{h-1}p^{\lambda n}\mathbb{V}_{\lambda,n}^{s}\alpha^{-sp^{n }}c_{s}(\alpha^{p^{n}}),\] since \(V^{s}_{\lambda,n}(\alpha)=\mathbb{V}^{s}_{\lambda,n}\cdot\alpha^{\lambda-sp^{n}}\) by Lemma 2.13. Secondly, we find that in the remaining portion of the average of \(\alpha^{-\lambda}\bar{c}_{\lambda}(\alpha)=\alpha^{-\lambda}\mathrm{dres}_{ \lambda}(f,\tau,\lambda)_{\alpha}\) for \(\alpha\) ranging over \(\tau_{h}\), \[\frac{1}{|\tau_{h}|}\sum_{\alpha\in\tau_{h}}\alpha^{-\lambda} \left(-p^{\lambda(h-1)}\sum_{s\geq\lambda}\mathbb{V}^{s}_{\lambda,h-1}\alpha^{ \lambda-sp^{h+e-1}}\left(\tilde{c}_{s}\left(\alpha^{p^{h+e-1}}\right)+d_{s} \left(\alpha^{p^{h+e-1}}\right)\right)\right)\\ =\frac{-p^{\lambda(h-1)}}{(p^{h}-p^{h-1})e}\sum_{\alpha\in\tau_{h }}\sum_{s\geq\lambda}\mathbb{V}^{s}_{\lambda,h-1}\left(\left(\alpha^{p^{h}} \right)^{p^{e-1}}\right)^{-s}\left(\tilde{c}_{s}\left(\left(\alpha^{p^{h}} \right)^{p^{e-1}}\right)+d_{s}\left(\left(\alpha^{p^{h}}\right)^{p^{e-1}} \right)\right), \tag{5.13}\] the summands depend only on \(\alpha^{p^{h}}=\gamma\in\mathcal{C}(\tau)\). For each \(\gamma\in\mathcal{C}(\tau)\), the set \(\{\alpha\in\tau_{h}\mid\alpha^{p^{h}}=\gamma\}\) has \(p^{h}-p^{h-1}\) elements: there are \((p-1)\) distinct \(p^{\mathrm{th}}\)-roots of \(\gamma\) that do not belong to \(\mathcal{C}(\tau)\), and then there are \(p^{h-1}\) distinct \((p^{h-1})^{\mathrm{th}}\) roots of each of those elements. 
Therefore the expression in (5.13) is equal to the simpler \[-\frac{p^{\lambda(h-1)}}{e}\sum_{\gamma\in\mathcal{C}(\tau)}\sum_{s\geq\lambda }\mathbb{V}^{s}_{\lambda,h-1}\gamma^{-s}(\tilde{c}_{s}(\gamma)+d_{s}(\gamma)),\] whence the average \[\bar{\omega}:=\frac{1}{|\tau_{h}|}\sum_{\alpha\in\tau_{h}}\alpha^ {-\lambda}\bar{c}_{\lambda}(\alpha)=\frac{1}{(p^{h}-p^{h-1})e}\sum_{\alpha\in \tau_{h}}\sum_{n=0}^{h-1}\sum_{s\geq\lambda}p^{\lambda n}\mathbb{V}^{s}_{ \lambda,n}\alpha^{-sp^{n}}c_{s}(\alpha^{p^{n}})\\ -\frac{p^{\lambda(h-1)}}{e}\sum_{\gamma\in\mathcal{C}(\tau)}\sum _{s\geq\lambda}\mathbb{V}^{s}_{\lambda,h-1}\gamma^{-s}(\tilde{c}_{s}(\gamma)+ d_{s}(\gamma))). \tag{5.14}\] Note that this is not necessarily the same as the similar expression for the residual average \(\omega_{\lambda,\tau}(f)\) from Definition 5.11, which was defined with respect to \((d^{(0)}_{k}(\gamma))=\mathbf{d}^{(0)}:=\mathcal{I}^{(0)}_{\lambda,\tau}( \mathbf{c})\) as \[\omega_{\lambda,\tau}(f)=\frac{1}{(p^{h}-p^{h-1})e}\sum_{\alpha\in \tau_{h}}\sum_{s\geq\lambda}\sum_{n=0}^{h-1}p^{\lambda n}\mathbb{V}^{s}_{ \lambda,n}\alpha^{-sp^{n}}c_{s}(\alpha^{p^{n}})\\ -\frac{p^{\lambda(h-1)}}{e}\sum_{\gamma\in\mathcal{C}(\tau)}\sum _{s\geq\lambda}\mathbb{V}^{s}_{\lambda,h-1}\gamma^{-s}(\tilde{c}_{s}(\gamma)+ d^{(0)}_{s}(\gamma)).\] And yet, \(d_{s}(\gamma)=d^{(0)}_{s}(\gamma)\) for every \(s>\lambda\) and \(\gamma\in\mathcal{C}(\tau)\) by Proposition 3.6 and \[d_{\lambda}(\gamma)=\omega_{\lambda,\tau}(f)\cdot\gamma^{\lambda}+d^{(0)}_{ \lambda}(\gamma)\] for each \(\gamma\in\mathcal{C}(\tau)\) by the Definition 3.5 of \(\mathcal{I}^{(0)}_{\lambda,\tau}\) and of \(\mathcal{I}^{(\omega)}_{\lambda,\tau}\) with \(\omega=\omega_{\lambda,\tau}(f)\). By Corollary 2.16, \(\mathbb{V}^{\lambda}_{\lambda,h-1}=p^{-\lambda(h-1)}\), and therefore we find from (5.14) that \[\bar{\omega}=\frac{1}{(p^{h}-p^{h-1})e}\sum_{\alpha\in\tau_{h}} \sum_{n=0}^{h-1}\sum_{s\geq\lambda}p^{\lambda n}\mathbb{V}^{s}_{\lambda,n} \alpha^{-sp^{n}}c_{s}(\alpha^{p^{n}})\\ -\frac{p^{\lambda(h-1)}}{e}\sum_{\gamma\in\mathcal{C}(\tau)}\sum _{s\geq\lambda+1}\mathbb{V}^{s}_{\lambda,h-1}\gamma^{-s}(\tilde{c}_{s}(\gamma) +d^{(0)}_{s}(\gamma)))-\frac{p^{\lambda(h-1)}}{e}\sum_{\gamma\in\mathcal{C}( \tau)}\mathbb{V}^{\lambda}_{\lambda,h-1}\gamma^{-\lambda}(\tilde{c}_{\lambda} (\gamma)+\omega\gamma^{\lambda}+d^{(0)}_{\lambda}(\gamma))\\ =\omega-\frac{p^{\lambda(h-1)}}{e}\sum_{\gamma\in\mathcal{C}(\tau )}p^{-\lambda(h-1)}\gamma^{-\lambda}\gamma^{\lambda}\omega=\omega-\omega=0.\] Since we must have \(\bar{c}_{\lambda}(\alpha)=\operatorname{dres}_{\lambda}(f,\tau,\lambda)_{ \alpha}=\alpha^{\lambda}\bar{\omega}\) in (5.14) for each \(\alpha\in\tau_{h}\) by Lemma 5.16, it follows that \(\operatorname{dres}_{\lambda}(f,\tau,\lambda)=\mathbf{0}\), concluding the proof of Proposition 5.17. _Remark 5.18_.: For \(f\in\mathbb{K}(x)\) and \(\tau\in\operatorname{supp}(f)\cap\mathcal{T}_{+}\), the element \(\bar{f}_{\lambda,\tau}\) in (5.12) is the \(\tau\)-component of the \(\bar{f}_{\lambda}\in\mathbb{K}(x)\) in the \(\lambda\)-_Mahler reduction_ (1.2). We conclude this section by providing the proofs of the preliminary Lemmas that we used in the proof of Proposition 5.17. Proof of Lemma 5.14.: It suffices to show that for any \(g\in\mathbb{K}(x)\) such that \(g_{\tau}\neq 0\) we have \(\operatorname{ht}(\Delta_{\lambda}(g),\tau)\geq 1\). 
So let us write \(m:=\operatorname{ord}(g,\tau)\), \(h:=\operatorname{ht}(g,\tau)\), \(\tau_{n}:=\{\alpha\in\tau\ |\ \eta(\alpha)=n\}\) for \(n\in\mathbb{Z}_{\geq 0}\), and \[0\neq g_{\tau}=\sum_{k=1}^{m}\sum_{n=0}^{h}\sum_{\alpha\in\tau_{n}}\frac{d_{k} (\alpha)}{(x-\alpha)^{k}}.\] Then \[\Delta_{\lambda}(g)=\sum_{\alpha\in\tau_{h+1}}\frac{p^{\lambda}V^{m}_{m,1}( \alpha)d_{m}(\alpha^{p})}{(x-\alpha)^{m}}+\text{(lower-order or lower- height terms)},\] and since \(p^{\lambda}V^{m}_{m,1}(\alpha)=p^{\lambda-m}\alpha^{m-pm}\) by Corollary 2.16 and at least one \(d_{m}(\alpha^{p})\neq 0\) for some \(\alpha\in\tau_{h+1}\) by assumption, we conclude that \(\Delta_{\lambda}(g)\) has at least one pole in \(\tau_{h+1}\) and therefore \(\operatorname{ht}(\Delta_{\lambda}(g),\tau)=h+1\geq 1\), as claimed. Proof of Lemma 5.15.: It follows from (2.4) and Lemma 3.3 that \[\Delta_{\lambda}(g_{0})=\sum_{k\in\mathbb{N}}\sum_{\gamma\in\mathcal{C}(\tau)} \frac{\tilde{c}_{k}(\gamma)}{(x-\gamma)^{k}}+\sum_{k\in\mathbb{N}}\sum_{ \gamma\in\mathcal{C}(\tau)}\sum_{i=1}^{p-1}\frac{p^{\lambda}\sum_{s\geq k}V^{s }_{k,1}(\zeta^{i}_{p}\gamma)d_{s}(\gamma^{p})}{(x-\zeta^{i}_{p}\gamma)^{k}}.\] To see that \[p^{\lambda}\sum_{s\geq k}V^{s}_{k}(\zeta^{i}_{p}\gamma)d_{s}(\gamma^{p})= \zeta^{ki}_{p}(\tilde{c}_{k}(\gamma)+d_{k}(\gamma)),\] note that by Lemma 2.13 \[V^{s}_{k,1}(\zeta^{i}_{p}\gamma)=(\zeta^{i}_{p}\gamma)^{k-sp}\cdot\mathbb{V}^{ s}_{k,1}=\zeta^{ki}_{p}V^{s}_{k,1}(\gamma)\] for every \(s\geq k\) simultaneously, and \[p^{\lambda}\sum_{s\geq k}V^{s}_{k,1}(\gamma)d_{s}(\gamma^{p})=\tilde{c}_{k}( \gamma)+d_{k}(\gamma)\] by the definition of \(\tilde{\mathbf{c}}=\mathcal{D}_{\lambda,\tau}(\mathbf{d})\) and that of the map \(\mathcal{D}_{\lambda,\tau}\) in Definition 3.2. For \(\gamma\in\mathcal{C}(\tau)\) and \(1\leq i\leq p-1\), let \(S(\gamma,i):=\left\{\alpha\in\tau\ \big{|}\ \alpha^{p^{h-1}}=\zeta_{p}^{i}\gamma\right\}\). Then \(\tau_{h}\) is the disjoint union of the sets \(S(\gamma,i)\), and it follows from Lemma 2.17 that, for each \(\gamma\in\mathcal{C}(\tau)\) and \(1\leq i\leq p-1\), \[\sigma^{h-1}\left(\sum_{k\in\mathbb{N}}\frac{\zeta_{p}^{ik}(\tilde{c}_{k}( \gamma)+d_{k}(\gamma))}{(x-\zeta_{p}^{i}\gamma)^{k}}\right)=\sum_{k\in\mathbb{ N}}\sum_{\alpha\in S(\gamma,i)}\frac{\sum_{s\geq k}V^{s}_{k,h-1}(\alpha)\zeta_{p}^{is} (\tilde{c}_{s}(\gamma)+d_{s}(\gamma))}{(x-\alpha)^{k}}. 
\tag{5.15}\] For each \(\alpha\in S(\gamma,i)\Leftrightarrow\alpha^{p^{h-1}}=\zeta_{p}^{i}\gamma\), we compute \[\alpha^{p^{h+e-1}}=\left(\alpha^{p^{h-1}}\right)^{p^{e}}=\left(\zeta_{p}^{i}\gamma\right)^{p^{e}}=\gamma\qquad\text{and}\qquad\zeta_{p}^{is}=\left(\alpha^{p^{h-1}}\gamma^{-1}\right)^{s}=\alpha^{sp^{h-1}(1-p^{e})},\] and therefore we can rewrite each summand \[V^{s}_{k,h-1}(\alpha)\zeta_{p}^{is}(\tilde{c}_{s}(\gamma)+d_{s}(\gamma))=V^{s}_{k,h-1}(\alpha)\alpha^{sp^{h-1}(1-p^{e})}\left(\tilde{c}_{s}\left(\alpha^{p^{h+e-1}}\right)+d_{s}\left(\alpha^{p^{h+e-1}}\right)\right).\] By Lemma 2.13, \(V^{s}_{k,h-1}(\alpha)=\mathbb{V}^{s}_{k,h-1}\cdot\alpha^{k-sp^{h-1}}\), and therefore \[V^{s}_{k,h-1}(\alpha)\alpha^{sp^{h-1}(1-p^{e})}=\mathbb{V}^{s}_{k,h-1}\cdot\alpha^{k-sp^{h-1}}\cdot\alpha^{sp^{h-1}(1-p^{e})}=\mathbb{V}^{s}_{k,h-1}\alpha^{k-sp^{h+e-1}}.\] Hence (5.15) is equal to \[\sum_{k\in\mathbb{N}}\sum_{\alpha\in S(\gamma,i)}\frac{\sum_{s\geq k}\mathbb{V}^{s}_{k,h-1}\alpha^{k-sp^{h+e-1}}\left(\tilde{c}_{s}\left(\alpha^{p^{h+e-1}}\right)+d_{s}\left(\alpha^{p^{h+e-1}}\right)\right)}{(x-\alpha)^{k}},\] and our result follows by summing over \(\gamma\in\mathcal{C}(\tau)\) and \(1\leq i\leq p-1\). Proof of Lemma 5.16.: First of all, \(|\tau_{h}|=(p^{h}-p^{h-1})e\) because there are \(e\) elements in \(\mathcal{C}(\tau)\), each of which has \((p-1)\) distinct \(p^{\text{th}}\) roots (of height \(1\)) that do not belong to \(\mathcal{C}(\tau)\), and each of these latter elements has \(p^{h-1}\) distinct \((p^{h-1})^{\text{th}}\) roots; it follows from Definition 5.10 that \(\alpha\in\tau\) has height \(\eta(\alpha)=h\) if and only if \(\alpha\) is a \((p^{h-1})^{\text{th}}\) root of an element of height \(1\). Moreover, the elements \(\alpha^{-\lambda}\bar{c}_{\lambda}(\alpha)\) are all equal to one another if and only if they are all equal to their arithmetic average. So it remains to show that \(\alpha^{-\lambda}\bar{c}_{\lambda}(\alpha)\) is independent of \(\alpha\). Now let \(g_{\tau}\in\mathbb{K}(x)_{\tau}\) be such that \(\bar{f}_{\tau}=\Delta_{\lambda}(g_{\tau})\). By Lemma 2.10(7), \(\operatorname{ord}(g,\tau)=\operatorname{ord}(\bar{f},\tau)=\lambda\), so we can write \[g_{\tau}=\sum_{k=1}^{\lambda}\sum_{n=0}^{h-1}\sum_{\alpha\in\tau_{n}}\frac{d_{k}(\alpha)}{(x-\alpha)^{k}},\] because if \(g\) had a pole in \(\tau_{n}\) for some \(n\geq h\) then \(\Delta_{\lambda}(g_{\tau})=\bar{f}_{\tau}\) would have a pole in \(\tau_{n+1}\), contradicting our assumptions. Let \(\mathbf{d}=(d_{k}(\gamma))\) for \(\gamma\) ranging over \(\mathcal{C}(\tau)\) only. Since \(\Delta_{\lambda}(g_{\tau})=\bar{f}_{\tau}\) has no poles in \(\mathcal{C}(\tau)\), we must have \(\mathbf{d}\in\ker(\mathcal{D}_{\lambda,\tau})\) by Lemma 3.3. In particular, for each \(\gamma\in\mathcal{C}(\tau)\) we must have \[0=c_{\lambda}(\gamma)=(\mathcal{D}_{\lambda,\tau}(\mathbf{d}))_{\lambda,\gamma}=-d_{\lambda}(\gamma)+p^{\lambda}\sum_{s\geq\lambda}V_{\lambda,1}^{s}(\gamma)d_{s}(\gamma^{p})=\gamma^{\lambda-p\lambda}d_{\lambda}(\gamma^{p})-d_{\lambda}(\gamma),\] since \(d_{s}(\gamma)=0\) for every \(s>\lambda\) and \(\gamma\in\mathcal{C}(\tau)\) and \(V_{\lambda,1}^{\lambda}(\gamma)=p^{-\lambda}\gamma^{\lambda-p\lambda}\) by Corollary 2.16, and therefore \(\gamma^{-\lambda}d_{\lambda}(\gamma)=\bar{\omega}\) is a constant that does not depend on \(\gamma\in\mathcal{C}(\tau)\).
This is the base case \(n=0\) of an induction argument showing that \(\alpha^{-\lambda}d_{\lambda}(\alpha)=\bar{\omega}\) is independent of \(\alpha\in\tau_{n}\) for \(0\leq n\leq h-1\). Indeed, it follows from Lemma 2.17 and our assumption that \(\operatorname{sing}(f,\tau)\cap\mathcal{C}(\tau)=\emptyset\) that \[\Delta_{\lambda}\left(\sum_{n=0}^{h-1}\sum_{\alpha\in\tau_{n}} \frac{d_{\lambda}(\alpha)}{(x-\alpha)^{\lambda}}\right)=\sum_{n=0}^{h-1}\sum_{ \alpha\in\tau_{n+1}}\frac{p^{\lambda}V_{\lambda,1}^{\lambda}(\alpha)d_{\lambda }(\alpha^{p})-d_{\lambda}(\alpha)}{(x-\alpha)^{\lambda}}+(\text{lower-order terms})\\ =\sum_{n=0}^{h-1}\sum_{\alpha\in\tau_{n+1}}\frac{\alpha^{\lambda} \cdot((\alpha^{p})^{-\lambda}d_{\lambda}(\alpha^{p}))-d_{\lambda}(\alpha)}{(x -\alpha)^{\lambda}}+(\text{lower-order terms})\\ =\sum_{\alpha\in\tau_{h}}\frac{\bar{c}_{\lambda}(\alpha)}{(x- \alpha)^{\lambda}}+(\text{lower-order terms}), \tag{5.16}\] where the second equality follows from the computation \(V_{\lambda,1}^{\lambda}(\alpha)=p^{-\lambda}\alpha^{\lambda-p\lambda}\) in Corollary 2.16. In case \(h=1\) we have already concluded our induction argument. In case \(h\geq 2\), we proceed with our induction argument and find from (5.16) that we must have \[\alpha^{\lambda}\cdot((\alpha^{p})^{-\lambda}d_{\lambda}(\alpha^{p}))-d_{ \lambda}(\alpha)=0\qquad\Longleftrightarrow\qquad\alpha^{-\lambda}d_{\lambda }(\alpha)=(\alpha^{p})^{-\lambda}d_{\lambda}(\alpha^{p})=\bar{\omega}\] for each \(\alpha\in\tau_{n+1}\) whenever \(n+1\leq h-1\), since \(\alpha^{p}\in\tau_{n}\) for such an \(\alpha\), concluding our induction argument. Finally, since \(d_{\lambda}(\alpha)=0\) for \(\alpha\in\tau_{h}\), we find again that \[\bar{c}_{\lambda}(\alpha)=\alpha^{\lambda}\cdot((\alpha^{p})^{-\lambda}d_{ \lambda}(\alpha^{p}))=\alpha^{\lambda}\bar{\omega}\] for \(\alpha\in\tau_{h}\), since \(d_{\lambda}(\alpha)=0\) and \(\alpha^{p}\in\tau_{h-1}\) for such \(\alpha\), whence each \(d_{\lambda}(\alpha^{p})=\alpha^{p\lambda}\bar{\omega}\). ### Proof of the Main Theorem Let us now gather our earlier results into a formal proof of the Main Theorem stated in the introduction, that the \(\lambda\)-Mahler discrete residue at \(\infty\) constructed in Definition 5.1 for the Laurent polynomial component \(f_{\infty}\), together with the \(\lambda\)-Mahler discrete residues at Mahler trees \(\tau\in\mathcal{T}\) constructed in Definition 5.7 for non-torsion \(\tau\in\mathcal{T}_{0}\) and in Definition 5.12 for torsion \(\tau\in\mathcal{T}_{+}\), comprise a complete obstruction to the \(\lambda\)-Mahler summability problem. **Theorem 1.1**.: _For \(\lambda\in\mathbb{Z}\), \(f\in\mathbb{K}(x)\) is \(\lambda\)-Mahler summable if and only if \(\operatorname{dres}_{\lambda}(f,\infty)=\mathbf{0}\) and \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(\tau\in\mathcal{T}\) and every \(k\in\mathbb{N}\)._ Proof.: Let \(f\in\mathbb{K}(x)\). By Lemma 2.1, \(f\) is \(\lambda\)-Mahler summable if and only if both \(f_{\infty}\) and \(f_{\mathcal{T}}\) are Mahler summable. By Proposition 5.2, \(f_{\infty}\) is \(\lambda\)-Mahler summable if and only if \(\operatorname{dres}(f,\infty)=\mathbf{0}\). By Lemma 2.6, \(f_{\mathcal{T}}\) is \(\lambda\)-Mahler summable if and only if \(f_{\tau}\) is \(\lambda\)-Mahler summable for each \(\tau\in\mathcal{T}=\mathcal{T}_{0}\cup\mathcal{T}_{+}\). 
By Proposition 5.8 in the non-torsion case \(\tau\in\mathcal{T}_{0}\), and by Proposition 5.17 in the torsion case \(\tau\in\mathcal{T}_{+}\), \(f_{\tau}\) is \(\lambda\)-Mahler summable if and only if \(\operatorname{dres}_{\lambda}(f,\tau,k)=\mathbf{0}\) for every \(k\in\mathbb{N}\). ### Mahler reduction We can now define the \(\lambda\)-Mahler reduction \(\bar{f}_{\lambda}\) of \(f\in\mathbb{K}(x)\) in (1.2), in terms of the local reductions constructed in the proofs of Proposition 5.2, Proposition 5.8, and Proposition 5.17: \[\bar{f}_{\lambda}:=\sum_{\theta\in\mathbb{Z}/\mathcal{P}}\bar{f}_{\lambda,\theta}+\sum_{\tau\in\mathcal{T}}\bar{f}_{\lambda,\tau}=\sum_{\theta\in\mathbb{Z}/\mathcal{P}}\operatorname{dres}_{\lambda}(f,\infty)_{\theta}\cdot x^{i_{\theta}p^{h_{\theta}(f)}}+\sum_{k\in\mathbb{N}}\sum_{\tau\in\mathcal{T}}\sum_{\alpha\in\tau}\frac{\operatorname{dres}_{\lambda}(f,\tau,k)_{\alpha}}{(x-\alpha)^{k}}. \tag{5.17}\] We refer to Remark 5.3, Remark 5.9, and Remark 5.18 for more details. In the un-twisted case where \(\lambda=0\), we had already defined \(0\)-Mahler discrete residues in [1], where we proved that they comprise a complete obstruction to what we call here the \(0\)-Mahler summability problem. That the \(\operatorname{dres}(f,\infty)\) of [1, Def. 4.1] agrees with the \(\operatorname{dres}_{0}(f,\infty)\) of Definition 5.1 is immediately clear from the formulas. In contrast, the Mahler discrete residues \(\operatorname{dres}(f,\tau,k)\) at non-torsion Mahler trees \(\tau\in\mathcal{T}_{0}\) in [1, Def. 4.10] were defined recursively, using the Mahler coefficients \(V^{s}_{k,1}(\alpha)\) only, whereas here we provide closed formulas using the full set of Mahler coefficients \(V^{s}_{k,n}\) with \(n\geq 1\) for \(\operatorname{dres}_{0}(f,\tau,k)\) in Definition 5.7. Similarly, the Mahler discrete residues at torsion Mahler trees \(\tau\in\mathcal{T}_{+}\) in [1, Def. 4.16] are defined recursively and in terms of an auxiliary \(\mathbb{K}\)-linear map (see [1, Def. 4.15]), whereas here we provide closed formulas in terms of a different2 auxiliary \(\mathbb{K}\)-linear map \(\mathcal{I}^{(0)}_{0,\tau}\) in Definition 5.12. It is not clear at all (to us) from their respective definitions that the \(\operatorname{dres}(f,\tau,k)\) of [1] should agree with the \(\operatorname{dres}_{0}(f,\tau,k)\) defined here. And yet, they do. Footnote 2: The auxiliary \(\mathbb{K}\)-linear map in [1, Def. 4.15] is essentially a truncated version of the map \(\mathcal{I}^{(0)}_{0,e}\) of Definition 3.5, in terms of the latter of which we defined \(\mathcal{I}^{(0)}_{0,\tau}\) (cf. Corollary 27). **Proposition 5.19**.: _The Mahler discrete residues \(\operatorname{dres}(f,\tau,k)\) of [1] coincide with the \(0\)-Mahler discrete residues \(\operatorname{dres}_{0}(f,\tau,k)\) in Definitions 5.7 and 5.12._ Proof.: It is clear from [1, Defs. 4.10 and 4.16] and Definitions 5.7 and 5.12 that the support of both vectors \(\operatorname{dres}(f,\tau,k)\) and \(\operatorname{dres}_{0}(f,\tau,k)\) is contained in the set of \(\alpha\in\tau\) such that \(\eta(\alpha|f)=\operatorname{ht}(f,\tau)\) in the non-torsion case (see Definition 5.6) and such that \(\eta(\alpha)=\operatorname{ht}(f,\tau)\) in the torsion case (see Definition 5.10).
In the torsion case \(\tau\in\mathcal{T}_{+}\) such that \(\operatorname{ht}(f,\tau)=0\), it is immediately clear from the definitions that \(\operatorname{dres}(f,\tau,k)=\operatorname{dres}_{0}(f,\tau,k)\), so we can assume without loss of generality that either \(\tau\in\mathcal{T}_{0}\) or \(\operatorname{ht}(f,\tau)\geq 1\). In [1, Equation (4.16)] we constructed a Mahler reduction \[\bar{f}_{\tau}=\sum_{k\in\mathbb{N}}\sum_{\alpha\in\tau}\frac{\operatorname{dres}(f,\tau,k)_{\alpha}}{(x-\alpha)^{k}}\] such that \(\bar{f}_{\tau}-f_{\tau}\) is Mahler summable (see [1, §4.4]), whereas here we have constructed an analogous \(\bar{f}_{0,\tau}\) in (5.17) with the same property that \(\bar{f}_{0,\tau}-f_{\tau}\) is \(0\)-Mahler summable. Therefore \[(\bar{f}_{0,\tau}-f_{\tau})-(\bar{f}_{\tau}-f_{\tau})=\bar{f}_{0,\tau}-\bar{f}_{\tau}=\sum_{k\in\mathbb{N}}\sum_{\alpha\in\tau}\frac{\operatorname{dres}_{0}(f,\tau,k)_{\alpha}-\operatorname{dres}(f,\tau,k)_{\alpha}}{(x-\alpha)^{k}}\] is \(0\)-Mahler summable. If we had \(\bar{f}_{0,\tau}\neq\bar{f}_{\tau}\) then \(\operatorname{disp}(\bar{f}_{0,\tau}-\bar{f}_{\tau},\tau)=0\) would contradict Theorem 4.2, so we conclude that \(\operatorname{dres}_{0}(f,\tau,k)=\operatorname{dres}(f,\tau,k)\) for every \(\tau\in\mathcal{T}\) and \(k\in\mathbb{N}\). ## 6. Differential relations among solutions of first-order Mahler equations Let us now describe the differential structures that we shall use in the most immediate applications of our \(\lambda\)-Mahler discrete residues. We denote by \[\partial:=x\frac{d}{dx}\] the unique \(\mathbb{K}\)-linear derivation on \(\mathbb{K}(x)\) such that \(\partial(x)=x\). We immediately compute that \(p\sigma\circ\partial=\partial\circ\sigma\) as operators on \(\mathbb{K}(x)\), so that \(\partial\) does not commute with \(\sigma\). In order to remedy this, one can proceed as proposed by Michael Singer (see [1]), to work in the overfield \(\mathbb{K}(x,\log x)\) and introduce the derivation \[\delta=x\log x\frac{d}{dx}=\log x\cdot\partial.\] We insist that the notation \(\log x\) is meant to be suggestive only: here \(\log x\) is a new transcendental element satisfying \(\sigma(\log x)=p\cdot\log x\) and \(\partial(\log x)=1\). Using these properties alone, one can verify that \(\delta\circ\sigma=\sigma\circ\delta\) as operators on all of \(\mathbb{K}(x,\log x)\). The following computational result is a Mahler analogue of [1, Lem. 3.4], and of an analogous and more immediate computation in the shift case, which occurs in the proof of [1, Cor. 2.1]. We wish to emphasize that the computation is quite straightforward in the case of \(\lambda\)-Mahler discrete residues at non-torsion Mahler trees \(\tau\in\mathcal{T}_{0}\), and, in contrast, rather involved for torsion Mahler trees \(\tau\in\mathcal{T}_{+}\), due to the additional ingredients involved in that case. **Lemma 6.1**.: _Let \(0\neq a\in\mathbb{K}(x)\). 
For \(\lambda\geq 1\), \(\tau\in\mathcal{T}\), and \(\alpha\in\tau\),_ \[\operatorname{dres}_{\lambda}\left(\partial^{\lambda-1}\left(\frac{\partial(a )}{a}\right),\tau,\lambda\right)_{\alpha}=(-1)^{\lambda-1}(\lambda-1)!\alpha^{ \lambda-1}\cdot\operatorname{dres}_{1}\left(\frac{\partial(a)}{a},\tau,1 \right)_{\alpha}\in\mathbb{Q}\cdot\alpha^{\lambda}.\] Proof.: Let \(a=b\prod_{\alpha\in\mathbb{K}}(x-\alpha)^{m(\alpha)}\), where \(0\neq b\in\mathbb{K}\) and \(m(\alpha)\in\mathbb{Z}\), almost all zero, and let \[f:=\frac{\partial(a)}{a}=c(0)+\sum_{\alpha\in\mathbb{K}^{\times}}\frac{m( \alpha)x}{x-\alpha}=\sum_{\alpha\in\mathbb{K}}m(\alpha)+\sum_{\alpha\in \mathbb{K}^{\times}}\frac{\alpha\cdot m(\alpha)}{x-\alpha}.\] Then we compute, using a similar induction argument as in [1, Lem. 3.4], that for \(\tau\in\mathcal{T}\) and \(\lambda\geq 1\): \[\partial^{\lambda-1}(f)_{\tau}=\sum_{\alpha\in\tau}\frac{(-1)^{\lambda-1}( \lambda-1)!\alpha^{\lambda}m(\alpha)}{(x-\alpha)^{\lambda}}+\text{(lower- order terms)}=\sum_{k=1}^{\lambda}\sum_{\alpha\in\tau}\frac{c_{k}^{[\lambda]}(\alpha)}{(x- \alpha)^{k}}, \tag{6.1}\] where the notation \(c_{k}^{[\lambda]}(\alpha)\) is meant to let us directly apply the definitions of \(\lambda\)-Mahler discrete residues of degree \(\lambda\) of \(\partial^{\lambda-1}(f)\) and more easily compare them with one another. In fact, as we shall see, we will only need to know that \(c_{1}^{[1]}(\alpha)=\alpha\cdot m(\alpha)\), and more generally \[c_{\lambda}^{[\lambda]}(\alpha)=(-1)^{\lambda-1}(\lambda-1)!\alpha^{\lambda}m (\alpha)=(-1)^{\lambda-1}(\lambda-1)!\alpha^{\lambda-1}c_{1}^{[1]}(\alpha). \tag{6.2}\] We shall also repeatedly use the results from Lemma 2.13 and Corollary 2.16, that \[V_{\lambda,n}^{\lambda}(\alpha)=\mathbb{V}_{\lambda,n}^{\lambda}\alpha^{ \lambda-\lambda p^{n}}=p^{-\lambda n}\alpha^{\lambda-\lambda p^{n}},\] without further comment. For \(\tau\in\operatorname{supp}(f)\cap\mathcal{T}_{0}\), let \(h:=\operatorname{ht}(f,\tau)\), and let \(\alpha\in\beta(f,\tau)\) such that \(\eta(\alpha|f)=h\) (cf. Definition 5.6). Then by Definition 5.7 \[\operatorname{dres}_{\lambda}(\partial^{\lambda-1}(f),\tau,\lambda )_{\alpha}=\sum_{n=0}^{h}p^{\lambda n}V_{\lambda,n}^{\lambda}(\alpha)c_{ \lambda}^{[\lambda]}(\alpha^{p^{n}})=\sum_{n=0}^{h}p^{\lambda n}p^{-n\lambda} \alpha^{\lambda-\lambda p^{n}}c_{\lambda}^{[\lambda]}(\alpha^{p^{n}})\\ =(-1)^{\lambda-1}(\lambda-1)!\alpha^{\lambda}\sum_{n=0}^{h}m( \alpha^{p^{n}})=(-1)^{\lambda-1}(\lambda-1)!\alpha^{\lambda-1}\mathrm{dres}_{1} (f,\tau,1)_{\alpha}\in\mathbb{Q}\cdot\alpha^{\lambda}.\] For \(\tau\in\operatorname{supp}(f)\cap\mathcal{T}_{+}\), let us first suppose \(\operatorname{ht}(f,\tau)=0\) as in Definition 5.10, and compute immediately for \(\gamma\in\mathcal{C}(\tau)\), \[\operatorname{dres}_{\lambda}(\partial^{\lambda-1}(f),\tau,\lambda)_{\gamma}= c_{\lambda}^{[\lambda]}(\gamma)=(-1)^{\lambda-1}(\lambda-1)!\gamma^{\lambda}m( \gamma)=(-1)^{\lambda-1}(\lambda-1)!\gamma^{\lambda-1}\mathrm{dres}_{1}(f, \tau,1)_{\gamma},\] which clearly belongs to \(\mathbb{Q}\cdot\gamma^{\lambda}\). 
On the other hand, if \(h:=\operatorname{ht}(f,\tau)\geq 1\), we compute for \(\gamma\in\mathcal{C}(\tau)\) using (5.7) \[\operatorname{dres}_{\lambda}(\partial^{\lambda-1}(f),\tau,\lambda )_{\gamma}=\frac{\gamma^{\lambda}}{e}\sum_{j=1}^{e}\gamma^{-\lambda p^{j}}c_{ \lambda}^{[\lambda]}(\gamma^{p^{j}})=\frac{\gamma^{\lambda}}{e}\sum_{j=1}^{e} \gamma^{-\lambda p^{j}}(-1)^{\lambda-1}(\lambda-1)!\gamma^{\lambda p^{j}}m( \gamma^{p^{j}})\\ =(-1)^{\lambda-1}(\lambda-1)!\frac{\gamma^{\lambda}}{e}\sum_{j=1} ^{e}m(\gamma^{p^{j}})=(-1)^{\lambda-1}(\lambda-1)!\gamma^{\lambda-1}\mathrm{dres }_{1}(f,\tau,1)_{\gamma}\in\mathbb{Q}\cdot\gamma^{\lambda} \tag{6.3}\] Before computing the \(\alpha\)-component of \(\operatorname{dres}_{\lambda}(\partial^{\lambda-1}(f),\tau,\lambda)\) for \(\alpha\in\tau\) such that \(\eta(\alpha)=h\), we must first compute a few preliminary objects (cf. Remark 5.13). Consider the vector \(\mathbf{d}^{[\lambda]}:=\mathcal{I}_{\lambda,\tau}^{(0)}(\mathbf{c}^{[\lambda]})\) as in Definition 3.5, and let us compute in particular as in (3.6): \[d_{\lambda}^{[\lambda]}(\gamma)=\frac{\gamma^{\lambda}}{e}\sum_{j =0}^{e-1}(j+1-e)\gamma^{-\lambda p^{j}}c_{\lambda}^{[\lambda]}(\gamma^{p^{j}} )=\frac{\gamma^{\lambda}}{e}\sum_{j=0}^{e-1}(j+1-e)\gamma^{-\lambda p^{j}} \cdot(-1)^{\lambda-1}(\lambda-1)!\gamma^{\lambda p^{j}}m(\gamma^{p^{j}})\\ =(-1)^{\lambda-1}(\lambda-1)!\frac{\gamma^{\lambda}}{e}\sum_{j=0 }^{e-1}(j+1-e)m(\gamma^{p^{j}})=(-1)^{\lambda-1}(\lambda-1)!\gamma^{\lambda-1 }d_{1}^{[1]}(\gamma) \tag{6.4}\] The \(\lambda\)-components of \(\tilde{\mathbf{c}}^{[\lambda]}:=\mathcal{D}_{\lambda,\tau}(\mathbf{d}^{[ \lambda]})\) are simply given by \[\tilde{c}_{\lambda}^{[\lambda]}(\gamma)=c_{\lambda}^{[\lambda]}(\gamma)- \operatorname{dres}_{\lambda}(\partial^{\lambda-1}(f),\tau,\lambda)_{\gamma}\] by Proposition 3.6 and (5.7). Therefore, for each \(\gamma\in\mathcal{C}(\tau)\), \[\tilde{c}_{\lambda}^{[\lambda]}(\gamma)+d_{\lambda}^{[\lambda]}(\gamma)=\frac {(-1)^{\lambda-1}(\lambda-1)!\gamma^{\lambda}}{e}\sum_{j=1}^{e}(j-e)m(\gamma^{ p^{j}}). \tag{6.5}\] With this, we next compute the residual average (cf. Definition 5.11), for which we compute separately the two long sums appearing in (5.4). First, the sum over elements of positive height \[\omega^{(+)}_{\lambda,\tau}(\partial^{\lambda-1}(f))=\frac{1}{(p^{h}-p ^{h-1})e}\sum_{\alpha\in\tau_{h}}\sum_{n=0}^{h-1}p^{\lambda n}\mathbb{V}^{\lambda }_{\lambda,n}\alpha^{-\lambda p^{n}}c^{[\lambda]}_{\lambda}(\alpha^{p^{n}})\\ =\frac{(-1)^{\lambda-1}(\lambda-1)!}{(p^{h}-p^{h-1})e}\sum_{ \alpha\in\tau_{h}}\sum_{n=0}^{h-1}m(\alpha^{p^{n}})=(-1)^{\lambda-1}(\lambda-1 )!\cdot\omega^{(+)}_{1,\tau}(f). \tag{6.6}\] Second, the sum over the elements of zero height \[\omega^{(0)}_{\lambda,\tau}(\partial^{\lambda-1}(f))=\frac{p^{ \lambda(e-1)}}{e}\sum_{\gamma\in\mathcal{C}(\tau)}\mathbb{V}^{\lambda}_{ \lambda,h-1}\gamma^{-\lambda}(\tilde{c}^{[\lambda]}(\gamma)+d^{[\lambda]}_{ \lambda}(\gamma))\\ =\frac{(-1)^{\lambda-1}(\lambda-1)!}{e^{2}}\sum_{\gamma\in \mathcal{C}(\tau)}\sum_{j=1}^{e}(j-e)m(\gamma^{p^{j}})=(-1)^{\lambda-1}( \lambda-1)!\cdot\omega^{(0)}_{1,\tau}(f). 
\tag{6.7}\] Now putting together (6.6) and (6.7) we obtain \[\omega_{\lambda,\tau}(\partial^{\lambda-1}(f))=\omega^{(+)}_{\lambda,\tau}(\partial^{\lambda-1}(f))-\omega^{(0)}_{\lambda,\tau}(\partial^{\lambda-1}(f))=(-1)^{\lambda-1}(\lambda-1)!\cdot\omega_{1,\tau}(f), \tag{6.8}\] where \[\omega_{1,\tau}(f)=\omega^{(+)}_{1,\tau}(f)-\omega^{(0)}_{1,\tau}(f)=\frac{1}{(p^{h}-p^{h-1})e}\sum_{\begin{subarray}{c}\alpha\in\tau\\ \eta(\alpha)>0\end{subarray}}m(\alpha)-\frac{e-e^{2}}{2e^{2}}\sum_{\gamma\in\mathcal{C}(\tau)}m(\gamma)\in\mathbb{Q}. \tag{6.9}\] Since the vector \(\mathbf{w}^{(\lambda)}\) of Lemma 3.4 satisfies \(w^{(\lambda)}_{\lambda}(\gamma)=\gamma^{\lambda}=\gamma^{\lambda-1}w^{(1)}_{1}(\gamma)\), we finally compute \[\mathrm{dres}_{\lambda}(\partial^{\lambda-1}(f),\tau,\lambda)_{\alpha}=\sum_{n=0}^{h-1}p^{n\lambda}V^{\lambda}_{\lambda,n}(\alpha)c^{[\lambda]}_{\lambda}(\alpha^{p^{n}})\\ -p^{\lambda(h-1)}\mathbb{V}^{\lambda}_{\lambda,h-1}\alpha^{\lambda-\lambda p^{h+e-1}}(\tilde{c}^{[\lambda]}_{\lambda}(\alpha^{p^{h+e-1}})+d^{[\lambda]}_{\lambda}(\alpha^{p^{h+e-1}})+\omega_{\lambda,\tau}(\partial^{\lambda-1}(f))w^{(\lambda)}_{\lambda}(\alpha^{p^{h+e-1}}))\\ =(-1)^{\lambda-1}(\lambda-1)!\alpha^{\lambda}\left[\sum_{n=0}^{h-1}m(\alpha^{p^{n}})-\frac{1}{e}\sum_{j=1}^{e}(j-e)m(\alpha^{p^{h+j-1}})+\omega_{1,\tau}(f)\right]\\ =(-1)^{\lambda-1}(\lambda-1)!\alpha^{\lambda-1}\mathrm{dres}_{1}(f,\tau,1)_{\alpha}\in\mathbb{Q}\cdot\alpha^{\lambda}. \tag{6.10}\] This concludes the proof of the Lemma. With this preliminary computation now out of the way, we can prove our first application of \(\lambda\)-Mahler discrete residues in the following result, which is a Mahler analogue of [1, Cor. 2.1] in the shift case and [1, Prop. 3.5] in the \(q\)-dilation case. **Proposition 6.2**.: _Let \(U\) be a \(\sigma\partial\)-\(\mathbb{K}(x,\log x)\)-algebra such that \(U^{\sigma}=\mathbb{K}\). Let \(a_{1},\dots,a_{t}\in\mathbb{K}(x)-\{0\},\) and suppose \(y_{1},\dots,y_{t}\in U^{\times}\) satisfy_ \[\sigma(y_{i})=a_{i}y_{i}\qquad\text{for}\qquad i=1,\dots,t.\] _Then \(y_{1},\ldots,y_{t}\) are \(\partial\)-dependent over \(\mathbb{K}(x)\) if and only if there exist \(k_{1},\ldots,k_{t}\in\mathbb{Z}\), not all zero, and \(g\in\mathbb{K}(x)\), such that_ \[\sum_{i=1}^{t}k_{i}\frac{\partial a_{i}}{a_{i}}=p\sigma(g)-g. \tag{6.11}\] Proof.: First, suppose there exist \(k_{1},\ldots,k_{t}\in\mathbb{Z}\), not all zero, and \(g\in\mathbb{K}(x)\) satisfying (6.11). Consider \[\sigma\left(\sum_{i=1}^{t}k_{i}\frac{\delta y_{i}}{y_{i}}-g\log x\right)-\left(\sum_{i=1}^{t}k_{i}\frac{\delta y_{i}}{y_{i}}-g\log x\right)=\log x\left(\sum_{i=1}^{t}k_{i}\frac{\partial a_{i}}{a_{i}}-(p\sigma(g)-g)\right)=0,\] and therefore \[\sum_{i=1}^{t}k_{i}\frac{\delta y_{i}}{y_{i}}-g\log x\in U^{\sigma}=\mathbb{K},\] and therefore \(y_{1},\ldots,y_{t}\) are \(\delta\)-dependent over \(\mathbb{K}(x,\log x)\), which is equivalent to them being \(\partial\)-dependent over \(\mathbb{K}(x)\), since \(\log x\) is \(\partial\)-algebraic over \(\mathbb{K}(x)\). Now suppose \(y_{1},\ldots,y_{t}\) are \(\partial\)-dependent over \(\mathbb{K}(x)\). Then there exist linear differential operators \(\mathcal{L}_{i}\in\mathbb{K}[\delta]\), not all zero, such that \[\sum_{i=1}^{t}\mathcal{L}_{i}\left(\frac{\delta(a_{i})}{a_{i}}\right)=\sigma(G)-G\] for some \(G\in\mathbb{K}(x,\log x)\). Let \(\lambda\geq 1\) be as small as possible such that \(\operatorname{ord}(\mathcal{L}_{i})\leq\lambda-1\) for every \(1\leq i\leq t\). 
Then we must have \[G=\sum_{\ell=1}^{\lambda}g_{\ell}\log^{\ell}x\qquad\text{with}\qquad g_{1},\ldots,g_{\lambda}\in\mathbb{K}(x).\] Moreover, writing each \(\mathcal{L}_{i}=\sum_{j=0}^{\lambda-1}k_{i,j}\delta^{j}\), we must also have \[\sum_{i=1}^{t}k_{i,\lambda-1}\partial^{\lambda-1}\left(\frac{\partial a_{i}}{a_{i}}\right)=p^{\lambda}\sigma(g_{\lambda})-g_{\lambda}. \tag{6.12}\] Without loss of generality we can reduce to the situation where, for each \(\tau\in\mathcal{T}_{0}\) and for each \(1\leq i\leq t\) such that \(\tau\in\operatorname{supp}(\frac{\partial a_{i}}{a_{i}})\), the bouquet \(\beta(\frac{\partial a_{i}}{a_{i}},\tau)\) is the same for all such \(i\) (cf. Definition 5.6), and similarly, for each \(\tau\in\mathcal{T}_{+}\) and for each \(1\leq i\leq t\) such that \(\tau\in\operatorname{supp}(\frac{\partial a_{i}}{a_{i}})\), the height \(\operatorname{ht}(\frac{\partial a_{i}}{a_{i}},\tau)\) is the same for all such \(i\). Under these conditions, (6.12) implies that \[\sum_{i=1}^{t}k_{i,\lambda-1}\text{dres}_{\lambda}\left(\partial^{\lambda-1}\left(\frac{\partial a_{i}}{a_{i}}\right),\tau,\lambda\right)=\mathbf{0}\] for every \(\tau\in\mathcal{T}\). But by Lemma 6.1, this is equivalent to \[\sum_{i=1}^{t}k_{i,\lambda-1}\text{dres}_{1}\left(\frac{\partial a_{i}}{a_{i}},\tau,1\right)=\mathbf{0},\] and since each \(\text{dres}_{1}(\frac{\partial a_{i}}{a_{i}},\tau,1)_{\alpha}\in\mathbb{Q}\cdot\alpha\) uniformly in \(1\leq i\leq t\) and \(\alpha\in\tau\) (again by Lemma 6.1), we may further take the \(k_{i,\lambda-1}\in\mathbb{Z}\). ## 7. Examples In [1, Section 5], the authors provide two small examples of \(\lambda\)-Mahler discrete residues for \(\lambda=0\). Here, we illustrate \(\lambda\)-Mahler discrete residues for \(\lambda=\pm 1\) in several examples. Example 7.1 gives a \(1\)-Mahler summable \(f\) in the non-torsion case \(\tau\in\mathcal{T}_{0}\). Example 7.2 gives a \(1\)-Mahler non-summable \(f\) in the torsion case \(\tau\in\mathcal{T}_{+}\). Moreover, Example 7.3 gives a \((-1)\)-Mahler summable \(f\) in the non-torsion case \(\tau\in\mathcal{T}_{0}\). Example 7.4 gives a \((-1)\)-Mahler non-summable \(f\) in the torsion case \(\tau\in\mathcal{T}_{+}\). **Example 7.1**.: Let \(p=3,\lambda=1\), and \(\tau=\tau(2)\). Consider the following \(f=f_{\tau}\) with \(\operatorname{sing}(f,\tau)=\{2,\sqrt[3]{2},\zeta_{3}\sqrt[3]{2},\zeta_{3}^{2}\sqrt[3]{2}\}\): \[f =\frac{-x^{6}+4x^{3}+3x^{2}-12x+8}{\left(x-2\right)^{2}\left(x^{3}-2\right)^{2}}\] \[=\sum_{k=1}^{2}\sum_{\alpha\in\beta(f,\tau)}\frac{c_{k}(\alpha)}{(x-\alpha)^{k}},\] where \(\beta(f,\tau)=\{2,\gamma,\zeta_{3}\gamma,\zeta_{3}^{2}\gamma\}\) with \(\gamma:=\sqrt[3]{2}\). By Definition 5.6, we have \(\operatorname{ht}(f,\tau)=1\). 
It follows from Definition 5.7 that for \(i\in\{0,1,2\}\): \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{3}^{i}\gamma} =V_{1,0}^{1}(\zeta_{3}^{i}\gamma)c_{1}(\zeta_{3}^{i}\gamma)+3V_{1,1}^{1}(\zeta_{3}^{i}\gamma)c_{1}(2)+V_{1,0}^{2}(\zeta_{3}^{i}\gamma)c_{1}( \zeta_{3}^{i}\gamma)+3V_{1,1}^{2}(\zeta_{3}^{i}\gamma)c_{2}(2)\] \[=1\cdot(-\frac{\zeta_{3}^{i}}{3\sqrt[3]{4}})+(-3)\cdot\left(- \frac{\zeta_{3}^{i}\gamma}{2\cdot 3^{2}}\right)\] \[=0,\] and \[\operatorname{dres}_{1}(f,\tau,2)_{\zeta_{3}^{i}\gamma} =V_{2,0}^{2}(\zeta_{3}^{i}\gamma)c_{2}(\zeta_{3}^{i}\gamma)+3V_{2,1}^{2}(\zeta_{3}^{i}\gamma)c_{2}(2)\] \[=\frac{\zeta_{3}^{2i}}{6\sqrt[3]{2}}+(-3)\cdot\left(\frac{\zeta_ {3}^{2i}}{2\cdot 3^{2}\cdot\sqrt[3]{2}}\right)\] \[=0.\] Thus, we see from Proposition 5.8 that \(f\) is \(1\)-Mahler summable. And indeed, \[f=\Delta_{1}\left(\frac{1}{(x-2)^{2}}\right).\] **Example 7.2**.: Let \(p=3,\lambda=1\), and \(\tau=\tau(\zeta_{4})\). Consider the following \(f=f_{\tau}\) with \(\operatorname{sing}(f,\tau)=\{\zeta_{4}^{\pm 1},\zeta_{12}^{\pm 1},\zeta_{12}^{\pm 5}\}\): \[f =\frac{-2x^{4}+2x^{2}+1}{\left(x^{2}+1\right)\left(x^{4}-x^{2}+1 \right)}\] \[=\frac{1}{2}\left(-\frac{\zeta_{4}^{3}}{x-\zeta_{4}}-\frac{\zeta _{4}}{x-\zeta_{4}^{3}}+\frac{\zeta_{12}^{7}}{x-\zeta_{12}}+\frac{\zeta_{12}^{1 1}}{x-\zeta_{12}^{5}}+\frac{\zeta_{12}}{x-\zeta_{12}^{7}}+\frac{\zeta_{12}^{5} }{x-\zeta_{12}^{11}}\right)\] \[=\sum_{\alpha\in\operatorname{sing}(f,\tau)}\frac{c_{k}(\alpha)}{ x-\alpha}.\] By Definition 5.10, we see that \(\operatorname{ht}(f,\tau)=1\). Furthermore, by Definition 3.5, 5.11, and 3.2, we find that \[\omega :=\omega_{1,\tau}(f)=-1/4,\] \[\mathcal{I}_{1,\tau}^{(\omega)}(\mathbf{c}) =\left(d_{1}(\zeta_{4}),d_{1}(\zeta_{4}^{3})\right)=-\frac{1}{4}( \zeta_{4}+\zeta_{4}^{3})\left(1,1\right),\] \[\mathcal{D}_{1,\tau}(\mathbf{d}) =\left(\tilde{c}_{1}(\zeta_{4}),\tilde{c}_{1}(\zeta_{4}^{3}) \right)=(0,0).\] Thus, it follows from Definition 5.12 that \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{12}} =V_{1,0}^{1}(\zeta_{12})\cdot c_{1}(\zeta_{12})-\mathbb{V}_{1,0} ^{1}\cdot(\zeta_{12})^{-8}\cdot d_{1}(\zeta_{12}^{9})\] \[=c_{1}(\zeta_{12})-\zeta_{3}\cdot d_{1}(\zeta_{4}^{3})\] \[=\zeta_{12}^{7}-\zeta_{3}\cdot(-\frac{1}{4})\cdot(\zeta_{4}^{3}- \zeta_{4})\] \[=\frac{1}{4}\zeta_{12}+\frac{3}{4}\zeta_{12}^{7}\neq 0.\] Similarly, a direct calculation shows that \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{12}^{7}} =\frac{3}{4}\zeta_{12}+\frac{1}{4}\zeta_{12}^{7}\neq 0,\] \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{12}^{5}} =\frac{1}{4}\zeta_{12}^{5}+\frac{3}{4}\zeta_{12}^{11}\neq 0,\] \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{12}^{5}} =\frac{3}{4}\zeta_{12}^{5}+\frac{1}{4}\zeta_{12}^{11}\neq 0,\] and \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{4}} =c_{1}(\zeta_{4})-\tilde{c}_{1}(\zeta_{4})=c_{1}(\zeta_{4})=- \frac{1}{2}\zeta_{4}^{3}\neq 0,\] \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{4}^{3}} =c_{1}(\zeta_{4}^{3})-\tilde{c}_{1}(\zeta_{4}^{3})=c_{1}(\zeta_{4} ^{3})=-\frac{1}{2}\zeta_{4}\neq 0.\] Thus, it follows from Proposition 5.17 that \(f\) is not \(1\)-Mahler summable. **Example 7.3**.: Let \(p=3,\lambda=-1\), and \(\tau=\tau(5)\). 
Consider the following \(f=f_{\tau}\) with \(\operatorname{sing}(f,\tau)=\{5,\sqrt[3]{5},\zeta_{3}\sqrt[3]{5},\zeta_{3}^{2} \sqrt[3]{5}\}\) : \[f =\frac{-3x^{6}+30x^{3}+x^{2}-10x-50}{3(x-5)^{2}\left(x^{3}-5 \right)^{2}}\] \[=\frac{-1}{(x-5)^{2}}+\frac{1}{135\sqrt[3]{5}}\cdot\sum_{i=0}^{2 }\frac{\zeta_{3}^{2i}}{(x-\zeta_{3}^{i}\sqrt[3]{5})^{2}}-\frac{2}{135\sqrt[3] {25}}\cdot\sum_{i=0}^{2}\frac{\zeta_{3}^{i}}{x-\zeta_{3}^{i}\sqrt[3]{5}}\] \[=\sum_{k=1}^{2}\sum_{\alpha\in\beta(f,\tau)}\frac{c_{k}(\alpha)}{ (x-\alpha)^{k}},\] where \(\beta(f,\tau)=\{2,\gamma,\zeta_{3}\gamma,\zeta_{3}^{2}\gamma\}\) with \(\gamma:=\sqrt[3]{5}\). By Definition 5.6, we have \(\operatorname{ht}(f,\tau)=1\). It follows from Definition 5.7 that for \(i\in\{0,1,2\}\): \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{3}^{i}\gamma} =V_{1,0}^{1}(\zeta_{3}^{i}\gamma)c_{1}(\zeta_{3}^{i}\gamma)+3^{-1 }V_{1,1}^{1}(\zeta_{3}^{i}\gamma)c_{1}(2)+V_{1,0}^{2}(\zeta_{3}^{i}\gamma)c_{1 }(\zeta_{3}^{i}\gamma)+3^{-1}V_{1,1}^{2}(\zeta_{3}^{i}\gamma)c_{2}(2)\] \[=1\cdot(-\frac{2\zeta_{3}^{i}}{135\sqrt[3]{25}})+(-\frac{1}{3}) \cdot\left(-\frac{2\zeta_{3}^{i}\gamma}{3^{2}\cdot 5^{2}}\right)\] \[=0,\] and \[\operatorname{dres}_{1}(f,\tau,2)_{\zeta_{3}^{i}\gamma} =V_{2,0}^{2}(\zeta_{3}^{i}\gamma)c_{2}(\zeta_{3}^{i}\gamma)+3^{- 1}V_{2,1}^{2}(\zeta_{3}^{i}\gamma)c_{2}(2)\] \[=\frac{\zeta_{3}^{2i}}{135\sqrt[3]{2}}+(-\frac{1}{3})\cdot\left( \frac{\zeta_{3}^{2i}}{3^{2}\cdot 5\cdot\sqrt[3]{5}}\right)\] \[=0.\] Thus, we see from Proposition 5.8 that \(f\) is \((-1)\)-Mahler summable. And indeed, \[f=\Delta_{-1}\left(\frac{1}{(x-5)^{2}}\right).\] **Example 7.4**.: Let \(p=2,\lambda=-1\), and \(\tau=\tau(\zeta_{3})\). Consider the following \(f=f_{\tau}\) with \(\operatorname{sing}(f,\tau)=\{\zeta_{3}^{\pm 1},\zeta_{6}^{\pm 1}\}\): \[f =\frac{1}{2\left(x^{4}+x^{2}+1\right)}\] \[=-\frac{1}{2}\left(\frac{\zeta_{3}}{x-\zeta_{3}}+\frac{\zeta_{3}^ {-1}}{x-\zeta_{3}^{-1}}+\frac{\zeta_{6}}{x-\zeta_{6}}+\frac{\zeta_{6}^{-1}}{x -\zeta_{6}^{-1}}\right)\] \[=\sum_{\alpha\in\operatorname{sing}(f,\tau)}\frac{c_{k}(\alpha)}{ x-\alpha}.\] By Definition 5.10, we see that \(\operatorname{ht}(f,\tau)=1\). Furthermore, by Definition 3.5, 5.11, and 3.2, we find that \[\omega :=\omega_{1,\tau}(f)=0,\] \[\mathcal{I}_{1,\tau}^{(\omega)}(\mathbf{c}) =\left(d_{1}(\zeta_{3}),d_{1}(\zeta_{3}^{-1})\right)=\frac{2}{3} \left(\zeta_{3},\zeta_{3}^{-1}\right),\] \[\mathcal{D}_{1,\tau}(\mathbf{d}) =\left(\tilde{c}_{1}(\zeta_{3}),\tilde{c}_{1}(\zeta_{3}^{-1}) \right)=-\frac{1}{3}\left(\zeta_{3},\zeta_{3}^{-1}\right).\] Thus, it follows from Definition 5.12 that \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{6}} =V_{1,0}^{1}(\zeta_{6})\cdot c_{1}(\zeta_{6})-\mathbb{V}_{1,0}^{1 }\cdot(\zeta_{6})^{-3}\cdot\left(\tilde{c}_{1}(\zeta_{3}^{-1})+d_{1}(\zeta_{3} ^{-1})\right)\] \[=c_{1}(\zeta_{6})+\tilde{c}_{1}(\zeta_{3}^{-1})+d(\zeta_{3}^{-1})\] \[=-\frac{1}{2}\zeta_{6}-\frac{1}{3}\zeta_{3}^{-1}+\frac{2}{3} \zeta_{3}^{-1}\] \[=\frac{1}{3}\zeta_{3}^{-1}-\frac{1}{2}\zeta_{6}\neq 0.\] Similarly, a direct computation shows that \[\operatorname{dres}_{1}(f,\tau,1)_{\zeta_{6}^{-1}}=\frac{1}{3}\zeta_{3}-\frac{ 1}{2}\zeta_{6}^{-1}\neq 0.\] Therefore, it follows from Proposition 5.17 that \(f\) is not \((-1)\)-Mahler summable.
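The closing identities in Examples 7.1 and 7.3 can also be verified mechanically. The following SymPy sketch (not part of the original text) assumes that the \(\lambda\)-Mahler difference operator acts as \(\Delta_{\lambda}(g)=p^{\lambda}\sigma(g)-g\) with \(\sigma(g)(x)=g(x^{p})\), which is the form appearing in (6.12); both printed differences simplify to \(0\):

```python
import sympy as sp

x = sp.symbols('x')

def mahler_delta(g, p, lam):
    # Delta_lambda(g) = p**lambda * sigma(g) - g, with sigma(g)(x) = g(x**p)
    return sp.Integer(p)**lam * g.subs(x, x**p) - g

# Example 7.1: p = 3, lambda = 1, claimed certificate g = 1/(x-2)^2
f1 = (-x**6 + 4*x**3 + 3*x**2 - 12*x + 8) / ((x - 2)**2 * (x**3 - 2)**2)
print(sp.simplify(mahler_delta(1/(x - 2)**2, 3, 1) - f1))   # expected: 0

# Example 7.3: p = 3, lambda = -1, claimed certificate g = 1/(x-5)^2
f3 = (-3*x**6 + 30*x**3 + x**2 - 10*x - 50) / (3*(x - 5)**2 * (x**3 - 5)**2)
print(sp.simplify(mahler_delta(1/(x - 5)**2, 3, -1) - f3))  # expected: 0
```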
2310.03122
SPH-based framework for modelling fluid-structure interaction problems with finite deformation and fracturing
Understanding crack propagation in structures subjected to fluid loads is crucial in various engineering applications, ranging from underwater pipelines to aircraft components. This study investigates the dynamic response of structures, including their damage and fracture behaviour under hydrodynamic load, emphasizing the fluid-structure interaction (FSI) phenomena by applying Smoothed Particle Hydrodynamics (SPH). The developed framework employs weakly compressible SPH (WCSPH) to model the fluid flow and a pseudo-spring-based SPH solver for modelling the structural response. For improved accuracy in FSI modelling, the $\delta$-SPH technique is implemented to enhance pressure calculations within the fluid phase. The pseudo-spring analogy is employed for modelling material damage, where particle interactions are confined to their immediate neighbours. These particles are linked by springs, which don't contribute to system stiffness but determine the interaction strength between connected pairs. It is assumed that a crack propagates through a spring connecting a particle pair when the damage indicator of that spring exceeds a predefined threshold. The developed framework is extensively validated through a dam break case, oscillation of a deformable solid beam, dam break through a deformable elastic solid, and breaking dam impact on a deformable solid obstacle. Numerical outcomes are subsequently compared with the findings from existing literature. The ability of the framework to accurately depict material damage and fracture is showcased through a simulation of water impact on a deformable solid obstacle with an initial notch.
Md Rushdie Ibne Islam
2023-09-22T08:47:37Z
http://arxiv.org/abs/2310.03122v1
SPH-based framework for modelling fluid-structure interaction problems with finite deformation and fracturing ###### Abstract Understanding crack propagation in structures subjected to fluid loads is crucial in various engineering applications, ranging from underwater pipelines to aircraft components. This study investigates the dynamic response of structures, including their damage and fracture behaviour under hydrodynamic load, emphasizing the fluid-structure interaction (FSI) phenomena by applying Smoothed Particle Hydrodynamics (SPH). The developed framework employs weakly compressible SPH (WCSPH) to model the fluid flow and a pseudo-spring-based SPH solver for modelling the structural response. For improved accuracy in FSI modelling, the \(\delta\)-SPH technique is implemented to enhance pressure calculations within the fluid phase. The pseudo-spring analogy is employed for modelling material damage, where particle interactions are confined to their immediate neighbours. These particles are linked by springs, which don't contribute to system stiffness but determine the interaction strength between connected pairs. It is assumed that a crack propagates through a spring connecting a particle pair when the damage indicator of that spring exceeds a predefined threshold. The developed framework is extensively validated through a dam break case, oscillation of a deformable solid beam, dam break through a deformable elastic solid, and breaking dam impact on a deformable solid obstacle. Numerical outcomes are subsequently compared with the findings from existing literature. The ability of the framework to accurately depict material damage and fracture is showcased through a simulation of water impact on a deformable solid obstacle with an initial notch. keywords: Smoothed particle hydrodynamics, fluid-structure interaction, material damage and fracture, pseudo-spring analogy. + Footnote †: journal: ## 1 Introduction In recent years, there has been a growing focus on fluid-structure interaction (FSI), which plays a prominent role in numerous engineering and industrial contexts. Examples of these applications include coastal engineering, the shipbuilding industry, and aviation. Understanding the crack propagation in structures under fluid load is critical for enhancing safety, preventing environmental disasters, reducing economic losses, and advancing engineering innovation in complex fluid-structure interaction scenarios. The intricacies of fluid-structure interaction (FSI) problems often render them beyond the reach of analytical solutions. Furthermore, the high cost and logistical difficulties associated with experimental studies in FSI have spurred the adoption of numerical modelling as an attractive alternative. Over recent decades, various methodologies have been developed to tackle the complex challenges of fluid-structure interaction (FSI) problems. Although mesh-based methods such as finite difference method (FDM), finite volume method (FVM), and finite element method (FEM) [1; 2; 3] have achieved a degree of success for FSI simulations, they often require additional computationally intensive numerical schemes (e.g., interface tracking or re-meshing etc.) when dealing with free surfaces, moving boundaries, and deformable structures. The computation becomes even more complex with propagating cracks and material separation as field variables exhibit discontinuities. 
Traditional mesh-based methods such as FEM are unsuitable [4], and while discontinuous enrichment helps in modelling the cracks [5], implementing these additional numerical strategies is not only intricate but also computationally intensive, and they can frequently introduce instability issues. As a promising alternative, Lagrangian particle-based methods have gained increasing favour in FSI modelling. These methods are becoming more popular due to their meshless and Lagrangian nature, which makes them well-suited for representing free-surface fluid flow and the substantial deformation of solid structures. Moreover, Lagrangian particle-based meshless approaches offer a natural and efficient means of capturing moving interfaces and finite deformation in structures encountered in FSI problems. Smoothed Particle Hydrodynamics (SPH), initially designed to address challenges in astrophysical contexts, has gained widespread acclaim as a leading meshless method [6; 7]. SPH operates as a truly meshless and Lagrangian particle-based approach, where individual particles represent the material points and carry the field variables[8; 9]. Each particle exclusively engages with its neighbouring counterparts in this method through a kernel function. The extent of this interaction is governed by the smoothing length, which defines the dimensions of the local neighbourhood, also referred to as the influence domain. Notably, the kernel function exhibits a characteristic bell-shaped profile designed to maximize interaction strength with immediate neighbours and progressively diminish it as the distance between interacting particles increases. SPH offers distinct advantages in handling scenarios involving free-surface flow, finite material deformation, moving interfaces and boundaries. Its wide range of applications can be found in dynamic fluid flow [10; 11; 12], geotechnical simulations [13; 14; 15; 16], explosive and impact events [17; 18; 19; 20]. SPH also plays a prominent role in FSI simulations [21; 22; 23]. SPH methodologies can be of different types, such as weakly compressible SPH (WCSPH), incompressible SPH (ISPH), total Lagrangian SPH (TLSPH) etc. WCSPH and ISPH are the most prominent techniques for fluid flow, whereas standard SPH and TLSPH are used for modelling the deformation of solids. In WCSPH, the time step size used for numerical integration is quite small, whereas ISPH allows a larger time step size for integration. Another advantage of ISPH is that it produces smooth pressure fields compared to WCSPH simulations [24; 25; 26; 27]. However, the computational cost per step is relatively low in WCSPH. For large-scale simulations, implementing parallelized techniques for ISPH [28; 29; 30] is more challenging than its WCSPH counterpart. Nonetheless, the conventional WCSPH method is hindered by substantial pressure fluctuations. While these fluctuations have a limited impact on flow kinematics, they pose significant challenges in Fluid-Structure Interaction (FSI) modelling. This is because pressure fluctuations can lead to inaccuracies in assessing the interacting forces at the fluid-structure interface. To address this issue, several numerical schemes (e.g. \(\delta\)-SPH method [31; 32], Rusanov flux [33] etc.) have been introduced to get smooth pressure fields. As for the solid phase, two methods are mainly used, i.e., the conventional SPH based on the Eulerian kernel and TLSPH. 
In the conventional SPH approach, particle positions are updated at each computational step, and kernel functions are computed based on these updated positions [8]. Consequently, the traditional SPH kernel function is often called the Eulerian because particles can enter and exit its influence domain. This Eulerian kernel function is known to introduce a well-recognized issue called tensile instability [34], leading to local particle clustering and the formation of unphysical numerical fractures. To mitigate the tensile instability, a commonly used correction method is the artificial pressure/stress technique [35; 36], which can effectively alleviate the issue. The tensile instability can be circumvented by calculating kernel functions using reference particle positions [37]. The kernel function based on the reference configuration is the Lagrangian kernel, and the corresponding SPH formulation is called TLSPH. TLSPH eliminates tensile instability if computations are consistently based on the initial configuration [38; 39]. However, the original TLSPH method faces limitations in modelling scenarios involving significant material distortion and separation due to negative Jacobians [20]. Moreover, the traditional SPH-based frameworks provide better agreements than TLSPH when compared with the experimental and other numerical results for finite deformation and failure of materials [40; 41; 20]. Most SPH-based FSI modelling uses different combinations of these methods [21; 22; 23]. Despite the success of SPH in modelling FSI problems, however, important issues on material damage and fracture are yet to be addressed. Limited studies can be found in the literature on a stable, accurate, efficient SPH framework for modelling FSI problems involving material damage and fracture [42]. In this context, the research community increasingly embraces SPH and its extensions, primarily due to their innate ability to model crack propagation [37; 43; 44]. Among these extensions, the General Particle Dynamics (GPD) framework, built upon SPH, has gained widespread adoption for simulating progressive failure in slopes and the fracturing of rocks [45; 46; 47]. Another notable SPH extension is the pseudo-spring augmented SPH [48], which establishes connections between each particle and its immediate neighbours through pseudo-springs. The advancement of a crack front occurs when the pseudo-spring linking any two immediate neighbours is disrupted. The crack paths can be modelled in this approach without refinement, enrichment or visibility criteria. A slightly adapted version of this methodology has proven effective in simulating failures in both brittle and ductile materials [18; 19]. This work presents a coupled WCSPH framework for fluid-structure interaction with deformable structures undergoing damage and fracture. The WCSPH enhanced with numerical schemes to improve accuracy and stability is used for simulating the fluid flow. A pseudo-spring analogy in traditional SPH has been adopted for modelling crack initiation and propagation in structures. The effectiveness of the proposed approach is demonstrated through several numerical illustrations. The paper is organized as follows. Section 2 discusses the governing equations, their discretization, boundary conditions and stability schemes for fluid simulation using WCSPH. Section 3 discusses the SPH method for solid deformation and the pseudo-spring analogy for modelling material damage and fracture. 
Section 4 discusses the coupling strategy of WCSPH and pseudo spring-based SPH. Section 5 presents some numerical examples for verification and validation purposes. The crack initiation and propagation in elastic obstacles with a notch due to water impact is also demonstrated through an example. Finally, some conclusions are drawn in section 6. ## 2 Weakly compressible smoothed particle hydrodynamics (WCSPH) for fluid flow The dynamic fluid flow is governed by the conservation of mass and momentum: \[\frac{d\rho}{dt}=-\rho\frac{\partial v^{\beta}}{\partial x^{\beta}}, \tag{1}\] \[\frac{dv^{\alpha}}{dt}=-\frac{1}{\rho}\frac{\partial p}{\partial x^{\alpha}}+\frac{1}{\rho}\frac{\partial\tau^{\alpha\beta}}{\partial x^{\beta}}+g^{\alpha}, \tag{2}\] where \(\rho\) is the material density. In the moving Lagrangian frame, we represent the time derivative as \(\frac{d}{dt}\). At an individual material point, we describe the spatial coordinates using the notation \(x^{\alpha}\) for the component indexed as \(\alpha\), while the velocity at this point is denoted as \(v^{\alpha}\). \(\tau^{\alpha\beta}\) denotes the \(\alpha\) and \(\beta\) elements of the viscous stress: \[\tau^{\alpha\beta}=\mu_{f}\left(\frac{\partial v^{\alpha}}{\partial x^{\beta}}+\frac{\partial v^{\beta}}{\partial x^{\alpha}}\right), \tag{3}\] where \(\mu_{f}\) is the dynamic viscosity of the fluid. \(g^{\alpha}\) is the \(\alpha\) component of the body force. \(p\) is the pressure, and we derive the value of \(p\) by employing a weakly compressible equation of state model [10], formulated as follows in this study: \[p=p_{0}\left[\left(\frac{\rho}{\rho_{0}}\right)^{\gamma}-1\right], \tag{4}\] where \(\gamma=7\), \(\rho_{0}\) represents the reference material density and \(p_{0}=\frac{c_{0}^{2}\rho_{0}}{\gamma}\) with \(c_{0}\) representing the speed of sound. SPH, classified as a collocation method, divides the domain into a collection of material points, commonly known as particles, whether they are distributed regularly or irregularly. The partial differential equations associated with the conservation equations are converted into a set of equivalent ordinary differential equations. These transformed equations are then solved using one of the numerous numerical integration techniques. Field variables pertaining to a particle situated at the material point \(x_{i}^{\alpha}\) are determined by considering neighbouring particles positioned at \(x_{j}^{\alpha}\). This computation relies on a kernel function denoted as \(W(q,h)\), which serves as an approximation of the Dirac-delta function. The parameter \(q\) is defined as the distance between \(x_{i}^{\alpha}\) and \(x_{j}^{\alpha}\) normalized by the smoothing length \(h\), i.e., \(q=(||x_{i}^{\alpha}-x_{j}^{\alpha}||)/h\). In this work, we use the Wendland C2 kernel function [49]: \[W(q,h)=\alpha_{d}\begin{cases}(q+0.5)(2-q)^{4},&\text{if }q\leq 2\\ 0,&\text{otherwise}\end{cases} \tag{5}\] where \(\alpha_{d}=7/(32\pi h^{2})\) in 2D. To maintain simplicity, from this point onward we will denote the kernel function \(W(q,h)\) for a particle pair \(i\) and \(j\) as \(W_{ij}\), and its derivative as \(W_{ij,\beta}\). 
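Since the equation of state (4) and the kernel (5) are used throughout what follows, a minimal Python sketch of both may be helpful (the function names are ours, not from the paper). The kernel gradient \(W_{ij,\beta}\) follows from the radial derivative by the chain rule, \(W_{ij,\beta}=(dW/dr)\,x_{ij}^{\beta}/r\):

```python
import numpy as np

def wendland_c2_2d(r, h):
    """2D Wendland C2 kernel of Eq. (5): W = alpha_d*(q+0.5)*(2-q)^4 for q <= 2."""
    alpha_d = 7.0 / (32.0 * np.pi * h**2)
    q = r / h
    return np.where(q <= 2.0, alpha_d * (q + 0.5) * (2.0 - q)**4, 0.0)

def wendland_c2_2d_dr(r, h):
    """Radial derivative dW/dr; d/dq[(q+0.5)(2-q)^4] = -5*q*(2-q)^3."""
    alpha_d = 7.0 / (32.0 * np.pi * h**2)
    q = r / h
    return np.where(q <= 2.0, alpha_d * (-5.0 * q) * (2.0 - q)**3 / h, 0.0)

def pressure_wcsph(rho, rho0, c0, gamma=7.0):
    """Weakly compressible equation of state, Eq. (4)."""
    p0 = c0**2 * rho0 / gamma
    return p0 * ((rho / rho0)**gamma - 1.0)
```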
The discretized forms of the conservation equations are as follows: \[\frac{d\rho_{i}}{dt} =\sum_{j}m_{j}v_{ij}^{\beta}W_{ij,\beta}+\delta hc_{0}\sum_{j}2 \frac{m_{j}}{\rho_{j}}(\rho_{i}-\rho_{j})\frac{x_{ij}^{\beta}}{\|x_{i}^{\beta} -x_{j}^{\beta}\|^{2}+0.01h^{2}}W_{ij,\beta}, \tag{6}\] \[\frac{dv_{i}^{\alpha}}{dt} =\sum_{j}m_{j}\left(\frac{\tau_{i}^{\alpha\beta}}{\rho_{i}^{2}}+ \frac{\tau_{j}^{\alpha\beta}}{\rho_{j}^{2}}-\pi_{ij}\delta^{\alpha\beta} \right)W_{ij,\beta}-\sum_{j}m_{j}\left(\frac{p_{i}}{\rho_{i}^{2}}+\frac{p_{j} }{\rho_{j}^{2}}\right)W_{ij,\beta}+g^{\alpha}, \tag{7}\] where \(v_{ij}^{\beta}=v_{i}^{\beta}-v_{j}^{\beta}\). There are two distinct terms on the right-hand side of the equation 6. The first term signifies the SPH discretization of equation 1, while the second term introduces an additional numerical diffusion component referred to as \(\delta\)-SPH [31]. We use \(\delta=0.1\) in this paper. The \(\delta\)-SPH term ensures a smooth pressure field in WCSPH simulations. \(\pi_{ij}\) is the artificial correction term and maintains numerical stability in the presence of shock. The following form is used in this work [50]: \[\pi_{ij}=\begin{cases}\frac{-\beta_{1}\bar{c}_{ij}\mu_{ij}+\beta_{2}\mu_{ij}^ {2}}{\bar{\rho}_{ij}},&\text{if }v_{ij}^{\alpha}x_{ij}^{\alpha}\leq 0\\ 0,&\text{otherwise}\end{cases} \tag{8}\] where, \[\mu_{ij}=\frac{hv_{ij}^{\alpha}x_{ij}^{\alpha}}{\|x_{i}^{\alpha}-x_{j}^{ \alpha}\|^{2}+0.01h^{2}}, \tag{9}\] where \(x_{ij}^{\alpha}=x_{i}^{\alpha}-x_{j}^{\alpha}\), \(\bar{c}_{ij}\) represents the average sound speed calculated across particles \(i\) and \(j\) and \(\bar{\rho}_{ij}=0.5(\rho_{i}+\rho_{j})\). ### No-slip boundary condition To maintain the no-slip solid boundary condition along solid walls, we introduce boundary particles that contain extrapolated information about velocity and pressure [11]. We represent the solid wall boundaries using distinct boundary particles spanning \(2h\) (Fig. 1). The particles at the walls are assigned the same initial properties (inter-particle distance, particle mass and density) as the fluid particles. The field variables of the solid boundary wall particles are extrapolated from the adjacent fluid particles and remain fixed in their initial positions. These solid boundary wall particles participate like regular particles for field variable computation of fluid particles. The pressure values at the solid boundary wall particles are obtained using the information from neighbouring fluid particles in the following form: \[p_{w}=\frac{\sum_{f}p_{f}W(x_{wf})+(g^{\beta}-a_{w}^{\beta})\sum_{f}\rho_{f}x_{wf }^{\beta}W(x_{wf})}{\sum_{f}W(x_{wf})}, \tag{10}\] where the solid boundary wall particles are denoted by subscript \(w\) and \(f\) represents the fluid particles. \(a_{w}^{\beta}\) represents the \(\beta\) component of the specified acceleration of the solid boundary wall particles. The following equation is used to calculate the density of the solid boundary wall particles: \[\rho_{w}=\rho_{0}\left(\frac{p_{w}}{p_{0}}+1\right)^{\frac{1}{\gamma}}. \tag{11}\] ## 3 Pseudo-spring based SPH for solid deformation The conservation equations for material deformation due to external loading are \[\frac{d\rho}{dt}=-\rho\frac{\partial v^{\beta}}{\partial x^{\beta}}, \tag{12}\] \[\frac{dv^{\alpha}}{dt}=\frac{1}{\rho}\frac{\partial\sigma^{\alpha\beta}}{ \partial x^{\beta}}, \tag{13}\] where the Cauchy stress tensor's component corresponding to indices \(\alpha\) and \(\beta\) is symbolized as \(\sigma^{\alpha\beta}\). 
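Before moving on to the discretization of the solid phase, it may be useful to see how a single pair contribution to the fluid-phase update, equations (6)-(9), can be assembled. The sketch below is illustrative only: the names are ours, `grad_W` stands for the kernel gradient \(W_{ij,\beta}\), the constants \(\beta_{1}\), \(\beta_{2}\) and the use of \(c_{0}\) in place of the averaged sound speed \(\bar{c}_{ij}\) are placeholder assumptions, and the viscous-stress and gravity terms of (7) are omitted.

```python
import numpy as np

def fluid_pair_rates(x_i, x_j, v_i, v_j, rho_i, rho_j, p_i, p_j, m_j,
                     grad_W, h, c0, delta=0.1, beta1=1.0, beta2=2.0):
    """Pair (i, j) contribution to d(rho_i)/dt and dv_i/dt, following Eqs. (6)-(9)."""
    x_ij = x_i - x_j
    v_ij = v_i - v_j
    r2 = np.dot(x_ij, x_ij) + 0.01 * h**2

    # continuity with the delta-SPH diffusion term, Eq. (6)
    drho_dt = m_j * np.dot(v_ij, grad_W) \
        + delta * h * c0 * 2.0 * (m_j / rho_j) * (rho_i - rho_j) * np.dot(x_ij, grad_W) / r2

    # artificial viscosity, Eqs. (8)-(9), active only for approaching particles
    Pi_ij = 0.0
    if np.dot(v_ij, x_ij) <= 0.0:
        mu_ij = h * np.dot(v_ij, x_ij) / r2
        Pi_ij = (-beta1 * c0 * mu_ij + beta2 * mu_ij**2) / (0.5 * (rho_i + rho_j))

    # pressure and artificial-viscosity part of the momentum equation, Eq. (7)
    dv_dt = -m_j * (p_i / rho_i**2 + p_j / rho_j**2 + Pi_ij) * grad_W
    return drho_dt, dv_dt
```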
Figure 1: Diagram of the boundary treatment at the solid wall
The conservation equations (12) and (13) are discretized as: \[\frac{d\rho_{i}}{dt}=\sum_{j}m_{j}v_{ij}^{\beta}W_{ij,\beta}, \tag{14}\] \[\frac{dv_{i}^{\alpha}}{dt}=\sum_{j}m_{j}\left(\frac{\sigma_{i}^{\alpha\beta}}{\rho_{i}^{2}}+\frac{\sigma_{j}^{\alpha\beta}}{\rho_{j}^{2}}-\pi_{ij}\delta^{\alpha\beta}-P_{ij}^{a}\delta^{\alpha\beta}\right)W_{ij,\beta}, \tag{15}\] where \(P_{ij}^{a}\) is the artificial pressure correction term, whose purpose is to prevent the occurrence of tensile instability, in which particles tend to cluster together and form unrealistic numerical cracks. This adjustment introduces a short-range repulsive force using the given expression [35]: \[P_{ij}^{a}=\gamma\left(\frac{|P_{r_{i}}|}{\rho_{i}^{2}}+\frac{|P_{r_{j}}|}{\rho_{j}^{2}}\right)\left[\frac{W(d_{ij})}{W(\Delta p)}\right]^{\bar{n}}, \tag{16}\] where \(\gamma\) symbolizes the adjustment parameter, and \(\bar{n}\) is defined as \(W(0)/W(\Delta p)\), where \(\Delta p\) signifies the average particle spacing in the initial configuration. ### Constitutive model for elastic structure The Cauchy stress tensor, denoted as \(\sigma^{\alpha\beta}\), consists of two main components: the hydrostatic pressure \(p\) and the deviatoric stress \(S^{\alpha\beta}\) (\(\sigma^{\alpha\beta}=S^{\alpha\beta}-p\delta^{\alpha\beta}\)). We have employed a linear equation of state to calculate the hydrostatic pressure in deformable solids [51] as \(p=K\left(\frac{\rho}{\rho_{0}}-1\right)\), \(K\) being the bulk modulus. The rate of change of the deviatoric stress \(S^{\alpha\beta}\) is determined by the following equation: \[\dot{S}^{\alpha\beta}=2\mu\left(\dot{\epsilon}^{\alpha\beta}-\frac{1}{3}\delta^{\alpha\beta}\dot{\epsilon}^{\gamma\gamma}\right)+S^{\alpha\gamma}\omega^{\beta\gamma}+S^{\gamma\beta}\omega^{\alpha\gamma}, \tag{17}\] Here, \(\mu\) is the shear modulus. The above Jaumann stress rate is used to ensure frame independence. The strain rate tensor and spin tensor are represented as \(\dot{\epsilon}^{\alpha\beta}\) and \(\omega^{\alpha\beta}\), respectively. These tensors can be calculated as follows: \[\dot{\epsilon}^{\alpha\beta}=\frac{1}{2}\left(l^{\alpha\beta}+l^{\beta\alpha}\right),\qquad\omega^{\alpha\beta}=\frac{1}{2}\left(l^{\alpha\beta}-l^{\beta\alpha}\right), \tag{18}\] On the other hand, the velocity gradient tensor \(l^{\alpha\beta}\) is determined by: \[l_{i}^{\alpha\beta}=-\sum_{j}(v_{i}^{\alpha}-v_{j}^{\alpha})W_{ij,\beta}\frac{m_{j}}{\rho_{j}}. \tag{19}\] ### Definition of immediate neighbour particles for approximation The kernel functions employed in SPH exhibit their maximum values near the centre particle of their compact support. As one moves away from this centre particle, the magnitude of these functions rapidly diminishes. Consequently, particles located close to a reference particle denoted as \(i\) have a significantly greater impact on the approximation than those positioned near the outer boundary of the kernel support. Considering this behaviour, the field variables at particle \(i\) are estimated by summing the contributions solely from its immediate neighbouring particles in our work. Therefore, only those particles that can be directly connected to particle \(i\) through straight lines, without intersecting any other particles within the domain, are considered for the approximation [48; 18]. 
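Before continuing with the neighbour bookkeeping, the constitutive update (17)-(19) just described can be summarized in a few lines of code. The following is only a sketch (2D, with assumed variable names, not the author's implementation):

```python
import numpy as np

def deviatoric_stress_rate(S, L, mu):
    """Jaumann rate of the deviatoric stress, Eq. (17), for one particle in 2D.
    S : (2,2) deviatoric stress; L : (2,2) velocity gradient from Eq. (19)."""
    eps_dot = 0.5 * (L + L.T)                        # strain rate tensor, Eq. (18)
    omega   = 0.5 * (L - L.T)                        # spin tensor, Eq. (18)
    dev = eps_dot - np.trace(eps_dot) / 3.0 * np.eye(2)   # 1/3 factor as written in Eq. (17)
    # index form of Eq. (17): S_dot = 2*mu*dev + S @ omega^T + omega @ S
    return 2.0 * mu * dev + S @ omega.T + omega @ S

def hydrostatic_pressure(rho, rho0, K):
    """Linear equation of state for the solid, p = K*(rho/rho0 - 1)."""
    return K * (rho / rho0 - 1.0)
```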
When employing a rectangular particle distribution, this connectivity criterion implies that any interior particle is influenced by its eight nearest neighbours, a boundary particle is affected by five nearest neighbours, and a corner particle by three nearest neighbours. Here, a gradient correction method [52] is employed to mitigate the truncation errors emerging due to the incomplete or partial support domain. In SPH, the inclusion of a gradient correction method serves the purpose of attaining both zeroth-order consistency (\(C^{0}\) consistency near boundary particles) and first-order consistency (\(C^{1}\) consistency in interior particles). In this study, we substitute \(W_{ij,\beta}\) with \(\hat{W}_{ij,\beta}\) in order to accomplish this, with \(\hat{W}_{ij,\beta}\) being computed as follows: \[\hat{W}_{ij,\beta}=B_{i}^{\beta\alpha}W_{ij,\alpha}\ \ \text{with}\ \ \mathbf{B_{i}}=\mathbf{A_{i}^{-1}}\ \ \text{and}\ \ \ A_{i}^{\beta\alpha}=-\sum_{j}\frac{m_{j}}{\rho_{j}}x_{ij}^{\beta}W_{ij,\alpha}. \tag{20}\] We checked the error caused by this reduction in the number of interacting particles by approximating the function \(\sin\frac{\pi x}{2},\ 0<x<1\), and its derivatives. This exercise revealed that the use of only the immediate neighbouring particles introduced negligible error. ### Pseudo-spring analogy in SPH In our framework, it is presumed that the closest neighbouring particles are linked to the \(i^{th}\) particle through what we refer to as pseudo springs. These pseudo springs are introduced solely for the purpose of modelling interactions between connecting particles and do not impart any additional stiffness to the system. This technique is described in more detail in [48] and [18]. These pseudo springs are responsible for defining the level of interaction, denoted as \(f_{ij}\), between the connected particles. Specifically, when the material connecting particles \(i\) and \(j\) is undamaged, the value of \(f_{ij}\) is set to 1. However, it is assumed that these pseudo springs will fail when certain predetermined criteria are met, such as reaching a critical axial stress or strain along the \(ij\) line or some other relevant parameter. When such failure occurs, we set the value of \(f_{ij}\) to 0, and this change from 1 to 0 is considered permanent. Consequently, the presence of these permanently damaged or failed pseudo springs allows for tracking the crack path within the domain. To accommodate the evolving interactions between particles as a result of these pseudo springs, the kernel functions utilized in SPH, denoted as \(\hat{W}_{ij}\), along with their respective derivatives \(\hat{W}_{ij,\beta}\) used in approximating field variables, are replaced by modified versions incorporating the interaction level \(f_{ij}\). These modified functions are expressed as \(f_{ij}\hat{W}_{ij}\) and \(f_{ij}\hat{W}_{ij,\beta}\), reflecting the changing influence of particle connections due to the presence of damaged or failed pseudo springs. To visualize the path of a crack, we employ fringe plots of a damage variable denoted as \(D\). This variable is defined for a given particle, say particle \(i\), as the ratio of the count of pseudo springs for which \(f_{ij}=0\) (indicating failure) to the total count of initial pseudo springs connected to that particle. When \(D=1\), it signifies that all the pseudo springs linked to particle \(i\) have experienced permanent failure or damage, essentially representing the complete failure of particle \(i\). 
On the other hand, values of \(D\) within the range \(0<D<1\) imply that the material associated with particle \(i\) has suffered partial damage. It is important to note that a crack can propagate even when \(D_{i}<1\), underscoring the idea that damage can extend beyond individual particles, affecting their connections and interactions. ## 4 Coupling of WCSPH and Pseudo-spring based SPH This section discusses the coupling strategy of the WCSPH and pseudo-spring SPH. In this coupling methodology, we tackle the fluid flow problem by applying WCSPH, bolstered by incorporating \(\delta\)-SPH techniques, as comprehensively discussed in Section 2. Concurrently, our approach to solving structural deformation and failure leverages the pseudo-spring SPH, elaborated in Section 3. We employ particles with identical initial spacing for the discretisation to maintain a seamless and harmonious treatment of fluid and solid phases. An explicit contact force algorithm is essential for accurately simulating the complex multi-body interactions between the fluid and deformable structure. In this study, we employ a soft repulsive particle contact model [53]. This model incorporates a distance-dependent repulsive force, characterized by a finite magnitude, acting upon particles (both fluid and deformable structure) as they approach each other. This force is mathematically expressed as follows: \[F_{ij}^{\alpha}=0.01c^{2}\zeta f(\eta)\frac{x_{ij}^{\alpha}}{r_{ij}^{2}} \tag{21}\] \[\eta=\frac{r_{ij}}{0.75h_{ij}} \tag{22}\] \[\zeta=1-\frac{r_{ij}}{\Delta d},0<r_{ij}<\Delta p \tag{23}\] \[f(\eta)=\begin{cases}2/3,&\text{if }0<\eta\leq 2/3,\\ (2\eta-1.5\eta^{2}),&\text{if }2/3<\eta\leq 1,\\ 0.5(2-\eta^{2}),&\text{if }1<\eta\leq 2,\\ 0,&\text{otherwise}.\end{cases} \tag{24}\] The distance between two particles at the fluid-structure boundary, i.e., one fluid particle and another particle from the deformable structure, is denoted by \(r\). This softer repulsive force effectively mitigates non-physical particle penetration while simultaneously reducing the occurrence of pressure disturbances [53; 54]. While calculating the contact force and modelling the fluid-structure interactions, the SPH particles from the deformable structural domain interact with the fluid particles only through equations 21 - 24 and vice-versa. Finally, we add the interaction forces to the discrete fluid and structure momentum conservation equations 7 and 15. A predictor-corrector integration method is utilized to solve the discretized equations governing the fluid-structure interaction (FSI) problem, with the time step determined through the Courant-Friedrichs-Lewy condition. ## 5 Numerical examples Within this section, we present several numerical examples. We utilize the proposed coupled WCSPH-pseudo-spring SPH method to simulate scenarios involving free-surface flow, elastic solids undergoing significant deformation, and fluid-structure interactions with deformable structures. To assess the accuracy of our simulations, we compare the numerical results with analytical solutions and experimental and numerical data available in the existing literature. In the last example, we simulate the material damage and fracture in the structure due to water wave interaction. For all the simulations, we utilize the WCSPH technique in conjunction with \(\delta\)-SPH correction to handle the fluid phase, while we adopt the pseudo-spring SPH approach to represent the deformable structure. 
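Before turning to the numerical examples, the soft repulsive contact force of equations (21)-(24) can be condensed into a short routine. The sketch below uses assumed names; it takes the cutoff in (23) as \(\Delta p\), uses a single smoothing length in place of \(h_{ij}\), and writes the last nonzero branch of (24) as \(0.5(2-\eta)^{2}\), the usual form of this repulsive function, which keeps \(f\) continuous at \(\eta=2\); adjust if a different form is intended.

```python
import numpy as np

def contact_force(x_f, x_s, c, h, dp):
    """Soft repulsive fluid-structure contact force on the fluid particle, Eqs. (21)-(24).
    x_f, x_s : positions of the fluid and structure particle; c : sound speed;
    h : smoothing length; dp : initial particle spacing (cutoff of Eq. (23))."""
    x_ij = x_f - x_s
    r = np.linalg.norm(x_ij)
    if r <= 0.0 or r >= dp:
        return np.zeros_like(x_ij)
    eta = r / (0.75 * h)                 # Eq. (22)
    zeta = 1.0 - r / dp                  # Eq. (23)
    if eta <= 2.0 / 3.0:                 # Eq. (24)
        f_eta = 2.0 / 3.0
    elif eta <= 1.0:
        f_eta = 2.0 * eta - 1.5 * eta**2
    elif eta <= 2.0:
        f_eta = 0.5 * (2.0 - eta)**2     # assumed (2 - eta)^2 form of the last branch
    else:
        f_eta = 0.0
    return 0.01 * c**2 * zeta * f_eta * x_ij / r**2   # Eq. (21)
```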
### Dam break: collapse of a water column The phenomenon involving the collapse of a water column was initially explored in [55] and has since been extensively examined through numerical simulations. Additionally, an analytical investigation of this problem was conducted in [56]. These studies have become the benchmark tests routinely utilized to validate various computational frameworks simulating the free surface flow of water. The test setup is shown in Fig. 2 with \(W=H=0.057\) m and \(L=4H\). The water density is considered to be \(1000\ Kg/m^{3}\). The dynamic viscosity coefficient (\(\mu_{f}\)) is \(0.05\) Pa s. Three sizes of inter-particle spacing are used in the simulations with \(\Delta p=0.00057\) m, \(0.0014\) m and \(0.0029\) m. The rigid wall and water interaction is modelled through the boundary condition described in section 2.1. The position of the water-front toe measured from the left wall is shown in Fig. 2(a) at different time steps with the inter-particle resolutions and compared with the experimental results [55]. The non-dimensional time coefficient is calculated as \(\tau=\frac{1}{\sqrt{H/g}}\) with \(g\) being the gravity force and the non-dimensional distance is \(\frac{x}{H}\) with x being the current position of the water-front toe measured from the left wall. It can be observed that the present simulations agree well with the experimental result. The positional time history of the water-front toe is compared with other results available in the open literature in Fig. 2(b) with \(\Delta p=0.00057\) m. In Fig. 8, we present the contours illustrating the velocity and pressure distribution of the water at different time steps. The simulation effectively captures the behaviour of free-surface flow influenced by the gravitational force of the dam. The simulation notably depicts the dam breaking, leading to water flow along the dry bed, culminating in an impact against a vertical rigid wall. Figure 2: Setup for the dam break test Subsequently, the water rises, falls, and overturns backwards onto the underlying water. These flow patterns and pressure distributions closely mirror findings from previous research [57; 22; 58]. Figure 4: Contours of velocity magnitude and pressure distribution at 0.12, 0.19 and 0.25 s in dam break test Figure 3: Time history of the water-front toe (measured from the left wall) and the effect of inter-particle distance on the time history of the water-front toe in the dam break test (\(\tau=\frac{t}{\sqrt{H/g}}\)). ### Oscillation of beam In this example, we show through a transverse oscillation of a beam that the present formulation is able to capture the deformation of deformable solids accurately. We consider an elastic cantilever beam (Fig. 5) of length \(L=10\) m and thickness \(d=1\) m. The frequency of the oscillation is computed as \(\omega^{2}=\frac{Ed^{2}k^{4}}{12\rho(1-\nu^{2})}\)[36]; where, \(\rho=7850kg/m^{3}\) is the material density, \(E=211GPa\) is the elastic modulus and \(\nu=0.3\) is the Poisson's ratio. Wave number \(k\) is computed from the condition \(cos(kL)sin(kL)=-1\); for the first mode, \(kL=1.875\). Initially, the beam is set in motion with the following velocity function. \[\frac{v_{y}}{c_{0}}=V_{f}\frac{M\left\{\cos(kx)-\cosh(kx)\right\}-N\left\{\sin (kx)-\sinh(kx)\right\}}{Q}. \tag{25}\] where, \(c_{0}\) is the sound speed in the medium, \(V_{f}\) is the transverse velocity set as \(V_{f}=0.05\), \(M=\sin(kL)+\sinh(kL)\), \(N=\cos(kL)+\cosh(kL)\) and \(Q=2(\cos(kL)\sinh(kL)-\sin(kL)\cosh(kL))\). 
Simulations are performed with inter-particle spacing \(\Delta p=0.05\) m and \(\frac{h}{\Delta p}=1.5\). \(\gamma=0.3\) has been used to suppress the tensile instability in the present problem. The numerically computed time periods differ from the theoretical time period (0.114 s) by 7.2% (in the case of SPH with \(\gamma=0.3\), the time period found to be 0.122 s). It can be concluded that the present SPH formulation with \(\gamma=0.3\) yields results close to the analytical solutions. ### Dam break - large deformation of an elastic gate The investigation into the deformation of an elastic gate due to water pressure was undertaken in [21], employing both experimental and numerical methods. This problem is further modelled using different numerical techniques in [22; 58; 23]. The set-up of the dam break flow through an elastic rubber gate is shown in Fig. 6. Initially, the water column is at rest, measuring 0.14 m in height (\(H\)) and 0.1 m in width (\(W\)). On the other hand, the elastic rubber gate has dimensions of 0.079 m in length (\(L\)) and 0.005 m in thickness (\(D\)). In this example, the water density is 1000 \(Kg/m^{3}\), and the dynamic viscosity coefficient (\(\mu_{f}\)) is taken as 0.05 Pa s. The density of the elastic rubber gate is 1100 \(Kg/m^{3}\). The elastic modulus is 12 MPa, and the Poisson's ratio is 0.45. The initial computational domain is discretized with \(\Delta p=0.0008\) m. The water column applies force to the deformable rubber gate securely clamped to a rigid wall from above (see Fig. 6). After releasing the elastic rubber gate, the water initiates contact with the gate and exits the tank through the gap between the gate and the unyielding bottom wall. The contact between the elastic rubber gate and the water is modelled through the soft repulsive particle contact model discussed in section 4, whereas the interaction between water and rigid wall is modelled using the boundary condition described in section 2.1. Fig. 7 Figure 5: Schematic sketch of the beam under transverse oscillation evolution of horizontal and vertical displacements observed at the free end of the gate. Our results have been compared with the experimental data [21] and other numerical results [21; 22]. The process unfolds in the following manner: initially, the water's pressure pushes the elastic gate aside, allowing the water to flow out. During the early stages, the horizontal displacement of the elastic gate increases rapidly. As the water depth in the enclosure decreases, the pressure force acting on the elastic gate diminishes, causing the gate to return towards its initial position slowly. Overall, our simulation closely aligns with the experimental and numerical data, indicating a significant level of agreement. We achieve a more favourable agreement in the early stage of the gate opening, but our simulation tends to underestimate the displacements slightly as the gate progresses toward closure. The simulation frames with the present framework at specific time points are presented in Fig. 8 and compared with the experimental snapshots. It can be noted that the FSI coupling process with nonlinear characteristics is effectively replicated. The pressure distribution is also shown, and a consistent pressure distribution is observed. The simulation as a whole maintains stability, with no occurrences of instability or simulation failure observed. 
The maximum stress is observed on Figure 6: Setup of the dam break flow through an elastic gate Figure 7: Comparison of time histories of the horizontal and vertical displacements of the free end of the gate the inner side of the anchored gate's end, where the maximum bending moment is concentrated. (see Fig. 9). ### Dam break flow impacting on flexible obstacle In this segment, we undertake another numerical simulation to explore the dynamic interaction of a deformable structure in a fluid-structure interaction (FSI) scenario. Specifically, we simulate the scenario where a vertical water column collapses and strikes a flexible elastic wall, investigating the resulting complex dynamics. The system's configuration is shown in Fig. 10. The water column in this setup exhibits specific geometric parameters: its width, denoted as \(W\), measures 0.146 m, while its height, represented as \(H\), measures 0.292 m. The gap between the two vertical walls, which serves as the spatial constraint for the column, amounts to 4 times the width, or 0.584 m. A deformable elastic plate occupies the central position within this confined space, fixed at its lower end. The plate is positioned at a horizontal distance of \(L\), equivalent to \(W\), from the approaching water column. The elastic plate has distinct dimensions, with a thickness denoted as \(a\) measuring 0.012 m and a height denoted as \(b\) measuring 0.08 m. The computational domain is discretized with \(\Delta p=0.0025\) m. The initial state of the experiment involves the water column being abruptly released, setting in motion its trajectory towards a collision with the stationary elastic plate. The density values for the water and the deformable elastic obstacle are initially set to be 1000 \(Kg/m^{3}\) and 2500 \(Kg/m^{3}\). Furthermore, the deformable elastic obstacle is characterized by an elastic modulus \(E\) of \(10^{6}\)\(N/m^{2}\) and a Poisson ratio \(\nu\) of 0. Fig. 11 comprehensively depicts various aspects of the scenario, showcasing the evolution of the water pressure, free-surface profile, and the deformation of the elastic obstacle at distinct time intervals. At the outset, the water flows freely, resembling a typical dam break scenario, with the flow front exhibiting low pressure. However, as the water collides with the elastic obstacle, a substantial surge in pressure becomes apparent, generating significant impact forces. This leads to a considerable deformation in the elastic obstacle, instigating notable changes in the dynamics of the flowing water. As the water progresses over the wall, the pressure gradually subsides, eventually reaching a state of hydrostatic equilibrium. Concurrently, the elastic wall rebounds from its deformed state. Notably, upon the initial impact of water on the tank's rigid wall, a localized high-pressure zone re-emerges. This method effectively captures the intricate interplay between the fluid and the deformable structure, resulting in a qualitative agreement between the numerical present simulations and the other observations from literature [22; 59; 42]. The stress distribution in the deformable obstacle is also shown in Fig. 11. Following the impact around 0.2 s, the upstream face of the wall experiences tension, while the opposite face is subjected to compression. The maximum stress can be found near the fixed support, consistent with other literature findings. Also, there is no instability observed in the obstacle. 
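To put the reported pressure surge and subsequent relaxation towards hydrostatic conditions into perspective, a back-of-the-envelope estimate can be made from classical dam-break theory. The numbers below are not taken from the simulation; the Ritter front speed \(2\sqrt{gH}\) is an idealized shallow-water value used only as a scale.

```python
import math

H = 0.292     # initial water column height [m]
rho = 1000.0  # water density [kg/m^3]
g = 9.81      # gravitational acceleration [m/s^2]

u_front = 2.0 * math.sqrt(g * H)    # idealized (Ritter) dam-break front speed
p_impact = 0.5 * rho * u_front**2   # stagnation-pressure scale at first impact
p_static = rho * g * H              # hydrostatic pressure scale after settling

print(f"front speed       ~ {u_front:.1f} m/s")         # ~3.4 m/s
print(f"impact pressure   ~ {p_impact / 1e3:.1f} kPa")  # ~5.7 kPa
print(f"hydrostatic scale ~ {p_static / 1e3:.1f} kPa")  # ~2.9 kPa
```

The factor of roughly two between the impact and hydrostatic scales is consistent with the qualitative picture above: a pronounced pressure peak at first contact that decays towards hydrostatic equilibrium once the flow settles.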
To further confirm the accuracy of the SPH-PD method, we analyzed the deflection process occurring at the upper-left corner of the elastic plate over time. In Fig. 12, we present the variations in horizontal displacement observed at the upper-left corner of the elastic wall. Furthermore, we include numerical results from previous literature [22; 58; 60; 61], to enable a comparative assessment. It is evident from the results that the current approach effectively anticipates and replicates the overall response of the obstacle when subjected to hydrodynamic forces induced by the collapsing water column. Figure 8: Qualitive comparison between experimental results [21] and present work at different time steps Figure 10: Setup for water impact on elastic obstacle Figure 9: Pressure and stress distribution in the water and elastic gate Figure 11: Pressure and stress distribution at different time steps for water impact on elastic obstacle ### Damage and fracture of an elastic obstacle due to water impact We examine the interaction between a brittle obstacle and water. The specific geometric arrangement for this case is detailed in Fig. 13. Here, the initial crack/ notch is made by deleting the particles of a row. The obstacle shown in the diagram has an initial crack measuring \(a=0.008\) m in length, positioned and \(l=0.025\) m above the ground. Simultaneously, water is in a state of descent due to the gravitational force. The other dimensions are as follows: \(H=0.3\) m, \(W=0.15\) m, \(L=W\), \(b=0.09\) m and \(d=0.03\) m. The experiment begins with the sudden release of the water column, initiating its path towards a collision with the elastic obstacle. Initially, the density of the water and the flexible elastic barrier are established at \(1000\)\(Kg/m^{3}\) and \(2500\)\(Kg/m^{3}\), Figure 12: Comparison of time histories of the deflection of the free end of the elastic obstacle Figure 13: Setup for water impact on an elastic obstacle with an initial notch respectively. Additionally, the deformable elastic obstacle exhibits an elastic modulus denoted as \(E\) with a value of \(10^{6}\)\(N/m^{2}\), and a Poisson ratio represented as \(\nu\) with a value of \(0\). The fracture strain, \(\epsilon_{f}\), is set at \(0.05\). Therefore, the interaction between a pair of particles \(i\) and \(j\) is stopped when the strain in the connecting pseudo-spring exceeds the value of \(0.05\) (\(f_{ij}=0\)\(if\)\(\epsilon_{f}>0.05\)). The failure process is assumed to be permanent in this simulation. Fig. 14 illustrates the progressive changes in the water's free surface, pressure contour patterns, and crack propagation throughout the simulation. As observed, when the fluid initially interacts with the obstacle, there is a significant surge in pressure within the FSI zone. Subsequently, due to this heightened pressure, the obstacle deforms, and the strains in the particles increase. After a specific duration (\(t\approx 17.5\) s), the accumulated strain surpasses the material's fracture strain threshold \(\epsilon_{f}=0.05\), leading to the initiation of crack propagation of the obstacle and its eventual detachment from the weak area, i.e., the preexisting crack tip. It's worth noting that following the complete detachment of the upper section of the obstacle, the lower part exerts a jet effect on the fluid, causing the water to follow a predefined path. This phenomenon is depicted in the final snapshots of Fig. 14. Similar observations were made in [42]. 
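The breakage rule used above, in which the interaction coefficient \(f_{ij}\) is set to zero once the strain in the connecting pseudo-spring exceeds the fracture strain \(\epsilon_{f}=0.05\) and the failure is permanent, can be summarized in a short sketch. The array layout is illustrative; only the criterion itself is taken from the text.

```python
import numpy as np

EPS_F = 0.05  # fracture strain quoted above

def update_pseudo_springs(strain_ij, broken_ij):
    """Update pseudo-spring interaction coefficients for all particle pairs.

    strain_ij : current strain in each pseudo-spring connecting particles i and j
    broken_ij : boolean array marking springs that have already failed
    Returns the interaction coefficients f_ij (1 = intact, 0 = broken) and the
    updated failure flags.
    """
    # A spring fails as soon as its strain exceeds the fracture strain ...
    broken_ij = broken_ij | (strain_ij > EPS_F)
    # ... and the failure is permanent: broken pairs never interact again.
    f_ij = np.where(broken_ij, 0.0, 1.0)
    return f_ij, broken_ij
```

Once \(f_{ij}\) is zero the pair no longer transfers load, so damage localizes at the pre-existing notch without any visibility criterion or particle splitting, as noted in the conclusions below.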
The overall process of Fluid-Structure Interaction (FSI) and fracture is consistent when a fine discretization, i.e., inter-particle spacing (\(\Delta p\)), is used. It may be observed from Fig. 15 that fine resolution yields a better representation of crack initiation, propagation and fracturing process. In order to highlight the efficacy of the pseudo-spring analogy in modelling material damage and subsequent cracking, we perform a simulation of the same set-up without the failure strain (i.e., even if the strains in the pseudo-springs are greater than \(\epsilon_{f}\), the interaction coefficient is kept same \(f_{ij}=1\)). It can be seen from Fig. 16 that the crack does not initiate, and the elastic obstacle remains undamaged, i.e., does not suffer any failure. ## 6 Conclusion A computational framework for modelling large deformation and material damage and failure is proposed for fluid-structure interaction problems. In the integrated numerical approach, we utilize a two-pronged strategy. Firstly, the fluid phase is simulated by employing the WCSPH method, which includes a density diffusion term to enhance accuracy. The interaction between the fluid phase and the rigid walls is modelled through specialized boundary particles designed to extrapolate relevant variables. Secondly, for the solid phase, we implement a pseudo-spring analogy in SPH, where the immediate neighbour particles are used for approximation. The pseudo-springs help in modelling the material damage and subsequent crack propagation without requiring any computationally intensive processes such as visibility criteria, particle splitting, etc. The interaction between the moving fluid phase and the deformable solid structure/ obstacle is modelled by a soft repulsive particle contact model. This approach results in the establishment of a cohesive framework for effectively managing rigid wall boundaries and fluid-structure interactions. The numerical results obtained in our study have been subjected to thorough comparisons with analytical solutions, experimental data, and other existing numerical findings from the literature. Our findings demonstrate the capability of accurately modelling free surface flow and dynamic elastic problems without encountering instability. The validation of the approach was carried out through the examination of different FSI scenarios involving deformable structures. Our numer Figure 14: Pressure and Damage distribution at different time steps for water impact on an elastic obstacle (\(\Delta p\) = 0.0025 m) Figure 16: Pressure and Damage distribution at different time steps for water impact on an elastic obstacle without considering damage and fracturing (\(\Delta p=0.0025\) m) Figure 15: Pressure and Damage distribution at different time steps for water impact on an elastic obstacle (\(\Delta p=0.001\) m) ical outcomes exhibit good agreement with existing experimental, numerical and analytical data from the literature, reaffirming the reliability of our method. We have also demonstrated in the last numerical example that the proposed framework is capable of modelling material damage and subsequent fracture under extreme hydrodynamic events. While our proposed method has shown promising accuracy in preliminary assessments, it is important to acknowledge the limited availability of experiments in the existing literature that specifically address FSI problems involving deformable structures exhibiting material damage and fracture. 
To ensure a comprehensive evaluation of the precision and reliability of our approach, further investigation is necessary. To this end, we plan to conduct dedicated laboratory experiments as part of our validation process in future work. These experiments will provide real-world data and insights with which the accuracy of the method can be refined for FSI scenarios involving deformable structures undergoing material damage and fracture. Nevertheless, the proposed framework has shown good stability and efficiency compared with existing numerical schemes, and it holds the potential to become a widely employed method for modelling finite deformation, material damage and failure in FSI problems.

## 7 Acknowledgments

The author acknowledges the computational support provided as part of the IIT Delhi NFS grant, on which the simulations have been run.
2309.07227
Beyond the Background: Gravitational Wave Anisotropy and Continuous Waves from Supermassive Black Hole Binaries
Pulsar timing arrays have found evidence for a low-frequency gravitational wave background (GWB). Assuming the GWB is produced by supermassive black hole binaries (SMBHBs), the next gravitational wave (GW) signals astronomers anticipate are Continuous Waves (CWs) from single SMBHBs and their associated GWB anisotropy. The prospects for detecting CWs and anisotropy are highly dependent on the astrophysics of SMBHB populations. Thus, information from single sources can break degeneracies in astrophysical models and place much more stringent constraints than the GWB alone. We simulate and evolve SMBHB populations, model their GWs, and calculate their anisotropy and detectability. We investigate how varying components of our semi-analytic model, including the galaxy stellar mass function, the SMBH--host galaxy relation ($M_\mathrm{BH}$--$M_\mathrm{bulge}$), and the binary evolution prescription impact the expected detections. The CW occurrence rate is greatest for few total binaries, high SMBHB masses, large scatter in $M_\mathrm{BH}$--$M_\mathrm{bulge}$, and long hardening times. The occurrence rate depends most on the binary evolution parameters, implying that CWs offer a novel avenue to probe binary evolution. The most detectable CW sources are in the lowest frequency bin for a 16.03-year PTA, have masses from $\sim\!\!10^9-10^{10}\mathrm{M}_\odot$, and are $\sim\!\!1$ Gpc away. The level of anisotropy increases with frequency, with the angular power spectrum over multipole modes $\ell$ varying in low-frequency $C_{\ell>0}/C_0$ from $\sim\!\!5\times 10^{-3}$ to $\sim\!\!2\times10^{-1}$, depending on the model; typical values are near current upper limits. Observing this anisotropy would support SMBHB models for the GWB over cosmological models, which tend to be isotropic.
Emiko C. Gardiner, Luke Zoltan Kelley, Anna-Malin Lemke, Andrea Mitridate
2023-09-13T18:01:02Z
http://arxiv.org/abs/2309.07227v3
Beyond the Background: Gravitational Wave Anisotropy and Continuous Waves from Supermassive Black Hole Binaries ###### Abstract Pulsar timing arrays have found evidence for a low-frequency gravitational wave background (GWB). Assuming the GWB is produced by supermassive black hole binaries (SMBHBs), the next gravitational wave (GW) signals astronomers anticipate are Continuous Waves (CWs) from single SMBHBs and their associated GWB anisotropy. The prospects for detecting CWs and anisotropy are highly dependent on the astrophysics of SMBHB populations. Thus, information from single sources can break degeneracies in astrophysical models and place much more stringent constraints than the GWB alone. We simulate and evolve SMBHB populations, model their GWs, and calculate their anisotropy and detectability. We investigate how varying components of our semi-analytic model, including the galaxy stellar mass function, the SMBH-host galaxy relation (\(M_{\rm BH}\)-\(M_{\rm bulge}\)), and the binary evolution prescription impact the expected detections. The CW occurrence rate is greatest for few total binaries, high SMBHB masses, large scatter in \(M_{\rm BH}\)-\(M_{\rm bulge}\), and long hardening times. The occurrence rate depends most on the binary evolution parameters, implying that CWs offer a novel avenue to probe binary evolution. The most detectable CW sources are in the lowest frequency bin for a 16.03-year PTA, have masses from \(\sim\)\(10^{9}-10^{10}\)M\({}_{\odot}\), and are \(\sim\)1 Gpc away. The level of anisotropy increases with frequency, with the angular power spectrum over multipole modes \(\ell\) varying in low-frequency \(C_{\ell>0}/C_{0}\) from \(\sim\)\(5\times 10^{-3}\) to \(\sim\)\(2\times 10^{-1}\), depending on the model; typical values are near current upper limits. Observing this anisotropy would support SMBHB models for the GWB over cosmological models, which tend to be isotropic. Gravitational Waves (678) -- Supermassive black holes (1663) -- Galaxies (573) + Footnote †: journal: DESY-23-132 ## 1 Introduction Supermassive black hole binaries (SMBHBs) are predicted to result from galaxy mergers. Two galaxies, each hosting a central supermassive black hole (SMBH) (Richstone et al., 1998), merge as predicted by hierarchical structure formation (Lacey and Cole, 1993). Then, their SMBHs sink to the center of the merged galaxies via dynamical friction, become gravitationally bound, and form a binary with \(\sim\)pc separation. Stellar scattering and circumbinary disk torques harden the binary to small separations (\(\sim\)\(10^{-2}\) pc) (Begelman et al., 1980; Kelley et al., 2017) beyond which they evolve primarily by emitting gravitational waves. The superposition of these continuous waves (CWs) from many SMBHBs across the universe creates an incoherent stochastic gravitational wave background (GWB) (Burkepolaor et al., 2019), like that for which pulsar timing arrays (PTAs) have recently found strong evidence (Agazie et al., 2023; Antoniadis et al., 2023; Reardon et al., 2023; Xu et al., 2023). In the likely scenario that the PTA-observed GWB is produced by SMBHBs (Agazie et al., 2023), CWs from individual, loud SMBHBs are the next highly-anticipated GW signal PTAs could detect. PTA searches have yet to find a CW source (Agazie et al., 2023; Antoniadis et al., 2023), but simulation-based predictions suggest single source CWs could be detected within a few years of the GWB (Kelley et al., 2018). 
These single sources will likely brighten certain regions of the gravitational wave sky, inducing anisotropy in the background (Pol et al., 2022) before they can be individually resolved. Cosmological models (cosmic inflation, phase transitions, cosmic strings, domain walls, etc.) for the GWB have also been suggested (Afzal et al., 2023). These are more likely to be isotropic. Thus, measuring anisotropy in the GWB would serve as compelling evidence for SMBHBs being the source. This anisotropy has been predicted using analytic (Mingarelli et al., 2013; Hotinli et al., 2019; Sato-Polito and Kamionkowski, 2023), semi-analytic (Mingarelli et al., 2017), and simulation-based (Taylor and Gair, 2013; Taylor et al., 2020; Becsy et al., 2022; Agazie et al., 2023) methods. We conduct the first study into what information content GW anisotropy contains about astrophysical models. Further, this paper offers the first look at how single-source detection statistics and anisotropy are related. Past works have predicted the amplitude and shape of the GWB using host galaxy populations generated from galaxy formation simulations (Kelley et al., 2017; Becsy et al., 2022; Sykes et al., 2022; Becsy et al., 2023), dark matter (DM) merger trees (Izquierdo-Villalba et al., 2022), galaxy catalogs (Mingarelli et al., 2017), or semi-analytic models (Sesana et al., 2008; Agazie et al., 2023), and others have predicted single-source CWs using galaxy simulations (Kelley et al., 2018), DM merger trees (Sesana et al., 2009), and semi-analytic SMBHB assembly models (Rosado et al., 2015). Such studies have historically focused on specific hardening processes (Kelley et al., 2018; Siwek et al., 2020) or accretion scenarios and SMBH-host galaxy relations (Sesana et al., 2005). To advance this field, we predict the parametric dependence of the likelihood and nature of low-frequency CW signals on the most complete SMBHB assembly _and_ evolution models to date. This is the first systematic investigation of model parameter space and the information content of single CW sources. We generate SMBHB populations using holodeck (Kelley et al. in prep.) as explained in SS2.1, extract the loudest single sources, and calculate gravitational waves and binary properties of both the background and single sources as described in SS2.2. We present the resulting characteristic strain spectra, total masses, and final comoving distances, for variations on several model components, including the galaxy stellar mass function (GSMF) (SS3.1.1), the SMBH-host relations (SS3.1.2), and the binary evolution (SS3.1.3). Then we calculate single source detection statistics for simulated PTAs using the methods described in SS2.3. The resulting single source occurrence rates and predicted properties (mass, distance, and frequency) are given in SS3.2.1 and SS3.2.2, respectively. Finally, we calculate the GWB anisotropy from these SMBHB populations as described in SS2.4, with the resulting angular power spectrum presented in SS3.3. We discuss caveats to our model and future steps in SS4 and summarize our key findings in SS5. ## 2 Methods ### Model for SMBHB Populations Using holodeck(Kelley et al. in prep.) 
we assemble a population of galaxy mergers with comoving volumetric number density \(\eta_{\rm gal-gal}\equiv dN_{\rm gal-gal}/dV_{c}\)(Chen et al., 2019), \[\frac{\partial^{3}\eta_{\rm gal-gal}}{\partial m_{\star 1}\,\partial q_{ \star}\,\partial z}=\frac{\Psi(m_{\star 1},z^{\prime})}{m_{\star 1}\,{\rm ln}(10)} \,\frac{P(m_{\star 1},q_{\star},z^{\prime})}{T_{\rm gal-gal}(m_{\star 1},q_{ \star},z^{\prime})}\frac{\partial t}{\partial z^{\prime}}. \tag{1}\] We direct the reader to Agazie et al. (2023) for a full description of the semi-analytic model components, including the galaxy pair fraction \(P\) and galaxy merger time \(T_{\rm gal-gal}\), both of which are power-law functions of galaxy stellar mass \(m_{\star 1}\), galaxy mass ratio \(q_{\star}\) and initial redshift \(z^{\prime}\). The components of the model that we investigate in this paper are: (1) the normalization \(\psi_{0}\) and characteristic mass \(m_{\psi,0}\) of the galaxy stellar mass function (GSMF) \(\Psi\), (2) the dimensionless mass normalization \(\mu\) and intrinsic scatter \(\epsilon_{\mu}\) of the SMBH mass-bulge mass (\(M_{\rm BH}\)-\(M_{\rm bulge}\)) relation, and (3) the binary lifetime \(\tau_{f}\) and 'inner regime' power-law index \(\nu_{\rm inner}\) of the phenomenological hardening model, each of which are summarized below. We study the effects of each of these six parameters in isolation, by independently varying one parameter across the range listed in Table 1 while fixing the five other parameters to the fiducial values listed there and all other model components to the fiducial values in Agazie et al. (2023) Table B1. **GSMF** -- The GSMF is the number density of galaxies per decade of stellar mass that determines the initial distribution of galaxies. We represent the GSMF as a single Schechter function (Schechter, 1976), \[\Psi(m_{\star 1},z)=\ln(10)\Psi_{0}\cdot\left[\frac{m_{\star 1}}{M_{\psi}} \right]^{\alpha_{\psi}}\exp\left(-\frac{m_{\star 1}}{M_{\psi}}\right), \tag{2}\] where \(m_{\star 1}\) is the primary galaxy stellar mass; \(\Psi_{0}\), \(M_{\psi}\), and \(\alpha_{\psi}\) are phenomenological functions parameterized over redshift as in Chen et al. (2019) such that \[\begin{split}\log_{10}\left(\Psi_{0}/{\rm Mpc}^{-3}\right)& =\ \ \ \ \psi_{0}+\psi_{z}\cdot z,\\ \log_{10}\left(M_{\psi}/{\rm M}_{\rm\odot}\right)& =\ \ \ \mu_{\psi,0}+m_{\psi_{z}}\cdot z,\\ \alpha_{\psi}&=\ 1+\alpha_{\psi,0}+\alpha_{\psi,z}\cdot z. \end{split} \tag{3}\] **SMBH-Host Relation** -- The SMBH masses are related to their host galaxies' bulge masses \(M_{\rm bulge}\) by assuming an \(M_{\rm BH}\)-\(M_{\rm bulge}\) relation defined by dimensionless mass normalization \(\log_{10}\mu\) and power-law index \(\alpha_{\mu}\), in addition to a random normal distribution of \(\log_{10}\) scatter \({\cal N}\!\left(0,\epsilon_{\mu}\right)\) with stan \begin{table} \begin{tabular}{c|c|c|c} Model Component & Parameter & Range & Fiduciala \\ \hline GSMF & \(\psi_{0}\) & [-3.5, -1.5] & -2.50 \\ & \(m_{g,0}\) & [10.5, 12.5] & 11.50 \\ \hline \(M_{\rm BH}\)–\(M_{\rm bulge}\) & \(\mu\) & [7.6, 9.0] & 8.30 \\ & \(\epsilon_{\mu}\) & [0.0, 0.9] & 0.45 \\ \hline phenom \(\left(\frac{dg}{dt}\right)\) & \(\tau_{f}\) & [0.1, 11.0] & 5.55 \\ & \(\nu_{\rm inner}\) & [-1.5, 0.0] & -0.75 \\ \end{tabular} \end{table} Table 1: Astrophysical parameters of the model components investigated in this paper, while the rest remain fixed to the fiducial values in Agazie et al. (2023) Table B1. 
dard deviation \(\epsilon_{\mu}\): \[\log_{10}(M_{\rm BH}/{\rm M}_{\odot})=\mu+\alpha_{\mu}\log_{10}\!\left(\frac{M_{ \rm bulge}}{10^{11}\,{\rm M}_{\odot}}\right)+\mathcal{N}\!\left(0,\epsilon_{ \mu}\right)\!. \tag{4}\] The bulge mass is calculated as a fraction of the total galaxy stellar mass \(=f_{\star,\rm bulge}\cdot m_{\star 1}\), with \(f_{\star,\rm bulge}=0.615\) based on empirical observations from Lang et al. (2014) and Bluck et al. (2014). Applying the \(M_{\rm BH}\)-\(M_{\rm bulge}\) relation in Eq. 4, the number density of merged galaxies in Eq. 1 translates to the number density of SMBHBs by \[\frac{\partial^{3}\eta}{\partial M\,\partial q\,\partial z}=\frac{\partial^{3 }\eta_{\rm gal-gal}}{\partial m_{\star 1}\,\partial q_{\star}\,\partial z}\,\frac{ \partial m_{\star 1}}{\partial M}\,\frac{\partial q_{\star}}{\partial q}. \tag{5}\] **Hardening** --To model gravitational waves detectable by PTAs, the population of SMBHBs must be evolved in time and separation to rest frame orbital frequencies \(f_{p}\) corresponding to GW frequencies of a few nHz. This binary hardening is described in terms of a rate of decreasing separation, \(da/dt=(da/dt)_{\rm gw}+(da/dt)_{\rm phenom}\), i.e. the sum of a GW component \[\frac{da}{dt}\bigg{|}_{\rm gw}=-\frac{64\,G^{3}}{5\,c^{5}}\frac{m_{1}\,m_{2}\,M }{a^{3}}, \tag{6}\] and a phenomenological component, \[\frac{da}{dt}\bigg{|}_{\rm phenom}=H_{a}\cdot\left(\frac{a}{a_{c}}\right)^{1- r_{\rm inner}}\cdot\left(1+\frac{a}{a_{c}}\right)^{r_{\rm inner}-r_{\rm inner}}. \tag{7}\] A double power law allows for distinct asymptotic behavior in the small-separation 'inner' regime and large-separation 'outer' regime, distinguished by a critical break separation \(a_{c}\). \(H_{a}\) is a normalization factor, calibrated for every binary such that it has a total binary lifetime from initial separation \(a_{\rm init}\) to coalescence at the innermost stable circular orbit \(a_{\rm isco}\) of \[\tau_{f}=\int_{a_{\rm init}}^{a_{\rm inco}}\left(\frac{da}{dt}\right)^{-1}da. \tag{8}\] This serves as a self-consistent approach to modeling binary evolution, without depending on assumptions about the binary hardening processes or galactic environment. We also investigate the effects of varying our four GSMF and \(M_{\rm BH}\)-\(M_{\rm bulge}\) parameters for the GW-only model as in Agazie et al. (2023), which is not self-consistent because GWs alone cannot bring the binaries to small enough separations to emit nHz GWs, but serves as a useful comparison. ### Binary Properties and Gravitational Waves The analytic model described in SS2.1 determines a comoving volumetric number density of SMBHBs \(\frac{\partial^{3}\eta}{\partial M\,\partial z}\), from which we calculate a continuous number of SMBHBs per mass \(M\), ratio \(q\), redshift \(z\) (at the time of GW emission), and log rest-frame orbital frequency \(\ln f_{p}\)(Sesana et al., 2008): \[\frac{\partial^{4}N}{\partial M\,\partial q\,\partial z\,\partial\ln f_{p}}= \frac{\partial^{3}\eta}{\partial M\,\partial q\,\partial z}\frac{\partial t}{ \partial\ln f_{p}}\frac{\partial z}{\partial t}\frac{\partial V_{c}}{\partial z}. \tag{9}\] This continuous distribution sets a fractional expectation value for the number of binaries. In reality, gravitational waves are produced by a discrete population of binaries. 
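Before discretizing the population, the hardening prescription of Eqs. (6)-(8) can be made concrete with a short sketch. The sketch writes the second exponent of Eq. (7) as \(\nu_{\rm inner}-\nu_{\rm outer}\), the form that recovers \(dt/d\ln a\propto a^{\nu_{\rm inner}}\) for \(a\ll a_{c}\) (cf. Eq. 23); this exponent, the variable names, and the SI units are assumptions of the sketch. In practice \(H_{a}\) would be solved for (e.g. by bisection) so that the lifetime equals the requested \(\tau_{f}\).

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8    # speed of light [m/s]

def dadt_gw(m1, m2, a):
    """GW-driven hardening rate of Eq. (6) (negative: the orbit shrinks)."""
    return -(64.0 * G**3 / (5.0 * C**5)) * m1 * m2 * (m1 + m2) / a**3

def dadt_phenom(a, a_c, H_a, nu_inner, nu_outer):
    """Phenomenological double power law of Eq. (7); H_a < 0 sets the overall rate."""
    x = a / a_c
    return H_a * x**(1.0 - nu_inner) * (1.0 + x)**(nu_inner - nu_outer)

def lifetime(m1, m2, a_init, a_isco, a_c, H_a, nu_inner, nu_outer, n=4000):
    """Total binary lifetime of Eq. (8), integrating from a_init down to the ISCO."""
    a = np.logspace(np.log10(a_init), np.log10(a_isco), n)  # descending separations
    dadt = dadt_gw(m1, m2, a) + dadt_phenom(a, a_c, H_a, nu_inner, nu_outer)
    integrand = 1.0 / dadt                                   # negative everywhere
    # trapezoidal rule with descending a and negative integrand gives a positive lifetime
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))
```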
We generate random universe realizations of this population by selecting a number of binaries \(N(M,q,z,f)\) in each parameter bin of \(\Delta M\), \(\Delta q\), \(\Delta z\), and \(\Delta(\ln f_{p})\) from a Poisson distribution (\(\mathcal{P}\)) centered at the aforementioned expectation value, \[N(M, q, z, f)= \tag{10}\] \[\mathcal{P}\Big{(}\frac{\partial^{4}N}{\partial M^{\prime}\, \partial q^{\prime}\,\partial z^{\prime}\,\partial\ln f_{p}^{\prime}}\,\Delta M ^{\prime}\Delta q^{\prime}\Delta z^{\prime}\Delta\ln f^{\prime}\Big{)}|_{M,q,z,f_{p}}\] We assume circular orbits for all binaries and assign them the \(M\), \(q\), \(z\), and \(f_{p}\) values corresponding to their bin centers. These define their chirp mass \(\mathcal{M}\equiv(m_{1}m_{2})^{3/5}/M^{1/5}=Mq^{3/5}/\left(1+q\right)^{6/5}\) comoving distance \(d_{c}\), observer-frame GW frequency \(f=(2f_{p})/(1+z)\), and (sky and polarization averaged) GW strain amplitude of (Finn and Thorne, 2000) \[h_{\rm s,circ}^{2}(f_{p})=\frac{32}{5c^{8}}\,\,\frac{\left(G\mathcal{M}\right) ^{10/3}}{d_{c}^{2}}\Big{(}2\pi f_{p}\Big{)}^{4/3}. \tag{11}\] Because the loudest single source may not be the most detectable, depending on sky position, inclination, etc., the _ten_ loudest single sources (SS, i.e. with the greatest \(h_{\rm s}\)) in each frequency bin are then extracted from this population. Their individual characteristic strains are calculated as (Rosado et al., 2015) \[h_{\rm c,SS}^{2}(f)=h_{\rm s}^{2}(f_{p})\!\left(\frac{f}{\Delta f}\right). \tag{12}\] Here, \(\Delta f\) is the frequency bin width and arises when considering a finite number of sources in finite frequency bins \(N\sim f*T\sim f/\Delta f\), over an observing duration \(T\). The GWB is then calculated as the sum of gravitational waves from all background (BG) binaries \[h_{\rm c,BG}^{2}(f)\!=\!\sum_{M,q,z,f}N(M,q,z,f)h_{\rm s}^{2}(f_{p})\frac{f}{ \Delta f}. \tag{13}\] Here, we define BG binaries to include all but the single loudest at each frequency because the most immediate observational application of this work will be the detection of _one_ CW source, before PTAs can resolve multiple of them. When considering the GW-only model without phenomenological hardening, this in combination with the GW hardening rate \((da/dt)_{\rm gw}\), leads to the \(h_{\rm c,BG}\propto f^{-2/3}\) power law often used as a comparison point for the characteristic strain spectra. In reality, we expect deviations from this power law not only due to the phenomenological hardening before GWs dominate the evolution (Kocsis and Sesana, 2011), but also due to the discretization of sources where the power-law would otherwise predict fractional binaries (Sesana et al., 2008). We calculate a characteristic mass, mass ratio, redshift, comoving distance, separation, and angular separation for the background at each frequency using an \(h_{\rm c,BG}\)-weighted average over all BG binaries emitting at that frequency. ### Detection Statistics Given the \(h_{\rm c,SS}\) and \(h_{\rm c,BG}\) spectra, we calculate SS and BG detection statistics following the formalism in Rosado et al. (2015). This includes the background signal-to-noise ratio (SNR\({}_{\rm BG}\)), and detection probability (DP\({}_{\rm BG}\)), and each individual source's SNR (SNR\({}_{\rm SS,i}\)) and detection probability (DP\({}_{\rm SS,i}\)). 
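Before turning to the detection statistics themselves, the sampling and strain computation of Eqs. (10)-(13) can be summarized in a short sketch; units are SI, the binning, cosmology, and loudest-source extraction are omitted, and the variable names are illustrative.

```python
import numpy as np

G = 6.674e-11  # [m^3 kg^-1 s^-2]
C = 2.998e8    # [m/s]

def sample_counts(expected_number, rng):
    """Eq. (10): draw one discrete universe realization from the continuous model.

    expected_number : array of <N> per (M, q, z, ln f_p) bin
    rng             : a numpy Generator, e.g. np.random.default_rng(42)
    """
    return rng.poisson(expected_number)

def h_s_circ(mchirp, d_c, f_p):
    """Eq. (11): sky- and polarization-averaged strain amplitude of a circular binary.

    mchirp : chirp mass [kg], d_c : comoving distance [m], f_p : rest-frame orbital frequency [Hz]
    """
    return np.sqrt(32.0 / 5.0) * (G * mchirp)**(5.0 / 3.0) \
        * (2.0 * np.pi * f_p)**(2.0 / 3.0) / (C**4 * d_c)

def h_c_sq(h_s, f, df):
    """Eqs. (12)-(13): characteristic-strain contribution of one source in a bin of width df."""
    return h_s**2 * f / df

# The GWB of Eq. (13) is then the sum of h_c_sq over all but the loudest binary
# in each observed frequency bin.
```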
The probability of detecting _any_ single source is then (Rosado et al., 2015) \[{\rm DP}_{\rm SS}=1-\prod_{i}[1-{\rm DP}_{\rm SS,i}] \tag{14}\] and the expected number of detections for that realization is \[\langle N_{\rm SS}\rangle=\sum_{i}{\rm DP}_{\rm SS,i}. \tag{15}\] In this prescription, single source detection probabilities are given by integrating over the \(\mathcal{F}_{e}\)-statistic from some threshold \(\bar{\mathcal{F}}_{e}\) to infinity, where \(\bar{\mathcal{F}}_{e}\) is set to give a false alarm probability (FAP) of \(10^{-3}\). Even with no signal present, the area under this curve will produce a nonzero total detection probability (DP\({}_{\rm SS}\)) equal to the FAP. Thus, \(10^{-3}\) is the lower limit on detection probabilities calculated in Eq. 14 and should be treated effectively as 0. In light of the strong evidence for a GWB in current PTA data (Agazie et al., 2023; Antoniadis et al., 2023; Reardon et al., 2023; Xu et al., 2023), we study the single source detection probability under the same conditions that are likely to produce measurable GWB evidence by calibrating every realization to DP\({}_{\rm BG}=0.50\). We use a white-noise-only simulated PTA of 40 pulsars at randomly assigned sky positions, 16.03 yr duration (corresponding to Agazie et al., 2023), and 0.20 yr cadence \(\Delta t\). Our fiducial method of calibration is to vary the level of white noise \(S_{\rm WN}\), given by the error in pulsar times of arrival (ToAs) \(\sigma\)(Rosado et al., 2015): \[S_{\rm WN}=2\Delta t\sigma^{2}, \tag{16}\] until achieving \(0.49<{\rm DP}_{\rm BG}<0.51\). We calculate \(\langle N_{\rm SS}\rangle\) using the same pulsar positions and \(\sigma\), with all characteristic strains except that of the source in question considered additional noise, \[S_{\rm rest,\it i}=\frac{h_{\rm c,BG}^{2}+\sum_{j\neq i}h_{\rm c,SS,\it j}^{2 }}{12\pi^{2}f^{3}}. \tag{17}\] Then we normalize for small variations around DP\({}_{\rm BG}=0.50\) with \[\langle N_{\rm SS}\rangle[{\rm DP}_{\rm BG}=0.50]=\frac{\langle N_{\rm SS} \rangle[{\rm DP}_{\rm BG}\approx 0.50]}{{\rm DP}_{\rm BG}}\times 0.50 \tag{18}\] For one realization of BG and SS characteristic strain, we calibrate the PTA to the background, then create 100'sky realizations'-the random position, inclination, polarization, and phase assignment for single sources-and conduct SS detection statistics for each. By calculating detection statistics for 10 single sources for each frequency of each realization, we allow for the most detectable to depend on both strain amplitude and random location/orientation. This is repeated for 500'strain realizations' of \(h_{\rm c,BG}\) and \(h_{\rm c,SS}\)-those created by Poisson sampling in Eq. 10, each with their own BG-calibrated PTA-to create 50,000 combined'strain+sky realizations.' Next, we predict the most likely frequencies of detection by calculating the DP\({}_{\rm SS,i}\)-weighted average frequency of all \(n\) loudest single sources across all realizations of a given model: \[\langle f_{\rm SS}\rangle=\frac{\sum_{i}{\rm DP}_{\rm SS,i}f_{\rm SS,\it i}} {\sum_{i}{\rm DP}_{\rm SS,\it i}} \tag{19}\] with weighted standard deviation: \[\sigma_{\langle f_{\rm SS}\rangle}=\frac{\sum_{i}{\rm DP}_{\rm SS,i}(f_{\rm SS,\it i}-\langle f_{\rm SS}\rangle)^{2}}{\frac{n-1}{n}\sum_{i}{\rm DP}_{\rm SS,\it i}}. \tag{20}\] The likely frequency of detection is sensitive to the shape of the PTA noise. 
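The bookkeeping of Eqs. (14)-(16) and (19)-(20) is straightforward to express in code. The per-source probabilities \({\rm DP}_{\rm SS,i}\) themselves come from the \(\mathcal{F}_{e}\)-statistic machinery of Rosado et al. (2015), which is not reproduced here; the sketch below only combines them.

```python
import numpy as np

def combined_dp(dp_ss):
    """Eq. (14): probability of detecting at least one single source."""
    return 1.0 - np.prod(1.0 - dp_ss)

def expected_detections(dp_ss):
    """Eq. (15): expected number of single-source detections."""
    return np.sum(dp_ss)

def white_noise_psd(sigma, dt):
    """Eq. (16): white-noise PSD from the ToA error sigma and the observing cadence dt."""
    return 2.0 * dt * sigma**2

def dp_weighted_frequency(dp_ss, f_ss):
    """Eqs. (19)-(20): DP-weighted mean source frequency and its weighted spread.

    The square root of the printed expression in Eq. (20) is taken here so that
    the spread carries units of frequency.
    """
    n = len(f_ss)
    mean = np.sum(dp_ss * f_ss) / np.sum(dp_ss)
    var = np.sum(dp_ss * (f_ss - mean)**2) / (((n - 1) / n) * np.sum(dp_ss))
    return mean, np.sqrt(var)
```

Because each \({\rm DP}_{\rm SS,i}\) floors at the false-alarm probability of \(10^{-3}\), values at that level should be read as effective non-detections, as noted above.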
Thus, we explore different noise models inspired by realistic sensitivity curves and intrinsic pulsar red noise in SSA. ### GWB Anisotropy We measure the anisotropy corresponding to each model by decomposing a simulated GW sky into spherical harmonics, as in Agazie et al. (2023). To generate this GW sky, we create a HEALPix map (Gorski et al., 2005) of \(h_{\rm c}^{2}/\Delta\Omega\) (\(\Omega\) being solid angle, or equivalently, pixel area) at each frequency of each realization by assigning the single sources to random pixels and distributing the remaining \(h_{\rm c,BG}\) evenly among all the pixels. Then, we can decompose this sky into an angular power spectrum of multipole modes \(\ell\) and \(m\) each accompanied by a coefficient \(a_{\ell m}\) such that the total GW power is the sum of each \(a_{\ell m}\) times the real-valued spherical harmonic \(Y_{\ell m}\). The anafast code, via healpy(Zonca et al., 2019) calculates these coefficients with the estimator (Gorski et al., 1999) \[a_{\ell m}=\frac{4\pi}{N_{\rm pix}}\sum_{p=0}^{N_{\rm pix}-1}Y_{\ell m}^{*}( \gamma_{p})f(\gamma_{p}), \tag{21}\] where \(N_{\rm pix}=12N_{\rm side}^{2}\) is the number of pixels, indexed by \(p\) at positions \(\gamma_{p}\), and \(f(\gamma_{p})\) is \(h_{\rm c}^{2}/\Delta\Omega\) in each pixel. Using anafast, we calculate the corresponding angular power spectrum \[C_{\ell}=\frac{1}{2\ell+1}\sum_{m=-\ell}^{\ell}|a_{\ell m}|^{2} \tag{22}\] where \(C_{\ell}\) represents the measure of fluctuations (i.e. anisotropy) on the angular scale \(\theta\approx 180\deg/\ell\). \(C_{0}\) represents the purely isotropic average component, thus we normalize our results by \(C_{\ell}/C_{0}\). This method is tested for 10, 100, and 1000 loudest single sources per frequency bin, by which point our results are insensitive to the addition of more single sources. By placing these sources randomly and treating the remaining signal as purely isotropic, we do not weigh in possible correlations with large-scale-structure, making this a conservative estimate for anisotropy. We also test the resolution and find \(C_{\ell}\) to be indistinguishable for \(N_{\rm side}=8\) up to \(N_{\rm side}=32\). Thus, we adopt \(N_{\rm side}=8\) (\(N_{\rm pix}=768\)) to efficiently calculate the spherical harmonic decomposition for each realization and present the results in SS3.3. ## 3 Results ### Characteristic Strain and Binary Properties In this section we present the characteristic strain \(h_{\rm c}\), total mass \(M\), and final comoving distance \(d_{c}\) of GWB and CW sources as a function of frequency. The first column of Fig. 1 includes three models with varying GSMF normalization: \(\psi_{0}=-3.5\), \(-2.5\), and \(-1.5\), while all other parameters remain fixed to their fiducial values listed in Table 1. Information about the CW sources is shown in green, for these three models, respectively. This includes the 68% confidence intervals (CIs, shaded regions) across 500 realizations of the single loudest source at each frequency. The 95th percentile of these sources' \(h_{\rm c,SS}\) and \(M_{\rm SS}\), and the 5th percentile of these sources' \(d_{\rm c,SS}\) is also shown (points). For comparison, \(h_{\rm c,BG}\) and the \(h_{\rm c,BG}\)-weighted average properties (\(\langle M\rangle_{\rm BG}\) and \(\langle d_{\rm c}\rangle_{\rm BG}\)) of the background (all but the loudest single sources at each frequency) are shown in corresponding shades of grey, with dashed lines representing their medians. 
The same is shown for models of varying \(m_{\psi,0}\) in the second column of Fig. 1, for \(\mu\) and \(\epsilon_{\mu}\) in Fig. 2, and for \(\tau_{f}\) and \(\nu_{\rm inner}\) in Fig. 3. The following three sections describe the physical scenarios producing these results for each model component: the GSMF (SS3.1.1), the \(M_{\rm BH}\)-\(M_{\rm bulge}\) relation (SS3.1.2), and binary evolution (SS3.1.3). #### 3.1.1 Gsmf The GSMF parameters shape the masses of galaxies, and thus their residing SMBHBs for fixed \(M_{\rm BH}\)-\(M_{\rm bulge}\). The masses in each SMBHB directly determine its strain amplitude as \(h_{\rm c}\propto\mathcal{M}^{3/3}\) (Eq. 11). Changes to the distribution of SMBHB masses also result secondarily in small \(d_{c}\) variations due to the mass dependence of hardening rate versus frequency (described below). Following the Schechter GSMF, the number of SMBHBs decreases with increasing mass. When this expectation value approaches zero, few random realizations contain a source in that bin. When \(\psi_{0}\) increases, the number of sources in every bin increases. After sampling, this translates to the loudest randomly realized sources having higher masses. The background sees a similar increase in the mass of its dominating sources. In addition to this, it also has a larger number of contributing sources in every mass and frequency bin. Thus, \(h_{\rm c,BG}\) increases near-uniformly across all frequencies. Its amplification matches that of \(h_{\rm c,SS}\) at high frequencies and exceeds that of \(h_{\rm c,SS}\) at low frequencies. There, SMBHB numbers are the largest because sources harden more slowly at low frequencies. This is particularly true of high-mass sources, whose numbers dwindle at high frequencies where they harden quickly by emitting more GWs. Thus, scaling up the number of sources in every bin leads to many more massive sources contributing to the low-frequency GWB. Figure 1: Characteristic strain (top row), total mass (middle row), and final comoving distance (bottom row), for varying \(\psi_{0}\) (left column) and \(m_{\psi,0}\) (right column). Shaded green regions represent 68% CI of the single loudest source at each frequency, with markers to indicate the 95th percentiles for \(h_{\rm c,SS}\) and \(M_{\rm SS}\) and 5th percentiles for \(d_{\rm c,SS}\). Dashed lines represent the median background (all but the single loudest source per frequency) characteristic strain \(h_{\rm c,BG}\) and the \(h_{\rm c}\)-weighted background properties (\(\langle M\rangle_{\rm BG}\) and \(\langle d_{\rm c}\rangle_{\rm BG}\)). \(m_{\psi,0}\) increases from -3.5 to -2.5 to -1.5 and \(\psi_{0}\) increases from 10.5 to 11.5 to 12.5 for darkening shades of green/grey. The GSMF characteristic mass \(m_{\psi,0}\) (Fig. 1, right-hand side), sets where the expected number of binaries drops off. Thus, varying \(m_{\psi,0}\) only significantly impacts mass bins corresponding to the lowest end of our varying \(m_{\psi,0}\) range (\(m_{\psi,0}=10.5\)) and above. Thereby, Fig. 1 shows that increasing \(m_{\psi,0}\) from 10.5 to 12.5 raises the \(M_{\rm SS}\) 68% CI most dramatically (\(\sim 1.6\) dex) at low frequencies, where massive sources are more common, and more moderately (\(\sim\) 0.5 dex) at high frequencies. 
The background also sees large increases in \(\langle M\rangle_{\rm BG}\) at low frequencies because many loud binaries remain there, even after the loudest has been extracted, and little change in typical mass at high frequencies where few high-mass sources remain after single-source extraction. This amounts to \(h_{\rm c,SS}\) having an overall more significant increase than \(h_{\rm c,BG}\). Regardless of the model, low-frequency sources tend to be nearer. This is because more massive sources (which are numerous at low frequencies) take longer to evolve to the PTA band, as explained in greater detail in SS3.1.3. Longer evolution times let these massive sources reach the PTA band at smaller redshifts and closer distances. #### 3.1.2 \(M_{\rm BH}\)-\(M_{\rm bulge}\) Relation Fig. 2 shows \(h_{\rm c}\), \(M\), and \(d_{c}\) for varying \(M_{\rm BH}\)-\(M_{\rm bulge}\) parameters. Given a population of host galaxies, the \(M_{\rm BH}\)-\(M_{\rm bulge}\) relation in Eq. 4 sets the masses of these galaxies' central SMBHBs. Increasing the relation's mass normalization (\(\mu\)) shifts all SMBHBs to higher masses. This has negligible impact on low mass bins, where the number density is large and changes gradually with mass. However, at high masses, the number density drops off quickly with increasing mass. This means that shifting the expected number of sources of one bin to the next bin of increasing mass significantly raises the chances of realizing a source in that higher mass bin. Overall, this increases the odds of randomly sampling a source in any high-mass bin. The left column of Fig. 2 shows that a \(\sim\)1 dex increase in \(M_{\rm SS}\) follows the increase in \(\mu\) from 7.6 to 9.0 at low frequencies, where massive sources are numerous. At high frequencies, the 68% CIs see a more modest increase, which follows from the fact that these represent lower mass sources. The changes in \(h_{\rm c,SS}\) are approximately proportional to \(M_{\rm SS}^{5/3}\) (see Eqs. 11 and 12) with deviations below this relation owing to unequal mass ratios. Meanwhile, at high frequencies, the background is minimally affected by \(\mu\) due to the lack of massive sources, especially after the loudest have been removed. Ultimately, across all frequencies, increasing \(\mu\) raises \(h_{\rm c,SS}\) more than \(h_{\rm c,BG}\). Increasing scatter \(\epsilon_{\mu}\) in the \(M_{\rm BH}\)-\(M_{\rm bulge}\) relation preferentially scatters sources to higher mass bins through Eddington Bias. Like \(\mu\), this increases the odds of sampling sources from the highest mass bins. The right column of Fig. 2 shows that an increase in \(\epsilon_{\mu}\) from 0.0 to 0.9 increases all low-frequency \(h_{\rm c}\), \(h_{\rm c,SS}\) slightly more than \(h_{\rm c,BG}\). This preferential scattering effect becomes negligible at lower masses where the mass function flattens and number densities become large; thus, we see the \(M_{\rm SS}\) 68% CI and \(\langle M\rangle_{\rm BG}\) medians both converge at high frequencies where lower mass sources dominate. In addition to the preferential scattering systematically increasing masses, introducing scatter to the \(M_{\rm BH}\)-\(M_{\rm bulge}\) relation adds a second element of randomness beyond the Poisson bin sampling, widening the variance of random realizations. 
This is apparent as a slight widening of the 68% CI \(M_{\rm SS}\) and \(h_{\rm c,SS}\) regions and is more obvious in the SS 95th percentiles, which generally increase by \(\gtrsim 0.5\) dex, even at high frequencies where the 68% CIs converge. The \(M_{\rm BH}\)-\(M_{\rm bulge}\) relation slightly impacts the distribution of \(d_{c,\rm SS}\) at low frequencies, with the 68% CI including more distant sources when \(\epsilon_{\mu}\) is higher. When scatter is low and arbitrarily high mass sources are less likely to be sampled, nearby sources tend to be the loudest. When the scatter is increased, the loudest source could instead be more massive, but further away. This effect occurs only at low frequencies because that is where \(\epsilon_{\mu}\) causes a measurable change in \(M_{\rm SS}\) and \(h_{\rm c,SS}\). We also see a frequency dependence of \(d_{c,\rm SS}\) matching that described in the last section: low-frequency single sources tend to be nearer because these sources are more massive, and thus take longer to reach the PTA band. Figure 2: Same as Fig. 1 but for the \(M_{\rm BH}\)–\(M_{\rm bulge}\) parameters: increasing \(\mu\) (left column) from 7.60 to 8.30 to 9.00 and increasing \(\epsilon_{\mu}\) (right column) from 0.00 to 0.45 to 0.90 for darkening shades of orange (single sources) and grey (background). #### 3.1.3 Hardening In our phenomenological hardening model, all sources take the same total time from galaxy merger to SMBH coalescence, set by \(\tau_{f}\). The way this hardening rate is distributed across the binary's lifetime depends on its mass, such that higher mass sources spend less time at high frequencies. This is because 1) more massive sources evolve more rapidly by GWs when they reach small separations and 2) more massive sources merge at larger separations. These combined effects require high-mass sources to evolve slower than low-mass sources at low frequencies, in order to meet the same fixed \(\tau_{f}\). Thus, in our model, massive sources take longer to reach the PTA frequency band. Lower-mass sources will reach the PTA band more quickly, and then dwell there longer as they harden slower than their high-mass counterparts until they eventually coalesce. The slower evolution of massive binaries to PTA frequencies means they tend to be nearer when they emit in the PTA band. However, those that start at too low of redshifts will not reach separations small enough for PTA-band emissions by redshift zero. When \(\tau_{f}\) is increased, this extends the evolution time of binaries at all masses, moving the entire populations to smaller final comoving distances. The left column of Fig. 3 shows this to be similarly true for both the background and single sources, with \(\langle d_{c}\rangle_{\rm BdG}\) decreasing by \(\sim\) 1.0 dex and \(d_{c,\rm SS}\) by \(\sim\)1.1 dex when \(\tau_{f}\) is raised from 0.1 Gyr (light blue) to 11.0 Gyr (dark blue). Secondly, when the hardening time is extended, the most massive sources are unlikely to reach small enough separations to emit at PTA-detectable frequencies, producing a decrease in mass. This effect is largest at high frequencies, to which binaries take the longest to evolve. The characteristic strain has a greater proportional dependence on \(M_{\rm SS}\) than \(d_{c,\rm SS}\), so the changes in \(h_{\rm c,SS}\) with varying \(\tau_{f}\) follow the changes in \(M_{\rm SS}\). 
When considering the background, the filtration of massive sources applies to a larger _number_ of sources at low frequencies, where massive sources are numerous. Thus, long \(\tau_{f}\) decreases \(h_{\rm c,BG}\) slightly more than it decreases \(h_{\rm c,SS}\) at low frequencies. Per Eq. 7, \(\nu_{\rm inner}\) sets the hardening rate in the small-separation regime, with the asymptotic behavior of \[\frac{dt}{d\ln a}(a\ll a_{c})\sim a^{\nu_{\rm inner}}. \tag{23}\] A flatter (less negative) \(\nu_{\rm inner}\) increases the hardening rate at the lowest end of the PTA frequency regime before \((da/dt)_{\rm gw}\) dominates (see Fig. 3 in Agazie et al. 2023b). This represents faster hardening by processes like stellar scattering and circumbinary disk torques and produces the low-frequency turnover in both \(h_{\rm c,BG}\) and \(h_{\rm c,SS}\) apparent in the top panels of Fig. 3. Single sources are only impacted at low frequencies because there, when \(\nu_{\rm inner}=0\), even the most massive sources will have phenomenological hardening dominate GW hardening. Lower mass binaries can be dominated by phenomenological hardening up to higher frequencies as their GW emission is weaker. Thus, the background-including contributions from lower mass binaries-sees a lower \(h_{\rm c,BG}\) across all frequencies for flat \(\nu_{\rm inner}\). The bottom right panel of Fig. 3 shows the 68% CI of low-frequency single sources to be more distant for flat \(\nu_{\rm inner}\) than either of the other two cases. We attribute this to a selection bias: when \(\nu_{\rm inner}\) is flat, massive sources evolve through the low end of the PTA band quickly. Thus, there are fewer of them. While any individual source is just as able and likely to reach small distances, having fewer of them decreases the odds of having an especially close source. These trends only appear in the 68% CI because the lower bounds of \(d_{c,\rm SS}\) for \(\nu_{\rm inner}\) represent the less-common cases where one of the few loud sources just so happens to be nearby. This source can be as near under flat \(\nu_{\rm inner}\) as it can for steep \(\nu_{\rm inner}\); it is just less likely to exist in the first place. ### Astrophysical Dependence of CW Detections #### 3.2.1 CW Detection Occurrence Rate Fig. 4 shows the single source detection probability (DP\({}_{\rm SS}\)) and the background detection probability (DP\({}_{\rm BG}\)) as a function of each varying parameter, for a fixed-PTA configuration. This 'fixed-PTA' method involves calibrating the PTA's Figure 3: Same as Fig. 1 but for the phenomenological hardening parameters: increasing \(\tau_{f}\) (left column) from 0.10 Gyr to 5.55 Gyr to 11.00 Gyr and flattening \(\nu_{\rm inner}\) (right column) from -1.50 to -0.75 to 0.00 for darkening shades of blue (single sources) and grey (background). noise level so that the median \(h_{\rm c,BG}\) across all realizations of the mean-parameter model (i.e. with the fiducial parameter values listed in Table 1) has a 50% \(\rm DP_{BG}\). For example, the top left panel represents varying \(\psi_{0}\), so the PTA is calibrated to the median \(h_{\rm c,BG}\) across 500 realizations of the \(\psi_{0}=-2.5\) model. This PTA is used throughout the rest of the varying \(\psi_{0}\) with phenomenological hardening analysis. The resulting \(\rm DP_{SS}\) medians are represented by a solid green line, as well as 50% and 95% \(\rm DP_{SS}\) CI as green-shaded regions. 
The resulting \(\rm DP_{BG}\) medians are represented by a dashed darker green line, with 50% and 95% CI as darker green shaded regions. A new PTA is generated for the GW-only model and calibrated the same way, with the resulting \(\rm DP_{SS}\) medians in dash-dotted light grey with 68% CI shaded in light grey, and the resulting \(\rm DP_{BG}\) medians in dotted dark grey with 68% CI shaded in dark grey. The rest of the panels show the same, but for varying \(m_{\phi,0}\), \(\mu\), \(\epsilon_{\mu}\), \(\tau_{f}\), and \(\nu_{\rm inner}\) as labeled. Note that the \(\tau_{f}\) and \(\nu_{\rm inner}\) panels include constant GW-only data because the GW-only model has no \(\tau_{f}\) or \(\nu_{\rm inner}\) to vary. In all cases, the \(\rm DP_{SS}\) medians remain below \(\rm DP_{BG}\), consistent with the expectation for GWB detection to occur before CW detection (e.g. Rosado et al., 2015). \(\rm DP_{BG}\) is remarkably well constrained (95% CI spanning \(\lesssim\)0.5 dex), while \(\rm DP_{SS}\) 95% CI often range all the way from \(\sim\)10\({}^{-3}\) to \(\sim\)10\({}^{0}\). These 95% CIs of \(\rm DP_{SS}\) can exceed \(\rm DP_{BG}\) in a few corners of parameter space, most notably for \(\psi_{0}\leq-2.3\), \(\tau_{f}\gtrsim 5\) Gyr, and \(\nu_{\rm inner}\gtrsim-0.75\). Thus, low \(m_{\phi,0}\), long \(\tau_{f}\), and flat \(\nu_{\rm inner}\) are disfavored. Recall from SS2.3 that calculating \(\rm DP_{SS}\) by integrating over the \(\mathcal{F}_{e}\)-statistic with zero signal present still produces a nonzero detection probability equal to the FAP, hence the floor of \(\rm DP_{SS}\geq 10^{-3}\). Although the variance between \(\rm DP_{SS}\) realizations is large, there are clear trends in how both \(\rm DP_{SS}\) and \(\rm DP_{BG}\) depend on the model parameters. For \(m_{\phi,0}\), \(\mu\), and \(\epsilon_{\mu}\), the \(\rm DP_{SS}\) medians behave similarly to the \(\rm DP_{BG}\) medians, just at lower values. The greatest difference in \(\rm DP_{SS}\) and \(\rm DP_{BG}\) behavior occurs for the hardening parameters, both of which decrease \(\rm DP_{BG}\) by 3 dex, but only decrease \(\rm DP_{SS}\) by \(\lesssim\)0.7 dex. \(\psi_{0}\) also shows Figure 4: Detection probability for a PTA calibrated to the median \(\rm DP_{BG}\) of the mean parameter model (i.e. using the fiducial values in Table 1). The PTA is calibrated once for each panel’s mean phenomenological model, and once for each panel’s mean GW-only model. The single source detection probability is shown in color, for varying GSMF parameters in green, varying \(M_{\rm BH}\)–\(M_{\rm bulge}\) parameters in orange, and varying hardening parameters in blue. The \(\rm DP_{SS}\) medians are represented by solid lines and the 68% and 95% CIs are shaded. The background detection probability (\(\rm DP_{BG}\)) is given in darker shades of the same colors, with medians as dashed lines and 68% and 95% CI shaded. Figure 5: Expected number of single-source detections \(\langle N_{\rm SS}\rangle\) for a white-noise-only PTA calibrated independently to a 50% background detection probability for each parameter and realization. \(\langle N_{\rm SS}\rangle\) is given as a function of varying GSMF parameters (_top row_) in green, \(M_{\rm BH}\)–\(M_{\rm bulge}\) parameters (_middle row_) in orange, and phenomenological hardening parameters (_bottom row_ in blue) for the phenomenological hardening model. The medians are solid lines and the 68% and 95% CI are shaded. 
\(\langle N_{\rm SS}\rangle\) for the GW-only hardening model has medians represented by dash-dotted lines and 68% CI shaded in grey. These are replaced in the bottom row by constant values corresponding to the fiducial GSMF and \(M_{\rm BH}\)–\(M_{\rm bulge}\) model because with GW-only hardening there are no \(\tau_{f}\) and \(\nu_{\rm inner}\) to vary. significantly less impact on \(h_{\rm c,SS}\) than \(h_{\rm c,BG}\), the \(h_{\rm c,SS}\) medians increasing only by \(\sim\)1.3 dex compared to DP\({}_{\rm BG}\) increasing by \(\sim\)3 dex. In Fig. 4, there are a wide range of DP\({}_{\rm BGS}\). However, following recent PTA results, there is considerable evidence for a GWB signal. Thus, in Fig. 5 we calibrate a PTA independently for each realization of each set of parameters. This shows how single source detection depends on each parameter, for a fixed confidence in the GWB. Because this'realization-calibrated' method is informed by current evidence for the GWB and allows for a more nuanced exploration of parameter space, we present this as our fiducial method for identifying where in parameter space single sources are most/least likely to be detected, with the key results being those of Fig. 5. Meanwhile, Fig. 4 is useful to distinguish effects due to background calibration from direct effects on single sources. Fig. 5 also includes the GW-only \(\langle N_{\rm SS}\rangle\)s, which follow the same trends as the phenomenological cases but are lower by up to 1 dex. This effect is similar to having a very steep \(\nu_{\rm inner}\): both involve GW hardening dominating the entire PTA band. Without phenomenological hardening speeding up the evolution, sources dwell in the PTA band longer, increasing the number of binaries contributing to the total \(h_{\rm c,BG}\), while the individual loudest remain unaffected (see SS3.1.3). Since DP\({}_{\rm BG}\) is calibrated to 50%, we see the changes in \(\langle N_{\rm SS}\rangle\). Given that PTA's have not yet detected a CW, very long \(\tau_{f}\) and flat \(\nu_{\rm inner}\), both of which predict \(\langle N_{\rm SS}\rangle\) \(\sim\) 1, are unlikely. This is independent, but consistent with the short \(\tau_{f}\) constrained by GWB data in Agazie et al. (2023b). Agazie et al. (2023b) also favors flatter \(\nu_{\rm inner}\), with \(-\)0.4 as their maximum-likelihood posterior. There are clear trends in the medians and 68% CI of \(\langle N_{\rm SS}\rangle\) for all other model parameters, as well. Thus, CW detections can inform and constrain our astrophysical models for SMBHB populations and evolution, beyond the constraints placed by measuring the GWB amplitude. **GSMF--**The top left panel of Fig. 5 shows that \(\langle N_{\rm SS}\rangle\) decreases smoothly from \(\psi_{0}=-3.5\) to \(\psi_{0}=-1.5\), as a result of the more significant increases in \(h_{\rm c,BG}\) than \(h_{\rm c,SS}\) for increasing GSMF normalization. Therefore, a larger total number of galaxies in the universe increases the likelihood of any GW detection, but disfavors CW detection for a fixed DP\({}_{\rm BG}\). The top right panel of Fig. 5 shows \(\langle N_{\rm SS}\rangle\) increases with \(m_{\psi,0}\), indicating that the single source detectability increases more with GSMF characteristic mass than background detectability does. takes binaries to reach the PTA band, as described in SS3.1.3. 
This is most obvious for the hardening time, where longer \(\tau_{f}\) leaves the loudest sources at peak distances as close as \(\sim\) 250 Mpc by the time they are emitting nHz GWs. Similarly, when \(\nu_{\rm inner}\) is very flat, the hardening timescales at separations just before the PTA band are small, meaning single sources reach these frequencies more quickly and thus at larger distances, with the SNR-weighted number peaking at \(\sim\)1500 Mpc. The DP-weighted average CW frequency across all single sources in all realizations \(\langle f_{\rm SS}\rangle\) is presented as a function of each varying parameter in Fig. 7. The color and panel convention follows that of Fig. 5 with the GW-only model again represented by dash-dotted grey lines. Shaded regions represent one standard deviation above and below the median in log space, calculated according to Eq. 20. In all phenomenological cases regardless of parameters, the CW frequency most likely to be detected by our 16.03-year PTA is around 0.07 yr\({}^{-1}=1.12/16.03\) yr, except for an uptick at flat \(\nu_{\rm inner}\) to \(\sim\)0.1 yr\({}^{-1}\). For this 16.03-year PTA, \(\langle f_{\rm SS}\rangle\) is generally in the lowest frequency bin because white-noise-only PTA models give a monotonic decrease in \({\rm DP}_{\rm SS}\) versus frequency. If \(h_{\rm c,SS}\) continues to increase with decreasing frequency, the loudest sources will likely remain in the lowest frequency bin. However, \(h_{\rm c,SS}\) may instead plateau at low frequencies, moving the average detection frequency to a specific value where the SS strains are maximized relative to the combined noise of the PTA and GWB (Kelley et al., 2018). Including red noise decreases the detection probability of the lowest frequency sources, thus moving the \(\langle f_{\rm SS}\rangle\) to higher frequencies. Given that pulsars typically have some intrinsic red noise (Agazie et al., 2023), the white-noise-only \(\langle f_{\rm SS}\rangle\) predictions should be treated as lower limits on the predicted frequency of first CW detection. We explore the effects of varying red noise models on these predictions in SSA. The GW-only model mostly predicts similar frequencies to the phenomenological models, but with some increase up to \(\sim\) 0.1 yr\({}^{-1}\) at high \(\psi_{0}\) and low \(m_{\psi,0}\), \(\mu\), and \(\epsilon_{\mu}\) - everywhere \(\langle N_{\rm SS}\rangle\) is low. When \(\langle N_{\rm SS}\rangle\) is low, the background likely becomes a more significant source of red noise, pushing the most detectable sources to higher frequencies. This also allows for more variation in the highest-DP frequency between Figure 6: SNR-weighted number density of single sources’ final comoving distance versus total mass. The contours represent 0.5, 1.0, 1.5, and 2.0 \(\sigma\) contours for 3 variations of a single parameter, while all other parameters are fixed at their mean values. The middle shade in each plot refers to the mean-parameters model (i.e. with the fiducial values in Table 1). The different colors correspond to the same models as in Figs. 1, 2, and 3, where green, orange, and blue represent the single sources for the GSMF, \(M_{\rm BH}\)–\(M_{\rm bulge}\) relation, and hardening parameters respectively, and shades of grey represent the \(h_{\rm c,BG}\)-weighted average values. 
The 10 loudest single sources at each frequency are used for the \({\rm DP}_{\rm SS}\)–weighted number densities, and all but these 10 loudest are used for the \({\rm DP}_{\rm BG}\)-weighted number densities Figure 7: DP-weighted frequency of the loudest single sources, as a function of each varying parameter while the rest of the parameters are fixed. Colored regions and solid lines represent the \(1\sigma\) regions and means for the phenomenological hardening model, while grey regions and grey dash-dotted lines represent the same for the GW-only hardening model. realizations, hence the larger weighted standard deviation in the low \(\langle N_{\rm SS}\rangle\) regions of parameter space. ### Anisotropy in the Gravitational Wave Background We calculate \(C_{\ell}\) up to \(\ell_{\rm max}=8\) for each of the models presented in Fig. 6. The resulting \(C_{\ell}\)s are indistinguishable for each \(\ell\geq 1\) of a given model, consistent with similar holodeck predictions marginalizing over many models in Agazie et al. (2023d) and the shot noise approximation for GWB anisotropy in Sato-Polito and Kamionkowski (2022). Thus, we present \(C_{1}/C_{0}\) versus frequency as a proxy for any \(C_{\ell}/C_{0}\) in Fig. 8, including results from the lowest, mean, and highest variation of each of the six model parameters. These are calculated using the 1000 loudest sources (solid line medians and shaded 68% CI) and 10 loudest sources (dotted line medians) in each frequency bin. Remarkably, using the 10 loudest and the 1000 loudest sources give \(C_{1}/C_{0}\) medians and standard deviations both within \(\leq 0.16\) dex of each other at any frequency, with average differences of just \(\sim\)0.03 dex and \(\sim\)0.02 dex, respectively. Thus, in our models, anisotropy in the GWB is determined by \(\leq 10\) loudest sources in any given frequency bin. The medians span \(6\times 10^{-3}\lesssim C_{\ell}/C_{0}\lesssim 2\times 10^{-1}\) at low frequencies and \(2\times 10^{-1}\lesssim C_{\ell}/C_{0}\lesssim 6\times 10^{-1}\) at high frequencies. This increase in anisotropy with increasing frequency is expected because dwindling numbers of massive sources make the background \(h_{\rm c,BG}\) drop off more than individual sources' \(h_{\rm c,SS}\), until there is hardly even a "background" at high frequencies. The medians at high frequencies are similar to previous analytic predictions by Sato-Polito and Kamionkowski (2022), but at low frequencies, our \(C_{\ell}/C_{0}\) medians reach a minimum \(\sim\)3.5 dex higher than theirs. Agazie et al. (2023d) place Bayesian upper limits of \(C_{1}/C_{0}\lesssim 2\times 10^{-1}\) (circles with dashed lines in Fig. 8). Most 68% CI overlap or nearly reach these upper limits, suggesting that if the GWB is produced by SMBHBs, anisotropy may be detected in the near future, and the lack thereof could place stringent constraints on our parameter space. In fact, very Figure 8: Anisotropy in terms of \(C_{\ell}/C_{0}\) for the first spherical harmonic mode as a function of frequency, for varying astrophysical models. Medians (solid lines) and 68% CI (shaded) correspond to the model of the same panel and color in Fig. 6, as labeled. By these methods, any \(C_{\ell>0}/C_{0}\) is indistinguishable up to \(\ell_{\rm max}=8\), so the \(C_{1}/C_{0}\) data plotted represents any \(C_{\ell>0}/C_{0}\) distribution. Bayesian upper limits on \(C_{1}/C_{0}\) from Agazie et al. (2023d) are plotted for comparison as purple circles. 
Figure 9: \(C_{\ell>0}/C_{0}\) medians and 68% CI for the first five frequency bins (1.98, 3.95, 5.93, 7.91, and 9.88 mHz) as a function of each varying model parameter. The data plotted uses \(\ell=1\), but is indistinguishable from any other \(1\leq\ell\leq\ell_{\rm max}\). The panels correspond to the same parameters as in Fig. 4, Fig. 5, and Fig. 7. These \(C_{\ell}/C_{0}\)s are calculated using just 10 loudest sources in each frequency bin, which Fig. 8 shows sufficiently reproduces the anisotropy in our model calculated using 1000 loudest sources. The Agazie et al. (2023d) upper limit on \(C_{1}/C_{0}\) in the lowest frequency bin is denoted by a horizontal dashed blue line. long hardening time and flat hardening index predict median anisotropy levels above the current upper limits. This disfavors these corners of parameter space and supports the idea that single sources are particularly useful for constraining binary evolution. Fig. 9 shows \(C_{1}/C_{0}\) for the lowest five frequency bins, as a function of each varying parameter, using just ten loudest per frequency bin. In comparing Fig. 9 to Fig. 5, it is evident that the models with the greatest \(C_{\ell}/C_{0}\) correspond to those with the highest \(\langle N_{\rm SS}\rangle\). This is because increasing (\(N_{\rm SS}\)) and increasing anisotropy both stem from cases where the loudest single sources become more dominant. The greatest model-dependent changes in anisotropy are for long \(\tau_{f}\) and flat \(v_{\rm inner}\). Both these scenarios produce very high \(C_{\ell}/C_{0}\) (\(\gtrsim 0.2\)) at the lowest end of the PTA band, and then are nearly constant with frequency. The significant increases in \(C_{\ell}/C_{0}\) at low frequencies correspond to the scenarios in Fig. 3 where \(h_{\rm c,BG}\) decreases significantly and \(h_{\rm c,SS}\) sees less change. ## 4 Discussion We present the dependence of single-source detection statistics and anisotropy on astrophysical model parameters. These models include several assumptions to keep in mind. First, we assume circular orbits for all binaries. Allowing for eccentricity may move some GW energy from lower to higher frequencies, and could have different impacts on the loudest single sources versus the background - a subject worth further investigation (Siwek et al. in prep.). Another caveat is that our hardening model prescribes a fixed hardening time for all binaries, regardless of mass or redshift. This is a useful approximation to self-consistently examine how changing overall binary evolution impacts GW models without adding too many degrees of freedom, but there is no reason that these hardening times should not be mass-dependent. Thus, we suggest allowing binary lifetimes to depend on mass as a potential way to expand upon this hardening model. A third caveat to this model is that the SMBH-host galaxy relations use empirical measurements of local galaxies. These relations can be improved as more EM data is collected, particularly about more distant galaxies. The relations are based entirely on bulge mass and could be expanded by including velocity dispersion (Matt et al., 2023; Simon, 2023). A challenge in making any conclusions based on single source detection statistics is the 3 dex spread of \(\langle N_{\rm SS}\rangle\) 95% CIs. This is a result of the fact that CW detections depend upon the random chance of a particularly massive binary happening to be nearby. 
This randomness limits the precision of any single source predictions or parametric constraints by semi-analytic models. Incorporation of galaxy catalogs may allow for more narrowly constrained predictions as to what single sources could be realized in _our_ universe. EM data may also inform the level of GWB anisotropy in our universe. By placing the sources randomly and treating the background as purely isotropic, we make conservative estimates for anisotropy. However, one might predict that more SMBHBs will be emitting PTA-band GWs in regions of higher cosmic density. A future step would be to study possible correlations between GWB anisotropy and galaxy clustering or large-scale structure. On the other hand, if just a few loudest sources at distances of \(\sim\) 1000 Mpc entirely determine anisotropy (as we have found), then these correlations seem less likely because the placement of individual sources is random on scales large enough to treat the universe as purely isotropic. Regardless, this would be an interesting hypothesis to test. By varying PTA noise to make each model produce 50% \(\rm DP_{BG}\), we comprehensively explore the detection statistics of a wide parameter space, including models that produce low GWB amplitudes. The next step to build on this parameter space exploration is to condition our models on current measurements of the GWB amplitude. With these GWB-conditioned models, we can use realistic PTAs to calculate our detection statistics as opposed to calibrating to a fixed \(\rm DP_{BG}\). The resulting background detection statistics will serve as a check on the GWB constraints set by (Agazie et al., 2023). Then, we will test whether the current lack of CW detections (Agazie et al., 2023) and upper limits on anisotropy (Agazie et al., 2023) can further constrain these model parameters. We expect that long \(\tau_{f}\) and flat \(v_{\rm inner}\) will be most easily constrained by single source detection statistics and anisotropy, given that these are the regions of parameter space where both \(\langle N_{\rm SS}\rangle\) and \(C_{\ell}/C_{0}\) are highest and \(\langle N_{\rm SS}\rangle\) has the lowest variance. ## 5 Conclusions In this work, we develop an approach for modeling CWs distinguishable from a background of SMBHBs, their sources' properties, and their corresponding GWB anisotropy. We develop a detection statistics pipeline that calibrates a simulated PTA to a 50% probability of detecting the background and calculates the expected number of single-source detections under those settings. Our primary conclusions are the following: 1. GW anisotropy and CW detections (or lack thereof) convey specific information about the astrophysics governing galaxy population, their SMBHBs, and binary evolution. This anisotropy and CW information can break model degeneracies and allow for much more stringent constraints than possible with the GWB alone. 2. CWs are increasingly likely to be observed for low GSMF normalization \(\psi_{0}\), high GSMF characteristic mass \(m_{\psi,0}\), high \(M_{\rm BH}\)-\(M_{\rm bulge}\) mass normalization \(\mu\) and intrinsic scatter \(\epsilon_{\mu}\), long hardening time \(\tau_{f}\), and flat small-separation hardening index \(\nu_{\rm inner}\). 3. Anisotropy in the GWB, represented by the angular power spectrum \(C_{\ell}\) over multipole modes \(\ell\), is determined in our models by \(\leq\)10 loudest sources at each frequency and is the same for any \(1\leq\ell\leq\ell_{\rm max}\). 
Models vary in their low-frequency predictions for \(C_{\ell>0}/C_{0}\) from \(\sim\)\(5\times 10^{-3}\) to \(\sim\)\(2\times 10^{-1}\) and converge to \(\sim\)\(3\times 10^{-1}\) at high frequencies. 4. Models with greater single-source detection probability tend to have higher anisotropy, often exceeding current upper limits of \(C_{1}/C_{0}\lesssim 0.2\) at low frequencies. Thus, not detecting anisotropy could strongly constrain our parameter space. 5. Hardening model parameters are best constrained by single source detection statistics and anisotropy. Long hardening time and flat \(\nu_{\rm inner}\) give the greatest probability of CW detection for a fixed GWB confidence and high anisotropy even at low frequencies. 6. The most detectable single sources are found in the lowest frequency bin for a 16.03-year PTA with masses ranging from \(\sim\)\(10^{9}\) M\({}_{\odot}\) to \(\sim\)\(3\times 10^{10}\) M\({}_{\odot}\) and final comoving distances ranging from \(\sim\)\(250\) Mpc to \(\sim\)\(2500\) Mpc. The most detectable frequency has little dependence on the model but increases with greater pulsar red noise. 7. Single source masses generally increase with increasing \(\psi_{0}\), \(m_{\psi,0}\), \(\mu\), \(\epsilon_{\mu}\), and decreasing hardening time. Only the hardening parameters have a demonstrable impact on these sources' final comoving distances, with longer \(\tau_{f}\) and steep \(\nu_{\rm inner}\) resulting in the closest sources. ## 6 Acknowledgments The authors thank Jeff Hazboun, Jeremy Baier, Bence Becsy, and Neil Cornish for helpful discussions throughout the development of this paper, particularly regarding CW detection statistics. We also thank Nihan Pol for insight into the anisotropy methods and interpretation. The work of AM and AL was supported by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 Quantum Universe - 390833306. astropy (Astropy Collaboration et al., 2018), python (Behnel et al., 2011), hassia (Hazboun et al., 2019), healpy (Zonca et al., 2019), HEALpix (Gorski et al., 2005) holodeck (Kelley et al. in prep.), jupyter (Kluyver et al., 2016), kalepy (Kelley, 2021), matplotlib (Hunter, 2007), numpy (van der Walt et al., 2011), scipy (Virtanen et al., 2020), ## Appendix A PTA noise models The primary work in this paper uses a white-noise-only simulated PTA. We also consider the impact of red noise models, parameterized by an amplitude \(A_{\rm RN}\) at reference frequency \(f_{\rm ref}=1\) yr\({}^{-1}\) and power-law index \(\gamma_{\rm RN}\), \[S_{\rm RN}=\frac{A_{\rm RN}^{2}}{12\pi^{2}}\bigg{(}\frac{f}{f_{\rm ref}} \bigg{)}^{\gamma_{\rm RN}}f_{\rm ref}^{-3},\] (A1) on our detection probabilities and DP\({}_{\rm SS}\)-weighted average frequencies. The red-noise PTAs are calibrated by fixing \(\gamma_{\rm red}\) and the ratio \(Q\) of red noise to white noise \[Q\equiv\frac{S_{\rm RN}(f_{\rm ref})}{S_{\rm WN}}\] (A2) while allowing the total noise amplitude to vary. Fig. 10 shows the resulting \(\langle N_{\rm SS}\rangle\) as a function of the six varying model parameters for the fiducial white noise only model (black), red-noise with spectral index \(\gamma_{\rm RN}=-1.5\) (purple), and red-noise with spectral index \(\gamma_{\rm RN}=-3.0\) (red). For the red-noise models we include ratios of \(Q=0.01\) (dash-dotted), \(Q=1.0\) (solid), and \(Q=100\) (dashed). 
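For reference, the following is a minimal sketch (not from the paper's analysis pipeline) of how the red-noise spectrum of Eq. A1 and the calibration ratio of Eq. A2 can be evaluated; the amplitude and the sample frequencies below are illustrative placeholders rather than the calibrated values used in this work.

```python
import numpy as np

YR = 365.25 * 24 * 3600.0  # seconds per year

def red_noise_psd(f, A_RN, gamma_RN, f_ref=1.0 / YR):
    """Eq. A1: S_RN(f) = A_RN^2 / (12 pi^2) * (f / f_ref)^gamma_RN * f_ref^(-3)."""
    return A_RN**2 / (12.0 * np.pi**2) * (f / f_ref) ** gamma_RN * f_ref ** (-3.0)

A_RN = 1e-15                                   # illustrative amplitude at f_ref = 1 / yr
f = np.array([1.0 / (16.03 * YR), 0.1 / YR])   # a low and a high PTA-band frequency
for gamma_RN in (-1.5, -3.0):
    S_ref = red_noise_psd(1.0 / YR, A_RN, gamma_RN)
    for Q in (0.01, 1.0, 100.0):
        # Eq. A2 rearranged: the white-noise PSD implied by a chosen red-to-white ratio Q.
        S_WN = S_ref / Q
        print(f"gamma_RN={gamma_RN:+.1f}  Q={Q:>6}  S_WN={S_WN:.2e} s^3  "
              f"S_RN(f)={red_noise_psd(f, A_RN, gamma_RN)}")
```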
The addition of red noise generally raises the relative single source detection probability, because it makes the GWB less distinguishable from the noise, such that the total noise calibration must be lower for a 50% DP\({}_{\rm BG}\). In fact, when steep (\(\gamma_{\rm RN}=-3.0\)) red noise dominates (\(Q=100\)), the median \(\langle N_{\rm SS}\rangle\) is \(\gtrsim 0.1\) for all model variations except the highest \(\psi_{0}\)s. The increase in \(\langle N_{\rm SS}\rangle\) from previously low regions results in an overall flattening in parametric dependence, but maintains the sign of the derivative, i.e. whether \(\langle N_{\rm SS}\rangle\) is increasing or decreasing with each parameter. Fig. 11 shows that adding red noise also raises the most detectable CW frequency because lower frequency sources are drowned out. Moderate red noise (\(\gamma_{\rm RN}=-1.5\), purple) maintains the lack of dependence on model parameters seen in the white noise cases. However, adding steep red noise (\(\gamma_{\rm RN}=-3.0\), red) with a ratio of \(Q\gtrsim 1\) creates a dependence of \(\langle f_{\rm SS}\rangle\) on each varying parameter. For all but \(\nu_{\rm inner}\), this \(\langle f_{\rm SS}\rangle\) dependence tends to follow the opposite trend of \(\langle N_{\rm SS}\rangle\). When \(\langle N_{\rm SS}\rangle\) is low, there is greater noise from the PTA calibration, especially at low frequencies, pushing \(\langle f_{\rm SS}\rangle\) higher.
2309.08114
Multifractality and intermittency in the limit evolution of polygonal vortex filaments
With the aim of quantifying turbulent behaviors of vortex filaments, we study the multifractality and intermittency of the family of generalized Riemann's non-differentiable functions \begin{equation} R_{x_0}(t) = \sum_{n \neq 0} \frac{e^{2\pi i ( n^2 t + n x_0 ) } }{n^2}, \qquad x_0 \in [0,1]. \end{equation} These functions represent, in a certain limit, the trajectory of regular polygonal vortex filaments that evolve according to the binormal flow. When $x_0$ is rational, we show that $R_{x_0}$ is multifractal and intermittent by completely determining the spectrum of singularities of $R_{x_0}$ and computing the $L^p$ norms of its Fourier high-pass filters, which are analogues of structure functions. We prove that $R_{x_0}$ has a multifractal behavior also when $x_0$ is irrational. The proofs rely on a careful design of Diophantine sets that depend on $x_0$, which we study by crucially using the Duffin-Schaeffer theorem and the Mass Transference Principle.
Valeria Banica, Daniel Eceizabarrena, Andrea R. Nahmod, Luis Vega
2023-09-15T02:47:09Z
http://arxiv.org/abs/2309.08114v3
# Multifractality in the evolution of vortex filaments ###### Abstract. Vortex filaments that evolve according the binormal flow are expected to exhibit turbulent properties. We verify and quantify this by studying the multifractality and intermittency of the family of generalized Riemann's non-differentiable functions \[R_{x_{0}}(t)=\sum_{n\neq 0}\frac{e^{2\pi i(n^{2}t+nx_{0})}}{n^{2}},\qquad x_{0} \in[0,1],\] which represent, in a certain limit, the trajectory of regular polygonal vortex filaments. When \(x_{0}\) is rational, we compute the spectrum of singularities of \(R_{x_{0}}\) and prove that it satisfies the Frisch-Parisi multifractal formalism studied in the theory of turbulence. When \(x_{0}\) is irrational, we prove that \(R_{x_{0}}\) has a multifractal behavior. The proofs rely on the measure of Diophantine sets that depend on \(x_{0}\), which we study via the Duffin-Schaeffer theorem and the Mass Transference Principle. Key words and phrases:Turbulence, multifractality, Riemann's non-differentiable function, vortex filaments, Diophantine approximation 2020 Mathematics Subject Classification: 11J82, 11J83, 26A27, 28A78, 42A16, 76F99 ## 1. Introduction Multifractality is one of the main properties expected in turbulent flows, but it is challenging to quantify both physically and mathematically. To advance in this direction, and motivated by the study of vortex filaments, we propose to work with the function \[R_{x_{0}}:[0,1]\to\mathbb{C},\qquad R_{x_{0}}(t)=\sum_{n\neq 0}\frac{e^{2\pi i (n^{2}t+nx_{0})}}{n^{2}}, \tag{1}\] where \(x_{0}\in\mathbb{R}\) is any, but fixed. This function is one of the possible generalizations of the classic Riemann's non-differentiable function. We describe the multifractality and intermittency of \(R_{x_{0}}\) by computing its spectrum of singularities and the \(L^{p}\) norm of its Fourier high-pass filters. We give an essentially complete description in the case \(x_{0}\in\mathbb{Q}\). The case \(x_{0}\not\in\mathbb{Q}\) is much more challenging and relies on a subtle Diophantine analysis; we start to unravel the behavior and give an initial result. We explain our motivation and the background literature in Section 1.1, state our results in Section 1.2 and outline the structure of the rest of the article in Section 1.3. ### Motivation and background Our motivation comes from the study of three dimensional turbulence of fluids and waves, both characterized by low regularity and a chaotic behavior. It is accepted that these are caused by an energy cascade, a mechanism by which the energy injected in large scales is transferred to small scales. In this setting, large eddies constantly split in smaller eddies, generating sharp changes in the velocity magnitude. Moreover, this cascade is not expected to be uniform in space; the rate at which these eddies decrease depends on their location. Mathematically speaking, an option to measure the irregularity of the velocity \(v\) is the local Holder regularity, that is, the largest \(\alpha=\alpha(x_{0})\) such that \(|v(x_{0}+h)-v(x_{0})|\lesssim|h|^{\alpha}\) when \(|h|\to 0\). The lack of uniformity in space suggests that the Holder level sets \(D_{\alpha}=\{\,x\,:\,\alpha(x)=\alpha\,\}\) should be non-empty for many values of \(\alpha\), and of different size. 
In this context, the spectrum of singularities is defined as \(d(\alpha)=\dim_{\mathcal{H}}D_{\alpha}\), where \(\dim_{\mathcal{H}}\) is the Hausdorff dimension, and the velocity \(v\) is said to be multifractal if \(d(\alpha)\) takes values in more than a single Holder regularity \(\alpha\). Computing the spectrum of singularities is, thus, a way to quantify the effect of turbulence. However, it is generally a difficult task. To overcome the experimental difficulties, Frisch and Parisi [25] proposed to compute instead the average behavior of the velocity at small scales. They proposed that if such space averages1 have a power-law behavior like Footnote 1: These averages are known in the literature as _structure functions_. \[\langle|v(x+h)-v(x)|^{p}\rangle\simeq|h|^{\zeta_{p}},\qquad\text{ for very small }|h|, \tag{2}\] then one should recover the spectrum of singularities by the multifractal formalism2 Footnote 2: The 3 corresponds to the three dimensional space \(\mathbb{R}^{3}\) and should be replaced by \(d\) when working in \(\mathbb{R}^{d}\). \[d(\alpha)=\inf_{p}\{\alpha p-\zeta(p)+3\}. \tag{3}\] Their computation, though, was heuristic and in principle need not hold mathematically. One of the challenges, therefore, is to find and check these multifractal properties in a rigorous mathematical way. A few results in this setting can be found in [30, 31, 32, 3, 4] In this article we propose to study the multifractality of the functions \(R_{x_{0}}\) defined in (1). These functions appear naturally when studying the trajectory of polygonal vortex filaments governed by the binormal flow. The starting point is the 2014 article by De la Hoz and Vega [18] who, inspired by Jerrard and Smets [33], discovered numerically that a mild variation of \(R_{0}\), \[\phi(t)=\sum_{n\in\mathbb{Z}}\frac{e^{2\pi in^{2}t}-1}{n^{2}}=2\pi it-\frac{ \pi^{2}}{3}+R_{0}(t), \tag{4}\] very closely represents the trajectory of the corners of such vortices. More precisely, let the filament be a curve \(\boldsymbol{X}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}^{3}\), \(\boldsymbol{X}=\boldsymbol{X}(x,t)\) that evolves according to the binormal flow equation \(\boldsymbol{X}_{t}=\boldsymbol{X}_{x}\times\boldsymbol{X}_{xx}\). Suppose that the initial filament \(\boldsymbol{X}(x,0)\) is a regular polygon with corners at the integers \(x\in\mathbb{Z}\). De la Hoz-Vega observed that \(\boldsymbol{X}(0,t)\) is a plane curve which, after identifying the plane with \(\mathbb{C}\), behaves like \(\phi(t)\). Later, Banica and Vega [2] rigorously proved this under certain hypotheses. What is more, if \(M\in\mathbb{N}\) is the number of sides of the initial polygon, and if \(\boldsymbol{X}_{M}\) is the corresponding filament, they proved3 that the rescaled trajectory \(M\boldsymbol{X}_{M}\) tends to the plane curve Footnote 3: In [2] only the case \(x_{0}=0\) was considered, but the same proof yields the result for any \(x_{0}\in[0,1]\). \[\lim_{M\to\infty}M\,\boldsymbol{X}_{M}(x_{0},t)=\phi_{x_{0}}(t)=\sum_{n\in \mathbb{Z}}\frac{e^{2\pi in^{2}t}-1}{n^{2}}\,e^{2\pi inx_{0}}. \tag{5}\] This way, the function \(\phi_{x_{0}}\) can be seen as a representative of the trajectory of polygonal vortex filaments. We show in Figures 1 and 2 the image of \(\phi_{x_{0}}\) for several values of \(x_{0}\). 
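As a purely illustrative aside (not part of the original analysis), curves like those in Figures 1 and 2 can be reproduced by truncating the series (5) at a finite number of modes; the truncation level \(N\) and the sampling of \(t\) below are arbitrary choices made only for plotting.

```python
import numpy as np
import matplotlib.pyplot as plt

def phi(t, x0, N=200):
    """Truncated partial sum of phi_{x0}(t) = sum_{n in Z} (e^{2 pi i n^2 t} - 1) / n^2 * e^{2 pi i n x0},
    where the n = 0 term is interpreted as its limit 2 pi i t."""
    n = np.arange(1, N + 1)
    # Pairing n and -n gives (e^{2 pi i n^2 t} - 1) / n^2 * 2 cos(2 pi n x0).
    terms = (np.exp(2j * np.pi * np.outer(t, n**2)) - 1) / n**2 * 2 * np.cos(2 * np.pi * n * x0)
    return 2j * np.pi * t + terms.sum(axis=1)

t = np.linspace(0.0, 1.0, 10000)
for x0 in (0.0, 0.25, 0.5):
    z = phi(t, x0)
    plt.plot(z.real, z.imag, lw=0.5, label=rf"$x_0 = {x0}$")
plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```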
Figure 1. Image of \(\phi_{x_{0}}\), \(t\in[0,1]\), defined in (5), for different values of \(x_{0}\).

Figure 2. The images of \(\phi_{x_{0}}\), \(t\in[0,1]\), for the values \(x_{0}=0,0.1,0.2,0.3,0.4,0.5\), from the rightmost to the leftmost.

Noticing that the Fourier series \(\sum_{n\neq 0}e^{2\pi inx}/n^{2}\) is \(2\pi^{2}(x^{2}-x+1/6)\), proceeding like in (4) one may write \[\phi_{x_{0}}(t)=2\pi it-2\pi^{2}\Big{(}x_{0}^{2}-x_{0}+\frac{1}{6}\Big{)}+R_{x_{0}}(t).\] Since \(\phi_{x_{0}}\) and \(R_{x_{0}}\) have the same regularity as functions of \(t\), \(R_{x_{0}}\) captures the regularity of the trajectories of polygonal vortex filaments that evolve according to the binormal flow. It is thus that we are led to the analytic study of \(R_{x_{0}}\). The relationship with the existing literature is vast. First and foremost, when \(x_{0}=0\) the function \[R_{0}(t)=\sum_{n\neq 0}\frac{e^{2\pi in^{2}t}}{n^{2}}=2\,\sum_{n=1}^{\infty}\frac{e^{2\pi in^{2}t}}{n^{2}}\] is a complex version of Riemann's non-differentiable function \(R(t)=\sum_{n=1}^{\infty}\sin(n^{2}t)/n^{2}\), introduced by Weierstrass in 1872 [41] as the first candidate to be a continuous yet nowhere differentiable function. After Hardy [28] and Gerver [26, 27] confirmed that it is only almost nowhere differentiable (see also the simplified proof of Smith [39]), Duistermaat [20] and Jaffard [29] worked on its pointwise Holder regularity, and the latter computed its spectrum of singularities to be \(d(\alpha)=4\alpha-2\) for \(\alpha\in[1/2,3/4]\) and proved that the multifractal formalism (3) is satisfied. In a recent work, Broucke and Vindas [10] gave an alternative proof of these results. Regarding the generalization \(R_{x_{0}}\) that we study in this article, the closest work in the literature is the one by Oskolkov and Chakhkiev [38], who studied the regularity of \(R_{x_{0}}(t)\) as a function of two variables, giving results about its partial derivatives and regularity almost everywhere, which are not fine enough to capture multifractal properties. There are a few works studying \(R_{x_{0}}(t)\) as a function of \(x_{0}\) with \(t\) fixed, most likely motivated by the fact that \(R_{x_{0}}\) is also the solution to an initial value problem for the periodic free Schrodinger equation. From this perspective, Kapitanski and Rodnianski [34] studied the Besov regularity of the fundamental solution4 as a function of \(x\) with \(t\) fixed. This perspective is also intimately related to the Talbot effect in optics which, as proposed by Berry and Klein [6], is approximated by the fundamental solution to the periodic free Schrodinger equation. Pursuing the related phenomenon of _quantization5_, this perspective has been extended to the nonlinear setting and other dispersive relations by Chousionis, Erdogan and Tzirakis [22, 17] and Boulton, Farmakis and Pelloni [7, 8], following the preceding numeric works of Chen and Olver [15, 16]. Footnote 4: Which, up to constants, is either \(\partial_{t}R_{x_{0}}(t)\) or \(\partial_{x_{0}}^{2}R_{x_{0}}(t)\). Footnote 5: See the article by Olver [37] for an instructive account of quantization. Other natural generalizations of Riemann's function have also been studied, both from the analytic and the geometric point of view. Jaffard [29] gave his results not only for \(R\), but also for \[R^{(\alpha)}(t)=\sum_{n=1}^{\infty}\frac{\sin(2\pi n^{2}t)}{n^{\alpha}},\qquad\text{ for }\alpha>1,\] as did Chamizo and Cordoba [12] when they studied the Minkowski dimension of their graphs.
Chamizo and Ubis [13, 14] studied the spectrum of singularities of the even more general functions \[F(t)=\sum_{n=1}^{\infty}\frac{e^{2\pi iP(n)t}}{n^{\alpha}},\] where \(P\in\mathbb{Z}[X]\) is a polynomial of degree \(k\geq 2\). ### Results We begin by introducing the basic concepts we need to state our theorems. \(\bullet\)**Holder regularity and spectrum of singularities.** A function \(f:\mathbb{R}\to\mathbb{C}\) is \(\alpha\)-Holder at \(t\in\mathbb{R}\), which we denote by \(f\in\mathcal{C}^{\alpha}(t)\), if there exists a polynomial \(P_{t}\) of degree at most \(\alpha\) such that \[|f(t+h)-P_{t}(h)|\leq C|h|^{\alpha},\quad\text{ for }h\text{ small enough},\] and for some constant \(C>0\). In particular, if \(0<\alpha<1\), the definition above becomes \[f\in\mathcal{C}^{\alpha}(t)\quad\Longleftrightarrow\quad|f(t+h)-f(t)|\leq C|h |^{\alpha},\quad\text{ for }h\text{ small enough}.\] We say \(f\) is globally \(\alpha\)-Holder if \(f\in\mathcal{C}^{\alpha}(t)\) for all \(t\in\mathbb{R}\). On the other hand, the local Holder exponent of \(f\) at \(t\), which we denote by \(\alpha_{f}(t)\), is \[\alpha_{f}(t)=\sup\{\,\alpha\,:\,f\in\mathcal{C}^{\alpha}(t)\,\}.\] We define the spectrum of singularities of \(f\) as \[d_{f}(\alpha)=\dim_{\mathcal{H}}\{\,t\,:\,\alpha_{f}(t)=\alpha\,\},\] where \(\dim_{\mathcal{H}}\) is the Hausdorff dimension6, and convene that \(d(\alpha)=-\infty\) if \(\{\,t\,:\,\alpha_{f}(t)=\alpha\,\}=\emptyset\). For the function \(R_{x_{0}}\), we denote \(\alpha_{R_{x_{0}}}(t)=\alpha_{x_{0}}(t)\) and \(d_{R_{x_{0}}}(\alpha)=d_{x_{0}}(\alpha)\). Footnote 6: See [23, Sections 3.1-3.2] for definitions and basic properties of Hausdorff measures and the Hausdorff dimension. For Riemann's non-differentiable function \(R_{0}\), Jaffard [29, Theorem 1] proved that \[\alpha_{0}(t)=\frac{1}{2}+\frac{1}{2\widetilde{\mu}(t)},\qquad\text{ for }t\not\in\mathbb{Q}, \tag{6}\] where \(\widetilde{\mu}(t)\) is the exponent of irrationality of \(t\) restricted to denominators \(q\not\equiv 2\;(\text{mod }4)\)7, and consequently, thanks to an adaptation of the Jarnik-Besicovitch theorem, Footnote 7: Precisely, \(\widetilde{\mu}(t)=\sup\{\mu>0:\big{|}t-\frac{p}{q}\big{|}\leq\frac{1}{q^{\mu}}\) for infinitely many coprime pairs \((p,q)\in\mathbb{N}^{2}\,\) with \(q_{n}\not\equiv 2\;(\text{mod }4)\}\). \[d_{0}(\alpha)=\left\{\begin{array}{ll}4\alpha-2,&1/2\leq\alpha\leq 3/4,\\ 0,&\alpha=3/2,\\ -\infty,&\text{otherwise}.\end{array}\right. \tag{7}\] In this article we aim at the spectrum of singularities \(d_{x_{0}}\) when \(x_{0}\neq 0\), but we do not pursue the more refined problem of computing \(\alpha_{x_{0}}(t)\) for all \(t\in\mathbb{R}\) like in (6), which we leave for a future work. \(\bullet\)**Fourier high-pass filters and intermittency exponents.** Let \(\Phi\in C^{\infty}(\mathbb{R})\) be a cutoff function such that \(\Phi(x)=0\) in a neighborhood of the origin and \(\Phi(x)=1\) for \(|x|\geq 2\). For a periodic function \(f\) with Fourier series \(f(t)=\sum_{n\in\mathbb{Z}}a_{n}e^{2\pi int}\), let the Fourier high-pass filter be \[P_{\geq N}f(t)=\sum_{n\in\mathbb{N}}\Phi\Big{(}\frac{n}{N}\Big{)}\,a_{n}\,e^{ 2\pi int},\qquad N\in\mathbb{N}.\] In the language of turbulence, the \(L^{p}\) norms of the high-pass filters \(\|P_{\geq N}f\|_{p}^{p}\) are an analytic representation of the \(p\)-averages of the velocity in small scales in (2). Define the exponent8 Footnote 8: This exponent is related to the Besov regularity of \(f\). 
Assuming \(\|P_{\geq N}f\|_{p}\simeq\|P_{\simeq N}f\|_{p}\) (which is the case for \(R_{x_{0}}\)), where \(P_{\simeq N}f\) denotes the band-pass filter defined with the cutoff \(\Phi\) with the additional assumption of compact support, then \(\eta(p)=\sup\{\,s\,:\,f\in B_{p,\infty}^{s/p}\}\), where \(f\in B_{p,q}^{s}\) if and only if \((2^{ks}\|P_{\simeq^{ks}}f\|)_{k}\in\ell^{q}\). \[\eta(p)=\liminf_{N\to\infty}\frac{\log(\|P_{\geq N}f\|_{p}^{p})}{\log(1/N)}, \tag{8}\] which means that for any \(\epsilon>0\) we have \(\|P_{\geq N}f\|_{p}^{p}\leq N^{-\eta(p)+\epsilon}\) for \(N\gg_{\epsilon}1\), and that this is optimal, in the sense that there is a subsequence \(N_{k}\to\infty\) such that \(\|P_{\geq N_{k}}f\|_{p}^{p}\geq N_{k}^{-\eta(p)-\epsilon}\) for \(k\gg_{\epsilon}1\). The exponent \(\eta(p)\) describes the phenomenon of intermittency in small scales, which measures the departure from a Gaussian behavior and the presence of fat tails in the distribution of the velocity increments. Based on probabilistic moments9, this can be characterized by the \(p\)-flatness \(F_{p}(N)\) satisfying \(\lim_{N\to\infty}F_{p}(N)=\infty\) for some \(p\geq 4\), where Footnote 9: The \(p\)-flatness is an analytic analog of the standardized moments \(\langle|(X-\mu)/\sigma|^{p}\rangle\), where \(X=\delta_{t}v\) with mean \(\mu=\langle X\rangle\) and variance \(\sigma^{2}=\langle|X-\mu|^{2}\rangle\). For context, \(p=3\) is the skewness and \(p=4\) is the kurtosis or flatness. \[F_{p}(N)=\frac{\|P_{\geq N}f\|_{p}^{p}}{\|P_{\geq N}f\|_{2}^{p}}.\] From (8) we may heuristically10 write \(\|P_{\geq N}f\|_{p}^{p}\simeq N^{-\eta(p)}\) so that \(F_{p}(N)\simeq N^{\eta(2)p/2-\eta(p)}\), whence for \(p=4\) we get the classic intermittency exponent \(2\eta(2)-\eta(4)\), which is expected to be positive11. Footnote 10: To make these heuristics rigorous, one needs at least to know that the liminf in (8) is a limit. Footnote 11: As proposed by Frisch [24, p.122, (8.2)], Anselmet et al. [1] and Brun and Pumir [11]. \(\bullet\)**Results.** We start with the result for \(x_{0}\in\mathbb{Q}\). **Theorem 1.1**.: _Let \(x_{0}\in\mathbb{Q}\). Then,_ \[d_{x_{0}}(\alpha)=4\alpha-2,\qquad\text{ for }\qquad\frac{1}{2}\leq\alpha\leq \frac{3}{4}.\] _Let \(1<p<\infty\). Then,_ \[\big{\|}P_{\geq N}R_{x_{0}}\big{\|}_{p}^{p}\simeq\left\{\begin{array}{ll}N^ {-\frac{p}{2}-1},&p>4,\\ N^{-3}\,\log N,&p=4,\\ N^{-3p/4},&p<4,\end{array}\right.\quad\text{ so }\quad\eta(p)=\lim_{N\to\infty} \frac{\log(\|P_{\geq N}f\|_{p}^{p})}{\log(1/N)}=\left\{\begin{array}{ll}p/2 +1,&p>4,\\ 3p/4,&p\leq 4.\end{array}\right. \tag{9}\] _Consequently, \(R_{x_{0}}\) satisfies the Frisch-Parisi multifractal formalism12, in the sense that_ Footnote 12: The heuristic exponent \(\zeta(p)\) in (3) and \(\eta(p)\) defined in (8) are a priori different. However, the definition of \(\zeta(p)\) in (2) can be made rigorous using \(L^{p}\) norms so that it is equal to \(\eta(p)\), as shown by Jaffard in [30, Prop. 3.1]. \[d_{x_{0}}(\alpha)=\inf_{p>0}\{\,\alpha p-\eta(p)+1\,\},\qquad\text{ for }\qquad\frac{1}{2}\leq\alpha\leq\frac{3}{4}.\] **Remark 1.2**.: 1. A complete description of the sets \(\{\,t\,:\,\alpha_{x_{0}}(t)=\alpha\,\}\) is challenging because when \(x_{0}\neq 0\) it is not clear how the Holder regularity \(\alpha_{x_{0}}(t)\) could be characterized in terms of some exponent of irrationality like in (6). 
Still, even if necessarily \(\alpha_{x_{0}}(t)\neq\alpha_{0}(t)\), we conjecture that \(d_{x_{0}}(\alpha)=d_{0}(\alpha)\) for all \(\alpha\), where \(d_{0}\) is given in (7). 2. The intermittency exponent in (9) is \(2\eta(2)-\eta(4)=0\). However, the logarithm present in \(\|P_{\geq N}R_{x_{0}}\|_{4}^{4}\) makes \(\lim_{N\to\infty}F_{4}(N)=\infty\). For \(p>4\), we have \(\eta(2)p/2-\eta(p)=p/4-1>0\), so clearly \(\lim_{N\to\infty}F_{p}(N)=\infty\) as well, so \(R_{x_{0}}\) is intermittent in small scales when \(x_{0}\in\mathbb{Q}\). To prove Theorem 1.1 we roughly follow the strategies of Duistermaat and Jaffard. However, when \(x_{0}\neq 0\), finding the correct Diophantine sets to disprove Holder regularity, computing their Hausdorff dimension, and estimating the exponential sums corresponding to the \(L^{p}\) norms of the high-pass filters requires new ideas. As we explain in Section 2, we will use the Duffin-Schaeffer theorem and the Mass Transference Principle to overcome these difficulties. Let now \(x_{0}\not\in\mathbb{Q}\). Let \(p_{n}/q_{n}\) be its approximations by continued fractions, and define the exponents \(\mu_{n}\) by \(|x_{0}-p_{n}/q_{n}|=1/q_{n}^{\mu_{n}}\). Define the alternative13 exponent of irrationality Footnote 13: The usual exponent of irrationality is \(\mu(x_{0})=\limsup_{n\to\infty}\mu_{n}\). \[\sigma(x_{0})=\limsup_{n\to\infty}\left\{\,\mu_{n}\,:\,q_{n}\not\in 4\mathbb{N} \,\right\}\!. \tag{10}\] This exponent always exists and \(\sigma(x_{0})\geq 2\). Our result reads as follows. **Theorem 1.3**.: _Let \(x_{0}\not\in\mathbb{Q}\). Let \(2\leq\mu<2\sigma(x_{0})\), with \(\sigma(x_{0})\) as in (10). Then, for all \(\delta>0\),_ \[\frac{1}{\mu}\leq\dim_{\mathcal{H}}\left\{\,t\,:\frac{1}{2}+\frac{1}{4\mu}- \delta\leq\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\right\}\leq\frac{2}{ \mu}.\] **Remark 1.4**.: 1. We show in Figure 3 a graphic representation of Theorem 1.3. 2. Even if Theorem 1.3 does not determine the spectrum of singularities, it shows that \(R_{x_{0}}\) is multifractal when \(\sigma(x_{0})>2\), and it suggests that the spectrum of singularities should be \(d_{x_{0}}(\alpha)=4\alpha-2\) at least in the range \(\frac{1}{2}+\frac{1}{4\sigma(x_{0})}\leq\alpha\leq\frac{3}{4}\). Out of this range, the proof suggests that the spectrum might be different. 3. Regarding the \(L^{p}\) norm of the high-pass filters, the upper bound in Theorem 1.1 (9) holds for all \(x_{0}\in[0,1]\), but we do not expect it to be optimal when \(x_{0}\not\in\mathbb{Q}\). We suspect that the exact behavior, and hence the exponent \(\eta(p)\), depends on the irrationality of \(x_{0}\). We aim to study this question in future works. ### Structure of the article In Section 2 we discuss some facts in Diophantine approximation and the strategy we follow to compute the measure and dimension of Diophantine sets. In Section 3 we prove preliminary results for the local Holder regularity of \(R_{x_{0}}\): the behavior around rational points \(t\) and a general lower bound for \(\alpha_{x_{0}}(t)\) for irrational \(t\). In Section 4 we prove the first part of Theorem 1.1 by computing the spectrum of singularities of \(R_{x_{0}}\) when \(x_{0}\) is rational. In Section 5 we prove the second part of Theorem 1.1 by computing the \(L^{p}\) norms of the high-pass filters of \(R_{x_{0}}\) and proving that \(R_{x_{0}}\) satisfies the multifractal formalism. In Section 6 we prove Theorem 1.3. 
In Appendix A we compute sums of the Euler totient function restricted to arithmetic sequences required throughout the article.

Figure 3. A graphic representation of Theorem 1.3. We have a continuum of Whitney boxes parametrized by \(\mu\) along the dashed diagonal line \(d(\alpha)=4\alpha-2\). By Theorem 1.3, the graph of \(d_{x_{0}}(\alpha)\) has at least a point in each of those boxes.

### Notation Let \(A\subset\mathbb{R}\). For \(0\leq\beta\leq 1\), we denote by \(\mathcal{H}^{\beta}(A)\) the Hausdorff measures of \(A\), and \(\dim_{\mathcal{H}}A\) stands for the Hausdorff dimension of \(A\). We denote the Lebesgue measure of \(A\) by \(|A|\). Since \(R_{x_{0}}\) is periodic of period \(1\) both in \(x_{0}\) and \(t\), we work in the interval \([0,1]\). We denote the set of primes by \(\mathbb{P}\). For shortness, we denote \(\gcd(m,n)\) by \((m,n)\). As usual, the symbol \(\simeq_{Q}\) means that the estimates corresponding to the symbol \(\simeq\) depend on the parameter \(Q\). ## 2. An overview on Diophantine approximation An important part of this article relies on arguments on Diophantine approximation, that is, the study of how well an irrational number can be approximated by rationals. This section is intended to give an overview of the arguments we use in this article. We will restrict our study to numbers \(x\in[0,1]\). We focus our attention on the study of both the exponent of irrationality \[\mu(x)=\sup\Big{\{}\,\mu>0\,:\,\Big{|}x-\frac{p}{q}\Big{|}\leq\frac{1}{q^{\mu}}\text{ for infinitely many coprime pairs }(p,q)\in\mathbb{N}\times\mathbb{N}\,\Big{\}}, \tag{11}\] and the Lebesgue and Hausdorff measure properties of the sets \[A_{\mu}=\Big{\{}\,x\in[0,1]\,\mid\,\Big{|}x-\frac{p}{q}\Big{|}\leq\frac{1}{q^{\mu}}\text{ for infinitely many coprime pairs }(p,q)\in\mathbb{N}\times\mathbb{N}\,\Big{\}}, \tag{12}\] where the case \(\mu=\infty\) is understood as \(A_{\infty}=\bigcap_{\mu\geq 2}A_{\mu}\). In this article, we need to restrict the denominators of the approximations to a subset of the natural numbers, such as odd numbers, primes or multiples of a given number. In general, let \(\mathcal{Q}\subset\mathbb{N}\), and define \[A_{\mu,\mathcal{Q}}=\Big{\{}\,x\in[0,1]\,:\,\Big{|}x-\frac{p}{q}\Big{|}\leq\frac{1}{q^{\mu}}\text{ for infinitely many coprime pairs }(p,q)\in\mathbb{N}\times\mathcal{Q}\,\Big{\}}. \tag{13}\] Clearly \(A_{\mu,\mathcal{Q}}\subset A_{\mu}\), but the set could a priori be smaller. But how much smaller? To answer this question at the level of the Lebesgue measure, we will rely on Dirichlet approximation and the Duffin-Schaeffer theorem, while we will compute Hausdorff measures and dimensions via the Jarnik-Besicovitch theorem and the Mass Transference Principle. ### Lebesgue measure: Dirichlet approximation and the Duffin-Schaeffer theorem One of the consequences of the classic Dirichlet approximation theorem, or alternatively the theory of continued fractions, is that \(A_{2}=[0,1]\setminus\mathbb{Q}\). However, neither Dirichlet approximation nor continued fractions give enough information about the sequence of denominators they produce, so they cannot be used to determine the size of the set \(A_{2,\mathcal{Q}}\subset A_{2}\). The recently proved Duffin-Schaeffer conjecture gives an answer to this kind of question. **Theorem 2.1** (Duffin-Schaeffer theorem [36]).: _Let \(\psi:\mathbb{N}\to[0,\infty)\) be a function.
Define_ \[A_{\psi}=\Big{\{}\,x\in[0,1]\,:\,\Big{|}x-\frac{p}{q}\Big{|}\leq\psi(q)\text{ for infinitely many coprime pairs }(p,q)\in\mathbb{N}\times\mathbb{N}\,\Big{\}}.\] _Let \(\varphi\) denote the Euler totient function14. Then, we have the following dichotomy:_ Footnote 14: The Euler totient function: for \(q\in\mathbb{N}\), \(\varphi(q)\) is the number of natural numbers \(i\leq q\) such that \(\gcd(q,i)=1\). 1. _If_ \(\sum_{q=1}^{\infty}\varphi(q)\psi(q)=\infty\)_, then_ \(|A_{\psi}|=1\)_._ 2. _If_ \(\sum_{q=1}^{\infty}\varphi(q)\psi(q)<\infty\)_, then_ \(|A_{\psi}|=0\)_._ A couple of remarks are in place for this theorem. First, the relevant part of this theorem is \((a)\), since \((b)\) follows directly from the canonical limsup covering \[A_{\psi}\subset\bigcup_{q=Q}^{\infty}\,\bigcup_{\begin{subarray}{c}1\leq p \leq q\\ (p,q)=1\end{subarray}}\Big{(}\frac{p}{q}-\psi(q),\,\frac{p}{q}+\psi(q)\Big{)}, \quad\forall\,Q\in\mathbb{N}\quad\Longrightarrow\quad|A_{\psi}|\leq\sum_{q=Q} ^{\infty}\varphi(q)\psi(q),\quad\forall\,Q\in\mathbb{N}. \tag{14}\] Second, the main feature of this theorem is that, as opposed to the classic theorem by Khinchin15[35, Theorem 32], the arbitrariness of \(\psi\) allows to restrict the denominators to a set \(\mathcal{Q}\subset\mathbb{N}\) just by setting \(\psi(q)=0\) when \(q\not\in\mathcal{Q}\). In particular, \(A_{\mu,\mathcal{Q}}=A_{\psi}\) if we define \(\psi(q)=\mathbb{1}_{\mathcal{Q}}(q)/q^{\mu}\), where \(\mathbb{1}_{\mathcal{Q}}\) is the indicator function of the set \(\mathcal{Q}\). Hence, the relevant sum for the sets \(A_{\mu,\mathcal{Q}}\) is Footnote 15: Khinchin’s theorem states that if \(\psi:\mathbb{N}\to[0,\infty)\) is a function such that \(q^{2}\psi(q)\) is decreasing and \(\sum_{q=1}^{\infty}q\,\psi(q)=\infty\), then the set \(\{\,x\in[0,1]\,:\,|x-p/q|\leq\psi(q)\text{ for infinitely many pairs }(p,q)\in\mathbb{N}\times\mathbb{N}\,\}\) has Lebesgue measure \(1\). \[\sum_{q=1}^{\infty}\varphi(q)\psi(q)=\sum_{q\in\mathcal{Q}}\,\frac{\varphi(q) }{q^{\mu}}.\] In particular, it is fundamental to understand the behavior of the Euler totient function \(\varphi\) on \(\mathcal{Q}\). The complete proof of the theorem was given recently by Koukoulopoulos and Maynard [36, Theorem 1], but Duffin and Schaeffer proved in their article [19] back in 1941 that the theorem holds under the additional assumption that there exists \(c>0\) such that \[\sum_{q=1}^{N}\varphi(q)\,\psi(q)\geq c\sum_{q=1}^{N}q\,\psi(q),\qquad\text{ for infinitely many }N\in\mathbb{N}. \tag{15}\] In the setting of \(A_{\mu,\mathcal{Q}}\), this condition is immediately satisfied by sets \(\mathcal{Q}\) for which there is a \(c>0\) such that \(\varphi(q)>c\,q\) for all \(q\in\mathcal{Q}\). Examples of this are: * \(\mathcal{Q}=\mathbb{P}\) the set of prime numbers, and * \(\mathcal{Q}=\{\,M^{n}\,:\,n\in\mathbb{N}\,\}\) where \(M\in\mathbb{N}\), that is, the set of power of a given number \(M\). The condition (15) is also satisfied by * \(\mathcal{Q}=\{\,Mn\,:\,n\in\mathbb{N}\,\}\) where \(M\in\mathbb{N}\), that is, the set of multiples of a given number \(M\), as we prove in Appendix A. We will make use mainly of this last kind of sets along this article. ### Hausdorff dimension: the Jarnik-Besicovitch theorem and the Mass Transference Principle We said that the Dirichlet approximation theorem implies \(A_{2}=[0,1]\setminus\mathbb{Q}\), and it follows from the argument in (14) that \(|A_{\mu}|=0\) for \(\mu>2\). It is thus natural to ask how small \(A_{\mu}\) is when \(\mu>2\). 
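As a quick numerical sanity check (ours, not part of the article), the behavior of the sums \(\sum_{q\in\mathcal{Q}}\varphi(q)/q^{\mu}\) that govern the Duffin-Schaeffer dichotomy can be computed directly; the cutoffs and the choice \(M=4\) below are arbitrary.

```python
from sympy import totient, isprime

def partial_sum(mu, N, keep):
    """Partial sum of phi(q) / q^mu over 1 <= q <= N, restricted to q with keep(q) == True."""
    return sum(int(totient(q)) / q**mu for q in range(1, N + 1) if keep(q))

M = 4
for N in (10**2, 10**3, 10**4):
    s_all   = partial_sum(2.0, N, lambda q: True)         # Q = all denominators
    s_multM = partial_sum(2.0, N, lambda q: q % M == 0)    # Q = multiples of M
    s_prime = partial_sum(2.0, N, isprime)                 # Q = primes
    print(f"N={N:>6}:  all={s_all:7.3f}  multiples of {M}={s_multM:7.3f}  primes={s_prime:7.3f}")
# At the critical exponent mu = 2 each of these partial sums grows without bound
# (the first two roughly like log N, the prime sum like log log N), so the Duffin-Schaeffer
# theorem gives full measure for A_{2,Q} for these choices of Q; for mu > 2 the sums converge.
```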
A measure theoretic answer to that is the following theorem by Jarnik and Besicovitch from the 1930s, a modern version of which can be found in [23, Section 10.3] **Theorem 2.2** (Jarnik-Besicovitch theorem).: _Let \(\mu>2\) and let \(A_{\mu}\) be defined as in (12). Then, \(\dim_{\mathcal{H}}A_{\mu}=2/\mu\) and \(\mathcal{H}^{2/\mu}(A_{\mu})=\infty\)._ In this article we will need to adapt this result to the sets \(A_{\mu,\mathcal{Q}}\). Thanks to the Duffin-Schaeffer Theorem 2.1, we will be able to find the largest \(\mu_{0}\geq 2\) such that \(|A_{\mu_{0},\mathcal{Q}}|=1\), so that \(|A_{\mu,\mathcal{Q}}|=0\) for all \(\mu>\mu_{0}\). We will thus focus on computing their Hausdorff dimension of those zero-measure sets. For that, we use a theorem by Beresnevich and Velani, called the Mass Transference Principle [5, Theorem 2], that fits this setting in an efficient way. We state here its application to the unit cube and to Hausdorff measures. **Theorem 2.3** (Mass Transference Principle [5]).: _Let \(B_{n}=B_{n}(x_{n},r_{n})\) be a sequence of balls in \([0,1]^{d}\) such that \(\lim_{n\to\infty}r_{n}=0\). Let \(\alpha<d\) and let \(B_{n}^{\alpha}=B_{n}(x_{n},r_{n}^{\alpha})\) be the dilation of \(B_{n}\) centered at \(x_{n}\) by the exponent \(\alpha\). Suppose that \(B^{\alpha}:=\limsup_{n\to\infty}B_{n}^{\alpha}\) is of full Lebesgue measure, that is, \(|B^{\alpha}|=1\). Then, calling \(B:=\limsup_{n\to\infty}B_{n}\), we have \(\dim_{\mathcal{H}}B\geq\alpha\) and \(\mathcal{H}^{\alpha}(B)=\infty\)._ To illustrate the power of the Mass Transference Principle, let us explain how the Jarnik-Besicovitch Theorem2.2 can be obtained as a corollary of the Dirichlet approximation theorem. Indeed, from the definition of \(A_{\mu}\) we can write16 Footnote 16: The expression in (16) is not in the form of a limsup of balls. It follows, however, that the limsup of any enumeration whatsoever of the balls considered in the construction gives the same set. \[A_{\mu}=\limsup_{q\to\infty}\bigcup_{1\leq p\leq q,(p,q)=1}B\Big{(}\frac{p}{q},\,\frac{1}{q^{\mu}}\Big{)}. \tag{16}\] Choose \(\alpha=2/\mu\) so that \((A_{\mu})^{\alpha}=A_{\mu\alpha}=A_{2}\), which by the Dirichlet approximation theorem has full measure. Then, the Mass Transference Principle implies \(\dim_{\mathcal{H}}A_{\mu}\geq 2/\mu\) and \(\mathcal{H}^{2/\mu}(A_{\mu})=\infty\). The upper bound follows from the canonical cover of \(A_{\mu}\) in (16), proceeding like in (14). For \(A_{\mu,\mathcal{Q}}\), we will reproduce this argument by first using the Duffin-Schaeffer theorem to detect the largest \(\mu_{0}\) for which \(|A_{\mu_{0},\mathcal{Q}}|=1\), and then combining the property \((A_{\mu,\mathcal{Q}})^{\alpha}=A_{\mu\alpha,\mathcal{Q}}\) with the Mass Transference Principle to compute \(\dim_{\mathcal{H}}A_{\mu,\mathcal{Q}}\). ### Exponent of irrationality and continued fractions We finish this section with a brief account of the connection of the irrationality measure (11) and continued fractions, which we use along the article. Let \(x\in[0,1]\setminus\mathbb{Q}\). Assume \(x=[a_{0};a_{1},\ldots,a_{n},\ldots]\) is the continued fraction expression of \(x\). For every \(n\in\mathbb{N}\), the \(n\)-th convergent is defined as \([a_{0};a_{1},\ldots,a_{n}]\in\mathbb{Q}\), which we denote by \(p_{n}/q_{n}\) with \((p_{n},q_{n})=1\). If we define the exponents \((\mu_{n})_{n\in\mathbb{N}}\) by \[\Big{|}x-\frac{p_{n}}{q_{n}}\Big{|}=\frac{1}{q_{n}^{\mu_{n}}},\qquad\text{ then}\qquad\mu(x)=\limsup_{n\to\infty}\mu_{n}. \tag{17}\] ## 3. 
Preliminary results on the local regularity of \(R_{x_{0}}\) In this section we carry over to \(R_{x_{0}}\) regularity results that are by now classical for \(R_{0}\). In Section 3.1 we prove that \(R_{x_{0}}\) is globally \(C^{1/2}\). In Section 3.2 we compute the asymptotic behavior of \(R_{x_{0}}\) around rationals. In Section 3.3 we give a lower bound for \(\alpha_{x_{0}}(t)\) that is independent of \(x_{0}\). ### A global Holder regularity result Duistermaat [20, Lemma 4.1.] proved that \(R_{0}\) is globally \(C^{1/2}(t)\). The same holds for all \(x_{0}\in\mathbb{R}\). We include the proof for completeness. **Proposition 3.1**.: _Let \(x_{0}\in\mathbb{R}\). Then, \(\alpha_{x_{0}}(t)\geq 1/2\) for all \(t\in\mathbb{R}\). That is, \(R_{x_{0}}\) is globally \(C^{1/2}\)._ Proof.: For \(h\neq 0\), let \(N\in\mathbb{N}\) such that \(\frac{1}{(N+1)^{2}}\leq|h|<\frac{1}{N^{2}}\), and write \[R_{x_{0}}(t+h)-R_{x_{0}}(t)=\sum_{|n|\leq N}\frac{e^{2\pi in^{2}t}\,e^{2\pi inx_{0 }}}{n^{2}}\Big{(}e^{2\pi in^{2}h}-1\Big{)}+\sum_{|n|>N}\frac{e^{2\pi in^{2}t}\,e ^{2\pi inx_{0}}}{n^{2}}\Big{(}e^{2\pi in^{2}h}-1\Big{)}.\] Since \(|e^{ix}-1|\leq|x|\) for all \(x\in\mathbb{R}\), we bound \[\Big{|}\sum_{|n|\leq N}\frac{e^{2\pi in^{2}t}\,e^{2\pi inx_{0}}}{n^{2}}\Big{(}e ^{2\pi in^{2}h}-1\Big{)}\Big{|}\leq\sum_{|n|\leq N}\frac{\big{|}e^{2\pi in^{2}h }-1\big{|}}{n^{2}}\leq 2|h|N<2|h|\frac{1}{\sqrt{|h|}}=2\sqrt{|h|}.\] For the other sum, we trivially bound \(\big{|}e^{2\pi in^{2}h}-1\big{|}\leq 2\) to get \[\Big{|}\sum_{|n|>N}\frac{e^{2\pi in^{2}t}\,e^{2\pi inx_{0}}}{n^{2}}\Big{(}e^{2 \pi in^{2}h}-1\Big{)}\Big{|}\leq 2\,\sum_{n=N+1}^{\infty}\frac{2}{n^{2}} \leq\frac{4}{N}\leq\frac{8}{N+1}\leq 8\sqrt{|h|}.\] Hence \(\big{|}R_{x_{0}}(t+h)-R_{x_{0}}(t)\big{|}\leq 10|h|^{1/2}\). This holds for all \(t\), so \(R_{x_{0}}\in C^{1/2}(t)\) for all \(t\in\mathbb{R}\). ### Asymptotic behavior of \(R_{x_{0}}(t)\) around rational points \(t\) The building block for all results in this article is the behavior of \(R_{x_{0}}\) around rationals, which we compute explicitly. **Proposition 3.2**.: _Let \(x_{0}\in\mathbb{R}\). Let \(p,q\in\mathbb{N}\) be such that \((p,q)=1\). Then,_ \[R_{x_{0}}\left(\frac{p}{q}+h\right)-R_{x_{0}}\left(\frac{p}{q}\right)=-2\pi ih +\frac{\sqrt{|h|}}{q}\,\sum_{m\in\mathbb{Z}}G(p,m,q)\,F_{\pm}\left(\frac{x_{0 }-m/q}{\sqrt{h}}\right),\qquad\text{ for }\,h\neq 0,\] _where \(F_{\pm}=F_{+}\) if \(h>0\) and \(F_{\pm}=F_{-}\) if \(h<0\), and_ \[G(p,m,q)=\sum_{r=0}^{q-1}e^{2\pi i\frac{pr^{2}+mr}{q}},\qquad F_{\pm}(\xi)= \int_{\mathbb{R}}\frac{e^{\pm 2\pi ix^{2}}-1}{x^{2}}\,e^{2\pi ix\xi}\,dx.\] _The function \(F_{\pm}\) is bounded and continuous, \(F_{\pm}(0)=2\pi(-1\pm i)\), and_ \[F_{\pm}(\xi)=(1\pm i)\,\frac{e^{\mp\pi i\xi^{2}/2}}{\xi^{2}}+O\left(\frac{1}{ \xi^{4}}\right)=O\left(\frac{1}{\xi^{2}}\right),\qquad\text{ as }\quad\xi\to\infty.\] Proof.: We follow the classical approach, which can be traced back to Smith [39], of using the Poisson summation formula. From the definition of \(R_{x_{0}}\), complete first the sum to \(n\in\mathbb{Z}\) to write \[R_{x_{0}}\left(\frac{p}{q}+h\right)-R_{x_{0}}\left(\frac{p}{q}\right)=-2\pi ih +\sum_{n\in\mathbb{Z}}\frac{e^{2\pi in^{2}h}-1}{n^{2}}\,e^{2\pi i\frac{pn^{2}} {q}}\,e^{2\pi inx_{0}},\] where we must interpret the term \(n=0\) as the value of \(\frac{e^{2\pi in^{2}h}-1}{n^{2}}\simeq 2\pi ih\) as \(n\to 0\). 
Split the sum modulo \(q\) by writing \(n=mq+r\) and \[\sum_{n\in\mathbb{Z}}\frac{e^{2\pi in^{2}h}-1}{n^{2}}\,e^{2\pi i\frac{pn^{2}} {q}}\,e^{2\pi inx_{0}}=\sum_{r=0}^{q-1}e^{2\pi i\frac{pr^{2}}{q}}\,\sum_{m\in \mathbb{Z}}\frac{e^{2\pi i(mq+r)^{2}h}-1}{(mq+r)^{2}}\,e^{2\pi i(mq+r)x_{0}}. \tag{18}\] Use the Poisson summation formula for the function \[f(y)=\frac{e^{2\pi i(yq+r)^{2}h}-1}{(yq+r)^{2}}\,e^{2\pi i(yq+r)x_{0}},\] for which, changing variables \((yq+r)\sqrt{|h|}=z\), we have \[\widehat{f}(\xi)=\frac{\sqrt{|h|}}{q}\,e^{2\pi ir\xi/q}\,\int\frac{e^{2\pi i\, \mathrm{sgn}(h)z^{2}}-1}{z^{2}}\,e^{2\pi i\frac{\xi}{\sqrt{|h|}}(x_{0}-\xi/q)} \,dz=\frac{\sqrt{|h|}}{q}\,e^{2\pi ir\xi/q}\,F_{\pm}\Big{(}\frac{x_{0}-\xi/q}{ \sqrt{|h|}}\Big{)}.\] Therefore, \[(\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq
**Remark 3.4**.: The difference between \(x_{0}=0\) and \(x_{0}\neq 0\) is clear from Corollary 3.3. * If \(x_{0}=0\), we have \(x_{q}=0=m_{q}\) for all \(q\). The main term is \(|h|^{1/2}q^{-1}\,G(p,0,q)\,F_{\pm}(0)\), so there is a clear dichotomy: \(R_{0}\) is differentiable at \(p/q\) if and only if \(G(p,0,q)=0\), which happens if and only if \(q\equiv 2\pmod{4}\); at all other rationals, \(R_{0}\) is \(C^{1/2}\). * If \(x_{0}\neq 0\), it is in general false that \(x_{q}=0\), so to determine the differentiability of \(R_{x_{0}}\) we need to control the magnitude of \(F_{\pm}(x_{q}/\sqrt{|h|})\).
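The dichotomy in Remark 3.4 rests on the classical vanishing criterion for quadratic Gauss sums, recalled in (25) below. The following numerical sketch is only an illustration and plays no role in the proofs; the helper `gauss_sum` and the ranges of \(p\) and \(q\) are our choices.

```python
import cmath
from math import gcd, isclose, sqrt

def gauss_sum(p, m, q):
    """Generalized quadratic Gauss sum G(p, m, q) = sum_{r=0}^{q-1} exp(2*pi*i*(p*r^2 + m*r)/q)."""
    return sum(cmath.exp(2j * cmath.pi * (p * r * r + m * r) / q) for r in range(q))

# For m = 0 and gcd(p, q) = 1 the sum vanishes exactly when q = 2 (mod 4);
# otherwise its modulus is sqrt(q) (q odd) or sqrt(2q) (q = 0 mod 4).
for q in range(2, 40):
    for p in range(1, q):
        if gcd(p, q) != 1:
            continue
        g = abs(gauss_sum(p, 0, q))
        assert isclose(g, 0.0, abs_tol=1e-8) == (q % 4 == 2)
        if q % 4 != 2:
            expected = sqrt(2 * q) if q % 4 == 0 else sqrt(q)
            assert isclose(g, expected, rel_tol=1e-8)
```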
### Lower bounds for the local Holder regularity We now give lower bounds for \(\alpha_{x_{0}}(t)\) that do not depend on \(x_{0}\). In Section 3.3.1 we work with \(t\in\mathbb{Q}\), and in Section 3.3.2 with \(t\not\in\mathbb{Q}\). #### 3.3.1. At rational points There is a dichotomy in the Holder regularity of \(R_{x_{0}}\) at rational points. **Proposition 3.5**.: _Let \(x_{0}\in\mathbb{R}\) and \(t\in\mathbb{Q}\). Then, either \(\alpha_{x_{0}}(t)=1/2\) or \(\alpha_{x_{0}}(t)\geq 3/2\)._ Proof.: Let \(t=p/q\) with \((p,q)=1\). If \(q\) is fixed, we get \(\min\big{(}\sqrt{q}\,|h|,q^{3/2}\,|h|^{3/2}\big{)}=q^{3/2}|h|^{3/2}\) for small enough \(|h|\), so from Corollary 3.3 we get \[R_{x_{0}}\Big{(}\frac{p}{q}+h\Big{)}-R_{x_{0}}\Big{(}\frac{p}{q}\Big{)}=-2 \pi ih+\frac{\sqrt{|h|}}{q}\,G(p,m_{q},q)F_{\pm}\Big{(}\frac{x_{q}}{\sqrt{|h|} }\Big{)}+O\Big{(}q^{3/2}h^{3/2}\Big{)}. \tag{19}\] Then, differentiability completely depends on the Gauss sum \(G(p,m_{q},q)\) and on \(x_{q}\). **Case 1**: If \(G(p,m_{q},q)=0\), then \(\big{|}R_{x_{0}}\big{(}\frac{p}{q}+h\big{)}-R_{x_{0}}\big{(}\frac{p}{q}\big{)} +2\pi ih\big{|}\lesssim_{q}h^{3/2}\), so \(\alpha_{x_{0}}(p/q)\geq 3/2\). **Case 2**: If \(G(p,m_{q},q)\neq 0\) and \(x_{q}\neq 0\). Then, \(|G(p,m_{q},q)|\simeq\sqrt{q}\) and \(\lim_{h\to 0}x_{q}/\sqrt{|h|}=\infty\), so \(\big{|}F_{\pm}\big{(}x_{q}/\sqrt{|h|}\big{)}\big{|}\lesssim h/x_{q}^{2}\). Hence, \(\alpha_{x_{0}}(p/q)\geq 3/2\) because \[R_{x_{0}}\Big{(}\frac{p}{q}+h\Big{)}-R_{x_{0}}\Big{(}\frac{p}{q}\Big{)}=-2\pi ih +O\Big{(}\frac{\sqrt{h}}{\sqrt{q}}\frac{h}{x_{q}^{2}}+q^{3/2}h^{3/2}\Big{)}=- 2\pi ih+O_{q}\big{(}h^{3/2}\big{)}.\] **Case 3**: If \(G(p,m_{q},q)\neq 0\) and \(x_{q}=0\), we have \(|G(p,m_{q},q)|\simeq\sqrt{q}\), so from (19) we get \[\Big{|}R_{x_{0}}\Big{(}\frac{p}{q}+h\Big{)}-R_{x_{0}}\Big{(}\frac{p}{q}\Big{)} \Big{|}\geq\frac{\sqrt{|h|}}{q}|G(p,m_{q},q)||F_{\pm}(0)|+O_{q}(h)\simeq\frac {\sqrt{h}}{\sqrt{q}}+O_{q}(h)\gtrsim_{q}h^{1/2}\] for \(h\ll_{q}1\). Together with Proposition 3.1, this implies \(\alpha_{x_{0}}(p/q)=1/2\). #### 3.3.2. At irrational points We give a lower bound \(\alpha_{x_{0}}(t)\) that depends on the exponent of irrationality of \(t\), but not on \(x_{0}\). **Proposition 3.6**.: _Let \(x_{0}\in\mathbb{R}\) and \(t\in\mathbb{R}\setminus\mathbb{Q}\). Let \(\mu(t)\) be the exponent of irrationality of \(t\). Then, \(\alpha_{x_{0}}(t)\geq\frac{1}{2}+\frac{1}{2\mu(t)}\)._ The proof of this result, which we include for completeness, closely follows the procedure by Chamizo and Ubis [14, Proof of Theorem 2.3]. **Remark 3.7**.: Similar to what happens for \(x_{0}=0\), where \(\alpha_{0}(t)=1/2+1/2\widetilde{\mu}(t)\geq 1/2+1/2\mu(t)\) (see (6)), we do not expect the bound in Proposition 3.6 to be optimal for all \(t\not\in\mathbb{Q}\). However, it will be enough to compute the spectrum of singularities. Proof.: In view of Proposition 3.1, there is nothing to prove if \(\mu(t)=\infty\), so assume \(\mu(t)<\infty\). Following notation in Section 2.3, let \(p_{n}/q_{n}\) be the \(n\)-th approximation by continued fractions of \(t\). 
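To see the objects entering this proof concretely, one can tabulate the convergents \(p_{n}/q_{n}\) of a sample \(t\) together with the effective exponents \(\mu_{n}\) defined by \(|t-p_{n}/q_{n}|=q_{n}^{-\mu_{n}}\). The sketch below is purely illustrative and is not used in the argument; the helper `convergents` and the choice \(t=\sqrt{2}\) are ours.

```python
import math

def convergents(t, n_max=12):
    """First continued fraction convergents p_n/q_n of t, via the recursions
    p_n = a_n p_{n-1} + p_{n-2} and q_n = a_n q_{n-1} + q_{n-2}."""
    p_prev, q_prev = 1, 0
    p_cur, q_cur = int(math.floor(t)), 1
    out = [(p_cur, q_cur)]
    y = t - math.floor(t)
    for _ in range(n_max):
        if y == 0:
            break
        y = 1.0 / y
        a = int(math.floor(y))
        y -= a
        p_prev, q_prev, p_cur, q_cur = p_cur, q_cur, a * p_cur + p_prev, a * q_cur + q_prev
        out.append((p_cur, q_cur))
    return out

t = math.sqrt(2)  # sample irrational; its exponent of irrationality is mu(t) = 2
for p, q in convergents(t)[1:]:
    mu_n = -math.log(abs(t - p / q)) / math.log(q)
    print(f"p_n/q_n = {p}/{q},  mu_n ~ {mu_n:.3f}")  # the exponents mu_n stay close to 2
```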
Center the asymptotic behavior in Corollary 3.3 at \(p_{n}/q_{n}\), and bound it from above by \[\Big{|}R_{x_{0}}\Big{(}\frac{p_{n}}{q_{n}}+h\Big{)}-R_{x_{0}}\Big{(}\frac{p_{n}} {q_{n}}\Big{)}\Big{|}\lesssim\frac{\sqrt{h}}{\sqrt{q_{n}}}+h+\min\Big{(} \sqrt{q_{n}}\,h,q_{n}^{3/2}\,h^{3/2}\Big{)}, \tag{20}\] using that \(|G(p_{n},m_{q_{n}},q_{n})|\leq\sqrt{2q_{n}}\) for all \(n\in\mathbb{N}\) and \(|F(x)|\lesssim 1\) for all \(x\in\mathbb{R}\). Let \(h\neq 0\) be small enough. The sequence of errors \(|t-p_{n}/q_{n}|\) is strictly decreasing, so we can choose \(n\in\mathbb{N}\) such that \[\left|t-\frac{p_{n}}{q_{n}}\right|\leq|h|<\left|t-\frac{p_{n-1}}{q_{n-1}} \right|. \tag{21}\] Then, from (20), (21) and \(|t-p_{n}/q_{n}+h|\leq 2|h|\), we get \[\begin{split}|R_{x_{0}}\left(t+h\right)&-R_{x_{0}} \left(t\right)|\\ &\leq\left|R_{x_{0}}\left(\frac{p_{n}}{q_{n}}+t-\frac{p_{n}}{q_{n} }+h\right)-R_{x_{0}}\left(\frac{p_{n}}{q_{n}}\right)\right|+\left|R_{x_{0}} \left(\frac{p_{n}}{q_{n}}+t-\frac{p_{n}}{q_{n}}\right)-R_{x_{0}}\left(\frac{ p_{n}}{q_{n}}\right)\right|\\ &\lesssim\frac{\sqrt{|h|}}{\sqrt{q_{n}}}+|h|+\min\left(\sqrt{q_{n }}\left|h\right|,q_{n}^{3/2}\left|h\right|^{3/2}\right).\end{split} \tag{22}\] Next we compute the dependence between \(q_{n}\) and \(h\). By the property of continued fractions \[\frac{1}{q_{n}^{\mu_{n}}}=\left|t-\frac{p_{n}}{q_{n}}\right|\leq\frac{1}{q_{n +1}q_{n}},\] witht\(\mu_{n}\) as in (17), we get \(1/q_{n}\leq 1/q_{n+1}^{1/(\mu_{n}-1)}\) for all \(n\in\mathbb{N}\). Then, from (21) we get \[\frac{1}{q_{n}^{\mu_{n}}}\leq|h|<\frac{1}{q_{n-1}^{\mu_{n-1}}}\leq\frac{1}{q_{ n}^{\mu_{n-1}/(\mu_{n-1}-1)}}. \tag{23}\] We now bound each term in (22) using (23). * For the first term, by (23), \(\sqrt{|h|}/\sqrt{q_{n}}\leq|h|^{\frac{1}{2}+\frac{1}{2\mu_{n}}}\). * The fact that \(\mu_{n}\geq 2\) implies \(\frac{1}{2}+\frac{1}{2\mu_{n}}\leq\frac{3}{4}\), so \(|h|\leq|h|^{3/4}\leq|h|^{\frac{1}{2}+\frac{1}{2\mu_{n}}}\) and the second term is absorbed by the first one. * For the third term, we write the minimum as \[\min(\sqrt{q_{n}}\left|h\right|,q_{n}^{3/2}\left|h\right|^{3/2})=\left\{ \begin{array}{ll}\sqrt{q_{n}}\left|h\right|,&\text{ when }|h|\geq 1/q_{n}^{2},\\ q_{n}^{3/2}\left|h\right|^{3/2}&\text{ when }|h|\leq 1/q_{n}^{2}.\end{array}\right.\] So we have two regions: * When \(|h|\geq 1/q_{n}^{2}\), use (23) to bound \[\sqrt{q_{n}}\left|h\right|\leq\frac{|h|}{|h|^{(\mu_{n-1}-1)/2\mu_{n-1}}}=|h|^{ \frac{1}{2}+\frac{1}{2\mu_{n-1}}}.\] * When \(|h|\leq 1/q_{n}^{2}\), we directly have \(q_{n}\leq|h|^{-1/2}\), so \[q_{n}^{3/2}\left|h\right|^{3/2}=|h|^{3/2-3/4}=|h|^{3/4}\leq|h|^{\frac{1}{2}+ \frac{1}{2\mu_{n-1}}},\] where in the last inequality we used \(\frac{1}{2}+\frac{1}{2\mu_{n-1}}\leq\frac{3}{4}\) as before. Gathering all cases, we get \[|R_{x_{0}}(t+h)-R_{x_{0}}(t)|\leq|h|^{\frac{1}{2}+\frac{1}{2\mu_{n}}}+|h|^{ \frac{1}{2}+\frac{1}{2\mu_{n-1}}}.\] From the definition of the exponent of irrationality \(\mu(t)=\limsup_{n\to\infty}\mu_{n}\), for any \(\delta>0\) there exists \(N_{\delta}\in\mathbb{N}\) such that \(\mu_{n}\leq\mu(t)+\delta\) for all \(n\geq N_{\delta}\). Then, since \(|h|<1\), we have \(|h|^{\frac{1}{2}+\frac{1}{2\mu_{n}}}\leq|h|^{\frac{1}{2}+\frac{1}{2\mu(t)+2 \delta}}\) for all \(n\geq N_{\delta}\). 
Renaming \(\delta\), we get \(N_{\epsilon}\in\mathbb{N}\) such that \[|R_{x_{0}}(t+h)-R_{x_{0}}(t)|\leq|h|^{\frac{1}{2}+\frac{1}{2\mu(t)}-\delta}, \qquad\text{ for all }\quad|h|\leq\left|t-\frac{p_{N_{\delta}}}{q_{N_{\delta}}}\right|,\] so \(\alpha_{x_{0}}(t)\geq\frac{1}{2}+\frac{1}{2\mu(t)}-\delta\). Since this holds for all \(\delta>0\), we conclude that \(\alpha_{x_{0}}(t)\geq\frac{1}{2}+\frac{1}{2\mu(t)}\). ## 4. Main results for the Holder regularity when \(x_{0}\in\mathbb{Q}\) In this section we give upper bounds for the Holder regularity \(\alpha_{x_{0}}(t)\) and compute the spectrum of singularities \(d_{x_{0}}\) when \(x_{0}\in\mathbb{Q}\), and thus prove the first part of Theorem 1.1. Let us fix \(x_{0}=P/Q\) such that \((P,Q)=1\). To compute the spectrum of singularities \(d_{x_{0}}\), it is fundamental to understand the regularity \(\alpha_{x_{0}}(t)\) at irrational \(t\). Still, for that we first need to study the case \(t\) rational by characterizing the rational points \(t\) where \(R_{x_{0}}\) is not differentiable. ### At rational points \(t\) Based on Corollary 3.3, in the proof of Proposition 3.5 we established that \(R_{x_{0}}\) is not differentiable at \(t=p/q\) if and only if \[G(p,m_{q},q)\neq 0\qquad\text{ and }\qquad x_{q}=\operatorname{dist}\left(x_{0}, \frac{\mathbb{Z}}{q}\right)=0,\] in which case \(\alpha_{x_{0}}(p/q)=1/2\). Recall that \(m_{q}\in\mathbb{Z}\) is the number such that \(\operatorname{dist}(x_{0},\mathbb{Z}/q)=|x_{0}-m_{q}/q|\). We characterize this situation in the following proposition. **Proposition 4.1**.: _Let \(x_{0}=P/Q\) with \(\gcd(P,Q)=1\), and let \(t=p/q\) with \(\gcd(p,q)=1\)._ * _If_ \(Q\equiv 1\pmod{2}\)_, then_ \(R_{x_{0}}\) _is non-differentiable at_ \(t=p/q\) _if and only if_ \(q=kQ\) _with_ \(k\equiv 0,1,3\pmod{4}\)_._ * _If_ \(Q\equiv 0\pmod{4}\)_, then_ \(R_{x_{0}}\) _is non-differentiable at_ \(t=p/q\) _if and only if_ \(q=kQ\) _with_ \(k\equiv 0\pmod{2}\)_._ * _If_ \(Q\equiv 2\pmod{4}\)_, then_ \(R_{x_{0}}\) _is non-differentiable at_ \(t=p/q\) _if and only if_ \(q=kQ\) _with_ \(k\in\mathbb{Z}\)_._ _In all such cases, the asymptotic behavior is_ \[R_{x_{0}}\left(\frac{p}{q}+h\right)-R_{x_{0}}\left(\frac{p}{q}\right)=c\,e^{2 \pi i\phi_{p,q,x_{0}}}\,F_{\pm}(0)\,\frac{\sqrt{|h|}}{\sqrt{q}}-2\pi ih+O \left(\min\left(\sqrt{q}\,h,q^{3/2}\,h^{3/2}\right)\right). \tag{24}\] _where \(c=1\) or \(c=\sqrt{2}\) depending on parity conditions of \(Q\) and \(q\). In particular, \(\alpha_{x_{0}}(t)=1/2\)._ Proof.: In view of the proof of Proposition 3.5, we must identify the conditions for \(G(p,m_{q},q)\neq 0\) and \(x_{q}=0\). Since \(x_{q}=\operatorname{dist}(P/Q,\mathbb{Z}/q)\), we have \(x_{q}=0\) when there exists \(m_{q}\in\mathbb{Z}\) such that \[\frac{P}{Q}=\frac{m_{q}}{q}\quad\Longleftrightarrow\quad Pq=m_{q}Q.\] Since \(\gcd(P,Q)=1\), then necessarily \(Q|q\), that is, \(q\) must be a multiple of \(Q\). Reversely, if \(q=kQ\), then picking \(m_{q}=kP\) we have \(m_{q}/q=P/Q\). In short, \[x_{q}=0\quad\Longleftrightarrow\quad q\text{ is a multiple of }Q.\] Therefore, let \(q=kQ\) for some \(k\in\mathbb{N}\). Then, \(m_{q}=kP\). Let us characterize the second condition \(G(p,m_{q},q)=G(p,kP,kQ)\neq 0\). It is well-known that \[G(a,b,c)\neq 0\quad\Longleftrightarrow\quad\text{ either }\left\{\begin{array}{l}c \text{ is odd, or}\\ c\text{ is even and }\frac{c}{2}\equiv b\pmod{2}.\end{array}\right. \tag{25}\] We separate cases: * Suppose \(Q\) is odd. 
Then, according to (25), we need either * \(kQ\) odd, which holds if and only if \(k\) is odd, or * \(kQ\) even, which holds if and only if \(k\) is even, and \(kQ/2\equiv kP\pmod{2}\). Since \(Q\) is odd and \(k\) is even, this is equivalent to \(k/2\equiv 0\pmod{2}\), which means \(k\equiv 0\pmod{4}\). Therefore, if \(q=kQ\), the Gauss sum \(G(p,m_{q},q)\neq 0\) if and only if \(k\equiv 0,1,3\pmod{4}\). * Suppose \(Q\equiv 0\pmod{4}\). Since \(q=kQ\) is even, by (25) we need \(kQ/2\equiv kP\pmod{2}\). Since \(Q\) is a multiple of \(4\), this is equivalent to \(kP\equiv 0\pmod{2}\). But since \(Q\) is even, then \(P\) must be odd. Therefore, \(k\) must be even. In short, if \(q=kQ\), we have \(G(p,m_{q},q)\neq 0\) if and only if \(k\) is even. * Suppose \(Q\equiv 2\pmod{4}\). Since \(q=kQ\) is even, by (25) we need \(kQ/2\equiv kP\pmod{2}\). Now both \(Q/2\) and \(P\) are odd, so this is equivalent to \(k\equiv k\pmod{2}\), which is of course true. Therefore, if \(q=kQ\), we have \(G(p,m_{q},q)\neq 0\) for all \(k\in\mathbb{Z}\). Once all cases have been identified, expression (24) follows from Corollary 3.3 and from the fact that if \(G(p,m_{q},q)\neq 0\) we have \(|G(p,m_{q},q)|=c\sqrt{q}\) with \(c=1\) or \(c=\sqrt{2}\). ### At irrational points \(t\) Let now \(t\not\in\mathbb{Q}\). To obtain an upper bound for \(\alpha_{x_{0}}(t)\), we will approximate \(t\) by rationals \(p/q\) where \(R_{x_{0}}\) is non-differentiable and use the asymptotic behavior (24). For that, however, we need to make sure that \(t\) can be properly approximated by rationals with denominators satisfying the conditions in Proposition 4.1, which depend on the parity of \(Q\). To reduce the cases to treat, let us further restrict the denominators \(q\) in order to unify those conditions17. It is easy to see that if \(q\in 4Q\mathbb{N}\), the three conditions in Proposition 4.1 are simultaneously satisfied. Hence (24) always holds if \(q\in 4Q\mathbb{N}\). Footnote 17: We lose nothing with this reduction when computing the spectrum of singularities, but it may be problematic when computing the Hölder regularity \(\alpha_{x_{0}}(t)\) for all \(t\). Let \(\mu\in[2,\infty)\). Define the classic Diophantine set \[A_{\mu}=\left\{\,t\in(0,1)\setminus\mathbb{Q}\,:\,\big{|}t-\frac{p}{q}\big{|} \leq\frac{1}{q^{\mu}}\ \ \text{for i. m. coprime pairs }(p,q)\in\mathbb{N}\times\mathbb{N}\,\right\}\] and for \(0<a<1\) small enough define the restricted Diophantine set \[A_{\mu,Q}=\left\{\,t\in(0,1)\setminus\mathbb{Q}\,:\,\big{|}t-\frac{p}{q}\big{|} \leq\frac{a}{q^{\mu}}\ \ \text{for i. m. coprime pairs }(p,q)\in\mathbb{N}\times 4Q\mathbb{N}\, \right\}.\] Recall that for \(\mu=\infty\) we define \(A_{\infty}=\bigcap_{\mu\geq 2}A_{\mu}\) and \(A_{\infty,Q}=\bigcap_{\mu\geq 2}A_{\mu,Q}\). Clearly, \(A_{\mu,Q}\subset A_{\mu}\). We give an upper bound for \(\alpha_{x_{0}}(t)\) for \(t\in A_{\mu,Q}\). **Proposition 4.2**.: _Let \(\mu\geq 2\) and \(t\in A_{\mu,Q}\). Then, \(\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\)._ Proof.: We begin with the case \(\mu<\infty\). 
If \(t\in A_{\mu,Q}\), there is a sequence of irreducible fractions \(p_{n}/q_{n}\) with \(q_{n}\in 4Q\mathbb{N}\), for which we can use (24) and write \[R_{x_{0}}\left(t\right)-R_{x_{0}}\Big{(}\frac{p_{n}}{q_{n}}\Big{)}=c\,e^{2\pi i \phi_{n,x_{0}}}\,\frac{\sqrt{h_{n}}}{\sqrt{q_{n}}}-2\pi ih_{n}+O\left(\min\left( \sqrt{q_{n}}\,h_{n},q_{n}^{3/2}\,h_{n}^{3/2}\right)\right), \tag{26}\] where we absorbed \(F(0)\) into \(c\) and we defined \(h_{n}\) and \(\mu_{n}\) as \[h_{n}=\Big{|}t-\frac{p_{n}}{q_{n}}\Big{|}=\frac{1}{q_{n}^{\mu_{n}}}\leq\frac{a }{q_{n}^{\mu}}<\frac{1}{q_{n}^{\mu}}. \tag{27}\] We now aim to absorb the second and third terms in (26) into the first term, which has magnitude \(\sqrt{h_{n}}/\sqrt{q_{n}}\). First, observe that \(q_{n}^{2}h_{n}\leq 1\) because \(\mu\geq 2\). This is equivalent to \(q_{n}^{3/2}h_{n}^{3/2}\leq\sqrt{q_{n}}h_{n}\), so \(\min(\sqrt{q_{n}}\,h_{n},q_{n}^{3/2}\,h_{n}^{3/2})=q_{n}^{3/2}\,h_{n}^{3/2}.\) Now, letting \(C\) be the universal constant in the \(O\) in (26), \[C\,q_{n}^{3/2}h_{n}^{3/2}\leq\frac{c}{4}\frac{\sqrt{h_{n}}}{\sqrt{q_{n}}} \qquad\Longleftrightarrow\qquad q_{n}^{2}h_{n}\leq\frac{c}{4C},\] and since \(q_{n}^{2}h_{n}\leq aq_{n}^{2-\mu}\leq a\), both inequalities hold if we choose \(a\leq c/(4C)\). Regarding the second term, we have \[2\pi h_{n}\leq\frac{c}{4}\,\frac{\sqrt{h_{n}}}{\sqrt{q_{n}}}\qquad \Longleftrightarrow\qquad q_{n}\,h_{n}\leq\Big{(}\frac{c}{8\pi}\Big{)}^{2}\] This holds for large \(n\) because \(q_{n}^{2}h_{n}\leq 1\) implies \(q_{n}\,h_{n}\leq 1/q_{n}\), and because \(\limsup_{n\to\infty}q_{n}=\infty\) (otherwise \(q_{n}\) would be bounded and hence the sequence \(p_{n}/q_{n}\) would be finite). All together, using the reverse triangle inequality in (26) and the bound for \(h_{n}\) in (27) \[\Big{|}R_{x_{0}}\left(t\right)-R_{x_{0}}\Big{(}\frac{p_{n}}{q_{n}}\Big{)}\Big{|} \geq\frac{c}{2}\,\frac{\sqrt{h_{n}}}{\sqrt{q_{n}}}\geq\frac{c}{2}\,h_{n}^{\frac {1}{2}+\frac{1}{2\mu}},\qquad\forall n\gg 1.\] This means that \(R_{x_{0}}\) cannot be better than \(\mathcal{C}^{\frac{1}{2}+\frac{1}{2\mu}}\) at \(t\), thus concluding the proof for \(\mu<\infty\). If \(t\in A_{\infty,Q}\), by definition \(t\in A_{\mu,Q}\) for all \(\mu\geq 2\), hence we just proved that \(\alpha_{x_{0}}(t)\leq 1/2+1/(2\mu)\) for all \(\mu\geq 2\). Taking the limit \(\mu\to\infty\) we get \(\alpha_{x_{0}}(t)\leq 1/2\). We need to compute the Hausdorff dimension of the sets \(\{\,t\,:\,\alpha_{x_{0}}(t)=\alpha\,\}\) with prescribed \(\alpha\), so we would like to complement Proposition 4.2 and prove that for \(t\in A_{\mu,Q}\) we also have \(\alpha_{x_{0}}(t)\geq\frac{1}{2}+\frac{1}{2\mu}\). According to Proposition 3.6, it would suffice to prove that \(t\in A_{\mu,Q}\) has irrationality \(\mu(t)=\mu\). Unfortunately, when \(\mu<\infty\) this need not be true. To fix this, for \(2\leq\mu<\infty\) define the companion sets \[B_{\mu}=A_{\mu}\setminus\bigcup_{\epsilon>0}A_{\mu+\epsilon}=\Big{\{}\,t\in A _{\mu}\,\mid\,\forall\epsilon>0,\,\big{|}t-\frac{p}{q}\big{|}\leq\frac{1}{q^{ \mu+\epsilon}}\ \ \text{only for finitely many}\ \frac{p}{q}\,\Big{\}},\] and \[B_{\mu,Q}=A_{\mu,Q}\setminus\bigcup_{\epsilon>0}A_{\mu+\epsilon}=\Big{\{}\,t \in A_{\mu,Q}\,\mid\,\forall\epsilon>0,\,\big{|}t-\frac{p}{q}\big{|}\leq\frac{ 1}{q^{\mu+\epsilon}}\ \ \text{only for finitely many}\ \frac{p}{q}\,\Big{\}}, \tag{28}\] which have the properties we need. **Proposition 4.3**.: _Let \(2\leq\mu<\infty\). Then,_ 1. 
\(B_{\mu,Q}\subset B_{\mu}\subset\{\,t\in\mathbb{R}\setminus\mathbb{Q}\,:\,\mu( t)=\mu\,\}\)_._ 2. _If_ \(t\in B_{\mu,Q}\)_, then_ \(\alpha_{x_{0}}(t)=\frac{1}{2}+\frac{1}{2\mu}\)_._ 3. _If_ \(t\in A_{\infty,Q}\)_, then_ \(\alpha_{x_{0}}(t)=1/2\)_._ Proof.: \((i)\) First, \(B_{\mu,Q}\subset B_{\mu}\) because \(A_{\mu,Q}\subset A_{\mu}\). The second inclusion is a consequence of the definition of the irrationality exponent in (11). Indeed, \(t\in B_{\mu}\subset A_{\mu}\) directly implies that \(\mu(t)\geq\mu\). On the other hand, for all \(\epsilon>0\), \(t\in B_{\mu}\) implies \(t\notin A_{\mu+\epsilon}\), so \(t\) can be approximated with the exponent \(\mu+\epsilon\) only with finitely many fractions, and thus \(\mu(t)\leq\mu+\epsilon\). Consequently, \(\mu(t)\leq\mu\). \((ii)\) By \((i)\), \(t\in B_{\mu,Q}\) implies \(\mu(t)=\mu\), so by Proposition 3.6 we get \(\alpha_{x_{0}}(t)\geq\frac{1}{2}+\frac{1}{2\mu}\). At the same time, \(t\in B_{\mu,Q}\subset A_{\mu,Q}\), so Proposition 4.2 implies \(\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\). \((iii)\) It follows directly from Propositions 3.1 and 4.2. **Corollary 4.4**.: _Let \(2<\mu<\infty\). Then, for all \(\epsilon>0\),_ \[B_{\mu,Q}\subset\bigg{\{}\,t\in(0,1)\,:\,\alpha_{x_{0}}(t)=\frac{1}{2}+\frac{ 1}{2\mu}\,\bigg{\}}\subset A_{\mu-\epsilon}.\] _For \(\mu=2\) we have the slightly more precise_ \[B_{2,Q}\subset\{\,t\in(0,1)\,:\,\alpha_{x_{0}}(t)=3/4\,\}\subset A_{2}.\] _For \(\mu=\infty\),_ \[A_{\infty,Q}\subset\{\,t\in(0,1)\,:\,\alpha_{x_{0}}(t)=1/2\,\}\subset A_{ \infty}\cup\mathbb{Q}.\] Proof.: Left inclusions follow from Proposition 4.3 for all \(\mu\geq 2\), so we only need to prove the right inclusions. When \(\mu=2\), it follows from the Dirichlet approximation theorem, which states that \(\mathbb{R}\setminus\mathbb{Q}\subset A_{2}\), and Proposition 3.5, in which we proved that if \(t\) is rational, then either \(\alpha_{x_{0}}(t)=1/2\) or \(\alpha_{x_{0}}(t)\geq 3/2\). Thus, \(\{\,t\in(0,1)\,:\,\alpha_{x_{0}}(t)=3/4\,\}\subset(0,1)\setminus\mathbb{Q} \subset A_{2}\). Suppose now that \(2<\mu<\infty\) and that \(\alpha_{x_{0}}(t)=\frac{1}{2}+\frac{1}{2\mu}\). By Proposition 3.6, \(\alpha_{x_{0}}(t)\geq\frac{1}{2}+\frac{1}{2\mu(t)}\), so we get \(\mu\leq\mu(t)\). In particular, given any \(\epsilon>0\), we have \(\mu-\epsilon<\mu(t)\), so \(\left|t-\frac{p}{q}\right|\leq 1/q^{\mu-\epsilon}\) for infinitely many coprime pairs \((p,q)\in\mathbb{N}\times\mathbb{N}\), which means that \(t\in A_{\mu-\epsilon}\). Finally, for \(\mu=\infty\), if \(t\not\in\mathbb{Q}\) is such that \(\alpha_{x_{0}}(t)=1/2\), then by Proposition 3.6 we get \(\mu(t)=\infty\), which implies that \(t\in A_{\mu}\) for all \(\mu\geq 2\), hence \(t\in A_{\infty}\). To compute the spectrum of singularities \(d_{x_{0}}(\alpha)=\dim_{\mathcal{H}}\{\,t\,:\,\alpha_{x_{0}}(t)=\alpha\,\}\), in view of Corollary 4.4 it suffices to compute \(\dim_{\mathcal{H}}A_{\mu}\) and \(\dim_{\mathcal{H}}B_{\mu,Q}\). **Theorem 4.5**.: _For \(2\leq\mu<\infty\), \(\dim_{\mathcal{H}}A_{\mu}=\dim_{\mathcal{H}}B_{\mu,Q}=2/\mu\). Also, \(\dim_{\mathcal{H}}A_{\infty}=0\)._ Before proving Theorem 4.5 we state as a corollary the first part of Theorem 1.1. **Corollary 4.6**.: _Let \(x_{0}\in\mathbb{Q}\), and let \(d_{x_{0}}\) be the spectrum of singularities of \(R_{x_{0}}\). 
Then_ \[d_{x_{0}}(\alpha)=4\alpha-2,\qquad\frac{1}{2}\leq\alpha\leq\frac{3}{4}.\] _In particular, \(R_{x_{0}}\) is multifractal._ Proof.: It follows from Corollary 4.4, Theorem 4.5 and the 1-periodicity of \(R_{x_{0}}\). When \(2\leq\mu<\infty\), \[\frac{2}{\mu}\leq d_{x_{0}}\left(\frac{1}{2}+\frac{1}{2\mu}\right)\leq\frac{2 }{\mu-\epsilon},\qquad\forall\epsilon>0\qquad\Longrightarrow\qquad d_{x_{0}} \left(\frac{1}{2}+\frac{1}{2\mu}\right)=\frac{2}{\mu}.\] On the other hand, \(d_{x_{0}}(1/2)\leq\dim_{\mathcal{H}}(A_{\infty}\cup\mathbb{Q})=0\) because \(\dim_{\mathcal{H}}\mathbb{Q}=\dim_{\mathcal{H}}A_{\infty}=0\). We conclude renaming \(\alpha=\frac{1}{2}+\frac{1}{2\mu}\). Let us now prove Theorem 4.5. Proof of Theorem 4.5.: We have \(A_{2}=(0,1)\setminus\mathbb{Q}\) by Dirichlet approximation, so \(\dim_{\mathcal{H}}A_{2}=1\). For \(\mu>2\) we have \(\dim_{\mathcal{H}}A_{\mu}=2/\mu\) by the Jarnik-Besicovitch Theorem 2.2. Also, \(A_{\infty}\subset A_{\mu}\) for all \(\mu\geq 2\), so \(\dim_{\mathcal{H}}A_{\infty}\leq 2/\mu\) for all \(\mu\geq 2\), hence \(\dim_{\mathcal{H}}A_{\infty}=0\). So we only need to prove that \(\dim_{\mathcal{H}}B_{\mu,Q}=2/\mu\) for \(2\leq\mu<\infty\). Moreover, \[B_{\mu,Q}=A_{\mu,Q}\setminus\bigcup_{\epsilon>0}A_{\mu+\epsilon}\subset A_{ \mu,Q}\subset A_{\mu},\] which implies \(\dim_{\mathcal{H}}B_{\mu,Q}\leq\dim_{\mathcal{H}}A_{\mu}=2/\mu\). Hence it suffices to prove that \(\dim_{\mathcal{H}}B_{\mu,Q}\geq 2/\mu\). This claim follows from \(\mathcal{H}^{2/\mu}(A_{\mu,Q})>0\). Indeed, we first remark that the sets \(A_{\mu}\) are nested, in the sense that \(A_{\sigma}\subset A_{\mu}\) when \(\sigma>\mu\). We can therefore write \[\bigcup_{\epsilon>0}A_{\mu+\epsilon}=\bigcup_{n\in\mathbb{N}}A_{\mu+\frac{1}{ n}}.\] By the Jarnik-Besicovitch Theorem 2.2, \(\dim_{\mathcal{H}}A_{\mu+1/n}=2/(\mu+1/n)<2/\mu\), so \(\mathcal{H}^{2/\mu}(A_{\mu+1/n})=0\) for all \(n\in\mathbb{N}\), hence \[\mathcal{H}^{2/\mu}\Big{(}\bigcup_{\epsilon>0}A_{\mu+\epsilon}\Big{)}= \mathcal{H}^{2/\mu}\Big{(}\bigcup_{n\in\mathbb{N}}A_{\mu+\frac{1}{n}}\Big{)} =\lim_{n\to\infty}\mathcal{H}^{2/\mu}\big{(}A_{\mu+\frac{1}{n}}\big{)}=0.\] Therefore, \[\mathcal{H}^{2/\mu}\big{(}B_{\mu,Q}\big{)}=\mathcal{H}^{2/\mu}\Big{(}A_{\mu,Q }\setminus\bigcup_{\epsilon>0}A_{\mu+\epsilon}\Big{)}=\mathcal{H}^{2/\mu}(A_{ \mu,Q})-\mathcal{H}^{2/\mu}\Big{(}\bigcup_{\epsilon>0}A_{\mu+\epsilon}\Big{)} =\mathcal{H}^{2/\mu}\left(A_{\mu,Q}\right),\] so \(\mathcal{H}^{2/\mu}(A_{\mu,Q})>0\) implies \(\mathcal{H}^{2/\mu}(B_{\mu,Q})>0\), hence \(\dim_{\mathcal{H}}B_{\mu,Q}\geq 2/\mu\). Thus, it suffices to prove \(\mathcal{H}^{2/\mu}(A_{\mu,Q})>0\), for which we follow the procedure outlined in Section 2 with the set of denominators \(\mathcal{Q}=4Q\mathbb{N}\). The first step is to detect the largest \(\mu\) such that \(A_{\mu,Q}\) has full Lebesgue measure. We do this using the Duffin-Schaeffer Theorem 2.1. Define \[\psi_{\mu,Q}(q)=a\,\frac{\mathbbm{1}_{4Q\mathbb{N}}(q)}{q^{\mu}},\] where \(a>0\) comes from the definition of \(A_{\mu,Q}\) and \(\mathbb{1}_{4Q\mathbb{N}}(q)\) is the indicator function of \(4Q\mathbb{N}\), \[\mathbb{1}_{4Q\mathbb{N}}(q)=\left\{\begin{array}{ll}1,&\mbox{ if }4Q\,\mid\,q,\\ 0,&\mbox{ otherwise.}\end{array}\right.\] Then, we have \(A_{\mu,Q}=A_{\psi_{\mu,Q}}\), where \[A_{\psi_{\mu,Q}}=\Big{\{}\,t\in[0,1]\,:\,\Big{|}t-\frac{p}{q}\Big{|}\leq\psi_{ \mu,Q}(q)\ \mbox{ for i. m. coprime pairs }(p,q)\in\mathbb{N}\times\mathbb{N}\,\Big{\}}\] has the form needed for the Duffin-Schaeffer Theorem 2.1. 
The inclusion \(\subset\) follows directly from the definition of \(\psi_{\mu,Q}\). For the inclusion \(\supset\), observe first that if \(t\in A_{\psi_{\mu,Q}}\) with \(\mu>1\), then \(t\not\in\mathbb{Q}\). Now, if a coprime pair \((p,q)\in\mathbb{N}^{2}\) satisfies \(|t-p/q|\leq\psi_{\mu,Q}(q)\), then \(q\in 4Q\mathbb{N}\) because otherwise we get the contradiction \[0<\Big{|}t-\frac{p}{q}\Big{|}\leq\psi_{\mu,Q}(q)=a\;\frac{\mathbb{1}_{4Q \mathbb{N}}(q)}{q^{\mu}}=0.\] In this setting, the Duffin-Schaeffer theorem says that \(A_{\mu,Q}\) has Lebesgue measure \(1\) if and only if \[\sum_{q=1}^{\infty}\varphi(q)\,\psi_{\mu,Q}(q)=\frac{a}{(4Q)^{\mu}}\,\sum_{n= 1}^{\infty}\frac{\varphi(4Qn)}{n^{\mu}}=\infty,\] and has zero measure otherwise. Using this characterization, we prove now \[|A_{\mu,Q}|=\left\{\begin{array}{ll}1,&\mu\leq 2,\\ 0,&\mu>2,\end{array}\right. \tag{29}\] independently of \(a\). To detect the critical \(\mu=2\) is easy: first, trivially bound \(\varphi(n)<n\) so that \[\sum_{n=1}^{\infty}\frac{\varphi(4Qn)}{n^{\mu}}<\sum_{n=1}^{\infty}\frac{4Qn} {n^{\mu}}=4Q\,\sum_{n=1}^{\infty}\frac{1}{n^{\mu-1}}<\infty,\qquad\mbox{ if }\ \mu>2;\] and this argument fails when \(\mu=2\). What is more, denote by \(\mathbb{P}\) the set of primes so that \[\sum_{n=1}^{\infty}\,\frac{\varphi(4Qn)}{n^{2}}>\sum_{p\in\mathbb{P},\,p>4Q} \,\frac{\varphi(4Qp)}{p^{2}}\] If \(p\in\mathbb{P}\) and \(p>4Q\), then \(\gcd(p,4Q)=1\) because \(p\nmid 4Q\) (for if \(p\mid 4Q\) then \(p\leq 4Q\)). Therefore, \(\varphi(4Qp)=\varphi(4Q)\,\varphi(p)=\varphi(4Q)\,(p-1)>\varphi(4Q)\,p/2\), so \[\sum_{n=1}^{\infty}\,\frac{\varphi(4Qn)}{n^{2}}>\frac{\varphi(4Q)}{2}\,\sum_{ p\in\mathbb{P},\,p>4Q}\,\frac{1}{p}=\infty,\] because the sum of the reciprocals of the prime numbers diverges18. The Duffin-Schaeffer Theorem 2.1 thus implies that \(|A_{2,Q}|=1\) and, in particular, \(\dim_{\mathcal{H}}A_{2,Q}=1\). From this we immediately get \(|A_{\mu,Q}|=1\) when \(\mu<2\) because \(A_{2,Q}\subset A_{\mu,Q}\). Footnote 18: This argument shows that the strategy used here to compute the dimension of \(A_{\mu,\mathcal{Q}}\) also works if we restrict the denominators to the primes \(\mathcal{Q}=\mathbb{P}\) in the first place. This situation arises when computing the spectrum of singularities of trajectories of polygonal lines with non-zero rational torsion, studied in [2]. Once we know (29), we can use the Mass Transference Principle Theorem 2.3 to compute the dimension of \(A_{\mu,Q}\) for \(\mu>2\). Write first \[A_{\mu,Q}=\limsup_{q\to\infty}\,\bigcup_{p\leq q,\,(p,q)=1}B\Big{(}\,\frac{p} {q},\psi_{\mu,Q}(q)\Big{)}.\] Let \(\beta=2/\mu\) so that \[\psi_{\mu,Q}(q)^{\beta}=\Big{(}a\,\frac{\mathds{1}_{4Q\mathbb{N}}(q)}{q^{\mu}} \Big{)}^{\beta}=a^{\beta}\,\frac{\mathds{1}_{4Q\mathbb{N}}(q)}{q^{\mu\beta}}=a^{ 2/\mu}\,\frac{\mathds{1}_{4Q\mathbb{N}}(q)}{q^{2}}=\psi_{2,Q}(q),\] with a new underlying constant \(a^{2/\mu}\), and therefore, \[(A_{\mu,Q})^{\beta}:=\limsup_{q\to\infty}\bigcup_{p\leq q,\,(p,q)=1}B\Big{(} \frac{p}{q},\psi_{\mu,Q}(q)^{\beta}\Big{)}=\limsup_{q\to\infty}\bigcup_{p\leq q,\,(p,q)=1}B\Big{(}\frac{p}{q},\psi_{2,Q}(q)\Big{)}=A_{2,Q}.\] Observe that \(\beta\) is chosen to be the largest possible exponent that gives \(|(A_{\mu,Q})^{\beta}|=|(A_{\mu\beta,Q})|=1\). Since (29) is independent of \(a\), we get \(|(A_{\mu,Q})^{2/\mu}|=|A_{2,Q}|=1\), and the Mass Transference Principle Theorem 2.3 implies that \(\mathcal{H}^{2/\mu}\big{(}A_{\mu,Q}\big{)}=\infty\). The proof is complete. ## 5. 
The high-pass filters and the multifractal formalism when \(x_{0}\in\mathbb{Q}\) In this section we compute the \(L^{p}\) norms of the high-pass filters of \(R_{x_{0}}\) when \(x_{0}\in\mathbb{Q}\). As a consequence, we compute the exponent \(\eta(p)\) defined in (8) and we prove that \(R_{x_{0}}\) satisfies the Frisch-Parisi multifractal formalism, thus completing the proof of Theorem 1.1. In Section 5.1 we define Fourier high-pass filters using smooth cutoffs, reduce the computation of their \(L^{p}\) norms to the study of Fourier localized \(L^{p}\) estimates, state such localized estimates and prove the second part of Theorem 1.1. After that, in Section 5.2 we prove the localized estimates. ### High-pass filters and proof of the second part of Theorem 1.1 We begin with the definition of high-pass filters we use in the proofs. Let \(\phi\in C^{\infty}\) a positive and even cutoff with support on \([-1,1]\) and such that \(\phi(x)=1\) on \(x\in[-1/2,1/2]\). Let \(\psi(x)=\phi(2x)-\phi(x)\), and \[\psi_{-1}(x)=\frac{\phi(x)}{\phi(x)+\sum_{i\in\mathbb{N}}\psi(x/2^{i})}, \qquad\psi_{k}(x)=\frac{\psi(x/2^{k})}{\phi(x)+\sum_{i\in\mathbb{N}}\psi(x/2^ {i})},\qquad\text{ for }k\geq 0,\] so that we have the partition of unity \(\sum_{k=-1}^{\infty}\psi_{k}(x)=1\). For \(k\geq 0\), \(\psi_{k}\) is supported on \([-2^{k+1},-2^{k-1}]\cup[2^{k-1},2^{k+1}]\). Let \(f\) be a periodic function with Fourier series \(f(t)=\sum_{n\in\mathbb{Z}}a_{n}e^{2\pi int}\). With the partition of unity above, we perform a Littlewood-Paley decomposition \[f(t)=\sum_{k=-1}^{\infty}P_{k}f(t),\qquad\text{ where }\qquad P_{k}f(t)=\sum_{n \in\mathbb{Z}}\psi_{k}(n)a_{n}e^{2\pi int}.\] Roughly speaking, the Fourier high-pass filter at frequency \(N\in\mathbb{N}\) is \(P_{\geq N}f(t)=\sum_{k\geq\log N}P_{k}f(t)\). Let us be more precise working directly with \(R_{x_{0}}\), whose frequencies in \(t\) are squared. Let \(N\in\mathbb{N}\) be large, and define \(k_{N}\) to be the unique \(k_{N}\in\mathbb{N}\) such that \(2^{k_{N}}\leq\sqrt{N}<2^{k_{N}+1}\). We define the high-pass filter of \(R_{x_{0}}\) at frequency \(N\) as \[P_{\geq N}R_{x_{0}}(t)=\sum_{k\geq k_{N}}P_{k}R_{x_{0}}(t),\qquad\text{ where }\qquad P_{k}R_{x_{0}}(t)=\sum_{n\in\mathbb{N}}\psi_{k}(n)\frac{e^{2\pi i(n^{2}t+ nx_{0})}}{n^{2}}. \tag{30}\] We first estimate \(\|P_{k}R_{x_{0}}\|_{p}\) and then extend the result to estimate \(\|P_{\geq N}R_{x_{0}}\|_{p}\). **Remark 5.1**.: At a first glance, using pure Littlewood-Paley blocks in the definition for high-pass filters in (30) may seem restrictive, since it is analogue to estimating high-frequency cutoffs only for a sequence \(N_{k}\simeq 2^{k}\to\infty\). However, the estimates we give depend only on the \(L^{1}\) norm of the cutoff \(\psi\), so slightly varying the definition and support of \(\psi\) does not affect the estimates. This would be analogue to having a continuum of frequencies \(N\to\infty\) available for cutoffs. We now state the estimates for the frequency localized \(L^{p}\) estimates. For the sake of generality, let \(\Psi\in C^{\infty}\) be compactly supported outside the origin and bounded below in an interval of its support (for instance, \(\psi\) defined above). **Theorem 5.2**.: _Let \(x_{0}\in\mathbb{R}\). Then, for \(N\gg 1\),_ \[\Big{\|}\sum_{n\in\mathbb{Z}}\Psi\big{(}\frac{n}{N}\big{)}\,e^{2\pi i(n^{2}\,t +n\,x_{0})}\Big{\|}_{L^{p}(0,1)}^{p}\lesssim\left\{\begin{array}{ll}N^{p-2},&\mbox{ when }p>4,\\ N^{2}\log N,&\mbox{ when }p=4,\\ N^{p/2},&\mbox{ when }p<4.\end{array}\right. 
\tag{31}\] _When \(p=2\), the upper bound is sharp, that is, \(\big{\|}\sum_{n\in\mathbb{Z}}\Psi(n/N)\,e^{2\pi i(n^{2}\,t+n\,x_{0})}\big{\|}_ {L^{2}(0,1)}^{2}\simeq N\)._ _If \(x_{0}\in\mathbb{Q}\), then the upper bound is sharp. That is, if \(x_{0}=P/Q\) with \((P,Q)=1\), then_ \[\Big{\|}\sum_{n\in\mathbb{Z}}\Psi\big{(}\frac{n}{N}\big{)}\,e^{2\pi i(n^{2}\, t+n\,x_{0})}\Big{\|}_{L^{p}(0,1)}^{p}\simeq_{Q}\left\{\begin{array}{ll}N^{p-2},&\mbox{ when }p>4,\\ N^{2}\log N,&\mbox{ when }p=4,\\ N^{p/2},&\mbox{ when }p<4.\end{array}\right. \tag{32}\] **Remark 5.3**.: All estimates in Theorem 5.2 depend on \(\|\Psi\|_{1}\) due to Lemma 5.4. We postpone the proof of Theorem 5.2 to Section 5.2. Let us see how to use it to compute the \(L^{p}\) norms of the high-pass filters \(\|P_{\geq N}R_{x_{0}}\|_{p}\) and therefore prove the second part of Theorem 1.1. Proof of second part of Theorem 1.1.: Denote the estimate for \(x_{0}\in\mathbb{Q}\) on (32) in Theorem 5.2 by \[\big{\|}\sum_{n\in\mathbb{Z}}\Psi(n/N)\,e^{2\pi i(n^{2}\,t+n\,x_{0})}\big{\|}_ {L^{p}(0,1)}^{p}\simeq G_{p}(N). \tag{33}\] First, use the triangle inequality in (30) to bound \[\|P_{\geq N}R_{x_{0}}\|_{p}\leq\sum_{k\geq k_{N}}\|P_{k}R_{x_{0}}\|_{p}=\sum_{ k\geq k_{N}}\Big{\|}\sum_{n\in\mathbb{Z}}\psi_{k}(n)\,\frac{e^{2\pi i(n^{2}t+nx_{0})} }{n^{2}}\Big{\|}_{p}.\] Since \(\psi_{k}\) is supported on \([2^{k-1},2^{k+1}]\), we can take the denominator \(n^{2}\) out of the \(L^{p}\) norm to get \[\|P_{\geq N}R_{x_{0}}\|_{p}\lesssim\sum_{k\geq k_{N}}\frac{1}{2^{2k}}\,\Big{\|} \sum_{n\in\mathbb{Z}}\psi_{k}(n)\,e^{2\pi i(n^{2}t+nx_{0})}\Big{\|}_{p},\] for example using [21, Lemma 3.1, Corollary 3.2]. We can now use (33) to get19 Footnote 19: The estimates in Theorem 5.2 depend on \(\|\Psi\|_{1}\), so strictly speaking we need to check that for large enough \(k\gg 1\), the norm \(\|\psi_{k}(2^{k})\|_{1}\) does not depend on \(k\). This is the case, since \[\int\psi_{k}(2^{k}x)\,dx=\int_{1/2}^{2}\frac{\psi(x)}{\phi(2^{k}x)+\sum_{i=0}^ {\infty}\psi(2^{k}x/2^{i})}\,dx=\int_{1/2}^{2}\frac{\psi(x)}{\psi(x/2)+\psi(x) +\psi(2x)}\,dx=C_{\psi}.\] \[\|P_{\geq N}R_{x_{0}}\|_{p}\lesssim\sum_{k\geq k_{N}}\frac{G_{p}(2^{k})^{1/p}}{2 ^{2k}}\simeq\frac{G_{p}(2^{k_{N}})^{1/p}}{2^{2k_{N}}}, \tag{34}\] where the last equality follows by direct calculation because the defintion of \(G_{p}\) makes the series be geometric. For the lower bound, as long as \(1<p<\infty\), the Mihkhin multiplier theorem20 Footnote 20: Apply Mihkhin’s theorem in \(\mathbb{R}\) to the operator \(P_{k_{N}}\) in (30) to get \(\|P_{k_{N}}f\|_{p}\simeq\|P_{k_{N}}P_{\geq N}f\|_{p}\lesssim\|P_{\geq N}f\|_{p}\), and then periodize the result using a theorem by Stein and Weiss [40, Chapter 7, Theorem 3.8]. combined again with [21, Lemma 3.1, Corollary 3.2] and (33) gives \[\|P_{\geq N}R_{x_{0}}\|_{p}\gtrsim\|P_{k_{N}}R_{x_{0}}\|_{p}\simeq\frac{1}{2^{2 k_{N}}}\,\Big{\|}\sum_{n}\psi_{k_{N}}(n)\,e^{2\pi i(n^{2}t+nx_{0})}\Big{\|}_{p} \simeq\frac{G_{p}(2^{k_{N}})^{1/p}}{2^{2k_{N}}}. \tag{35}\] Joining (34) and (35) and recalling that \(2^{k_{N}}\simeq\sqrt{N}\), we conclude that \[\|P_{\geq N}R_{x_{0}}\|_{p}\simeq\frac{G_{p}(2^{k_{N}})^{1/p}}{2^{2k_{N}}}\simeq \left\{\begin{array}{ll}N^{-1/2-1/p},&p>4,\\ N^{-3/4}\,(\log N)^{1/4},&p=4,\\ N^{-3/4},&p<4,\end{array}\right.\] which proves the first claim in (9) in Theorem 1.1. 
It immediately follows that \[\eta(p)=\lim_{N\to\infty}\frac{\log(\|P_{\geq N}R_{x_{0}}\|_{p}^{p})}{\log(1/N )}=\left\{\begin{array}{ll}p/2+1,&p>4,\\ 3p/4,&p\leq 4,\end{array}\right.\] and having computed \(d_{x_{0}}(\alpha)=4\alpha-2\) for \(1/2\leq\alpha\leq 3/4\) in Corollary 4.6, direct computation shows the validity of the multifractal formalism \[d_{x_{0}}(\alpha)=\inf_{p>0}\{\,\alpha p-\eta(p)+1\},\qquad\text{ for }\quad \frac{1}{2}\leq\alpha\leq\frac{3}{4}.\qed\] ### Frequency localized \(L^{p}\) norms In this section we prove Theorem 5.2. The \(L^{2}\) estimate, which holds for all \(x_{0}\), follows from Plancherel's theorem. For \(p\neq 2\), we use the following well-known lemma, whose proof can be found in [9, Lemma 3.18] (see also [2, Lemma 4.4]). **Lemma 5.4**.: _Let \(\Psi\in C_{0}^{\infty}(\mathbb{R})\). Let \(N\in\mathbb{N}\) and \(q\in\mathbb{N}\) such that \(q\leq N\). Let also \(a\in\mathbb{Z}\) such that \((a,q)=1\). Then,_ \[\Big{|}t-\frac{a}{q}\Big{|}\leq\frac{1}{qN}\quad\Longrightarrow\quad\Big{|} \sum_{n\in\mathbb{Z}}\Psi\left(\frac{n}{N}\right)\,e^{2\pi i(n^{2}t+nx)}\, \Big{|}\lesssim_{\|\Psi\|_{1}}\frac{N}{\sqrt{q}\,\left(1+N\,\sqrt{|t-a/q|} \right)}. \tag{36}\] _Moreover, there exist \(\delta,\epsilon\leq 1\) only depending on \(\Psi\) such that if_ \[q\leq\epsilon N,\qquad\Big{|}t-\frac{a}{q}\Big{|}\leq\frac{\delta}{N^{2}}, \qquad\Big{|}x-\frac{b}{q}\Big{|}\leq\frac{\delta}{N}\] _for some \(b\in\mathbb{Z}\), then_ \[\Big{|}\sum_{n\in\mathbb{Z}}\Psi\left(\frac{n}{N}\right)\,e^{2\pi i(n^{2}t+nx) }\,\Big{|}\simeq_{\|\Psi\|_{1}}\frac{N}{\sqrt{q}}.\] We are now ready to prove Theorem 5.2. Proof of Theorem 5.2.: Let \(x_{0}\in\mathbb{R}\). For simplicity, we prove the \(L^{2}\) estimate for \(\Psi\) symmetric. Considering \(f\) as a Fourier series in \(t\), by Plancherel's theorem we write \[\Big{\|}\sum_{n\in\mathbb{Z}}\Psi\big{(}\frac{n}{N}\big{)}\,e^{2 \pi i(n^{2}\,t+n\,x_{0})}\Big{\|}_{L^{2}(0,1)}^{2} =\sum_{n=1}^{\infty}\Big{|}\Psi\big{(}\frac{n}{N}\big{)}\,e^{2\pi in \,x_{0}}+\Psi\big{(}-\frac{n}{N}\big{)}\,e^{-2\pi in\,x_{0}}\Big{|}^{2}\] \[=\sum_{n=1}^{\infty}\Psi\big{(}\frac{n}{N}\big{)}^{2}\,\big{|}e^{2 \pi inx_{0}}+e^{-2\pi inx_{0}}\big{|}^{2}\simeq\sum_{n=1}^{\infty}\Psi\big{(} \frac{n}{N}\big{)}^{2}\cos^{2}(2\pi nx_{0})\] This sum is upper bounded by \(N\) by the triangle inequality. If \(x_{0}\) is rational, say \(x_{0}=P/Q\), the bound from below follows21 by summing only over multiples of \(Q\) in \([N,2N]\), so that Footnote 21: Without loss of generality assume that \(\Psi(x)\simeq 1\) for \(x\in(1,2)\). \[\Big{\|}\sum_{n\in\mathbb{Z}}\Psi(\frac{n}{N})\,e^{2\pi i(n^{2}\,t+n\,x_{0})} \Big{\|}_{L^{2}(0,1)}^{2}\gtrsim\sum_{k=N/Q}^{2N/Q}\cos^{2}(2\pi kQx_{0})=\frac {N}{Q}\simeq_{Q}N.\] If \(x_{0}\) is irrational, it is known that the sequence \((nx_{0})_{n}\) is equidistributed in the torus, which means that for any continuous \(p\)-periodic function \[\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}f(nx_{0})=\int_{0}^{p}f.\] In particular, since for \(f(y)=\cos(4\pi y)\) we have \(\int_{0}^{1/2}f(y)\,dy=0\), we get22 for large \(N\) that Footnote 22: Using the trigonometric identity \(\cos^{2}(x)=(1+\cos(2x))/2\). \[\Big{\|}\sum_{n\in\mathbb{Z}}\Psi\big{(}\frac{n}{N}\big{)}\,e^{2\pi i(n^{2}\,t +n\,x_{0})}\Big{\|}_{L^{2}(0,1)}^{2}\gtrsim\sum_{n=N}^{2N}\cos^{2}(2\pi nx_{0} )\simeq N+\sum_{n=N}^{2N}\cos(4\pi nx_{0})\simeq N.\] We now prove the upper bound (31) for any \(x_{0}\in\mathbb{R}\). 
The Dirichlet approximation theorem implies that any \(t\in\mathbb{R}\setminus\mathbb{Q}\) can be approximated as follows: \[\forall N\in\mathbb{N},\quad\exists q\leq N,\quad 1\leq a\leq q\quad\text{ such that }\quad\Big{|}t-\frac{a}{q}\Big{|}\leq\frac{1}{qN},\] which can be rewritten as \(\mathbb{R}\setminus\mathbb{Q}\subset\bigcup_{q=1}^{N}\bigcup_{a=1}^{q}B\big{(} \frac{a}{q},\frac{1}{qN}\big{)}\) for all \(N\in\mathbb{N}\). Therefore, for any \(N\in\mathbb{N}\), \[\Big{\|}\sum_{n\in\mathbb{Z}}\Psi(n/N)\,e^{2\pi i(n^{2}\,t+n\,x_{0})}\Big{\|}_ {L^{p}(0,1)}^{p}\leq\sum_{q=1}^{N}\sum_{a=1}^{q}\int_{B(\frac{a}{q},\frac{1}{qN })}\Big{|}\sum_{n\in\mathbb{Z}}\Psi(n/N)\,e^{2\pi i(n^{2}\,t+n\,x_{0})}\Big{|} ^{p}\,dt. \tag{37}\] We split each integral according to the two situations in (36) in Lemma 5.4: \[\begin{split}\int_{|t-\frac{a}{q}|<\frac{1}{N^{2}}}& \Big{|}\sum_{n\in\mathbb{Z}}\Psi(n/N)\,e^{2\pi i(n^{2}\,t+n\,x_{0})}\Big{|}^{p }\,dt+\int_{\frac{1}{N^{2}}<|t-\frac{a}{q}|<\frac{1}{qN}}\Big{|}\sum_{n\in \mathbb{Z}}\Psi(n/N)\,e^{2\pi i(n^{2}\,t+n\,x_{0})}\Big{|}^{p}\,dt\\ &\leq\int_{|t-\frac{a}{q}|<\frac{1}{N^{2}}}\Big{(}\frac{N}{\sqrt{q }}\Big{)}^{p}\,dt+\int_{\frac{1}{N^{2}}<|t-\frac{a}{q}|<\frac{1}{qN}}\Big{(} \frac{1}{\sqrt{q}\,|t-\frac{a}{q}|^{1/2}}\Big{)}^{p}\,dt\\ &\simeq\frac{N^{p-2}}{q^{p/2}}+\frac{1}{q^{p/2}}\,\int_{\frac{1}{ N^{2}}}^{\frac{1}{qN}}\frac{1}{h^{p/2}}\,dh.\end{split} \tag{38}\] The behavior of that last integral changes depending on whether \(p\) is greater or smaller than \(2\). * If \(p<2\), \[(38)\simeq\frac{N^{p-2}}{q^{p/2}}+\frac{1}{q^{p/2}}\left(\left( \frac{1}{qN}\right)^{1-p/2}-\left(\frac{1}{N^{2}}\right)^{1-p/2}\right)\leq \frac{N^{p-2}}{q^{p/2}}+\frac{1}{q\,N^{1-p/2}},\] so \[(37)\leq N^{p-2}\,\sum_{q=1}^{N}\sum_{a=1}^{q}\frac{1}{q^{p/2}}+ \frac{1}{N^{1-p/2}}\,\sum_{q=1}^{N}\sum_{a=1}^{q}\frac{1}{q}\lesssim N^{p/2}.\] * If \(p=2\), \[(38)\simeq\frac{1}{q}\Big{(}1+\int_{\frac{1}{N^{2}}}^{\frac{1}{qN}} \frac{dh}{h}\Big{)}\lesssim\frac{1}{q}\left(1+\log(N^{2})-\log(qN)\right)=\frac {1+\log(N/q)}{q},\] hence \[(37)\lesssim\sum_{q=1}^{N}\Big{(}1-\log(q/N)\Big{)}\simeq N-\int_{1} ^{N}\log(x/N)\,dx\simeq N\Big{(}1-\int_{\frac{1}{N}}^{1}\log(y)\,dy\Big{)} \simeq N.\] * If \(p>2\), \[(38)\simeq\frac{N^{p-2}}{q^{p/2}}+\frac{\big{(}N^{2}\big{)}^{p/2-1}-(qN)^{p/2-1}}{q^{p/2}}\lesssim\frac{N^{p-2}}{q^{p/2}},\qquad\text{ hence }\qquad(37)\lesssim N^{p-2}\,\sum_{q=1}^{N}\sum_{a=1}^{q}\frac{1}{q^{p/2}}=N^{p-2}\,\sum_{q=1}^{N}\frac{1}{q^{p/2-1}}.\] The last sum is bounded when \(p>4\), of order \(\log N\) when \(p=4\), and of order \(N^{2-p/2}\) when \(2<p<4\), which gives the upper bound (31).
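Before turning to the lower bound, here is a rough numerical illustration of the scaling in (31) and (32); it is not part of the proof. The cutoff `bump`, the grid resolution, and the sample values of \(N\), \(p\) and \(x_{0}\) below are our choices, so the output only indicates the expected power laws.

```python
import numpy as np

def bump(x):
    """Smooth cutoff supported on [1/2, 2], standing in for the cutoff Psi of Theorem 5.2."""
    out = np.zeros_like(x, dtype=float)
    inside = (x > 0.5) & (x < 2.0)
    xi = x[inside]
    out[inside] = np.exp(-1.0 / ((xi - 0.5) * (2.0 - xi)))
    return out

def lp_power(N, x0, p, pts_per_freq=8):
    """Riemann-sum approximation of int_0^1 |sum_n Psi(n/N) exp(2*pi*i*(n^2 t + n x0))|^p dt."""
    ns = np.arange(1, 2 * N + 1)
    weights = bump(ns / N)
    M = pts_per_freq * (2 * N) ** 2          # fine enough to resolve frequencies up to (2N)^2
    total = 0.0
    for chunk in np.array_split(np.arange(M) / M, max(1, M // 4096)):
        phases = np.exp(2j * np.pi * (np.outer(chunk, ns ** 2) + ns * x0))
        total += np.sum(np.abs(phases @ weights) ** p)
    return total / M

# For rational x0 = 1/3, (32) predicts a p-th power of order N^{p/2} for p < 4 and N^{p-2} for p > 4,
# so doubling N should multiply the value by roughly 2^{p/2} and 2^{p-2}, respectively.
for p in (3.0, 6.0):
    vals = [lp_power(N, 1.0 / 3.0, p) for N in (16, 32, 64)]
    print(p, [round(float(np.log2(vals[i + 1] / vals[i])), 2) for i in range(2)])
```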
Using this lemma in (41), when \(p<4\) we get \[\Big{\|}\sum_{n\in\mathbb{Z}}\Psi(n/N)\,e^{2\pi i(n^{2}\,t+n\,x_{0})}\Big{\|}_{L^ {p}(0,1)}^{p}\simeq_{p,Q}\frac{N^{p-2}}{Q^{p/2}}\,\Big{(}\frac{\epsilon N}{Q} \Big{)}^{2-\frac{p}{2}}\simeq_{p,Q}N^{p/2}.\] Similarly, when \(p=4\) we get \[\Big{\|}\sum_{n\in\mathbb{Z}}\Psi(n/N)\,e^{2\pi i(n^{2}\,t+n\,x_{0})}\Big{\|}_{ L^{4}(0,1)}^{4}\simeq_{Q}\frac{N^{2}}{Q^{2}}\,\log\Big{(}\frac{\epsilon N}{Q} \Big{)}\simeq_{Q}N^{2}\,\log N.\] Together with the upper bounds in (31), this completes the proof. ## 6. Result for \(R_{x_{0}}\) when \(x_{0}\not\in\mathbb{Q}\) - Proof of Theorem 1.3 In this section we work with \(x_{0}\not\in\mathbb{Q}\) and prove Theorem 1.3. Following the strategy for \(x_{0}\in\mathbb{Q}\), we first study the Holder regularity at rational \(t\) in Section 6.1, and at irrational \(t\) in Section 6.2. ### Regularity at rational \(t\) Let \(t=p/q\) be an irreducible fraction. With Corollary 3.3 in mind, we now have \(x_{q}=\operatorname{dist}(x_{0},\mathbb{Z}/q)\neq 0\). Since \(q\) is fixed, \(\lim_{h\to 0}x_{q}/|h|^{1/2}=\infty\), so \(F_{\pm}(x)=O(x^{-2})\) implies \(F_{\pm}(x_{q}/\sqrt{|h|})\lesssim|h|/x_{q}^{2}\) when \(h\to 0\). Also \(|G(p,m_{q},q)|\leq\sqrt{2q}\) for all \(m_{q}\), so from Corollary 3.3 we get the following result, which shows that \(R_{x_{0}}\) is more regular at rational points when \(x_{0}\notin\mathbb{Q}\). **Proposition 6.1**.: _Let \(x_{0}\in\mathbb{R}\setminus\mathbb{Q}\) and let \(t\in\mathbb{Q}\). Then, \(R_{x_{0}}\in C^{3/2}(t)\), that is, \(\alpha_{x_{0}}(t)\geq 3/2\). More precisely, if \(t=p/q\) with \((p,q)=1\), then_ \[\Big{|}\,R_{x_{0}}\Big{(}\frac{p}{q}+h\Big{)}-R_{x_{0}}\Big{(}\frac{p}{q} \Big{)}+2\pi ih\,\Big{|}\lesssim\left(\frac{1}{\sqrt{q}\,x_{q}^{2}}+q^{3/2} \right)\,h^{3/2}.\] ### Regularity at irrational \(t\) Let now \(t\notin\mathbb{Q}\). We aim at an upper bound for \(\alpha_{x_{0}}(t)\) that complements the lower bound in Proposition 3.6. For that, as before, we approximate \(t\not\in\mathbb{Q}\) by rationals \(p_{n}/q_{n}\) and use the asymptotic behavior in Corollary 3.3. Now, however, since \(x_{0}\not\in\mathbb{Q}\) implies \(x_{q_{n}}\neq 0\), we cannot directly assume \(F_{\pm}(x_{q_{n}}/\sqrt{|h_{q_{n}}|})\simeq F_{\pm}(0)\simeq 1\) anymore.
Therefore, it is fundamental to understand the behavior of the quotient \(x_{q_{n}}/\sqrt{|h_{q_{n}}|}\). We begin with some heuristic computations. With the definition of the exponent of irrationality in mind, let \(q\in\mathbb{N}\) and define the exponents \(\mu_{q}\) and \(\sigma_{q}\) as \[x_{q}=\operatorname{dist}(x_{0},\mathbb{Z}/q)=\frac{1}{q^{\sigma_{q}}},\qquad |h_{q}|=\operatorname{dist}(t,\mathbb{Z}/q)=\frac{1}{q^{\mu_{q}}},\qquad \Longrightarrow\qquad\frac{x_{q}}{\sqrt{|h_{q}|}}=\frac{1}{q^{\sigma_{q}-\mu_{q }/2}}.\] If \(\sigma_{q}-\mu_{q}/2>c>0\) holds for a sequence \(q_{n}\), we should recover the behavior when \(x_{0}\in\mathbb{Q}\) because \[\lim_{n\to\infty}\big{(}\sigma_{q_{n}}-\frac{\mu_{q_{n}}}{2}\big{)}\geq c>0 \quad\Longrightarrow\quad\lim_{n\to\infty}\frac{x_{q_{n}}}{\sqrt{|h_{q_{n}}|}}= 0\quad\Longrightarrow\quad F_{\pm}\Big{(}\frac{x_{q_{n}}}{\sqrt{|h_{q_{n}}|}} \Big{)}\simeq F_{\pm}(0),\quad n\gg 1. \tag{42}\] The main term in the asymptotic behavior for \(R_{x_{0}}(t)-R_{x_{0}}(p_{n}/q_{n})\) in Corollary 3.3 would then be \[\operatorname{Main\ Term}\ =\frac{\sqrt{|h_{q_{n}}|}}{q_{n}}G(p_{n},m_{q_{n}},q _{n})F_{\pm}(0)\simeq\frac{\sqrt{|h_{q_{n}}|}}{\sqrt{q_{n}}}\simeq h_{q_{n}}^{ \frac{1}{2}+\frac{1}{\mu_{q_{n}}}}\] if we assume the necessary parity conditions so that \(|G(p_{n},m_{q_{n}},q_{n})|\simeq\sqrt{q_{n}}\). Recalling the definition of the exponent of irrationality \(\mu(\cdot)\) in (11), we may think of \(\sigma_{q_{n}}\to\mu(x_{0})\) and \(\mu_{q_{n}}\to\mu(t)\), so these heuristic computations suggest that \(\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu(t)}\) for \(t\) such that \(\mu(t)\leq 2\mu(x_{0})\). Since Proposition 3.6 gives \(\alpha_{x_{0}}(t)\geq\frac{1}{2}+\frac{1}{2\mu(t)}\), we may expect that \[\alpha_{x_{0}}(t)=\frac{1}{2}+\frac{1}{2\mu(t)},\qquad\text{if}\quad 2\leq\mu(t) \leq 2\mu(x_{0}). \tag{43}\] It is less clear what to expect when \(\mu(t)>2\mu(x_{0})\), since the behavior in (42) could be different. Actually, if we had \(\sigma_{q_{n}}-\mu_{q_{n}}/2<c<0\) for all sequences, then since \(F_{\pm}(x)=x^{-2}+O(x^{-4})\), \[\lim_{n\to\infty}\frac{x_{q_{n}}}{\sqrt{|h_{q_{n}}|}}=\lim_{n\to\infty}q_{n}^{ \mu_{q_{n}}/2-\sigma_{q_{n}}}=\infty\qquad\Longrightarrow\qquad F_{\pm}\Big{(} \frac{x_{q_{n}}}{\sqrt{|h_{q_{n}}|}}\Big{)}\simeq\frac{1}{q_{n}^{\mu_{q_{n}}-2 \sigma_{q_{n}}}}=|h_{q_{n}}|^{1-\frac{2\sigma_{q_{n}}}{\mu_{q_{n}}}},\] which in turn would make the main term in \(R_{x_{0}}(t)-R_{x_{0}}(p_{n}/q_{n})\) be \[\text{Main Term }=\frac{\sqrt{h_{q_{n}}}}{q_{n}}G(p_{n},m_{q_{n}},q_{n})F_{ \pm}\Big{(}\frac{x_{q_{n}}}{\sqrt{|h_{q_{n}}|}}\Big{)}\simeq h_{q_{n}}^{\frac {1}{2}+\frac{1}{2\mu_{q_{n}}}}\;h_{q_{n}}^{1-\frac{2\sigma_{q_{n}}}{\mu_{q_{n} }}}\simeq h_{q_{n}}^{\frac{3}{2}-\frac{4\sigma_{q_{n}}-1}{2\mu_{q_{n}}}},\] which corresponds to an exponent \(\frac{3}{2}-\frac{4\mu(x_{0})-1}{2\mu(t)}\). Together with lower bound in Proposition 3.6, we would get \(\frac{1}{2}+\frac{1}{2\mu(t)}\leq\alpha_{x_{0}}(t)\leq\frac{3}{2}-\frac{4\mu( x_{0})-1}{2\mu(t)}\), which leaves a gap. The main difficulty to materialize the ideas leading to (43) is that we need the sequence \(q_{n}\) to generate good approximations of both \(x_{0}\) and \(t\), which a priori may be not possible. In the following lines we show how we can partially dodge this problem to prove Theorem 1.3. **Proof of Theorem 1.3.** Let \(\sigma\geq 2\). 
Recalling the definition of the sets \(A_{\mu,\mathcal{Q}}\) in (13), define \[A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}=\left\{\,x\in[0,1]\,:\Big{|}x- \frac{b}{q}\Big{|}<\frac{1}{q^{\sigma}}\text{ for infinitely many coprime pairs }(b,q)\in\mathbb{N}\times(\mathbb{N}\setminus 4\mathbb{N})\,\right\}.\] We first prove that the restriction in the denominators23 does not affect the Hausdorff dimension. Footnote 23: This condition, which will be apparent later, comes from parity the conditions for the Gauss sums not to vanish. **Proposition 6.2**.: _Let \(\sigma\geq 2\). Then, \(\dim_{\mathcal{H}}A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}=2/\sigma\). Moreover, \(|A_{2,\,\mathbb{N}\setminus 4\mathbb{N}}|=1\) and if \(\sigma>2\), \(\mathcal{H}^{2/\sigma}(A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}})=\infty\)._ Proof.: The proof for the upper bound for the Hausdorff dimension is standard. Writing \[A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}=\limsup_{q\to\infty}\bigcup_{ \begin{subarray}{c}(q\not\in 4\mathbb{N})\end{subarray}}\bigcup_{1\leq b<q,\,(b.q)=1}B \Big{(}\frac{b}{q},\frac{1}{q^{\sigma}}\Big{)}=\bigcap_{Q=1}^{\infty}\bigcup_ {q\geq Q,\,q\not\in 4\mathbb{N}}\Bigg{(}\bigcup_{1\leq b<q,\,(b.q)=1}B\Big{(} \frac{b}{q},\frac{1}{q^{\sigma}}\Big{)}\Bigg{)},\] we get an upper bound for the Hausdorff measures using the canonical cover \[A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\subset\bigcup_{q\geq Q,\,q\not\in 4\mathbb{N}}\Big{(}\bigcup_{1\leq b<q}B\Big{(}\frac{b}{q},\frac{1}{q^{\sigma} }\Big{)}\Big{)},\quad\forall Q\in\mathbb{N}\quad\Longrightarrow\quad\mathcal{ H}^{\beta}(A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}})\leq\lim_{Q\to\infty}\sum_{q\geq Q}\frac{1}{q^{ \sigma\beta-1}}. \tag{44}\] Thus, \(\mathcal{H}^{\beta}(A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}})=0\) when \(\sigma\beta-1>1\), and consequently \(\dim_{\mathcal{H}}A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\leq 2/\sigma\). For the lower bound we follow the procedure discussed in Section 2, though unlike in the proof of Theorem 4.5 we do not need the Duffin-Schaeffer theorem here. We first study the Lebesgue measure of \(A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\). From (44) with \(\beta=1\), we directly get \(|A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}|=0\) when \(\sigma>2\). When \(\sigma=2\), we get \(A_{2,\,\mathbb{N}\setminus 4\mathbb{N}}=A_{2}=(0,1)\setminus\mathbb{Q}\). Indeed, if \(b_{n}/q_{n}\) is the sequence of approximations by continued fractions of \(x\in(0,1)\setminus\mathbb{Q}\), two consecutive denominators \(q_{n}\) and \(q_{n+1}\) are never both even24. This means that there is a subsequence \(b_{n_{k}}/q_{n_{k}}\) such that \(|x-b_{n_{k}}/q_{n_{k}}|<1/q_{n_{k}}^{2}\) and \(q_{n_{k}}\) is odd for all \(k\in\mathbb{N}\). In particular, \(q_{n_{k}}\not\in 4\mathbb{N}\), so \((0,1)\setminus\mathbb{Q}\subset A_{2,\,\mathbb{N}\setminus 4\mathbb{N}}\). Hence, Footnote 24: If \(x=[a_{0};a_{1},a_{2},\ldots]\) is a continued fraction, then \(q_{0}=1\), \(q_{1}=a_{1}\) and \(q_{n}=a_{n}q_{n-1}+q_{n-2}\) for \(n\geq 2\). If \(q_{N}\) and \(q_{N+1}\) were both even for some \(N\), then \(q_{N-1}\) would also be, and by induction \(q_{0}=1\) would be even. \[|A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}|=\left\{\begin{array}{ll}1,& \sigma\leq 2,\\ 0,&\sigma>2,\end{array}\right. \tag{45}\] With this in hand, we use the Mass Transference Principle Theorem 2.3. 
With this in hand, we use the Mass Transference Principle Theorem 2.3. For \(\beta>0\), \[(A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}})^{\beta}=\limsup_{\begin{subarray}{c}q\rightarrow\infty\\ q\not\in 4\mathbb{N}\end{subarray}}\bigcup_{1\leq b<q,\,(b,q)=1}B\Big{(}\frac{b}{q},\Big{(}\frac{1}{q^{\sigma}}\Big{)}^{\beta}\Big{)}=\limsup_{\begin{subarray}{c}q\rightarrow\infty\\ q\not\in 4\mathbb{N}\end{subarray}}\bigcup_{1\leq b<q,\,(b,q)=1}B\Big{(}\frac{b}{q},\frac{1}{q^{\sigma\beta}}\Big{)}=A_{\sigma\beta,\,\mathbb{N}\setminus 4\mathbb{N}}.\] Thus, choosing \(\beta=2/\sigma\) we get \((A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}})^{2/\sigma}=A_{2,\,\mathbb{N}\setminus 4\mathbb{N}}\), hence by (45) we get \(|(A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}})^{2/\sigma}|=1\). The Mass Transference Principle implies \(\dim_{\mathcal{H}}A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\geq 2/\sigma\) and \(\mathcal{H}^{2/\sigma}(A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}})=\infty\). Let \(x_{0}\in A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\). Then there exists a sequence of pairs \((b_{n},q_{n})\in\mathbb{N}\times(\mathbb{N}\setminus 4\mathbb{N})\) such that \(|x_{0}-b_{n}/q_{n}|<1/q_{n}^{\sigma}\) and moreover \(b_{n}/q_{n}\) are all approximations by continued fractions. Define \[\mathcal{Q}_{x_{0}}=\{\,q_{n}\,:\,n\in\mathbb{N}\,\}\] to be the set of such denominators. This sequence exists because: * if \(\sigma=2\), there is a subsequence of continued fraction approximations with odd denominator, in particular with \(q_{n}\not\in 4\mathbb{N}\). * if \(\sigma>2\), by definition there exists a sequence of pairs \((b_{n},q_{n})\in\mathbb{N}\times(\mathbb{N}\setminus 4\mathbb{N})\) such that \[\Big{|}x_{0}-\frac{b_{n}}{q_{n}}\Big{|}<\frac{1}{q_{n}^{\sigma}}\leq\frac{1}{2q_{n}^{2}},\qquad\text{ for large enough }n\in\mathbb{N}.\] By a theorem of Khinchin [35, Theorem 19], all such \(b_{n}/q_{n}\) are continued fraction approximations of \(x_{0}\). Since all such \(q_{n}\) are the denominators of continued fraction approximations, the sequence \(q_{n}\) grows exponentially.25 Following again the notation in (13) in Section 2, for \(\mu\geq 1\) and \(0<c<1/2\), let26 Footnote 25: We actually have \(q_{n}\geq 2^{n/2}\). To see this, rename this sequence as a subsequence \((b_{n_{k}}/q_{n_{k}})_{k}\) of the continued fraction convergents of \(x_{0}\). By the properties of the continued fractions, \(q_{n_{k}}\geq 2^{n_{k}/2}\). Since \(n_{k}\geq k\), we get \(q_{n_{k}}\geq 2^{k/2}\). Footnote 26: When \(\mu=\infty\) the definition is adapted as usual as \(A_{\infty,\mathcal{Q}_{x_{0}}}=\cap_{\mu}A_{\mu,\mathcal{Q}_{x_{0}}}\). Proofs for forthcoming results are written for \(\mu<\infty\), but the simpler \(\mu=\infty\) case is proved in the same way as we did in Section 4.2.
\[A_{\mu,\mathcal{Q}_{x_{0}}}=\bigg{\{}\,t\in[0,1]\,:\Big{|}t-\frac{p}{q}\Big{|}< \frac{c}{q^{\mu}}\text{ for infinitely many coprime pairs }(p,q)\in\mathbb{N}\times\mathcal{Q}_{x_{0}}\,\bigg{\}}\,.\] **Proposition 6.3**.: _For \(\mu\geq 1\), \(\dim_{\mathcal{H}}(A_{\mu,\mathcal{Q}_{x_{0}}})=1/\mu\)._ Proof.: As in the proof of Proposition 6.2, the upper bound follows from the limsup expression \(A_{\mu,\mathcal{Q}_{x_{0}}}=\limsup_{n\rightarrow\infty}\bigcup_{1\leq p\leq q _{n},\,(p,q_{n})=1}B(p/q_{n},c/q_{n}^{\mu})\) and its canonical covering \[A_{\mu,\mathcal{Q}_{x_{0}}}\subset\bigcup_{n\geq N}\bigcup_{1\leq p\leq q_{n} }B\Big{(}\frac{p}{q_{n}},\,\frac{c}{q_{n}^{\mu}}\Big{)},\quad\forall N\in \mathbb{N}\quad\Longrightarrow\quad\mathcal{H}^{\beta}\big{(}A_{\mu,\mathcal{ Q}_{x_{0}}}\big{)}\leq c^{\beta}\lim_{N\rightarrow\infty}\sum_{n=N}^{\infty}\frac{1}{q_{n}^{ \mu\beta-1}}. \tag{46}\] Since \(q_{n}\geq 2^{n/2}\), the series converges if and only if \(\mu\beta-1>0\). Thus, \(\mathcal{H}^{\beta}(A_{\mu,\mathcal{Q}_{x_{0}}})=0\) for all \(\beta>1/\mu\), hence \(\dim_{\mathcal{H}}(A_{\mu,\mathcal{Q}_{x_{0}}})\leq 1/\mu\). For the lower bound we follow again the procedure in Section 2. First we compute the Lebesgue measure of \(A_{\mu,\mathcal{Q}_{x_{0}}}\). From (46) with \(\beta=1\) we get \(|A_{\mu,\mathcal{Q}_{x_{0}}}|=0\) if \(\mu>1\). When \(\mu\leq 1\), by the Duffin-Schaeffer Theorem 2.1 we have \(|A_{\mu,\mathcal{Q}_{x_{0}}}|=1\) if and only if \(\sum_{n=1}^{\infty}\varphi(q_{n})/q_{n}^{\mu}=\infty\), and otherwise \(|A_{\mu,\mathcal{Q}_{x_{0}}}|=0\). If \(\mu<1\), we can use one of the classic properties of Euler's totient function, namely that for \(\epsilon=(1-\mu)/2>0\) there exists \(N\in\mathbb{N}\) such that \(\varphi(n)\geq n^{1-\epsilon}\) for all \(n\geq N\). In particular, there exists \(K\in\mathbb{N}\) such that \[\sum_{n=1}^{\infty}\frac{\varphi(q_{n})}{q_{n}^{\mu}}\geq\sum_{n=K}^{\infty} \frac{\varphi(q_{n})}{q_{n}^{\mu}}\geq\sum_{n=K}^{\infty}q_{n}^{1-\mu-\epsilon} \geq\sum_{n=K}^{\infty}1=\infty,\] and therefore \(|A_{\mu,\mathcal{Q}x_{0}}|=1\) if \(\mu<1\). None of these arguments work for \(\mu=1\). To determine \(|A_{1,\mathcal{Q}x_{0}}|\) we need to know the behavior of \(\varphi(q_{n})\) for \(q_{n}\in\mathcal{Q}_{x_{0}}\), of which we have little control. So in all, \[|A_{\mu,\mathcal{Q}x_{0}}|=\left\{\begin{array}{ll}1,&\mu<1,\\?,&\mu=1,\\ 0,&\mu>1.\end{array}\right. \tag{47}\] independently of \(c>0\). Even not knowing \(|A_{1,\mathcal{Q}x_{0}}|\), the Mass Transference Principle Theorem 2.3 allows us to compute the Hausdorff dimension of \(A_{\mu,\mathcal{Q}x_{0}}\) from (47). As usual, we dilate the set with an exponent \(\beta>0\): \[(A_{\mu,\mathcal{Q}x_{0}})^{\beta}=\limsup_{n\to\infty}\bigcup_{1\leq p\leq q _{n}}B\Big{(}\frac{p}{q_{n}},\Big{(}\frac{c}{q_{n}^{\mu}}\Big{)}^{\beta}\Big{)} =\limsup_{n\to\infty}\bigcup_{1\leq p\leq q_{n}}B\Big{(}\frac{p}{q_{n}},\frac{ c^{\beta}}{q_{n}^{\mu\beta}}\Big{)}=A_{\mu\beta,\mathcal{Q}x_{0}},\] with a new constant \(c^{\beta}\). Since (47) is independent of \(c\), we have \(|(A_{\mu,\mathcal{Q}x_{0}})^{\beta}|=|A_{\mu\beta,\mathcal{Q}x_{0}}|=1\) if \(\mu\beta<1\), and the Mass Transference Principle implies \(\dim_{\mathcal{H}}A_{\mu,\mathcal{Q}x_{0}}\geq\beta\). Taking \(\beta\to 1/\mu\), we deduce \(\dim_{\mathcal{H}}A_{\mu,\mathcal{Q}x_{0}}\geq 1/\mu\). 
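Both ingredients used in the proof above can be illustrated numerically: the exponential growth of continued-fraction denominators (footnote 25, here in the form of the classical bound \(q_{n}\geq 2^{(n-1)/2}\)) and the divergence of \(\sum_{n}\varphi(q_{n})/q_{n}^{\mu}\) for \(\mu<1\) entering the Duffin-Schaeffer criterion. The sketch below is illustrative only; the golden-ratio-type point is merely a convenient stand-in (it is not claimed to lie in any particular \(A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\)), chosen because its convergent denominators, the Fibonacci numbers, grow as slowly as possible.

```python
from sympy import totient

# Denominators of the convergents of (sqrt(5)-1)/2 = [0; 1, 1, 1, ...]:
# q_1 = 1, q_2 = 2, then the Fibonacci recursion, the slowest-growing case.
qs = [1, 2]
while len(qs) < 30:
    qs.append(qs[-1] + qs[-2])

# Classical lower bound q_n >= 2^{(n-1)/2} behind footnote 25.
print(all(q >= 2 ** ((n - 1) / 2) for n, q in enumerate(qs, start=1)))

# Partial sums of phi(q_n)/q_n^mu for mu < 1 keep growing (divergence),
# which is what the Duffin-Schaeffer theorem needs to give full measure.
mu, partial = 0.9, 0.0
for i, q in enumerate(qs, start=1):
    partial += float(totient(q)) / q ** mu
    if i % 10 == 0:
        print(i, round(partial, 2))
```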
As in Proposition 4.3 and in the definition of \(B_{\mu,\mathcal{Q}}\) in (28), to get information about \(\alpha_{x_{0}}(t)\) for \(t\in A_{\mu,\mathcal{Q}_{x_{0}}}\) we need to restrict its exponent of irrationality. We do this by removing sets \(A_{\mu+\epsilon}\) defined in (12). However, compared to Proposition 4.3 we have two fundamental difficulties: (a) The dimensions \(\dim_{\mathcal{H}}A_{\mu}=2/\mu>1/\mu=\dim_{\mathcal{H}}A_{\mu,\mathcal{Q}_{x_{0}}}\) do not match anymore. (b) Because we do not know the Lebesgue measure of \(A_{1,\mathcal{Q}_{x_{0}}}\) in (47), we cannot conclude that \(\mathcal{H}^{1/\mu}(A_{\mu,\mathcal{Q}_{x_{0}}})=\infty\) if \(\mu>1\). To overcome these difficulties, let \(\delta_{1},\delta_{2}>0\) and define the set \[B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}=\Big{(}A_{\mu,\mathcal{Q}_{x_{0}}}\setminus A_{\mu+\delta_{1},\mathcal{Q}_{x_{0}}}\Big{)}\setminus\Big{(}\bigcup_{\epsilon>0}A_{2\mu+\delta_{2}+\epsilon}\Big{)}.\] **Remark 6.4** (Explanation of the definition of \(B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\)).: The role of \(\delta_{2}\) is to avoid problem (b) above, while \(\delta_{1}\) has a technical role when controlling the behavior of \(F_{\pm}(x_{q_{n}}/\sqrt{h_{q_{n}}})\) in (50). Lastly, we remove \(A_{2\mu+\epsilon}\) instead of \(A_{\mu+\epsilon}\) to avoid problem (a) and to ensure that \(B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\) is not too small. The downside of this is that we can only get \(\mu(t)\in[\mu,2\mu+\delta_{2}]\) for the exponent of irrationality of \(t\in B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\). If instead we worked with the set \[\widetilde{B}_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1}}=\Big{(}A_{\mu,\mathcal{Q}_{x_{0}}}\setminus A_{\mu+\delta_{1},\mathcal{Q}_{x_{0}}}\Big{)}\setminus\Big{(}\bigcup_{\epsilon>0}A_{\mu+\epsilon}\Big{)}\] we would deduce \(\mu(t)=\mu\) and therefore \(\alpha_{x_{0}}(t)=1/2+1/(2\mu)\). However, we do not know how to compute the dimension of \(\widetilde{B}_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1}}\). **Proposition 6.5**.: _Let \(\mu\geq 1\). Then,_ (a) \(\dim_{\mathcal{H}}B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}=1/\mu\)_._ (b) _If_ \(t\in B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\)_, then_ \(\alpha_{x_{0}}(t)\geq\frac{1}{2}+\frac{1}{4\mu+2\delta_{2}}\)_._ (c) _If_ \(2\leq\mu<2\sigma-\delta_{1}\) _and_ \(t\in B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\)_, then_ \(\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\)_._ Proof of Proposition 6.5.: \((a)\) The inclusion \(B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\subset A_{\mu,\mathcal{Q}_{x_{0}}}\) directly implies \(\dim_{\mathcal{H}}B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\leq 1/\mu\). We prove the lower bound following the proof of Theorem 4.5 in a few steps: (a.1) Since \(\dim_{\mathcal{H}}A_{\mu+\delta_{1},\mathcal{Q}_{x_{0}}}=1/(\mu+\delta_{1})<1/\mu\), we have \(\dim_{\mathcal{H}}(A_{\mu,\mathcal{Q}_{x_{0}}}\setminus A_{\mu+\delta_{1},\mathcal{Q}_{x_{0}}})=1/\mu\). (a.2) The sets \(A_{\mu}\) are nested, so by the Jarnik-Besicovitch Theorem 2.2 \[\dim_{\mathcal{H}}\Big{(}\bigcup_{\epsilon>0}A_{2\mu+\delta_{2}+\epsilon}\Big{)}=\sup_{n\in\mathbb{N}}\Big{\{}\dim_{\mathcal{H}}\Big{(}A_{2\mu+\delta_{2}+\frac{1}{n}}\Big{)}\Big{\}}=\sup_{n\in\mathbb{N}}\frac{2}{2\mu+\delta_{2}+\frac{1}{n}}=\frac{1}{\mu+\delta_{2}/2}.\] Moreover, \(\mathcal{H}^{\gamma}\big{(}\bigcup_{\epsilon>0}A_{2\mu+\delta_{2}+\epsilon}\big{)}=\lim_{n\to\infty}\mathcal{H}^{\gamma}\big{(}A_{2\mu+\delta_{2}+1/n}\big{)}=0\) for all \(\gamma\geq 1/(\mu+\delta_{2}/2)\).
Take \(\gamma\) such that \(1/(\mu+\delta_{2}/2)<\gamma<1/\mu\). From (a.1) we get \(\mathcal{H}^{\gamma}(A_{\mu,\mathcal{Q}_{x_{0}}}\setminus A_{\mu+\delta_{1}, \mathcal{Q}_{x_{0}}})=\infty\), and from (a.2) we have \(\mathcal{H}^{\gamma}\big{(}\bigcup_{\epsilon>0}A_{2\mu+\delta_{2}+\epsilon} \big{)}=0\), so \[\mathcal{H}^{\gamma}(B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}})= \mathcal{H}^{\gamma}(A_{\mu,\mathcal{Q}_{x_{0}}}\setminus A_{\mu+\delta_{1}, \mathcal{Q}_{x_{0}}})-\mathcal{H}^{\gamma}\Big{(}\bigcup_{\epsilon>0}A_{2\mu+ \delta+\epsilon}\Big{)}>0.\] Consequently \(\dim_{\mathcal{H}}B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\geq\gamma\), and taking \(\gamma\to 1/\mu\) we conclude \(\dim_{\mathcal{H}}B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\geq 1/\mu\). \((b)\) Let \(t\in B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\). If \(\mu(t)\) is the exponent of irrationality of \(t\), then \(t\notin\bigcup_{\epsilon>0}A_{2\mu+\delta_{2}+\epsilon}\) implies \(\mu(t)\leq 2\mu+\delta_{2}\). Combining this with Proposition 3.6 we get \(\alpha_{x_{0}}(t)\geq\frac{1}{2}+\frac{1}{2\mu(t)}\geq\frac{1}{2}+\frac{1}{4 \mu+2\delta_{2}}\). \((c)\) Let \(t\in B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\). Since \(t\in A_{\mu,\mathcal{Q}_{x_{0}}}\setminus A_{\mu+\delta_{1},\mathcal{Q}_{x_{0 }}}\), there is a subsequence of denominators \((q_{n_{k}})_{k}\subset\mathcal{Q}_{x_{0}}\) such that \(c/q_{n_{k}}^{\mu+\delta_{1}}\leq\big{|}t-p_{n_{k}}/q_{n_{k}}\big{|}<c/q_{n_{k} }^{\mu}\) for \(k\in\mathbb{N}\). Define the errors \(h_{n_{k}}\) and \(x_{n_{k}}\), and the exponent \(\mu_{n_{k}}\) as \[h_{n_{k}}=\Big{|}t-\frac{p_{n_{k}}}{q_{n_{k}}}\Big{|}=\frac{1}{q_{n_{k}}^{\mu_ {n_{k}}}}\qquad\text{ and }\qquad x_{n_{k}}=\Big{|}x_{0}-\frac{b_{n_{k}}}{q_{n_{k}}}\Big{|}<\frac{1}{q_{ n_{k}}^{\sigma}}. \tag{48}\] From the condition above, since \(c<1\), we immediately get that for any \(\epsilon>0\), \[\mu<\mu_{n_{k}}\leq\mu+\delta_{1}+\epsilon,\qquad\forall k\gg_{\epsilon}1. \tag{49}\] By the asymptotic expansion in Corollary 3.3, we have \[R_{x_{0}}(t)-R_{x_{0}}\Big{(}\frac{p_{n_{k}}}{q_{n_{k}}}\Big{)}=\frac{h_{n_{k} }^{1/2}}{q_{n_{k}}}\,G(p_{n_{k}},b_{n_{k}},q_{n_{k}})\,F_{\pm}\Big{(}\frac{x_{n _{k}}}{\sqrt{h_{n_{k}}}}\Big{)}-2\pi ih_{n_{k}}+\text{Error},\] where \(\text{Error}=O\Big{(}\min\big{(}q_{n_{k}}^{3/2}\,h_{n_{k}}^{3/2},q_{n_{k}}^{1/2 }\,h_{n_{k}}\big{)}\Big{)}\). Let us treat the elements in this expression separately. * Since \(q_{n_{k}}\not\in 4\mathbb{N}\), we have \(|G(p_{n_{k}},b_{n_{k}},q_{n_{k}})|\geq\sqrt{q_{n_{k}}}\) for \(k\in\mathbb{N}\). Indeed, if \(q_{n_{k}}\) is odd, then \(|G(p_{n_{k}},b_{n_{k}},q_{n_{k}})|=\sqrt{q_{n_{k}}}\). If \(q_{n_{k}}\equiv 2\pmod{4}\), then \(b_{n_{k}}\) is odd, so \(q_{n_{k}}/2\equiv b_{n_{k}}\pmod{2}\) and hence \(|G(p_{n_{k}},b_{n_{k}},q_{n_{k}})|=\sqrt{2q_{n_{k}}}\). 
Also, by (48) and (49), \[\frac{x_{n_{k}}}{\sqrt{h_{n_{k}}}}=x_{n_{k}}\,q_{n_{k}}^{\mu_{n_{k}}/2}<\frac{ q_{n_{k}}^{\mu_{n_{k}}/2}}{q_{n_{k}}^{\sigma}}\leq\frac{q_{n_{k}}^{\frac{\mu}{2}+ \frac{\delta_{1}}{2}+\frac{\epsilon}{2}}}{q_{n_{k}}^{\sigma}}=\frac{1}{q_{n_{k }}^{\sigma-\frac{\mu}{2}-\frac{\delta_{1}}{2}-\frac{\delta}{2}}}.\] (50) Hence, if \(2\sigma>\mu+\delta_{1}\), we can choose \(\epsilon=\sigma-\mu/2-\delta_{1}/2>0\) and we get \[\lim_{k\to\infty}\frac{x_{n_{k}}}{\sqrt{h_{n_{k}}}}\leq\lim_{k\to\infty}\frac{1 }{q_{n_{k}}^{\sigma-\mu/2-\delta_{1}/2-\epsilon/2}}=\lim_{k\to\infty}\frac{1}{q_ {n_{k}}^{(\sigma-\mu/2-\delta_{1}/2)/2}}=0.\] Since \(F_{\pm}\) is continuous, we get \(|F_{\pm}(x_{n_{k}}/h_{n_{k}}^{1/2})|\geq|F_{\pm}(0)|/2\simeq 1\) for all \(k\gg 1\). Therefore, \[\text{Main term}=\Big{|}\frac{\sqrt{h_{n_{k}}}}{q_{n_{k}}}\,G(p_{n_{k}},b_{n_{k}},q _{n_{k}})\,F\Big{(}\frac{x_{n_{k}}}{h_{n_{k}}^{1/2}}\Big{)}\Big{|}\simeq\frac{ \sqrt{h_{n_{k}}}}{\sqrt{q_{n_{k}}}},\qquad\forall k\gg 1.\] * The term \(2\pi ih_{n_{k}}\) is absorbed by the Main Term if \(h_{n_{k}}\ll\sqrt{h_{n_{k}}}/\sqrt{q_{n_{k}}}\), which is equivalent to \(h_{n_{k}}\ll 1/q_{n_{k}}\). If \(\mu>1\), we get precisely \(h_{n_{k}}<c/q_{n_{k}}^{\mu}\ll 1/q_{n_{k}}\). * Regarding the error term, we can write \[q_{n_{k}}^{1/2}h_{n_{k}}=\frac{\sqrt{h_{n_{k}}}}{\sqrt{q_{n_{k}}}}\,(q_{n_{k}}^{2} h_{n_{k}})^{1/2},\qquad q_{n_{k}}^{3/2}h_{n_{k}}^{3/2}=\frac{\sqrt{h_{n_{k}}}}{ \sqrt{q_{n_{k}}}}\,q_{n_{k}}^{2}h_{n_{k}}.\] Since \(\text{Error}\leq C\,\min\big{(}q_{n_{k}}^{3/2}\,h_{n_{k}}^{3/2},q_{n_{k}}^{1/2 }\,h_{n_{k}}\big{)}\) for some constant \(C>0\), the error is absorbed by the Main Term if \(q_{n_{k}}^{2}\,h_{n_{k}}\leq c\) for a small enough, but universal constant \(c\). Choosing that \(c>0\) in the definition of \(A_{\mu,\mathcal{Q}_{x_{0}}}\), the condition \(h_{n_{k}}\leq c/q_{n_{k}}^{\mu}\leq c/q_{n_{k}}^{2}\) is satisfied if \(\mu\geq 2\). Hence, if \(2\leq\mu<2\sigma-\delta_{1}\) and \(t\in B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\), then \(|R_{x_{0}}(t)-R_{x_{0}}(p_{n_{k}}/q_{n_{k}})|\gtrsim\sqrt{h_{n_{k}}}/\sqrt{q_{ n_{k}}}\) for all \(k\gg 1\). From (49) we have \(1/\sqrt{q_{n_{k}}}=h_{n_{k}}^{1/(2\mu_{n_{k}})}>h_{n_{k}}^{1/(2\mu)}\), so \(|R_{x_{0}}(t)-R_{x_{0}}(p_{n_{k}}/q_{n_{k}})|\gtrsim h_{n_{k}}^{\frac{1}{2}+ \frac{1}{2\mu}}\) for all \(k\gg 1\), which implies \(\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\). From Proposition 6.5 we can deduce the main part of Theorem 1.3. **Theorem 6.6**.: _Let \(\sigma\geq 2\) and let \(x_{0}\in A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\). Let \(2\leq\mu<2\sigma\). Then, for all \(\delta>0\),_ \[\frac{1}{\mu}\leq\dim_{\mathcal{H}}\bigg{\{}\,t\,:\frac{1}{2}+\frac{1}{4\mu}- \delta\leq\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\bigg{\}}\leq\frac{2 }{\mu}.\] Proof.: Choose \(\delta_{2}>0\) and any \(\delta_{1}<2\sigma-\mu\). Hence, \(2\leq\mu<2\sigma-\delta_{1}\) and Proposition 6.5 implies \[B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}\subset\bigg{\{}\,t\,:\frac {1}{2}+\frac{1}{4\mu+2\delta_{2}}\leq\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1} {2\mu}\bigg{\}}\,.\] Since \(\dim_{\mathcal{H}}B_{\mu,\mathcal{Q}_{x_{0}}}^{\delta_{1},\delta_{2}}=1/\mu\) and \(\delta_{2}\) is arbitrary, we get the lower bound. Let us now prove the upper bound. If \(\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\), by Proposition 3.6 we get \(\frac{1}{2}+\frac{1}{2\mu(t)}\leq\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\), hence \(\mu(t)\geq\mu\). 
This implies \(t\in A_{\mu-\epsilon}\) for all \(\epsilon>0\), so by the Jarnik-Besicovitch Theorem 2.2 we get \[\dim_{\mathcal{H}}\bigg{\{}\,t\,:\frac{1}{2}+\frac{1}{4\mu}-\delta\leq\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\bigg{\}}\leq\dim_{\mathcal{H}}A_{\mu-\epsilon}=\frac{2}{\mu-\epsilon}\] for all \(\delta\geq 0\). We conclude by taking the limit \(\epsilon\to 0\). To get the precise statement of Theorem 1.3, we only need to relate the sets \(A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\) with the exponent \(\sigma(x_{0})=\limsup_{n\to\infty}\{\,\mu_{n}\,:\,q_{n}\not\in 4\mathbb{N}\,\}\) defined in (10). Proof of Theorem 1.3.: Since \(\{A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\}_{\sigma\geq 2}\) is a nested family and \(A_{2,\,\mathbb{N}\setminus 4\mathbb{N}}=(0,1)\setminus\mathbb{Q}\), for every \(x_{0}\in(0,1)\setminus\mathbb{Q}\) there exists \(\widetilde{\sigma}(x_{0})=\sup\{\,\sigma\,:\,x_{0}\in A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\,\}\). Let us check that \(\sigma(x_{0})=\widetilde{\sigma}(x_{0})\). Indeed, call \(\widetilde{\sigma}(x_{0})=\widetilde{\sigma}\). \(\bullet\) If \(\widetilde{\sigma}>2\), then for \(\epsilon>0\) small enough there exists a sequence \(b_{k}/q_{k}\) such that \(q_{k}\not\in 4\mathbb{N}\) and \(|x_{0}-b_{k}/q_{k}|<1/q_{k}^{\widetilde{\sigma}-\epsilon}<1/(2q_{k}^{2})\). By Khinchin's theorem [35, Theorem 19], \(b_{k}/q_{k}\) is an approximation by continued fraction, for which \(|x_{0}-b_{k}/q_{k}|=1/q_{k}^{\mu_{k}}<1/q_{k}^{\widetilde{\sigma}-\epsilon}\), and therefore \(\mu_{k}\geq\widetilde{\sigma}-\epsilon\). This implies \(\sigma(x_{0})\geq\widetilde{\sigma}-\epsilon\) for all \(\epsilon>0\), hence \(\sigma(x_{0})\geq\widetilde{\sigma}\). On the other hand, for all approximations by continued fractions with \(q_{n}\not\in 4\mathbb{N}\) with large enough \(n\) we have \(|x_{0}-b_{n}/q_{n}|=1/q_{n}^{\mu_{n}}>1/q_{n}^{\widetilde{\sigma}+\epsilon}\), hence \(\mu_{n}\leq\widetilde{\sigma}+\epsilon\). This holds for all \(\epsilon>0\), so \(\sigma(x_{0})\leq\widetilde{\sigma}\). \(\bullet\) If \(\widetilde{\sigma}=2\), then \(|x_{0}-b_{n}/q_{n}|=1/q_{n}^{\mu_{n}}>1/q_{n}^{2+\epsilon}\), hence \(\mu_{n}\leq 2+\epsilon\), for all approximations by continued fractions with \(q_{n}\not\in 4\mathbb{N}\). Therefore, \(\sigma(x_{0})\leq 2\). Since \(\sigma(x_{0})\geq 2\) always holds, we conclude. Therefore, let \(x_{0}\in(0,1)\setminus\mathbb{Q}\). Then, \(x_{0}\in A_{\sigma,\,\mathbb{N}\setminus 4\mathbb{N}}\) for all \(\sigma<\sigma(x_{0})\), so the conclusion of Theorem 6.6 holds for \(2\leq\mu<2\sigma\), for all \(\sigma<\sigma(x_{0})\). That implies that for every \(\delta>0\), \[\frac{1}{\mu}\leq\dim_{\mathcal{H}}\bigg{\{}\,t\,:\frac{1}{2}+\frac{1}{4\mu}-\delta\leq\alpha_{x_{0}}(t)\leq\frac{1}{2}+\frac{1}{2\mu}\bigg{\}}\leq\frac{2}{\mu},\qquad\text{ for all }\qquad 2\leq\mu<2\sigma(x_{0}).\qed\]

## Appendix A Sums of Euler's totient function

Sums of the Euler totient function play a relevant role in this article, especially in Lemma 5.5. In Section A.1 we state the classical results and briefly prove them for completeness. In Section A.2 we adapt these classical proofs to the sums modulo \(Q\) that we need in this article. Throughout this appendix, \(\varphi\) denotes the Euler totient function and \(\mu\) denotes the Mobius function27.
Footnote 27: For \(n\in\mathbb{N}\), \(\mu(n)=1\) if \(n\) has no squared prime factor and it has an even number of prime factors; \(\mu(n)=-1\) if \(n\) has no squared prime factor and it has an odd number of prime factors; and \(\mu(n)=0\) if it has a squared prime factor. ### Sums of Euler's totient function Define the sum function \[\Phi(N)=\sum_{n=1}^{N}\varphi(n),\qquad N\in\mathbb{N}.\] **Proposition A.1**.: _For \(N\gg 1\),_ \[\Phi(N)=CN^{2}+O\Big{(}N\log N\Big{)},\qquad\text{ where }\qquad C=\frac{1}{2}\,\sum_{n=1}^{\infty}\frac{\mu(n)}{n^{2}}=\frac{3}{\pi^{2}}.\] Proof.: By the Mobius inversion formula, \[\Phi(N)=\sum_{n=1}^{N}\varphi(n)=\sum_{n=1}^{N}n\bigg{(}\sum_{d|n}\frac{\mu(d)}{d}\bigg{)}=\sum_{n=1}^{N}\sum_{d|n}\frac{n}{d}\,\mu(d).\] Calling \(n/d=d^{\prime}\), the sum is over all natural numbers \(d\) and \(d^{\prime}\) such that \(dd^{\prime}\leq N\). Therefore, \[\Phi(N)=\sum_{d,d^{\prime}\,:\,dd^{\prime}\leq N}d^{\prime}\mu(d)=\sum_{d=1}^{N}\mu(d)\,\sum_{d^{\prime}=1}^{\lfloor N/d\rfloor}d^{\prime}=\sum_{d=1}^{N}\mu(d)\,\frac{\lfloor N/d\rfloor\,(\lfloor N/d\rfloor+1)}{2}.\] For \(x\in\mathbb{R}\), write \(x=\lfloor x\rfloor+\{x\}\), where \(0\leq\{x\}<1\) is the fractional part of \(x\). Then, direct computation shows that \(\lfloor x\rfloor\,(\lfloor x\rfloor+1)=x^{2}+O(x)\) when \(x\geq 1\), so \[\Phi(N)=\frac{1}{2}\,\sum_{d=1}^{N}\mu(d)\left(\Big{(}\frac{N}{d}\Big{)}^{2}+O\Big{(}\frac{N}{d}\Big{)}\right)=\frac{N^{2}}{2}\sum_{d=1}^{N}\frac{\mu(d)}{d^{2}}+O\left(N\,\sum_{d=1}^{N}\frac{1}{d}\right).\] The series \(\sum_{d=1}^{\infty}\mu(d)/d^{2}\) is absolutely convergent, and its value is known to be \(2C=6/\pi^{2}\), so write \[\sum_{d=1}^{N}\frac{\mu(d)}{d^{2}}=2C-\sum_{d=N+1}^{\infty}\frac{\mu(d)}{d^{2}}=2C+O\bigg{(}\sum_{d=N+1}^{\infty}\frac{1}{d^{2}}\bigg{)}=2C+O\Big{(}\frac{1}{N}\Big{)}.\] Since \(\sum_{d=1}^{N}1/d\simeq\log N\), we get \(\Phi(N)=C\,N^{2}+O(N)+O(N\log N)=CN^{2}+O(N\log N)\). As a corollary of Proposition A.1 we obtain the analogous result for the sums weighted by \(n^{-\alpha}\). Observe that when \(\alpha>2\) the sum is convergent. **Corollary A.2**.: _Let \(\alpha\leq 2\). For \(N\gg 1\),_ \[\sum_{n=1}^{N}\frac{\varphi(n)}{n^{2}}\simeq\log N,\qquad\text{ and }\qquad\sum_{n=1}^{N}\frac{\varphi(n)}{n^{\alpha}}\simeq N^{2-\alpha},\quad\text{ if }\,\alpha<2.\] Proof.: Upper bounds immediately follow from \(\varphi(n)\leq n\). For lower bounds, assume first that \(\alpha\geq 0\). From Proposition A.1 we directly get \[\sum_{n=1}^{N}\frac{\varphi(n)}{n^{\alpha}}\geq\frac{1}{N^{\alpha}}\sum_{n=1}^{N}\varphi(n)=\frac{1}{N^{\alpha}}\Phi(N)\simeq N^{2-\alpha},\] which is optimal when \(\alpha<2\). For the case \(\alpha=2\) we use the summation by parts formula28 to get Footnote 28: Let \(a_{n}\) and \(b_{n}\) be two sequences, and let \(B_{N}=\sum_{n=1}^{N}b_{n}\). Then, \(\sum_{n=1}^{N}a_{n}b_{n}=a_{N}B_{N}-\sum_{n=1}^{N-1}B_{n}(a_{n+1}-a_{n})\). \[\sum_{n=1}^{N}\frac{\varphi(n)}{n^{2}}=\frac{\Phi(N)}{N^{2}}-\sum_{n=1}^{N-1}\Phi(n)\Big{(}\frac{1}{(n+1)^{2}}-\frac{1}{n^{2}}\Big{)}=\frac{\Phi(N)}{N^{2}}+\sum_{n=1}^{N-1}\Phi(n)\frac{2n+1}{n^{2}\,(n+1)^{2}}.
\tag{51}\] Restrict the sum to \(\log N\leq n\leq N-1\), and combine it with \(\Phi(n)\simeq n^{2}\) for \(n\gg 1\) from Proposition A.1 to get \[\sum_{n=1}^{N}\frac{\varphi(n)}{n^{2}}\gtrsim 1+\sum_{n\geq\log N}^{N-1} \frac{1}{n}\simeq\log N-\log\log N\simeq\log N,\qquad\text{ for }\,N\gg 1.\] When \(\alpha<0\), restrict the sum to \(n\in[N/2,N]\) and use \(\Phi(N)=CN^{2}+O(N\log N)\) in Proposition A.1 to get \[\sum_{n=1}^{N}\frac{\varphi(n)}{n^{\alpha}}=\sum_{n=1}^{N}\varphi(n)\,n^{| \alpha|}\geq\Big{(}\frac{N}{2}\Big{)}^{|\alpha|}\,\sum_{n\geq N/2}^{N}\varphi( n)\simeq_{|\alpha|}\frac{\Phi(N)-\Phi(N/2)}{N^{\alpha}}\simeq N^{2-\alpha}.\qed\] ### Sums of Euler's totient function modulo \(Q\) To get our results for \(R_{x_{0}}\) when \(x_{0}=P/Q\), we need to know the behavior of the sum function modulo \(Q\), \[\Phi_{Q}(N)=\sum_{n=1}^{N}\varphi(Qn)\qquad\text{ when }\,N\gg 1,\] and its corresponding weighted sums. We adapt the proofs of Proposition A.1 and Corollary A.2. **Proposition A.3**.: _Let \(Q\in\mathbb{N}\). Then, \(\Phi_{Q}(N)\leq QN^{2}\), and there exists a constant \(c_{Q}>0\) such that_ \[\Phi_{Q}(N)\geq c_{Q}N^{2}+O_{Q}(N\log N).\] _Consequently, \(\Phi_{Q}(N)\simeq_{Q}N^{2}\) when \(N\gg 1\)._ Proof.: The upper bound follows directly from \(\varphi(n)<n\) for all \(n\in\mathbb{N}\), so it suffices to prove the lower bound. For that, first restrict the sum to \(n\leq N\) such that \((Q,n)=1\). By the multiplicative property of the Euler function, we get \[\Phi_{Q}(N)\geq\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{N}\varphi(Qn)=\varphi(Q)\sum_{\begin{subarray}{c}n=1 \\ (Q,n)=1\end{subarray}}^{N}\varphi(n). \tag{52}\] The proof now follows the same strategy as in Proposition A.1. Use Mobius inversion to write \[\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{N}\varphi(n)=\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{N}\left(n\sum_{d|n}\frac{\mu(d)}{d}\right)=\sum_{ \begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{N}\sum_{d|n}\,\frac{n}{d}\,\mu(d).\] Observe that if \((Q,n)=1\) and if we decompose \(n=d\,d^{\prime}\), then both \(d\) and \(d^{\prime}\) are coprime with \(Q\). Conversely, if \(d\) and \(d^{\prime}\) are coprime with \(Q\), then so is \(n=d\,d^{\prime}\). Thus, \[\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{N}\varphi(n)=\sum_{\begin{subarray}{c}d,d^{\prime} \,:\,d^{\prime}\leq N\\ (Q,d)=1=(Q,d^{\prime})\end{subarray}}d^{\prime}\,\mu(d)=\sum_{ \begin{subarray}{c}d=1\\ (Q,d)=1\end{subarray}}^{N}\mu(d)\Bigg{(}\sum_{\begin{subarray}{c}d^{\prime}=1\\ (Q,d^{\prime})=1\end{subarray}}^{\lfloor N/d\rfloor}d^{\prime}\Bigg{)}. \tag{53}\] In the following lemma we give a closed formula for the inner sum. We postpone its proof. **Lemma A.4**.: _Let \(Q\in\mathbb{N}\), \(Q\geq 2\). 
Then,_ \[S_{Q}=\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{Q-1}n=\frac{Q\,\varphi(Q)}{2},\qquad\text{ and }\qquad S_{Q,k}=\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{kQ-1}n=\frac{Q\,\varphi(Q)}{2}\,k^{2},\quad\forall k\in \mathbb{N}.\] Now, for every \(d\leq N\), find \(k_{d}\in\mathbb{N}\cup\{0\}\) such that \(k_{d}Q\leq\lfloor N/d\rfloor<(k_{d}+1)Q\), and write \[\sum_{\begin{subarray}{c}d^{\prime}=1\\ (Q,d^{\prime})=1\end{subarray}}^{\lfloor N/d\rfloor}d^{\prime}=\sum_{ \begin{subarray}{c}d^{\prime}=1\\ (Q,d^{\prime})=1\end{subarray}}^{k_{d}Q-1}d^{\prime}+\sum_{\begin{subarray}{c }d^{\prime}=k_{d}Q+1\\ (Q,d^{\prime})=1\end{subarray}}^{\lfloor N/d\rfloor}d^{\prime}=S_{Q,k_{d}}+O \Big{(}(k_{d}+1)Q^{2}\Big{)}=\frac{Q\,\varphi(Q)}{2}\,k_{d}^{2}+O\Big{(}(k_{d} +1)Q^{2}\Big{)}. \tag{54}\] Since the definition of \(k_{d}\) is equivalent to \(\frac{1}{Q}\,\lfloor N/d\rfloor-1<k_{d}\leq\frac{1}{Q}\,\lfloor N/d\rfloor\), we deduce that \(k_{d}=\lfloor\frac{1}{Q}\lfloor N/d\rfloor\rfloor\). Consequently, since \(\lfloor x\rfloor=x+O(1)\) and \(\lfloor x\rfloor^{2}=x^{2}+O(x)\), we get \[k_{d}=\frac{N}{Qd}+O(1)\qquad\text{ and }\qquad k_{d}^{2}=\frac{N^{2}}{Q^{2}d^{2 }}+\frac{1}{Q}\,O\Big{(}\frac{N}{d}\Big{)}. \tag{55}\] Hence, from (54) and (55) we get \[\sum_{\begin{subarray}{c}d^{\prime}=1\\ (Q,d^{\prime})=1\end{subarray}}^{\lfloor N/d\rfloor}d^{\prime}=\frac{\varphi(Q) }{2Q}\,\frac{N^{2}}{d^{2}}+O\left(\varphi(Q)\,\frac{N}{d}+Q\frac{N}{d}+Q^{2} \right)=\frac{\varphi(Q)}{2Q}\,\frac{N^{2}}{d^{2}}+Q^{2}\,O\bigg{(}\frac{N}{d }\bigg{)}.\] We plug this in (53) to get \[\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{N}\varphi(n)=\frac{\varphi(Q)}{2Q}N^{2}\sum_{ \begin{subarray}{c}d=1\\ (Q,d)=1\end{subarray}}^{N}\frac{\mu(d)}{d^{2}}+O\Big{(}Q^{2}N\sum_{ \begin{subarray}{c}d=1\\ (Q,d)=1\end{subarray}}^{N}\frac{\mu(d)}{d}\Big{)}.\] The sum \(\sum_{n=1}^{\infty}\mu(d)/d^{2}\) is absolutely convergent, and \(c_{Q}:=\sum_{d=1,\,(Q,d)=1}^{\infty}\mu(d)/d^{2}>0\) because \[c_{Q}=1+\sum_{\begin{subarray}{c}d=2\\ (Q,d)=1\end{subarray}}^{\infty}\frac{\mu(d)}{d^{2}}\qquad\text{ and }\qquad\Bigg{|}\sum_{ \begin{subarray}{c}d=2\\ (Q,d)=1\end{subarray}}^{\infty}\frac{\mu(d)}{d^{2}}\Bigg{|}\leq\frac{\pi^{2}}{ 6}-1<1.\] Hence, \[\sum_{\begin{subarray}{c}d=1\\ (Q,d)=1\end{subarray}}^{N}\frac{\mu(d)}{d^{2}}=c_{Q}-\sum_{\begin{subarray}{c }d=N+1\\ (Q,d)=1\end{subarray}}^{\infty}\frac{\mu(d)}{d^{2}}=c_{Q}+O\Big{(}\sum_{d=N+1} ^{\infty}\frac{1}{d^{2}}\Big{)}=c_{Q}+O(1/N).\] Together with \(|\sum_{d=1,\,(Q,d)=1}^{N}\mu(d)/d|\lesssim\log N\), this implies \[\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{N}\varphi(n)=c_{Q}\,\frac{\varphi(Q)}{2Q}N^{2}+O\Big{(} \frac{\varphi(Q)}{Q}N\Big{)}+O(Q^{2}N\log N)=c_{Q}\,\frac{\varphi(Q)}{2Q}N^{2} +O_{Q}(N\log N).\] Together with (52) we conclude \(\Phi_{Q}(N)\geq c_{Q}\,\frac{\varphi(Q)^{2}}{2Q}N^{2}+O_{Q}(N\log N)\). Proof of Lemma a.4.: We begin with \(k=1\). When \(Q=2\), we have \(S_{2,1}=1=2\,\varphi(2)/2\), so we may assume \(Q\geq 3\). We first observe that \(\varphi(Q)\) is even, because if \(Q\) has an odd prime factor \(p\), then \(\varphi(p)=p-1\), which is even, is a factor of \(\varphi(Q)\). Otherwise, \(Q=2^{r}\) with \(r\geq 2\), so \(\varphi(Q)=2^{r-1}\) is even. 
Now, the observation that \((Q,n)=1\iff(Q,Q-n)=1\) implies \[S_{Q,1}=\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{\lfloor Q/2\rfloor}n+\sum_{\begin{subarray}{c}n= \lfloor Q/2\rfloor+1\\ (Q,n)=1\end{subarray}}^{Q-1}n=\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{\lfloor Q/2\rfloor}\big{(}n+(Q-n)\big{)}=Q\,\frac{ \varphi(Q)}{2}.\] Let now \(k\geq 2\), so that \[\sum_{\begin{subarray}{c}n=(k-1)Q+1\\ (Q,n)=1\end{subarray}}^{kQ-1}n=\sum_{\begin{subarray}{c}n=1\\ (Q,n)=1\end{subarray}}^{Q-1}\bigg{(}n+(k-1)Q\bigg{)}=S_{Q,1}+(k-1)Q\varphi(Q)=Q \varphi(Q)\Big{(}k-\frac{1}{2}\Big{)}.\] Consequently, \[S_{Q,k}=\sum_{\ell=1}^{k}\Bigg{(}\sum_{\begin{subarray}{c}n=(\ell-1)Q+1\\ (Q,n)=1\end{subarray}}^{\ell Q}n\Bigg{)}=\sum_{\ell=1}^{k}Q\varphi(Q)\Big{(} \ell-\frac{1}{2}\Big{)}=\frac{Q\varphi(Q)}{2}k^{2}.\qed\] To conclude, we prove the estimates for the weighted sums that we needed in Lemma 5.5 as a corollary of Proposition A.3. As before, when \(\alpha>2\) the sums are absolutely convergent. **Corollary A.5** (Lemma 5.5).: _Let \(Q\in\mathbb{N}\) and \(\alpha\leq 2\). For \(N\gg 1\),_ \[\sum_{n=1}^{N}\frac{\varphi(Qn)}{n^{2}}\simeq\log N,\qquad\text{ and }\qquad\sum_{n=1}^{N}\frac{\varphi(Qn)}{n^{\alpha}}\simeq N^{2-\alpha}\quad \text{ for }\quad\alpha<2.\] _The implicit constants depend on \(Q\), and also on \(\alpha\) when \(\alpha<0\)._ Proof.: Upper bounds follow directly from \(\varphi(n)\leq n\). Lower bounds follow from Proposition A.3 with the same strategy as in the proof of Corollary A.2. If \(\alpha\geq 0\), by Proposition A.3 we get \[\sum_{n=1}^{N}\frac{\varphi(Qn)}{n^{\alpha}}\geq\frac{1}{N^{\alpha}}\,\Phi_{Q }(N)\simeq_{Q}N^{2-\alpha},\qquad\text{ when }N\gg 1.\] When \(\alpha=2\), combine Proposition A.3 with summing by parts as in (51) to get \[\sum_{n=1}^{N}\frac{\varphi(Qn)}{n^{2}}=\frac{\Phi_{Q}(N)}{N^{2}}+\sum_{n=1}^ {N-1}\Phi_{Q}(n)\frac{2n+1}{n^{2}\,(n+1)^{2}}\gtrsim 1+\sum_{n=\log N}^{N-1} \frac{1}{n}\simeq\log N.\] When \(\alpha<0\), choosing \(\delta>0\) small enough depending on \(Q\), Proposition A.3 implies \[\sum_{n=1}^{N}\frac{\varphi(Qn)}{n^{\alpha}}\geq_{\alpha}N^{|\alpha|}\sum_{n =\delta N}^{N}\varphi(Qn)=N^{|\alpha|}\Big{(}\Phi_{Q}(N)-\Phi_{Q}(\delta N) \Big{)}\simeq_{Q,\alpha}N^{|\alpha|}N^{2}=N^{2-\alpha}.\qed\]
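The asymptotics established in this appendix are easy to test numerically. The sketch below is illustrative only: it checks the constant \(3/\pi^{2}\) of Proposition A.1, the closed formula of Lemma A.4, and the quadratic growth \(\Phi_{Q}(N)\simeq_{Q}N^{2}\) of Proposition A.3 for one arbitrary choice of \(Q\).

```python
import math
from sympy import totient

def Phi(N):
    # Phi(N) = sum_{n <= N} phi(n), as in Proposition A.1.
    return sum(int(totient(n)) for n in range(1, N + 1))

def Phi_Q(N, Q):
    # Phi_Q(N) = sum_{n <= N} phi(Q n), as in Proposition A.3.
    return sum(int(totient(Q * n)) for n in range(1, N + 1))

# Proposition A.1: Phi(N)/N^2 -> 3/pi^2 = 0.30396...
for N in (10**2, 10**3, 10**4):
    print(N, Phi(N) / N**2, 3 / math.pi**2)

# Lemma A.4: the sum of n <= kQ-1 coprime to Q equals Q*phi(Q)*k^2/2
# (note Q*phi(Q) is even for Q >= 3, so the division is exact).
Q, k = 12, 7
S = sum(n for n in range(1, k * Q) if math.gcd(n, Q) == 1)
print(S == Q * int(totient(Q)) * k**2 // 2)

# Proposition A.3: Phi_Q(N)/N^2 stays bounded away from 0 and infinity.
for N in (200, 400, 800):
    print(N, Phi_Q(N, Q) / N**2)
```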
2309.15810
Distinguishing between long-transient and asymptotic states in a biological aggregation model
Aggregations are emergent features common to many biological systems. Mathematical models to understand their emergence are consequently widespread, with the aggregation-diffusion equation being a prime example. Here we study the aggregation-diffusion equation with linear diffusion. This equation is known to support solutions that involve both single and multiple aggregations. However, numerical evidence suggests that the latter, which we term `multi-peaked solutions' may often be long-transient solutions rather than asymptotic steady states. We develop a novel technique for distinguishing between long transients and asymptotic steady states via an energy minimisation approach. The technique involves first approximating our study equation using a limiting process and a moment closure procedure. We then analyse local minimum energy states of this approximate system, hypothesising that these will correspond to asymptotic patterns in the aggregation-diffusion equation. Finally, we verify our hypotheses through numerical investigation, showing that our approximate analytic technique gives good predictions as to whether a state is asymptotic or a long transient. Overall, we find that almost all twin-peaked, and by extension multi-peaked, solutions are transient, except for some very special cases. We demonstrate numerically that these transients can be arbitrarily long-lived, depending on the parameters of the system.
Jonathan R. Potts, Kevin J. Painter
2023-09-27T17:34:21Z
http://arxiv.org/abs/2309.15810v1
# Distinguishing between long-transient and asymptotic states in a biological aggregation model ###### Abstract Aggregations are emergent features common to many biological systems. Mathematical models to understand their emergence are consequently widespread, with the aggregation-diffusion equation being a prime example. Here we study the aggregation-diffusion equation with linear diffusion. This equation is known to support solutions that involve both single and multiple aggregations. However, numerical evidence suggests that the latter, which we term'multi-peaked solutions' may often be long-transient solutions rather than asymptotic steady states. We develop a novel technique for distinguishing between long transients and asymptotic steady states via an energy minimisation approach. The technique involves first approximating our study equation using a limiting process and a moment closure procedure. We then analyse local minimum energy states of this approximate system, hypothesising that these will correspond to asymptotic patterns in the aggregation-diffusion equation. Finally, we verify our hypotheses through numerical investigation, showing that our approximate analytic technique gives good predictions as to whether a state is asymptotic or a long transient. Overall, we find that almost all twin-peaked, and by extension multi-peaked, solutions are transient, except for some very special cases. We demonstrate numerically that these transients can be arbitrarily long-lived, depending on the parameters of the system. **Keywords: Aggregation-diffusion equation, Asymptotics, Biological aggregation, Long transients, Metastability, Nonlocal advection** ## 1 Introduction Aggregation phenomena are widespread in biology, from cell aggregations (Budrene and Berg, 1995) to the swarming (Roussi, 2020), schooling (Makris et al, 2009), flocking (Clark and Mangel, 1984), and herding (Bond et al, 2019) of animals. When modelled from a continuum perspective (as opposed to via interacting particles), the principal tools take the form of partial differential equations with non-local advection, sometimes combined with a diffusive term (Topaz et al, 2006). Indeed, such equations are often called aggregation equations (Laurent, 2007), highlighting their importance in modelling aggregations, or aggregation-diffusion equations (Carrillo et al, 2019) if there is a diffusion term. As well as modelling aggregated groups of organisms, such equations have also been used to model aggregation-like phenomena elsewhere, such as animal home ranges and territories (Briscoe et al, 2002; Potts and Lewis, 2016) and consensus convergence in opinion dynamics (Garnier et al, 2017). This very broad range of applications, together with the mathematical complexity in dealing with nonlinear nonlocal partial differential equations (PDEs), has led to a great amount of interest from applied mathematicians in understanding the properties of these PDEs (Painter et al, 2023). Of particular interest from a biological perspective are the pattern formation properties of aggregation-diffusion equations, since these can reveal the necessary processes required for observed patterns to emerge. Many traditional techniques for analysing pattern formation, such as linear stability analysis and weakly nonlinear analysis, focus on the onset of patterns from small perturbations of a non-patterned (i.e. spatially homogeneous) state. 
However, patterns observed in actual biological systems will often be far from the non-patterned state, and not necessarily emerge from small perturbations of spatially homogeneous configurations (Krause et al, 2020; Veerman et al, 2021). Sometimes observed patterns will be asymptotic steady states or other types of attractors. But frequently biological systems will be observed in transient states (Hastings et al, 2018; Morozov et al, 2020). These transient states may persist for a very long time, sometimes so long that they are hard to distinguish from asymptotic states. Moreover, as well as transients being difficult to decipher from observations of biological systems, they can also be tricky to determine from numerical solutions of a PDE model. Therefore analytic techniques are required to guide those engaging in numerical analysis of PDEs as to whether the solution they are observing is likely to be a long transient or an asymptotic state. Our aim here is to provide such analytic techniques for a class of 1D aggregation-diffusion equations of the following form \[\frac{\partial u}{\partial t}=D\frac{\partial^{2}u}{\partial x^{2}}-\gamma \frac{\partial}{\partial x}\left[u\frac{\partial}{\partial x}(K*u)\right], \tag{1}\] where \(K\) is a non-negative averaging kernel, symmetric about 0, with \(\|K\|_{L^{\infty}}=1\), and \[K*u(x)=\int_{\Omega}K(z)u(x+z)\mathrm{d}z \tag{2}\] is a convolution, where \(\Omega\) is the spatial domain of definition. Here, \(D\) and \(\gamma\) are constants, and \(\Omega\) is the circle given by interval \([-L,L]\) with periodic boundary conditions imposed. Our approach is not exact, in the sense that we approximate our study PDE first through the limit as \(D/\gamma\to 0\), then via a moment closure assumption. However, this approximation allows us to analyse the associated energy functional, finding explicit mathematical expressions for local energy minima. Our conjecture is that local energy minima of the approximate system are qualitatively similar to the asymptotic patterns observed the aggregation-diffusion equation we are studying, but any states that do not represent local energy minima of the approximate system are transient states. We then test this numerically in some specific cases. Of particular interest is the question of whether multi-peaked solutions are asymptotic steady states or long transients, which is the question that originally motivated this work. Various numerical studies of Equation (1), and similar equations, report multi-peaked solutions (Armstrong et al, 2006; Buttenschon and Hillen, 2020; Carrillo et al, 2019; Daneri et al, 2022). However, merging and decaying of peaks have also been observed. Furthermore, analytic investigations into chemotaxis equations, which have some similarities with aggregation equations, have demonstrated that multi-peaked solutions can often be long transients (Potapov and Hillen, 2005). This work demonstrates that, except for the very specific case where peaks are of identical heights and evenly-spaced, any two-peaked solutions will eventually evolve into a solution with at most one peak, as the smaller peak decays to zero. The time it takes for the smaller peak to decay grows rapidly with the start height of the smaller peak, eventually tending to infinity as the difference in start heights between the two peaks tends to zero. We show that a key parameter governing the speed of this decay is the diffusion constant \(D\), with higher diffusion constants leading to faster decays. 
We conjecture that, as \(D\to 0\), the time to decay tends to infinity, meaning that two-peaked solutions become stable. Finally, we investigate the effect of incorporating logistic growth of the population into our model. The motivation for this is that, in situations where transient solutions exist for a long time, it is no longer biologically reasonable to assume that we are working in situations where births and deaths are negligible. We show that, for a given set of parameters and initial condition, there is a critical net reproduction rate, below which the smaller peak will decay and above which it will persist. ## 2 Methodological approach Our study is motivated by an observation. Often, when simulating Equation (1), multiple aggregations may form and persist for a very long time. This can give the appearance of multi-peaked asymptotically stable states. For example, Figure 1 shows a numerical solution where two peaks have formed by time \(t=1\). These appear stable on timescales up to two orders of magnitude longer than the time they took to form: even by time \(t=100\), the solution has not changed very much (Figure 1a). However, if we keep running the simulation, we see one of the peaks decay and the other slowly swallow up the former's mass. The question then arises whether multi-peaked solutions to Equation (1) are ever actually stable, or whether they are always just long transients. To answer this question, our approach will not be to analyse Equation (1) directly, but rather to take two approximations, which enable us to perform analytic calculations. First, we assume that \(\gamma\gg D\). Second, we make the following moment closure assumption \[K*u(x)\approx u+\frac{\sigma^{2}}{2}\frac{\partial^{2}u}{\partial x^{2}} \tag{3}\] where \[\sigma^{2}=\int_{-L}^{L}x^{2}K(x)\mathrm{d}x \tag{4}\] is the second moment of \(K\). This leads to the following approximate version of Equation (1) \[\frac{\partial u}{\partial t}=-\gamma\frac{\partial}{\partial x}\left[u\left(\frac{\partial u}{\partial x}+\frac{\sigma^{2}}{2}\frac{\partial^{3}u}{\partial x^{3}}\right)\right]. \tag{5}\] Note that Equations (1) and (5) both preserve mass when solved with periodic boundary conditions (i.e. \(u(-L,t)=u(L,t)\) and \(\frac{\partial u}{\partial x}(-L,t)=\frac{\partial u}{\partial x}(L,t)\)), so that if we define \[p:=\int_{-L}^{L}u(x,0)\mathrm{d}x \tag{6}\] then \[\int_{-L}^{L}u(x,t)\mathrm{d}x=p, \tag{7}\] for all \(t>0\).
Figure 1: Numerical solutions of Equation (1) starting with initial conditions that are a small random fluctuation of the constant steady state. By \(t=1\) clear aggregations have formed that might seem stable were the solution only run to around time \(t=100\). However, if we run the solution further in time, we see that the middle peak is gradually decaying, and this decay is speeding up over time, so that by \(t=460\) the peak in the middle is much smaller than the other peak. Here, \(D=1\), \(\gamma=10\), and \(K\) is a top-hat kernel (Equation 19) with \(\delta=0.1\).
Our tactic will be to search for minimum energy solutions to Equation (5) using the following energy functional \[E[u]=-\int_{-L}^{L}u\left(u+\frac{\sigma^{2}}{2}\frac{\partial^{2}u}{\partial x^{2}}\right)\mathrm{d}x.
\tag{8}\] In particular, we are interested in examining critical points of \(E[u]\), so calculate \[\frac{\partial E}{\partial t} =-\int_{-L}^{L}\left[\frac{\partial u}{\partial t}\left(u+\frac{ \sigma^{2}}{2}\frac{\partial^{2}u}{\partial x^{2}}\right)+u\left(\frac{ \partial u}{\partial t}+\frac{\sigma^{2}}{2}\frac{\partial^{2}}{\partial x^{2 }}\frac{\partial u}{\partial t}\right)\right]\mathrm{d}x\] \[=-\int_{-L}^{L}2\frac{\partial u}{\partial t}\left(u+\frac{ \sigma^{2}}{2}\frac{\partial^{2}u}{\partial x^{2}}\right)\mathrm{d}x\] \[=2\gamma\int_{-L}^{L}\frac{\partial}{\partial x}\left[u\frac{ \partial}{\partial x}\left(u+\frac{\sigma^{2}}{2}\frac{\partial^{2}u}{\partial x ^{2}}\right)\right]\left(u+\frac{\sigma^{2}}{2}\frac{\partial^{2}u}{\partial x ^{2}}\right)\mathrm{d}x\] \[=-2\gamma\int_{-L}^{L}u\left[\frac{\partial}{\partial x}\left(u+ \frac{\sigma^{2}}{2}\frac{\partial^{2}u}{\partial x^{2}}\right)\right]^{2} \mathrm{d}x. \tag{9}\] Here, the second and fourth equalities use integration by parts, together with the periodic boundary conditions. If we assume that there exist non-negative solutions to Equation (5) then the final expression in Equation (9) is non-positive, so that \(E[u]\) is non-increasing. Whilst we do not currently have a proof of the non-negativity of \(u\), we note that that all our numerics suggest that non-negativity is preserved over time, that non-negativity results exist for Equation (1) for a variety of different kernels \(K\)(Carrillo et al, 2019; Giunta et al, 2022; Jungel et al, 2022), and so conjecture these might be transferable to the situation of Equation (5) with some effort. Equation (9) shows that critical points, \(u_{*}(x)\), of the energy functional occur when \[\int_{-L}^{L}u_{*}\left[\frac{\partial}{\partial x}\left(u_{*}+\frac{\sigma^{ 2}}{2}\frac{\partial^{2}u_{*}}{\partial x^{2}}\right)\right]^{2}\mathrm{d}x=0, \tag{10}\] which means that, on any connected subset of \([-L,L]\), either \(u_{*}(x)=0\) or \[u_{*}+\frac{\sigma^{2}}{2}\frac{\partial^{2}u_{*}}{\partial x^{2}}=C\] \[\implies u_{*}(x)=C+A\sin\left(\frac{x\sqrt{2}}{\sigma}\right)+B \cos\left(\frac{x\sqrt{2}}{\sigma}\right) \tag{11}\] for constants \(A\), \(B\), and \(C\). Numerics suggest that Equation (1) tends towards a solution containing one or many aggregations, interspersed by constant sections close or near to zero (e.g. Figure 1). We want to construct differentiable solutions that have this type of qualitative appearance, yet also correspond to critical points of \(E[u]\). These can be constructed piecewise from Equation (11). For example, as long as \(\pi\sigma<\sqrt{2}L\), a single-peaked solution can be given as follows \[u_{*}(x)=\begin{cases}\epsilon+c_{\epsilon}\left[1+\cos\left(\frac{x\sqrt{2}} {\sigma}\right)\right],&\text{if }x\in\left(-\frac{\pi\sigma}{\sqrt{2}},\frac{\pi \sigma}{\sqrt{2}}\right)\\ \epsilon,&\text{otherwise},\end{cases} \tag{12}\] where \(\epsilon\in\left[0,\frac{p}{2L}\right]\) and \(c_{\epsilon}\) are constants. One can also construct multi-peaked solutions in a similar way (which we will do later in the case of two peaks). Notice that such solutions are continuously differentiable, i.e. \(u_{*}\in C^{1}([-L,L])\), but not necessarily twice differentiable, so need to be understood in a weak sense (Evans, 2022). By Equation (7), a direct calculation gives \[c_{\epsilon}=\frac{p-2\epsilon L}{\sqrt{2}\pi\sigma} \tag{13}\] so that the only free parameter in Equation (12) is \(\epsilon\). 
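As a quick sanity check of (12)–(13), the single-peaked profile can be verified symbolically: on its support it satisfies the critical-point equation (11) with a constant right-hand side, and its total mass over \([-L,L]\) equals \(p\). The following sketch is illustrative only and not part of the analysis.

```python
import sympy as sp

# Symbolic check that the profile (12) with c_eps from (13) is a critical point
# and carries total mass p.
x, sigma, eps, p, L = sp.symbols('x sigma epsilon p L', positive=True)
c_eps = (p - 2*eps*L) / (sp.sqrt(2)*sp.pi*sigma)
u_star = eps + c_eps*(1 + sp.cos(sp.sqrt(2)*x/sigma))

# u + (sigma^2/2) u'' reduces to the constant eps + c_eps on the support.
print(sp.simplify(u_star + sigma**2/2*sp.diff(u_star, x, 2)))

# Mass: the bump over its support plus the background eps over the rest of [-L, L].
support = (x, -sp.pi*sigma/sp.sqrt(2), sp.pi*sigma/sp.sqrt(2))
mass = sp.integrate(u_star, support) + eps*(2*L - sp.sqrt(2)*sp.pi*sigma)
print(sp.simplify(mass - p))   # -> 0, confirming (13)
```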
Since the energy, \(E[u]\), is non-increasing over time, the question arises as to which value of \(\epsilon\) minimises \(E[u]\) across the set of all functions of the form in Equation (12). Our approach is to derive such minima, both in the example from Equation (12) and in various multi-peaked examples, conjecturing that such minima ought to approximate asymptotic solutions to the original problem in Equation (1). We then test these conjectures by investigating Equation (1) numerically. ## 3 Single peak Combining Equations (8) and (12) gives \[E[u_{*}]=-\int_{-L}^{L}u_{*}\left(u_{*}+\frac{\sigma^{2}}{2}\frac{\mathrm{d}^ {2}u_{*}}{\mathrm{d}x^{2}}\right)\mathrm{d}x. \tag{14}\] Now, for \(-\pi\sigma/\sqrt{2}<x<\pi\sigma/\sqrt{2}\), we have that \[u_{*}(x)=\epsilon+c_{\epsilon}\left[1+\cos\left(\frac{x\sqrt{2}}{ \sigma}\right)\right] \tag{15}\] which is a solution to \[u_{*}+\frac{\sigma^{2}}{2}\frac{\partial^{2}u_{*}}{\partial x^{ 2}}=\epsilon+c_{\epsilon}. \tag{16}\] Hence \[E[u_{*}]=-\int_{-\frac{\pi\sigma}{\sqrt{2}}}^{\frac{\pi\sigma}{ \sqrt{2}}}\left[\epsilon+c_{\epsilon}\left(1+\cos\left(\frac{\sqrt{2}x}{\sigma }\right)\right)\right](\epsilon+c_{\epsilon})\mathrm{d}x-2\int_{\frac{\pi \sigma}{\sqrt{2}}}^{L}\epsilon^{2}\mathrm{d}x =-\pi\sigma\sqrt{2}(c_{\epsilon}^{2}+2\epsilon c_{\epsilon})-2L \epsilon^{2}. \tag{17}\] Figure 2: When the initial condition is a single peak surrounded by an area of constant density \(\epsilon\), that area becomes sucked-up into the peak. Panels (a) and (d) show this for \(\epsilon=0.1\); (b) and (e) have \(\epsilon=0.2\); (c) and (f) have \(\epsilon=0.3\). In the latter case, a second peak emerges at \(x=\pm 1\) but decays by around \(t\approx 4\), to leave a single-peaked final state. Panels (a-c) show the time-evolution of the system. Panels (d-f) show the initial conditions (blue curves) and final states (black). In all panels, \(D=1\), \(\gamma=10\), and \(K\) is a top-hat kernel (Equation 19) with \(\delta=0.1\). Using Equation (13) and rearranging gives \[E[u_{*}]=\frac{2L}{\pi\sigma}(\pi\sigma-\sqrt{2}L)\epsilon^{2}+\frac{2p}{\pi \sigma}(\sqrt{2}L-\pi\sigma)\epsilon-\frac{p^{2}}{\sqrt{2}\pi\sigma}. \tag{18}\] Since \(\pi\sigma<\sqrt{2}L\) (see above Equation 12), this is a negative quadratic in \(\epsilon\). Furthermore, the maximum is where \(\epsilon=\frac{p}{2L}\). Now, \(\epsilon\in\left[0,\frac{p}{2L}\right]\), so \(E[u_{*}]\) is an increasing function of \(\epsilon\) on the interval \(\left[0,\frac{p}{2L}\right]\). Hence the minimum energy is where \(\epsilon=0\). This analysis suggests that if a numerical solution to either Equation (1) or (5) results in a single peak at long times, we might expect that peak to be of a similar form to Equation (12) with \(\epsilon=0\). We test this conjecture by solving Equation (1) numerically with initial conditions given by Equation (12) for various different values of \(\epsilon\in\left[0,\frac{p}{2L}\right]\), fixing \(p=L=1\). For these simulations, we set \(D=1\), \(\gamma=10\), and \[K(x)=\begin{cases}\frac{1}{2\delta}&\text{for $-\delta<x<\delta$}\\ 0&\text{otherwise,}\end{cases} \tag{19}\] so that \(\sigma=\delta/\sqrt{3}\). Numerics reveal that the system does indeed tend towards a single-peaked solution, where the width of the peak is approximately \(\sqrt{2}\pi\sigma\) and the solution is zero elsewhere (Figure 2). 
However, the asymptotic distribution is more flat-topped than the initial condition, owing to the fact that the initial condition arises from a moment closure approximation of \(K*u\). This approximation reduces the analytic solution to a single Fourier mode, whereas the numerical solution could have arbitrarily many Fourier modes. Finally note that, in the case \(\epsilon=0.3\) (Figure 2c,f), a second peak emerges around \(x=\pm 1\) (which are identified due to the periodic boundaries, recalling that \(L=1\)). However, this decays by about \(t=4\). We will return to this phenomenon of decaying secondary peaks in the next section. ## 4 Twin peaks In this section, we examine situations where there are two peaks. First, we look at situations where the peaks are the same height, then at cases where one peak is smaller than the other. ### Peaks of identical height Similar to the single-peak case, here we want to understand whether it is energetically favourable for a solution to have no mass outside the two peaks. More precisely, we examine the energy of the following solution to Equation (5), which is a critical point of \(E[u]\) \[u_{*}(x)=\begin{cases}\epsilon+c_{\epsilon}\left[1+\cos\left(\frac{(x+x_{0})\sqrt{2}}{\sigma}\right)\right],&\text{if }x\in\left(-x_{0}-\frac{\pi\sigma}{\sqrt{2}},-x_{0}+\frac{\pi\sigma}{\sqrt{2}}\right),\\ \epsilon+c_{\epsilon}\left[1+\cos\left(\frac{(x-x_{0})\sqrt{2}}{\sigma}\right)\right],&\text{if }x\in\left(x_{0}-\frac{\pi\sigma}{\sqrt{2}},x_{0}+\frac{\pi\sigma}{\sqrt{2}}\right),\\ \epsilon,&\text{otherwise.}\end{cases} \tag{20}\] Here, \(x_{0}\in\left(\frac{\pi\sigma}{\sqrt{2}},\frac{L}{2}\right)\) is half the (shortest) distance between the centres of the two peaks. As in the single-peak case, we can use Equation (7) to calculate \[c_{\epsilon}=\frac{p-2L\epsilon}{2\sqrt{2}\pi\sigma}. \tag{21}\] A direct calculation using the definition of \(E[u]\) from Equation (8) leads to \[E[u_{*}]=\frac{\sqrt{2}L}{\pi\sigma}(\sqrt{2}\pi\sigma-L)\epsilon^{2}+\frac{\sqrt{2}p}{\pi\sigma}(L-\sqrt{2}\pi\sigma)\epsilon-\frac{p^{2}}{2\sqrt{2}\pi\sigma}. \tag{22}\] Since \(\sqrt{2}\pi\sigma<L\), this is a negative quadratic in \(\epsilon\). The unique turning point is a maximum at \(\epsilon=\frac{p}{2L}\), so \(E[u_{*}]\) is an increasing function of \(\epsilon\) on the interval \(\left[0,\frac{p}{2L}\right]\). Hence the minimum energy in the two-peaked case is where \(\epsilon=0\), as with the one-peaked case. However, comparing the \(\epsilon=0\) situation with one peak (Equation 18), against that with two peaks (Equation 22), we see that the single peak is a lower-energy solution. This suggests that we might also see a merging of the two peaks, as well as the mass outside the peaks tending to zero. Indeed, in our numerical experiments, we saw a merging of peaks except in the special case where \(x_{0}=0.5\), so that the initial peaks are evenly-spaced. Figure 3a,b shows an example where \(x_{0}=0.5\) but \(\epsilon>0\). Here two peaks remain but the mass outside those two peaks is absorbed into the peaks over time. Figure 3c gives an example of peak merging for \(x_{0}<0.5\) whilst Figure 3d shows how the time it takes for peaks to merge increases dramatically as \(x_{0}\) increases towards \(x_{0}=0.5\). Here, the time to merge is defined as the time at which the centre of the two initial peaks drops below \(0.1\). Whilst this is a rather arbitrary definition, other definitions lead to similar trends.
Figure 3: Similar to the single peak case (Figure 2), when we start with two peaks of equal heights, surrounded by an area of constant density \(\epsilon\), that area becomes sucked-up into the peaks. Panels (a) and (b) show this for \(x_{0}=0.5\) and \(\epsilon=0.2\), where both peaks remain. For \(x_{0}<0.5\), peaks merge, shown in Panel (c) for \(x_{0}=0.25\). Panel (d) shows the time to merge as a function of \(x_{0}\). Parameters \(D\), \(\gamma\), and \(K\) are as in Figure 2.
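As an aside, the normalisation (21) and the energy (22) for the two-peak profile (20) can be reproduced symbolically, using the fact that on each bump \(u_{*}+\frac{\sigma^{2}}{2}u_{*}^{\prime\prime}=\epsilon+c_{\epsilon}\) while \(u_{*}=\epsilon\) elsewhere. The sketch below is illustrative only and mirrors the direct calculation above.

```python
import sympy as sp

# Symbolic check of (21) and (22) for the equal-height two-peak profile (20).
x, sigma, eps, p, L = sp.symbols('x sigma epsilon p L', positive=True)
c = sp.symbols('c_epsilon')                              # c_eps, fixed below by mass conservation
bump = eps + c*(1 + sp.cos(sp.sqrt(2)*x/sigma))          # one bump, centred for convenience
hw = sp.pi*sigma/sp.sqrt(2)                              # half-width of each bump support

# Mass constraint (7): two bumps plus the background eps on the rest of [-L, L].
mass = 2*sp.integrate(bump, (x, -hw, hw)) + eps*(2*L - 2*sp.sqrt(2)*sp.pi*sigma)
c_eps = sp.solve(sp.Eq(mass, p), c)[0]
print(sp.simplify(c_eps - (p - 2*L*eps)/(2*sp.sqrt(2)*sp.pi*sigma)))   # -> 0, i.e. (21)

# Energy (8): on each bump u + (sigma^2/2) u'' = eps + c, and u = eps elsewhere.
E = -(2*sp.integrate(bump*(eps + c), (x, -hw, hw)) + eps**2*(2*L - 2*sp.sqrt(2)*sp.pi*sigma))
E = sp.expand(E.subs(c, c_eps))
target = (sp.sqrt(2)*L/(sp.pi*sigma)*(sp.sqrt(2)*sp.pi*sigma - L)*eps**2
          + sp.sqrt(2)*p/(sp.pi*sigma)*(L - sp.sqrt(2)*sp.pi*sigma)*eps
          - p**2/(2*sp.sqrt(2)*sp.pi*sigma))
print(sp.simplify(E - target))                                          # -> 0, i.e. (22)
```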
Notice here that the energy analysis does not give direct insight into why merging does not happen for \(x_{0}=0.5\). Instead, we turn to physical intuition: the fact that peaks are evenly-spaced means that there is no 'preferred' direction for them to move in order to coalesce. Therefore they remain as two peaks. ### Peaks of differing heights In Section 4.1, we examined situations where there are two peaks with precisely equal height, finding that both peaks persisted indefinitely when they are evenly-spaced. However, we have already seen in Figure 1 that when peaks are of different heights, the smaller one can shrink over time, whereas the larger one grows. If this continues indefinitely, the smaller peak could decay completely and only one peak would remain, although it might take a long time for this to happen. Here, we seek to explain this phenomenon using our energy approach, ascertaining whether we should always expect a smaller peak to end up decaying to zero, or whether there are situations where two peaks remain. To this end, we examine steady state solutions with the following functional form \[u_{*}(x)=\begin{cases}c_{A}\left[1+\cos\left(\frac{(x+x_{0})\sqrt{2}}{\sigma}\right)\right],&\text{if }x\in\left(-x_{0}-\frac{\pi\sigma}{\sqrt{2}},-x_{0}+\frac{\pi\sigma}{\sqrt{2}}\right)\\ c_{B}\left[1+\cos\left(\frac{(x-x_{0})\sqrt{2}}{\sigma}\right)\right],&\text{if }x\in\left(x_{0}-\frac{\pi\sigma}{\sqrt{2}},x_{0}+\frac{\pi\sigma}{\sqrt{2}}\right)\\ 0,&\text{otherwise.}\end{cases} \tag{23}\] In this case, we can use Equation (7) to calculate \[c_{A}=\frac{p}{\sqrt{2}\pi\sigma}-c_{B}. \tag{24}\] We see immediately that, in order for \(c_{A}\) and \(c_{B}\) to be non-negative, we must have \(c_{A},c_{B}\in\left[0,\frac{p}{\sqrt{2}\pi\sigma}\right]\). A direct calculation using the definition of \(E[u]\) from Equation (8) leads to \[E[u_{*}]=-2\sqrt{2}\pi\sigma c_{B}^{2}+2pc_{B}-\frac{p^{2}}{\pi\sigma\sqrt{2}}. \tag{25}\] This is a negative quadratic in \(c_{B}\) with critical point at \(c_{B}=c_{A}=\frac{p}{2\sqrt{2}\pi\sigma}\). Therefore the energy minima occur either when \(c_{B}=0\), \(c_{A}=\frac{p}{\sqrt{2}\pi\sigma}\) or \(c_{A}=0\), \(c_{B}=\frac{p}{\sqrt{2}\pi\sigma}\). In other words, they occur when there is just one peak. Consequently, away from the critical point where \(c_{A}=c_{B}\), we would expect the smaller peak to slowly decay to zero over time, leaving just one peak. Indeed, this is what we see in numerical solutions of Equation (5) (e.g. Figure 4a,b). However, the time it takes for the smaller peak to decay can be very large (Figure 4c). This is exacerbated by decreasing the diffusion constant, \(D\) (Figure 4d). Here, numerics hint that, as \(D\to 0\), the decay time may tend to infinity, meaning that the second peak may persist indefinitely if there is no diffusion to allow the smaller peak to seep into the larger. ### Including population growth So far, we have studied a system where the population size remains constant.
This assumes that there are negligible births or deaths on the timescales that we are studying. Our focus has been on examining the difference between long transients and asymptotic solutions. However, in any real biological system, the effect of births and deaths will become non-negligible at some point in time. Therefore there is a limit to which transient solutions in these systems are biologically realistic: if the transients persist for too long, it will become necessary to account for the effect of births and deaths in any biologically meaningful model. We therefore examine the extent to which incorporating growth might enable a second peak to persist, by solving the following equation numerically \[\frac{\partial u}{\partial t}=D\frac{\partial^{2}u}{\partial x^{2}}-\gamma \frac{\partial}{\partial x}\left[u\frac{\partial}{\partial x}(K*u)\right]+ru \left(1-\frac{u}{K}\right), \tag{26}\] with initial conditions given by Equation (23). Depending upon the values of \(\gamma\), \(D\), \(K\), and \(c_{B}\), we found that there is a critical value \(r=r_{c}\) above which the second hump persists, and below which it decays. Figure 5a,b shows this in the case \(\gamma=10\), \(D=1\), \(K=5\), \(c_{B}=1\), whereby \(r_{c}\approx 0.23\). Figure 5c demonstrates how \(r_{c}\) depends upon the aggregation strength \(\gamma\): the greater the aggregation strength, the higher the required growth rate to enable a second peak to persist. Figure 4: Panel (a) shows a numerical solution of Equation (5) with initial condition given by Equation (23) with \(c_{B}=1.5\). Panel (b) gives snapshots of the initial and final distributions. Notice that the smaller peak has decayed almost completely by \(t\approx 15\). Panel (c) is constructed from numerical solutions of Equation (5) with initial condition given by Equation (23) but with \(c_{B}\) taking a variety of values, giving different start heights for the smaller peak (note that the start height is \(2c_{B}\)). Panels (c) and (d) plot the time it takes for the smaller peak to decay to a maximum height of less than \(0.1\). This increases exponentially as a function of the start height, explaining the appearance of long-transient multi-peaked solutions to Equation (5) (Panel c). Conversely, the decay time decreases as \(D\) is increased, showing how diffusion can speed up decay of the smaller peak (Panel d). In panels (a-c), \(D=1\). In panel (d), \(c_{B}=1\). In all panels, \(\gamma=10\) and \(K\) is a top-hat kernel (Equation 19) with \(\delta=0.1\). The value of \(c_{A}\) is determined by Equation (24). ## 5 Discussion Distinguishing between asymptotic solutions and long transients in numerical PDEs is a thorny issue, with perhaps no one-size-fits-all solution. Typically, researchers decide that a solution has reached an asymptotically-stable state when some measure (e.g. the change in \(L^{p}\) norm for some \(p\in[1,\infty]\)) is below a small threshold value (see e.g. Burger et al (2014); Giunta et al (2022); Schlichting and Seis (2022)). However, this means that if transient solutions are changing slower than this threshold value then they will be mistaken for asymptotically-stable solutions. Therefore it is valuable to have some analytic insight to guide the user as to whether the solution is (or is likely to be) a long transient or an asymptotically-stable solution. Here, we have provided such a deductive technique for the aggregation-diffusion equation in Equation (1). 
Rather than studying this equation directly, we instead study an approximation given in Equation (5). This approximate formulation is simple enough to solve for steady state solutions. It also possesses an energy functional, which allows us to search for local minimum energy solutions amongst the steady state solutions, an approach employed successfully in a previous multi-species study (Giunta et al, 2022). Our hypotheses are first that these local minimum energy solutions are stable solutions to Equation (5), whereas other steady states are not; and second that this categorisation carries over to the steady states of Equation (1). In the examples we tested, numerical experiments confirmed these hypotheses, with the sole exception of twin-peaked solutions where the peaks are of identical height and evenly-spaced. We therefore conclude that this method is a useful way for guiding users (i.e. those wanting to solve Equation (1) numerically) as to whether a solution they are observing is likely to be stable or not, whilst also recommending that they follow these calculations up with numerical experiments. Figure 5: **Effect of growth parameter.** Panels (a) and (b) show the initial condition (blue) and solution at time \(t=10\) (black) where the parameters are \(\gamma=10\), \(D=1\), and \(K=5\). In Panel (a), \(r=0.23\) whereas Panel (b) has \(r=0.24\). This demonstrates a transition in long-term patterns, whereby the smaller peak decays for \(r\leq 0.23\) but grows for \(r\geq 0.24\). Panel (c) shows how this transition point, \(r_{c}\), decreases exponentially as the strength of attraction, \(\gamma\), increases. Regarding the examples we tested, we found two main results: first, that stable aggregations are likely to resemble compactly-supported solutions, rather than being non-zero everywhere; second, that multi-peaked solutions will always be transient unless either \(D=0\) or the peaks are precisely the same height and evenly-spaced. In addition to these central messages, further numerical investigations revealed that these twin-peaked transient solutions can be arbitrarily long-lived if the peaks are arbitrarily close to being evenly-spaced (Figure 3) and the heights of these peaks are arbitrarily similar (Figure 4). That said, the consideration of very long transients in a model that operates on timescales where births and deaths are negligible is not terribly realistic, so we also examined the effect of adding a small amount of (logistic) growth. We found that arbitrarily small amounts of growth will not stop the smaller peak from decaying. However, there appears to be a critical growth rate, dependent upon the model parameters, below which the smaller peak will decay and above which it will grow (Figure 5). Therefore, if long transients appear when using Equation (1) to model biological aggregation, it is valuable to think about the effect of net reproductive rate in the system being modelled, and whether this is sufficient to arrest the decay of the smaller peak. Whilst our principal equation of interest is Equation (1), it is worth noting that our approximate analytic techniques can also be applied to various other equations. For example, the cell adhesion equations introduced in Armstrong et al (2006) have a very similar functional form that can usually be formally related to Equation (1) or modifications thereof (Painter et al, 2023). Chemotaxis equations are also somewhat similar to Equation (1), but here the non-local self-interaction is replaced with a diffusing chemical.
The organisms interact with the chemical rather than directly with one another. It turns out that the resulting models are equivalent to a type of aggregation-diffusion equation with advection that is nonlocal in both space and time (Shi et al, 2021). This contrasts with Equation (1), which is nonlocal in space alone. However, similar patterns are observed in these systems, including long-transient multi-peaked solutions similar to those studied here (Potapov and Hillen, 2005). We also note that the moment closure we apply to Equation (1) leads to a fourth-order PDE quite similar in nature to the Cahn-Hilliard equation (Novick-Cohen, 2008), for which there is a long history of studies on metastability (Bates and Xun, 1994; Reyna and Ward, 1995; Scholtes and Westdickenberg, 2018). Finally, it is worth noting that the particular version of the aggregation-diffusion equation that we study involves linear diffusion. However, there is also interest in the nonlinear case, particularly where the diffusion is quadratic, replacing \(u_{xx}\) with \((u^{2})_{xx}=2(uu_{x})_{x}\) in Equation (1). An advantage of this formulation is that Equation (1) has the form \(u_{t}=[u(Du-\gamma K*u)_{x}]_{x}\), making it amenable to analysis without taking the limit \(D/\gamma\to 0\). This fact has been exploited, for example, by Ellefsen (2021) and Carrillo et al (2018). However, here we have chosen to focus on linear diffusion, which is important to study as it often arises naturally from models of organism movement (Armstrong et al, 2006; Potts and Schlagel, 2020; Painter et al, 2023). Future work on the nonlinear case could reveal analytic insights about the effect of \(D\) vs. \(\gamma\) on asymptotic patterns, which we were only able to examine numerically in this study. ## Acknowledgments JRP acknowledges support of Engineering and Physical Sciences Research Council (EPSRC) grant EP/V002988/1. KJP is a member of INdAM-GNFM and acknowledges departmental funding through the 'MIUR-Dipartimento di Eccellenza' programme.
2309.03914
DevGPT: Studying Developer-ChatGPT Conversations
This paper introduces DevGPT, a dataset curated to explore how software developers interact with ChatGPT, a prominent large language model (LLM). The dataset encompasses 29,778 prompts and responses from ChatGPT, including 19,106 code snippets, and is linked to corresponding software development artifacts such as source code, commits, issues, pull requests, discussions, and Hacker News threads. This comprehensive dataset is derived from shared ChatGPT conversations collected from GitHub and Hacker News, providing a rich resource for understanding the dynamics of developer interactions with ChatGPT, the nature of their inquiries, and the impact of these interactions on their work. DevGPT enables the study of developer queries, the effectiveness of ChatGPT in code generation and problem solving, and the broader implications of AI-assisted programming. By providing this dataset, the paper paves the way for novel research avenues in software engineering, particularly in understanding and improving the use of LLMs like ChatGPT by developers.
Tao Xiao, Christoph Treude, Hideaki Hata, Kenichi Matsumoto
2023-08-31T06:55:40Z
http://arxiv.org/abs/2309.03914v2
# DevGPT: Studying Developer-ChatGPT Conversations ###### Abstract The emergence of large language models (LLMs) such as ChatGPT has disrupted the landscape of software development. Many studies are investigating the quality of responses generated by ChatGPT, the efficacy of various prompting techniques, and its comparative performance in programming contests, to name a few examples. Yet, we know very little about how ChatGPT is actually used by software developers. What questions do developers present to ChatGPT? What are the dynamics of these interactions? What is the backdrop against which these conversations are held, and how do the conversations feed back into the artifacts of their work? To close this gap, we introduce DevGPT, a curated dataset which encompasses 17,913 prompts and ChatGPT's responses including 11,751 code snippets, coupled with the corresponding software development artifacts--ranging from source code, commits, issues, pull requests, to discussions and Hacker News threads--to enable the analysis of the context and implications of these developer interactions with ChatGPT. To create DevGPT, we leveraged a feature introduced by OpenAI in late May 2023, which allows users to share their interactions with ChatGPT through dedicated links.1 We collected all such links shared on GitHub and Hacker News at six specific points in time: July 27, 2023, August 3, 2023, August 10, 2023, August 17, 2023, August 24, 2023, and August 31, 2023. If users chose to delete or deactivate their shared conversations in the intervening periods, we ensured data consistency by accessing the original shared link across all these snapshots. Footnote 1: [https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) Table I provides an overview of the snapshot 20230831. Comprising 2,891 shared ChatGPT links sourced from 2,237 GitHub or Hacker News references, the dataset contains a total of 17,913 prompts/answers. This includes 11,751 code snippets, with Python (1,735), JavaScript (1,530), and Bash (1,435) as the top three programming languages. 546 of these links are referenced across multiple sources, resulting in a unique count of 2,345 individual ChatGPT shared links within DevGPT. We will periodically expand the DevGPT dataset until its official release for the MSR mining challenge. Figure 1 shows an instance of a ChatGPT conversation from the dataset, together with the pull request it was related to and how the code was updated after the ChatGPT conversation. ## III Internal structure The dataset consists of a collection of JSON files collected from the six sources detailed in Table I. For each source, we provide distinct metadata in the JSON file to enable source-specific analysis. Apart from the source-specific metadata, every JSON contains a consistent attribute: a list of shared ChatGPT links. Each shared link includes the URL to the ChatGPT conversation, the associated HTTP response status codes, the access date of the URL, and the content within the HTML response. Additionally, each conversation contains a list of prompts/answers, inclusive of any code snippets. We provide details including the date of the conversation, the count of prompts/answers, their token information, and the model version involved in the chat. Attributes detailing where the conversation was referenced are also included--such as the referencing URL, the nature of the mention (e.g., a comment), the individual who mentioned it, and the context in which it was cited.
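As a rough illustration of how such a file might be traversed, the Python sketch below walks the nested structure just described. The file name and all field names are hypothetical placeholders inferred from the description above, not the dataset's actual schema; the repository linked immediately below documents the real attribute names.

```python
import json

# Hypothetical walk over one source file of DevGPT; field names are
# illustrative placeholders, not the dataset's actual schema.
with open("devgpt_snapshot_github_issues.json") as f:  # hypothetical file name
    source = json.load(f)

for link in source.get("shared_chatgpt_links", []):
    url, status = link.get("url"), link.get("http_status")
    accessed = link.get("access_date")
    for conversation in link.get("conversations", []):
        model = conversation.get("model_version")
        for turn in conversation.get("prompts_answers", []):
            prompt, answer = turn.get("prompt"), turn.get("answer")
            snippets = turn.get("code_snippets", [])
            # e.g., tally snippets per language or count tokens per turn here
```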
A comprehensive breakdown of the data structure is available at [https://github.com/NAIST-SE/DevGPT](https://github.com/NAIST-SE/DevGPT). Additionally, we provide a CSV file cataloging all shared ChatGPT links gathered from GitHub and Hacker News. ## IV How to access The DevGPT dataset is available for download on Zenodo, see Section VI. It is formatted in JSON, making it easily parsable with any standard JSON library. Additionally, we include the HTTP response, which can be analyzed using any HTML parser. The dataset also categorizes code snippets by
2309.03900
Learning Continuous Exposure Value Representations for Single-Image HDR Reconstruction
Deep learning is commonly used to reconstruct HDR images from LDR images. LDR stack-based methods are used for single-image HDR reconstruction, generating an HDR image from a deep learning-generated LDR stack. However, current methods generate the stack with predetermined exposure values (EVs), which may limit the quality of HDR reconstruction. To address this, we propose the continuous exposure value representation (CEVR), which uses an implicit function to generate LDR images with arbitrary EVs, including those unseen during training. Our approach generates a continuous stack with more images containing diverse EVs, significantly improving HDR reconstruction. We use a cycle training strategy to supervise the model in generating continuous EV LDR images without corresponding ground truths. Our CEVR model outperforms existing methods, as demonstrated by experimental results.
Su-Kai Chen, Hung-Lin Yen, Yu-Lun Liu, Min-Hung Chen, Hou-Ning Hu, Wen-Hsiao Peng, Yen-Yu Lin
2023-09-07T17:59:03Z
http://arxiv.org/abs/2309.03900v1
# Learning Continuous Exposure Value Representations for ###### Abstract Deep learning is commonly used to reconstruct HDR images from LDR images. LDR stack-based methods are used for single-image HDR reconstruction, generating an HDR image from a deep learning-generated LDR stack. However, current methods generate the stack with predetermined exposure values (EVs), which may limit the quality of HDR reconstruction. To address this, we propose the continuous exposure value representation (CEVR), which uses an implicit function to generate LDR images with arbitrary EVs, including those unseen during training. Our approach generates a continuous stack with more images containing diverse EVs, significantly improving HDR reconstruction. We use a cycle training strategy to supervise the model in generating continuous EV LDR images without corresponding ground truths. Our CEVR model outperforms existing methods, as demonstrated by experimental results. ## 1 Introduction High dynamic range (HDR) images can capture detailed appearances in regions with extreme lighting conditions, like sun and shadow. As conventional cameras only capture a limited dynamic range in real-world scenes, one approach to address this issue is to blend multiple LDR images with different exposures into a single HDR image. However, this method is limited to static scenes and may result in ghosting or blurring artifacts in dynamic scenes. Additionally, this method is not applicable when multiple images of the same scene are unavailable, such as an image on the internet. Another branch of methods, e.g., [12, 13, 23, 24, 26, 32, 43], takes a single LDR image as input to generate the HDR counterpart without suffering from misalignment, which is referred to as _single-image HDR reconstruction_. These approaches, e.g., [23, 24], are trained on particular datasets and build an LDR stack with a single LDR image to generate an HDR image using Debevec's method [11]. Using more LDR images with richer EVs improves HDR image quality, as demonstrated in Fig. 2 with different EV stack settings using Debevec's method on the _real_ LDR images of the HDREye dataset [36]. We compare tone-mapping operators RH [39] and KK [20] and use HDR-VDP-2 to evaluate HDR quality. However, accessible datasets have predefined and quantized EVs and may not cover optimal values for HDR reconstruction, causing information loss. Previous studies [8, 35, 46] show the effectiveness of implicit neural representations in modeling continuous relationships, motivating our research. Inspired by the observation in Fig. 2, we address the issue of predefined, quantized EVs by leveraging an implicit neural function to model relationships between image appearance and continuous EVs. It turns out our method can generate LDR images with arbitrary EVs even if the corresponding ground truth is unavailable. More importantly, LDR stacks enriched by images with these continuous and dense EVs can reconstruct HDR images of better quality. Specifically, the proposed approach, _continuous exposure value representation_ (CEVR), exploits an implicit neural function to generate LDR images with continuous exposure values, as shown in Fig. 1(a). Based on the flexibility of our CEVR model, we further develop two strategies, cycle training and continuous stack, to improve the quality of the LDR stack and the final HDR result. Cycle training utilizes CEVR to generate continuous EV images without relying on direct supervision from corresponding ground truths. 
We train the model using two continuous EVs that sum up to a predefined EV, with the proportion of these two continuous EVs randomly sampled. This strategy enforces the cycle consistency constraint, improving the model's ability to synthesize images with varying EVs and enhancing the quality of the LDR stack. We then use the enriched LDR stack containing seen and unseen EVs in training data for Debevec's method to produce more accurate inverse camera response functions (CRFs) and visually appealing tone-mapped images (Fig. 1(c)) compared to previous methods [23, 24] (Fig. 1(b)). Extensive evaluations demonstrate the effectiveness of our proposed continuous stack and cycle training on the VDS [23] and HDREye [36] datasets. Both quantitative and qualitative evaluations show that CEVR significantly outperforms existing methods. The following summarizes our three primary contributions: * We propose the CEVR approach, which can generate LDR images with continuous exposure values by modeling relationships between image appearances and exposure values. * With the flexibility of the CEVR model, we design a training strategy, cycle training, to explore continuous EV information and enhance the quality of the estimated LDR stack. * We propose the continuous stack, which consists of LDR images with continuous and dense exposure values and can improve the quality of final HDR images. ## 2 Related Work **Multi-image HDR reconstruction.** Modern cameras typically have limited dynamic ranges and cannot well capture all visible details of a scene with a wide range of illumination. To address this issue, one practical solution is to take multiple LDR images at different exposure levels and blend them into an HDR image. To this end, conventional methods such as [11, 30] are developed to estimate the CRF [15], upon which multiple LDR images are converted into the radiance field of the scene and transformed into an HDR image. Recent methods, e.g., [51, 17], use CNNs for directly fusing LDR images and reconstructing their HDR counterpart. However, both conventional and CNN-based methods require multiple differently exposed images of a static scene. Furthermore, for working on dynamic scenes, additional mechanisms are needed to alleviate the misalignment problem and avoid blurring or ghosting artifacts [18, 29, 47]. However, misalignment itself is a complicated issue to resolve. **Single-image HDR reconstruction.** This task aims to reconstruct the HDR image using just one LDR input, also called inverse tone mapping [3, 4, 5, 6], and can bypass the misalignment problem. Existing methods need to enlarge the dynamic range [1, 40, 44, 33] and restore the lost details. Generation techniques [55, 56, 55, 14, 25] for image synthesis are essential to methods of this category. Due to the Figure 2: **Motivation. We observe that an LDR stack with dense EVs improves HDR reconstruction even with the same exposure range (from -2EV to +2EV). A list “[-2,0,2]” means the stack contains three LDR images with -2, 0, and 2 EVs. An example of visual comparison is given.** superior mapping power of CNNs [16, 45] and GAN [14], deep neural networks are widely adopted for HDR reconstruction. One branch of research efforts [12, 32, 57, 43, 53] focuses on learning the mapping from the input LDR image to the HDR image. For example, Marnerides et al. [32] use CNNs to generate the HDR image based on an LDR input. To further improve the performance, Santos et al. 
[43] filter out the saturated regions in the LDR input and pretrain the deep network for an inpainting task. However, learning the LDR-to-HDR mapping is ill-posed since different LDR images can be mapped to the same HDR image [13]. Another branch of methods, e.g., [13, 19, 23, 24], aims to synthesize a stack of differently exposed LDR counterparts given an LDR image as input. Then, the conventional multi-image methods can be applied to the synthesized LDR stack to complete HDR reconstruction. For example, Endo et al. [13] use 3D convolutions, with exposure variation being one dimension, to learn the relationship between the LDR input and its counterparts with different exposure values. Their approach can generate the LDR stack directly. The LDR stack can be synthesized in a recursive manner [19, 23, 24]. For example, Lee et al. [24] use GAN to generate an image with relative exposure value change. The LDR stack is constructed by recursively using their model. Nevertheless, existing stack-based methods can only generate LDR images with predefined exposure values present in the training data. Inspired by the fact that the real-world captured images can have any EV value depending on different shutter settings instead of predefined ones, we present a method that can synthesize LDR images with continuous exposure values that are even unseen in the training data. Our method can generate an enriched and denser stack with which significantly better HDR results are achieved. **Implicit neural representations.** An implicit function space is a shared function space that contains the neural representation of different objects or images learned by a shared implicit function. It is commonly a latent space where a latent code is mapped to an image using an encoder-decoder structure [9, 34, 41, 42, 52]. This approach has been widely used in image super-resolution [22, 8], 3D shape, surface modeling [2, 9, 46], and view synthesis of 3D structures [35, 37]. Methods using implicit functions have shown that the learned latent space can be continuous [38, 35, 8, 10], allowing for exploring continuous relationships of exposure differences between images. More and more radiance field reconstruction research aims to generalize the trained model across scenes unseen in training data. The methods in [7, 48, 54] propose advanced model architectures and training strategies, making the learned implicit function space achieve the generalization on unseen views. Our method, similar to [7, 48, 54], can generalize well to all images without fine-tuning. ## 3 Approach In this section, we present our proposed Continuous Exposure Value Representation (CEVR) which generates LDR images with continuous EV. We provide an overview of our method in Section 3.1, followed by the architectural design in Section 3.2, which includes the implicit module and intensity transformation. Additionally, we propose two strategies, cycle training, and continuous stack, to further enhance the flexibility of CEVR, which are discussed in detail in Section 3.3 and 3.4, respectively. ### Overview Based on the observation in Fig. 2, we propose the CEVR model to generate an enriched and denser LDR stack for high-quality HDR reconstruction. Our model, shown in Fig. 3, utilizes a hierarchical U-Net structure (Fig. 3(a)) and incorporates the implicit neural representation into the design to predict LDR images with continuous EVs (Fig. 3(c)). 
To maintain accurate color and image structure while adjusting brightness, we introduce intensity transformation (Fig. 3(d)), which generates an adjustment map from each scale of the feature map. As the ground-truth LDR images with unseen exposure values are lacking, we train the model using unsupervised cycle training (Fig. 4), enabling our method to learn images with varying EVs and enhance the quality of the predicted LDR stack. ### Continuous Exposure Value Representation We show our CEVR model in Fig. 3(a). Our CEVR model employs the hierarchical U-Net structure, where the encoder is a pre-trained VGG-Net and the decoder is a cascade of decoder blocks (Fig. 3(b)), each of which comprises an implicit module that compiles the feature map with an input EV step \(s\). Each decoder block is followed by an intensity transformation to adjust the intensity of input image at that scale. Specifically, the CEVR model \(F\) takes an LDR image \(I\) and the specified EV step \(s\) as input and generates another LDR image \(\hat{I}_{s}\), a counterpart of \(I\) with the relative exposure value change \(s\), via \[\hat{I}_{s}=F(I,s). \tag{1}\] Take the widely used VDS dataset [23] as an example. An LDR image with EV0 in this dataset can serve as \(I\). An LDR stack can be generated by applying our CEVR \(F\) to \(I\) and every EV step in \(\{s\in\mathbb{Z}|-3\leq s\leq 3\}\). **Implicit module.** To synthesize an LDR image conditioned on a continuous EV step \(s\) even unseen in the training data, each decoder block in Fig. 3(b) has an associated, learnable implicit module \(f_{\theta}\), which is built by MLPs and shown in Fig. 3(c). The implicit module \(f_{\theta}\) parameterized by \(\theta\) takes the form: \[x_{s}(p,q)=f_{\theta}([x(p,q),s]), \tag{2}\] where \(x\in\mathbb{R}^{H\times W\times C}\) is the input feature map, \(x(p,q)\in\mathbb{R}^{C}\) is the feature vector at location \((p,q)\), and \([x(p,q),s]\in\mathbb{R}^{C+1}\) refers to the concatenation of \(x(p,q)\) and \(s\). The output feature map \(x_{s}\) is generated by repeatedly applying the implicit module \(f_{\theta}\) to all \(H\times W\) locations of \(x\) with the desired relative exposure value change \(s\). Intensity transformation.In Fig. 3(a), our CEVR leverages U-Net to perform multi-scale synthesis to generate a better LDR image with a different EV. The input and output images, \(I\) and \(\hat{I}_{s}\), cover the same scene under different exposures. Thus, their content should not undergo significant changes. To preserve the image structure and allow the model to focus on the brightness changes for detail reconstruction at each scale, the proposed intensity transformation module in Fig. 3(d) takes the resized feature map from the decoder block as input and produces the \(\alpha\) and \(\beta\) maps. As shown in Fig. 3(a), the \(\alpha\) and \(\beta\) maps carry out affine brightness transformation at each scale. The output \(\hat{I}_{s}\) is synthesized through multi-scale transformations. Reconstruction Loss.Suppose that we are given a training set of \(N\) images \(\{I_{n}\}_{n=1}^{N}\) with a set of \(M\) EV steps \(\{s_{m}\}_{m=1}^{M}\). For each training image \(I_{n}\), its ground-truth LDR stack \(\{I_{n}^{*}(s_{m})\}_{m=1}^{M}\) is provided, where \(I_{n}^{*}(s_{m})\) is the counterpart of \(I_{n}\) with the relative exposure value change \(s_{m}\). We train the CEVR model \(F\) in Eq. 
(1) by minimizing the \(L_{1}\) reconstruction loss: \[\mathcal{L}_{\text{rec}}=\sum_{n=1}^{N}\sum_{m=1}^{M}\|I_{n}^{*}(s_{m})-F(I_{n },s_{m})\|_{1}. \tag{3}\] ### Cycle Training Strategy Existing training datasets, such as the VDS dataset [23], provide the ground truth for a sparse set of predefined EV steps, e.g., \([-3,-2,...,3]\). Inspired by the success of cycle consistency training in video frame interpolation [27] and to make our CEVR work well for synthesizing images with arbitrary EVs, we introduce the cycle training strategy to train the model with _continuous_ EV steps without the corresponding ground-truth images. For each training image \(I_{n}\) and each EV step \(s_{m}\) covered by the training set, the cycle training strategy shown in Fig. 4, derives the CEVR model with two branches. The first branch takes \(I_{n}\) and \(s_{m}\) as input. Since the ground-truth image \(I_{n}^{*}(s_{m})\) is available, the reconstruction loss \(\mathcal{L}_{\text{rec}}\) is used to supervise this branch. The second branch implements a two-step process. We randomly sample a real value \(a\in[0,1]\) for each image \(I_{n}\) at each training iteration, and decompose the EV step \(s_{m}\) into two sub-steps: \(u=as_{m}\) and \(v=(1-a)s_{m}\), with \(u+v=s_{m}\). Our CEVR model is applied _twice_ with the two EV sub-steps, respectively. Although the ground truth for Figure 3: **Proposed network architecture.** (a) The proposed CEVR model takes an image \(I\) and an EV step \(s\) as input, and produces an LDR image \(\hat{I}_{s}\) with a relative exposure value change \(s\). It adopts the U-Net structure, where the encoder is a pre-trained VGG-Net, and the decoder is a cascade of decoder blocks. (b) Each decoder block comprises an implicit module to enable continuous EV representation learning, as shown in (c). (d) Following each decoder block, an intensity transformation module is learned to produce the \(\alpha\) and \(\beta\) maps for image brightness transformation. the randomly sampled sub-step \(u\) is unavailable, we expect that the output of taking the two sub-steps should be similar to the ground truth \(I_{n}^{*}(s_{m})\) because of \(u+v=s_{m}\). Thereby, we enforce the proposed cycle loss: \[\mathcal{L}_{\text{cyc}}=\sum_{n=1}^{N}\sum_{m=1}^{M}\|I_{n}^{*}(s_{m})-F(F(I_{n },u),v)\|_{1}. \tag{4}\] The sub-step \(u\) in Eq. (4) is randomly sampled for each training image with each covered EV step at each training iteration. It is used to simulate arbitrary EV step input to our CEVR model. To compensate for the lack of the ground truth of the intermediate output \(F(I_{n},u)\), the cycle loss \(\mathcal{L}_{\text{cyc}}\) in Eq. (4) offers indirect supervision, ensuring the continuity of our CEVR model with continuous EV steps. The objective function used to derive the proposed CEVR is defined by \[\mathcal{L}=\mathcal{L}_{\text{rec}}+\lambda\mathcal{L}_{\text{cyc}}, \tag{5}\] where we empirically set \(\lambda\) to 0.1 in our experiments. ### Continuous Stack In the inference phase, with the implicit module and the cycle training strategy, our CEVR model can generate high-quality LDR images with continuous EVs. The LDR stack containing more LDR images with various EVs can help Debevec's method [11] estimate a more accurate inverse CRF, as shown in Fig. 1(c), and improve the HDR image reconstruction, as shown in Fig. 2. Inspired by this observation, we proposed the continuous stack, which predicts additional LDR images with continuous EVs from our CEVR model. 
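To illustrate the continuous-stack idea, the sketch below enumerates half-EV steps and queries a trained model once per step; `cevr_model` and the fusion call are placeholders rather than the authors' released code, and the 0.5 EV step size mirrors the 13-image stack used in the experiments.

```python
import numpy as np

def build_continuous_stack(cevr_model, image_ev0, ev_max=3.0, step=0.5):
    """Synthesize an enriched LDR stack at dense EV offsets via Eq. (1), I_s = F(I, s)."""
    ev_steps = np.arange(-ev_max, ev_max + step, step)  # 13 EVs for step = 0.5
    return [(float(s), cevr_model(image_ev0, float(s))) for s in ev_steps]

# The enriched stack is then handed to Debevec's method to estimate the inverse
# CRF and fuse an HDR radiance map, e.g. (placeholder call):
# hdr = debevec_fuse([img for _, img in stack], [2.0 ** s for s, _ in stack])
```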
The predicted continuous and dense LDR stack further benefits the stack fusion process and enhances the final HDR quality, as shown in Fig. 1(c). ## 4 Experiments ### Experimental Setup **Datasets.** We train our model using the training set of the VDS dataset [23], which contains image stacks of 48 scenes. The testing sets of the VDS and HDREye datasets [36], which contain 48 and 42 scenes respectively, serve as the testing sets for evaluations. The auto-bracketing feature of the camera produces seven photos with predefined exposure values for each scene in the VDS dataset (EV-3 to EV+3). We follow the common evaluation protocol [23, 24] and select the image with the zero exposure value, which is expected to have the most evenly distributed histogram, as the input to the model. **Training details.** For training, we consider each training scene \(n\) from the VDS dataset [23] and take the corresponding EV0 LDR image as input \(I_{n}\). We also take each EV step \(s_{m}\in\{-3,-2,-1,0,1,2,3\}\) into account. We feed \(I_{n}\) and \(s_{m}\) to the CEVR model and estimate the LDR image with EV \(s_{m}\) for model training. Since the inverse CRF is usually asymmetrical, we train two different models with the same architecture to handle the increasing and decreasing exposure changes, respectively. For upsampling, we use bicubic upsampling, followed by a \(3\times 3\) 2D convolution with stride 1 and padding 1. The model is trained for 1,250 epochs with Adam optimizer [21] and cosine annealing warmup with restarts as the scheduler. We use random rotation and flip to augment the data. **Evaluation metrics.** We employ PSNR, SSIM [49], and MS-SSIM [50] as the metrics for evaluating the qualities of \begin{table} \begin{tabular}{c l c c c c c c} \hline \hline \multirow{2}{*}{EV} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{PSNR} & \multicolumn{2}{c}{SSIM} & \multicolumn{2}{c}{MS-SSIM} \\ \cline{3-8} & & m & \(\sigma\) & m & \(\sigma\) & m & \(\sigma\) \\ \hline \multirow{4}{*}{+3} & Deep chain HDRI [23] & 28.18 & 2.77 & 0.953 & 0.065 & 0.983 & 0.015 \\ & Deep recursive HDRI [24] & 28.97 & 2.92 & 0.944 & 0.044 & 0.981 & 0.014 \\ & CEVR (Ours) & **34.34** & 3.46 & **0.973** & 0.021 & **0.989** & 0.007 \\ \hline \multirow{4}{*}{+2} & Deep chain HDRI [23] & 29.65 & 3.06 & 0.959 & 0.065 & 0.986 & 0.016 \\ & Deep recursive HDRI [24] & 29.43 & 2.85 & 0.952 & 0.939 & 0.986 & 0.010 \\ & CEVR (Ours) & **35.30** & 3.08 & **0.981** & 0.016 & **0.993** & 0.004 \\ \hline \multirow{4}{*}{+1} & Deep chain HDRI [23] & 31.90 & 3.43 & 0.969 & 0.039 & 0.992 & 0.008 \\ & CEVR (Ours) & **37.64** & 2.96 & **0.989** & 0.009 & **0.996** & 0.004 \\ \hline \multirow{4}{*}{-1} & Deep chain HDRI [23] & 29.01 & 3.83 & 0.935 & 0.056 & 0.980 & 0.017 \\ & Deep recursive HDRI [24] & 31.22 & 3.69 & 0.951 & 0.031 & 0.986 & 0.090 \\ & CEVR (Ours) & **34.62** & 3.47 & **0.980** & 0.011 & **0.992** & 0.005 \\ \hline \multirow{4}{*}{-2} & Deep chain HDRI [23] & 26.72 & 4.54 & 0.952 & 0.029 & 0.974 & 0.021 \\ & Deep recursive HDRI [24] & 31.08 & 3.07 & 0.948 & 0.041 & 0.986 & 0.014 \\ & CEVR (Ours) & **33.99** & 4.34 & **0.978** & 0.017 & **0.988** & 0.010 \\ \hline \multirow{4}{*}{-3} & Deep chain HDRI [23] & 24.33 & 4.57 & 0.919 & 0.036 & 0.948 & 0.037 \\ & Deep recursive HDRI [24] & 29.15 & 4.75 & 0.910 & 0.061 & 0.966 & 0.025 \\ \cline{1-1} & CEVR (Ours) & **30.58** & 5.32 & **0.954** & 0.046 & **0.972** & 0.032 \\ \hline \hline \end{tabular} \end{table} Table 1: **Quantitative comparison of the predicted LDR stacks on the VDS 
dataset [23].** CEVR outperforms existing approaches in estimating LDR stacks for all EVs. With cycle training, our method can generate high-quality LDR images even with large EV changes. Figure 4: **Cycle training. We derive the CEVR model in an unsupervised cycle training strategy without using the corresponding ground truth. In this way, our model exploits the cycle consistency constraint and learns more continuous information by varying the EV sub-step \(u\).** the predicted LDR stacks and HDR tone-mapped images. We also utilize HDR-VDP-2 [31], a metric based on the human visual system, to evaluate the quality of the reconstructed HDR images. We follow the setting of [23, 24], which sets a 24-inch monitor with a viewing distance of 0.5 meters, a peak contrast of 0.0025, and a gamma of 2.2 for measuring the HDR-VDP-2 metric. **HDR reconstruction and tone-mapping operators.** Our approach uses Debevec's approach [11] to reconstruct HDR images with the predicted LDR stack and utilizes Reinhard's method [39] or Kim and Kautz's method [20] to tone-map the HDR images. ### Comparison of LDR Stacks Prediction **Quantitative comparisons.** The quantitative comparisons of the estimated LDR exposure stacks from the VDS dataset are shown in Tab. 1. The table shows that the proposed method performs favorably against existing methods at every exposure value. The output LDR image quality decreases as the exposure value gap increases because more extensive over- and under-exposed regions reconstruction are required, which makes the task more difficult. However, with our cycle training, our model can still generate high-quality LDR images in the cases of EV+3 and EV-3 by incorporating continuous and dense EV information into the training process. The continuous EV generation during training helps the model learn how to explicitly infer LDR images with arbitrary exposure values. **Qualitative comparisons.** With the cycle training, our method can generate a high-quality LDR image even with large EV changes. A detailed qualitative comparison is presented in Fig. 5. The first row of this figure shows that Deep recursive HDRI [24] often gets less accurate color tone in estimating the LDR images, which may further degrade the quality of the HDR images fused by the LDR stack. On the contrary, our method can better estimate the LDR images in all EVs with more accurate color tones. In addition, the second row demonstrates that our method can estimate the LDR images without severe artifacts. More visual comparisons can be found in the supplementary material. ### Comparison of HDR Image Prediction We compare our method with five recent single-image HDR reconstruction methods, including Santos et al. [43], DrTMO [13], Deep chain HDRI [23], Deep recursive HDRI [24], and Liu et al. [26]. For Santos et al. [43], Deep recursive HDRI [24] and Liu et al. [26], we use their official implementations along with the released pre-trained model weight to generate all the quantitative and qualitative results on the VDS [23] and HDREye [36] datasets. For DrTMO [13] and Deep chain HDRI [23], we compare our results to the numbers reported in their papers. For HDR image prediction, our approach adopts the continuous stack strategy where the EV steps are enriched from \(\{-3,-2,...,+3\}\) to \(\{-3,-2.5,...,+3\}\), and the images with the extra EV steps are also synthesized by using the proposed CEVR model. **Quantitative evaluations.** As shown in Tab. 
2, our method performs favorably against the competing methods on the VDS dataset [23]. The HDREye dataset [36] serves as a blind test bed, our HDR prediction still achieves better qualities using the same tone-mapping operators. Our proposed cycle training makes the model explicitly learn the continuity as EV steps change, leading to better generalization on the unseen dataset, HDREye. With the continuous stack, more LDR images with various EVs are involved in the fusion process, which helps Debevec's approach [11] estimate a more accurate inverse CRF and generate HDR images with better qualities. **Qualitative comparisons.** To generate the tone-mapped images for visual comparisons, we first reconstruct HDR images by fusing the LDR stacks with Debevec's approach [11]. Then we use Reinhard's method [39] to gen \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{PSNR} & \multicolumn{2}{c}{PSNR} & \multirow{2}{*}{HDR-VDP-2} \\ \cline{3-6} & & RH’s & TMO & & & \multicolumn{1}{c}{KK’s TMO} & \\ \cline{3-6} & & m & \(\sigma\) & m & \(\sigma\) & m & \(\sigma\) \\ \hline \multirow{6}{*}{VDS} & DrTMO [13] & 25.49 & 4.28 & 21.36 & 4.50 & 54.33 & 6.27 \\ & Deep chain HDRI [23] & 30.86 & 3.36 & 24.54 & 4.01 & 56.36 & 4.41 \\ & Deep recursive HDRI [24] & 32.99 & 2.81 & 28.02 & 3.50 & 57.15 & 4.35 \\ & Santos et al. [43] & 22.56 & 2.68 & 18.23 & 3.53 & 53.51 & 4.76 \\ & Lin et al. [26] & 30.89 & 3.27 & 28.00 & 4.15 & 56.97 & 6.15 \\ & CEVR (Ours) & **34.67** & 3.50 & 38.04 & 4.45 & **59.00** & 5.78 \\ \hline \multirow{6}{*}{HDREye} & DrTMO [13] & 23.68 & 3.27 & 19.97 & 4.11 & 46.67 & 5.81 \\ & Deep chain HDRI [23] & 25.77 & 2.44 & 22.62 & 3.39 & 49.80 & 5.97 \\ \cline{1-1} & Deep recursive HDRI [24] & 26.28 & 2.70 & 24.26 & 2.90 & 52.63 & 4.84 \\ \cline{1-1} & Santos et al. [43] & 19.89 & 2.46 & 19.00 & 3.06 & 49.97 & 5.44 \\ \cline{1-1} & Liu et al. [26] & 26.25 & 3.08 & 24.67 & 3.54 & 50.33 & 6.67 \\ \cline{1-1} & CEVR (Ours) & **26.54** & 3.10 & **24.81** & 2.91 & **53.15** & 4.91 \\ \hline \hline \end{tabular} \end{table} Table 2: **Quantitative comparison of HDR and TMO images.** Two tone-mapping approaches, Reinhard’s approach [39], and Kim and Kautz’s approach [20], are denoted as RH’s and KK’s TMO. Figure 5: **Qualitative comparison of LDR image predictions in the VDS dataset [23].** Our approach recovers more details compared to Deep recursive HDRI [24] in the EV-3 example. In the EV+3 example, our approach generates LDR images with a color tone similar to the ground truth. erate HDR TMO images for all competing methods. As Liu et al. [26] generate an HDR image directly, we apply Reinhard's tone-mapping operator [39] for tone-mapping the HDR images. Note that Liu et al. [26] do not train their method on the VDS dataset; hence we only compare the qualitative results with their method on the HDREye dataset, as shown in Fig. 6. In Fig. 6, it can be observed that Deep recursive HDRI [24] often suffers from inaccurate color tone: the color of the building is inaccurate in the tone-mapped images, and artifacts are present in severely exposed regions. Liu et al. [26] directly estimate and reverse the whole camera pipeline to generate HDR images. It sometimes struggles with generating detailed textures and produces artifacts in severely exposed regions, e.g., the over-exposed window frame and sky in the daylight. 
With the intensity transformation, our model can preserve the image structure and generate the tone-mapped images with similar tones to the ground truth. It also produces fewer artifacts. More visual comparisons can be found in the supplementary material. ### Ablation Studies In the following, we validate three design contributions to improving the quality of the LDR stack and HDR images. **Intensity transformation.** Learning to adjust image brightness while maintaining color tone accuracy and image structures can be challenging. The CEVR model, which directly outputs results from the U-net structure without using intensity transformation, can struggle to adjust brightness or produce inaccurate LDR images with artifacts, as shown in Fig. 7. The intensity transform \begin{table} \begin{tabular}{l c c c c c} \hline \hline Intensity transformation & - & \multicolumn{2}{c}{\(\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{\check{ \check{\ the model's capacity, producing LDR images with more accurate brightness, color tones, and structures. This design improves the quality of the LDR stack and HDR results, as demonstrated in Tab. 3 and Tab. 4. **Cycle training.** With cycle training, the model can be supervised on continuous EV steps without using the corresponding ground truth. It can learn how to change the exposure value continuously, which improves the quality and reduces the artifacts of the estimated LDR images, which also leads to better HDR quality, as shown in Fig. 8,Tab. 3 and Tab. 4. To further demonstrate the effectiveness of cycle training, we conducted the _hold-out_ experiment, excluding EV-1 and +1 LDR images during training. Then, we used the model to estimate EV-1 and +1 LDR images for each scene and evaluated the PSNR. The table shows that our model can generate better LDR images with unseen EVs when the cycle training strategy is adopted. **Continuous stack.** Debevec's method [11] uses the LDR stack to recover response curves and reconstruct HDR images. A denser and continuous EV LDR stack helps produce an accurate inverse CRF, enhancing HDR quality. We compare two stack settings: "predefined stack" and "continuous stack." The CEVR model estimates seven LDR images (EVs: -3, -2, -1,..., +3) for the predefined stack, which is the setting used in existing methods, while the continuous stack has 13 LDR images with various EVs (-3, -2.5, -2,..., +3). Tab. 4 and Fig. 1(b)(c) show that the tone-mapped image from the continuous stack has superior quality and is more visually pleasing. We can further validate the effectiveness of the continuous stack by visualizing the CRF of both the predefined stack and the continuous stack. As shown in Fig. 14, the denser EV setting can help generate a smoother CRF compared to the predefined EV setting. Additional analysis of the inverse CRF can be found in the supplementary material. ### Failure cases Although the proposed method performs favorably against other existing methods in quantitative and qualitative results, we do not explicitly design the module to address the over-exposed issue, which may make the CEVR model fail to generate reasonable content in large saturated regions, as shown in Fig. 10. It is a promising direction to take the emerging generative model designs, e.g. [28, 59], into account to address this issue. ## 5 Conclusion We introduce CEVR, a learning-based method that produces LDR EV stacks from continuous EV input. 
Our approach combines U-Net with implicit functions, and allows the network to generate LDR images with continuous EVs. We propose two strategies, including (1) cycle training for learning on continuous EV changes unseen in the training dataset and (2) continuous stack for improving LDR stack \begin{table} \begin{tabular}{l|c c c c} \hline \hline Intensity transformation & - & ✓ & ✓ & ✓ \\ Continuous stack & - & - & ✓ & ✓ \\ Cycle training & - & - & - & ✓ \\ \hline PSNR & 32.52 & 34.20 & 34.47 & **34.67** \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation studies on the reconstructed HDR images on the VDS dataset [23].** Intensity transformation and cycle training enhance the quality of LDR stacks, and continuous stack benefits the stack fusion process. Figure 8: **Ablation on cycle training for LDR and HDR images generation.** With the cycle training, the model captures the finer granularity of “EV changing” and generates more accurate and visually pleasing LDR and HDR images. Figure 7: **Ablation on intensity transformation.** With intensity transformation, CEVR can adjust LDR image intensity while preserving the image structure and color tone. \begin{table} \begin{tabular}{l|c c} \hline \hline Cycle training & \(\times\) & ✓ \\ \hline EV-1 & 27.75 & **33.77** \\ \hline EV+1 & 33.37 & **36.96** \\ \hline \hline \end{tabular} \end{table} Table 5: **Hold-out experiment.** Hold-out experiment excludes EV-1 and +1 LDR images during training. With cycle training, the model generate better LDR images with unseen EVs (EV-1, +1). fusion using additional images with dense and continuous EVs. Our approach with the two strategies greatly enhances LDR stack quality and improves HDR image results, as demonstrated through extensive quantitative and qualitative evaluations on two benchmark datasets. Acknowledgments.This work was supported in part by National Science and Technology Council (NSTC) under grants 111-2628-E-A49-025-MY3, 112-2221-E-A49-090-MY3, 111-2634-F-002-023, 111-2634-F-006-012, 110-2221-E-A49-065-MY3 and 111-2634-F-A49-010. This work was funded in part by MediaTek.
2309.09593
Mutual Information-calibrated Conformal Feature Fusion for Uncertainty-Aware Multimodal 3D Object Detection at the Edge
In the expanding landscape of AI-enabled robotics, robust quantification of predictive uncertainties is of great importance. Three-dimensional (3D) object detection, a critical robotics operation, has seen significant advancements; however, the majority of current works focus only on accuracy and ignore uncertainty quantification. Addressing this gap, our novel study integrates the principles of conformal inference (CI) with information theoretic measures to perform lightweight, Monte Carlo-free uncertainty estimation within a multimodal framework. Through a multivariate Gaussian product of the latent variables in a Variational Autoencoder (VAE), features from RGB camera and LiDAR sensor data are fused to improve the prediction accuracy. Normalized mutual information (NMI) is leveraged as a modulator for calibrating uncertainty bounds derived from CI based on a weighted loss function. Our simulation results show an inverse correlation between inherent predictive uncertainty and NMI throughout the model's training. The framework demonstrates comparable or better performance in KITTI 3D object detection benchmarks to similar methods that are not uncertainty-aware, making it suitable for real-time edge robotics.
Alex C. Stutts, Danilo Erricolo, Sathya Ravi, Theja Tulabandhula, Amit Ranjan Trivedi
2023-09-18T09:02:44Z
http://arxiv.org/abs/2309.09593v1
Mutual Information-calibrated Conformal Feature Fusion for Uncertainty-Aware Multimodal 3D Object Detection at the Edge ###### Abstract In the expanding landscape of AI-enabled robotics, robust quantification of predictive uncertainties is of great importance. Three-dimensional (3D) object detection, a critical robotics operation, has seen significant advancements; however, the majority of current works focus only on accuracy and ignore uncertainty quantification. Addressing this gap, our novel study integrates the principles of conformal inference (CI) with information theoretic measures to perform lightweight, Monte Carlo-free uncertainty estimation within a multimodal framework. Through a multivariate Gaussian product of the latent variables in a Variational Autoencoder (VAE), features from RGB camera and LiDAR sensor data are fused to improve the prediction accuracy. Normalized mutual information (NMI) is leveraged as a modulator for calibrating uncertainty bounds derived from CI based on a weighted loss function. Our simulation results show an inverse correlation between inherent predictive uncertainty and NMI throughout the model's training. The framework demonstrates comparable or better performance in KITTI 3D object detection benchmarks to similar methods that are not uncertainty-aware, making it suitable for real-time edge robotics. ## I Introduction The rapid development of artificial intelligence (AI) capabilities, as demonstrated with image recognition and large language models (LLMs), has enabled its adoption across various domains. However, concerns about its reliability persist for safety-critical applications, including robotics. Given that the accuracy of data-driven models cannot be assured, it becomes essential not only to question _what if the model is wrong?_, but also to determine _how wrong_ it might be by assessing its predictive uncertainties. Quantifying uncertainty in deep learning has, therefore, gained traction. Notably, data-driven models can suffer from two main types of uncertainties: _epistemic_ and _aleatoric_[1]. Epistemic uncertainty arises from inherent data variance and can often be mitigated with additional training data. Conversely, aleatoric uncertainty stems from random data distortions, such as blurriness, occlusions, and overexposure in images, and cannot be resolved merely by augmenting the training data. However, much of the current work has neglected considering platforms with time, cost, area, computing, and power constraints. Consequently, those existing uncertainty estimation methods, often reliant on distribution-based approximations, struggle under edge deployment due to their need for iterative sampling. Therefore, uncovering true, statistically confident uncertainties in point (mean) predictions that are intuitive and visualizable under considerable resource constraints remains challenging for critical edge robotics. To tackle these challenges, we explore conformal inference (CI) [2, 3, 4]. Rooted in information theory and probabilistic prediction, CI has emerged as a prominent uncertainty quantification method that is simple, generalizable, and scalable [5]. Unlike conventional statistical inference, which depends on intimate knowledge of the data distribution for uncertainty estimation and is vulnerable to modeling inaccuracies, CI produces reliable, uncertainty-aware prediction intervals without distributional assumptions given a finite set of training data. 
CI assesses the conformity of each incoming data point to the existing dataset and formulates uncertainty intervals based on a preset coverage rate. Importantly, CI is compatible with any core model with an inherent uncertainty notion, yielding both model-agnostic and statistically sound estimations. Despite these advantages, a key limitation of CI is its tendency to provide overly cautious uncertainty estimates that may prevent a prediction model from making meaningful decisions. For example, overly conservative uncertainty estimates in autonomous navigation can lead to suboptimal path planning, such as taking longer routes than necessary. While multimodal sensors have become prevalent in various robotics tasks to enhance the robot's perception and decision-making capabilities, they present a unique opportunity for CI to optimally calibrate the predicted uncertainty estimates by exploiting mutual information (MI) of multimodal sensor data streams. MI is an information-theoretic metric that measures the dependence between the marginal distributions of two random variables through their joint distribution. In this case, it can measure how much one sensor modality explains the output and prediction from another while operating in the same environment. Thus, leveraging MI to calibrate and tighten CI's predictive uncertainty bounds while maintaining the guaranteed coverage rate is attractive. Fig. 1: **Uncertainty-Aware Multimodal Inference at the Edge:** In this work, we present a generalizable, multimodal conformal inference framework for lightweight uncertainty awareness and apply it to 3D object detection. The proposed methodology is deeply rooted in information and statistical theory, allowing the framework to take full advantage of the benefits of conformal prediction in quantifying uncertainty while under considerable resource constraints. Towards this goal, we consider 3D object detection a driving application and present a systematic framework for including MI in optimizing CI-based uncertainty bounds. 3D object detection is essential for many autonomous systems to provide a semantic understanding of their environment through identifying, localizing, and categorizing various objects. However, various propositions in our work are also generalizable to other autonomy tasks. Our work makes the following key contributions: * We introduce a 3D object detection framework that integrates uncertainty-aware projections obtained through conformal prediction. Evaluated on the demanding 3D KITTI vision benchmark suite [6], this framework surpasses state-of-the-art models in inference runtime while achieving a competitive accuracy. Given these attributes, our approach is particularly suited for edge robotics platforms with limited time and computational resources. * We introduce a multitask loss function that can train a model to simultaneously provide point predictions and adaptive uncertainty confidence bounds that each take the form of 3D bounding boxes. The uncertainty boxes are demonstrated to enhance average precision and are combined to be more visually intuitive. Furthermore, we weight the loss function with an uncertainty-based distance metric, averaged over every dimension of each output, to influence the model to prioritize training samples that introduce more uncertainty.
* Integrating conformal inference with information-theoretic measures, specifically MI, we discuss a method to fuse data from multimodal sensors using a multivariate Gaussian product of latent variables in a variational autoencoder (VAE). The proposed VAE-based multimodal data fusion captures salient features of each modality and enables us to compute normalized mutual information (NMI). This, in turn, allows us to optimally calibrate the uncertainty bounds in a sample-adaptive manner. In Sec. II, we discuss the current art of 3D object detection. In Sec. III, we present the proposed framework of uncertainty-aware multimodal 3D object detection. Sec. IV presents the simulation results and Sec. V concludes. ## II Current Art on 3D Object Detection In this study, we focus on 3D object detection as a case study to demonstrate the efficacy of MI-based conformal feature fusion in achieving uncertainty awareness in multimodal sensing, particularly at the edge. 3D object detection is fundamental for existing and emerging robotic platforms, such as robotaxis, to understand environments comprehensively by detecting, localizing, and classifying objects. While 2D object detection offers basic object localization and recognition, 3D detection further enriches applications by adding depth and distance insights. This necessitates a sophisticated perception system, integrating diverse sensors like RGB cameras, LiDAR, and mmWave radar, which mutually enhance their performance. For deep learning-based 3D object detection, we specifically focus on RGB camera images and LiDAR point cloud data. Prior works have developed state-of-the-art 3D object detection framework through early [7, 8], intermediate [9, 10, 11, 12], and late [13] information fusion of LiDAR and camera streams, as LiDAR features are rightfully superior to camera features in assessing depth for 3D tasks [14]. Early fusion improves data preprocessing and detection results, but often requires an additional network for initial image data processing, which increases inference runtime. Intermediate fusion offers deeper integration of multimodal features, which enhances bounding box prediction accuracy, but properly doing so remains an open problem due to the considerable distinctions in feature information and view points. Late fusion is more computationally efficient, but its performance is limited due to the lack of capturing the deep covariance between the modalities. Notably, the above frameworks vary in their processing of LiDAR as well, with three primary methods identified as point-based [15, 16, 17], grid-based [18, 19, 20], and range-based [21, 22] methods. Point-based methods involve direct predictions based on downsampled points and extracted features, which has influenced many subsequent state-of-the-art works but makes it difficult to balance appropriate sampling with efficiency. Grid-based methods rasterize point cloud data into grid representations such as voxels (volumetric pixels), pillars (vertically extended voxels), or bird's-eye view (BEV) 2D feature maps, which can provide richer and more organized 3D information, potentially leading to more accurate predictions, but require more time and memory to process. 
Range-based methods consist of processing 2D range views (spherical projections of point clouds), which inherently contain 3D distance as opposed to simple RGB and can therefore be easily integrated with existing efficient 2D backbones but nonetheless suffer from common 2D issues (e.g., occlusion and scale variation) that exacerbate aleatoric uncertainty. Among these prior works, PointPillars [19], introduced in 2018, remains the fastest inference model on the 3D KITTI vision benchmark suite [6] with a 16 ms runtime. PointPillars is a LiDAR-only model that also demonstrated comparable accuracy to other state-of-the-art models published around the same time. Most previous 3D object detection frameworks focus primarily on accuracy; there are relatively few works that have explored uncertainty quantification [23, 24, 25, 26, 27, 28]. While these works underscore the significance of uncertainty and its potential to improve performance, their methodologies largely hinge on Bayes' theorem, maximum likelihood, or coarse statistics such as standard deviation. Such methods, deeply tied to data, model, and specific assumptions (e.g., gaussianity), can face numerical instability and might not be optimal for resource-limited systems, such as edge robotics. Addressing this critical need, in this paper, we discuss an uncertainty-aware 3D object detection framework comparable to PointPillars in speed and accuracy while providing statistically rigorous and generalizable uncertainty estimations via conformal inference. ## III Uncertainty-Aware Multimodal 3D Object Detection by Conformal Inference This section provides an overview of the proposed uncertainty-aware 3D object detection framework with RGB camera and LiDAR sensors. The model architecture is shown in Fig. 2 and consists of a variational autoencoder (VAE) with parallel encoders for each sensor's extracted features and a single decoder that propagates information fused latent samples concatenated with 2D bounding box proposals to produce 3D bounding boxes and uncertainty bounds. To extract LiDAR features, the model relies on PointNet [15]. To obtain 2D bounding box proposals and subsequently extract camera features from cropped regions-of-interest (RoI), the model uses YOLOv5s [29] and MobileNetV2 [30]. This approach takes inspiration from PointFusion [9], as we opted for point-based LiDAR point cloud processing and intermediate LiDAR-camera fusion. These design choices were made primarily in consideration of efficiency and providing a solid information theoretic testbed for conformal inference. ### _Feature Fusion by Multivariate Gaussian Product_ To effectively merge features extracted from RGB camera images and LiDAR point clouds, we adopt an approach inspired by [31]. They showed that the univariate Gaussian product of RGB and infrared image features optimally combines information from both modalities, ensuring the network remains resilient even when one data stream is suboptimal. Leveraging the Variational Autoencoder's (VAE) ability to approximate Gaussian distributions through reparameterization and Kullback-Leibler (KL) divergence of artificial latent variables (e.g., mean and variance) [32], we extend this statistical approach with a multivariate Gaussian product. Shifting from univariate to multivariate variables requires significant changes and optimizations. Thus, our enhancement lies in operations on multivariate mean and covariance, ensuring richer representations of multimodal data. 
Instead of using the VAE's dual encoders for camera and LiDAR data to output variance, we utilize them to produce 4D Cholesky decompositions [33] of the presumed covariance matrices for each encoded feature set. A Cholesky decomposition (\(L\)) represents the square root of a covariance matrix and ensures symmetry and positive definiteness-two necessary criteria for subsequent matrix operations. Moreover, it encapsulates off-diagonal relationships of the latent variables, which often provide a truer representation of the covariance matrix but are commonly zeroed out in VAEs under the assumption of conditional independence. From the Cholesky decompositions, we derive symmetric 4D covariance matrices for each set of encoded features using the matrix product \(LL^{T}=V\). To fuse the feature information, we utilize both latent means and covariances. We refer to [34], which explains how to compute the mean \(\mu\) and covariance \(V\) of a joint Gaussian distribution, given by equations in (1), from the product of \(n\) marginal distributions. Importantly, the equations are generalizable, suggesting that our framework can, in principle, handle an arbitrary number of sensor modalities. \[\mu_{\text{joint}} =V_{\text{joint}}\sum_{i=1}^{n}V_{i}^{-1}\mu_{i} \tag{1a}\] \[V_{\text{joint}}^{-1} =\sum_{i=1}^{n}V_{i}^{-1} \tag{1b}\] Additionally, since these matrix computations involve inversion, we must address potential numerical instabilities that can ruin the approximations and cause divergence. To mitigate these concerns, we regularize the model with identity covariance and perform an eigen decomposition, denoted as \(Q\Lambda Q^{T}=V\), on the joint covariance matrix whenever a Cholesky decomposition cannot be performed. With the latter step, we can ensure positive definiteness and avoidance of near-singularity by reconstructing the matrix after we have set the non-positive eigenvalues to a small positive constant (e.g., 1e-6). Finally, with the proper joint mean and covariance, we can compute the mutual information (see below) between camera and LiDAR features and subsequently forward a sample from their fused distribution to the decoder along with 2D bounding box proposals. Fig. 2: **Model Architecture (Section 3):** The network designed for this work utilizes a variational autoencoder (VAE) featuring dual encoders for LiDAR point cloud and RGB camera image features. To extract the multimodal features, we rely on PointNet [15], YOLOv5s [29], and MobileNetV2 [30], keeping the network modular at a small expense of speed so that the feature extractors can be interchanged based on scene conditions. The information from the data streams are fused through a multivariate Gaussian product of their artificial 4D latent variables to approximate a proper covariant joint distribution. Mutual information between the multimodal features is computed using the joint mean and covariance, and a sample is extracted and concatenated with 2D bounding box proposals. Finally, a single decoder propagates the fused data to output \(K\) mean 3D bounding boxes of size \([8,3]\) along with conformal inference-based upper- and lower- bound uncertainty estimates. \(K\) represents any number of detected objects. ### _Uncertainty Calibration by Mutual Information (MI)_ Given the close relationship between conformal inference and information theory, we anticipate incorporating MI should improve our uncertainty-aware framework. 
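For concreteness, the fusion rule in (1a)-(1b) from the previous subsection, together with the identity regularization and eigenvalue-clamping fallback described there, can be sketched in a few lines of NumPy. The function and variable names below (`fuse_gaussians`, `mu_rgb`, `L_rgb`, etc.) are illustrative assumptions rather than the implementation used in this work, and the toy inputs merely stand in for the VAE's 4D latent means and Cholesky factors.

```python
import numpy as np

def fuse_gaussians(mus, Ls, eps=1e-6):
    """Fuse per-modality Gaussians N(mu_i, V_i) with the product rule of Eq. (1).

    mus: list of latent mean vectors, each of shape (d,).
    Ls:  list of Cholesky factors, each of shape (d, d), so that V_i = L_i @ L_i.T.
    """
    d = mus[0].shape[0]
    # Rebuild each covariance from its Cholesky factor and regularize with identity.
    Vs = [L @ L.T + eps * np.eye(d) for L in Ls]
    # Eq. (1b): the joint precision is the sum of the per-modality precisions.
    V_joint = np.linalg.inv(sum(np.linalg.inv(V) for V in Vs))
    # Eq. (1a): precision-weighted combination of the per-modality means.
    mu_joint = V_joint @ sum(np.linalg.inv(V) @ mu for V, mu in zip(Vs, mus))
    # Fallback: if V_joint is not positive definite, clamp its eigenvalues to a small constant.
    try:
        np.linalg.cholesky(V_joint)
    except np.linalg.LinAlgError:
        w, Q = np.linalg.eigh(V_joint)
        V_joint = Q @ np.diag(np.clip(w, eps, None)) @ Q.T
    return mu_joint, V_joint

# Toy usage with two 4D modalities (camera and LiDAR latents).
rng = np.random.default_rng(0)
mu_rgb, mu_lidar = rng.normal(size=4), rng.normal(size=4)
L_rgb, L_lidar = np.tril(rng.normal(size=(4, 4))), np.tril(rng.normal(size=(4, 4)))
mu_joint, V_joint = fuse_gaussians([mu_rgb, mu_lidar], [L_rgb, L_lidar])
fused_sample = rng.multivariate_normal(mu_joint, V_joint)  # forwarded to the decoder with 2D proposals
```

Because the update is expressed over an arbitrary list of modalities, adding a third sensor would only require appending its mean and Cholesky factor to the inputs, consistent with the generalizability noted above.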
MI quantifies the dependence between two random variables by examining the relationship between their marginal distributions and their joint distribution [35]. Effectively, MI assesses the uncertainty of one random variable in explaining the information of another. Previous studies have demonstrated that maximizing MI between the input feature space and latent space enhances the model's utilization of the latent representations [36, 37]. In this study, we utilize MI as a criterion for calibrating the conformal uncertainty intervals. To compute MI, we determine the determinants (\(|\cdot|\)) of the covariance matrices constructed for both the camera and LiDAR data and the covariance matrix of their combined joint distribution. \[\text{MI}=\frac{1}{2}\log_{2}\left(\frac{|V_{RGB}||V_{LiDAR}|}{|V_{joint}|}\right) \tag{2}\] Afterward, we approximate the Shannon entropy [35] of the two feature sets' covariances with (3) and use them to normalize the MI (i.e., compute NMI [38]) to be within the range of [0,1] with (4). \[\text{H}=\frac{1}{2}\log_{2}\left((2\pi e)^{4}|V|\right) \tag{3}\] \[\text{NMI}=\frac{2\text{MI}}{\text{H}_{RGB}+\text{H}_{LiDAR}} \tag{4}\] It is important to note that, in theory, MI is upper bounded by the maximum of the the Shannon entropies of the random variables involved. However, because the VAE's latent random variables typically have unbounded support (because activation functions such as ReLU and others have unbounded ranges), it is possible to run into stability issues where a network could continue optimizing its parameters leading to divergent MI estimates. To fix this, we add the \(softsign()\) activation function shifted by +1 to bind the latent variables to the range \([0,2]\) and stabilize the network. This activation function resembles the hyperbolic tangent but is less steep and therefore saturates slower. In the next subsection, we discuss the placement of the NMI metric into the loss function. ### _Uncertainty Weighted Loss by Conformal Inference (CI)_ CI offers a model-agnostic method for uncertainty quantification that seamlessly integrates with any foundational model possessing intrinsic uncertainty measures, such as quantile regression. The intervals guarantee marginal coverage of the truth based on a user-defined coverage rate [2, 3, 4]. Marginal coverage represents the average probability, taken over all considered samples, that true values will fall within the prediction intervals. It is analytically guaranteed by using a portion of the training data as a calibration set to compute conformity scores of new observations to prior information, which are used to calibrate the uncertainty intervals. The conformalized joint prediction (CJP) method presented in our prior work [39] demonstrated a unique form of multivariate cross-conformal inference where a model is jointly trained to output point (mean) predictions and conditional quantiles that serve as upper and lower prediction bounds, capturing true aleatoric and epistemic uncertainty. To construct the prediction bounds, the method requires calibration of the sample data during training so as to guide the model to center predictions and maintain marginal coverage. Cross-conformal inference involves performing a number of calibration steps over all of the training data, striking a balance between the statistical efficiency of full-conformal prediction and the speed of split-conformal prediction. 
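Before turning to the loss itself, a minimal NumPy sketch of the MI and NMI computation in (2)-(4) is given here for reference; `V_joint` denotes the joint covariance produced by the fusion step of Section III-A, log-determinants are computed with `slogdet` for numerical stability, and the function names are our own illustrative choices rather than part of the released code.

```python
import numpy as np

def gaussian_entropy_bits(V):
    """Differential entropy (in bits) of a 4D Gaussian with covariance V, as in Eq. (3)."""
    _, logdet = np.linalg.slogdet(V)                 # natural-log determinant, computed stably
    return 0.5 * (4.0 * np.log2(2.0 * np.pi * np.e) + logdet / np.log(2.0))

def normalized_mutual_information(V_rgb, V_lidar, V_joint):
    """MI between camera and LiDAR latents (Eq. 2), normalized by their entropies (Eq. 4)."""
    _, ld_rgb = np.linalg.slogdet(V_rgb)
    _, ld_lidar = np.linalg.slogdet(V_lidar)
    _, ld_joint = np.linalg.slogdet(V_joint)
    mi = 0.5 * (ld_rgb + ld_lidar - ld_joint) / np.log(2.0)   # Eq. (2) in bits
    return 2.0 * mi / (gaussian_entropy_bits(V_rgb) + gaussian_entropy_bits(V_lidar))  # Eq. (4)
```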
The method performs the calibrations dynamically over the randomized training batches as part of a multi-task loss function that simultaneously prioritizes reconstruction, KL divergence, and uncertainty interval centeredness, tightness, and coverage. As a result, it is shown that the intervals are highly tunable, flexible, and adaptive. We make the following impactful modifications to the loss function presented in our prior work. First, we weight the reconstruction loss with a small uncertainty penalty to guide the network to prioritize resolving higher uncertainty in certain training batches. Secondly, we regularize the KL divergence with 4D covariance instead of variance. Finally, we dynamically tune the balance between interval sharpness (i.e., uncertainty distance) and marginal coverage with normalized mutual information (NMI). For completeness, the loss function is provided as: \[\mathcal{L}_{Total}=\text{SmoothL1}_{loss}(y,\hat{y})\times(1+0.01U)+\text{KL}_{div}(\mu_{joint},V_{joint})+\text{INTSCORE}_{loss}(y,\,Q_{l},\,Q_{h},\{\alpha_{l},\alpha_{h}\})+\text{COMCAL}_{loss}(y,p_{avg}^{cov},\,Q_{l},\,Q_{h},NMI) \tag{5}\] where \[\text{KL}_{div}=\frac{1}{2}\left(Tr(V)+\mu_{joint}\mu_{joint}^{T}-4-\log(|V|)\right) \tag{6}\] \[\text{INTSCORE}_{loss}=(Q_{h}-Q_{l})+\frac{2}{\alpha}(Q_{l}-y)\,\mathbb{I}\{y<Q_{l}\}+\frac{2}{\alpha}(y-Q_{h})\,\mathbb{I}\{y>Q_{h}\} \tag{7}\] \[\text{COMCAL}_{loss}=(1-NMI)\times\text{CAL}_{obj}+NMI\times\text{SHARP}_{obj} \tag{8a}\] and \[\text{CAL}_{obj}=\mathbb{I}\{p_{avg}^{cov}<p\}\times\frac{1}{N}\sum_{i=1}^{N}\left[(y_{i}-Q_{l,h}(x_{i}))\,\mathbb{I}\{y_{i}>Q_{l,h}\}\right]+\mathbb{I}\{p_{avg}^{cov}>p\}\times\frac{1}{N}\sum_{i=1}^{N}\left[(Q_{l,h}(x_{i})-y_{i})\,\mathbb{I}\{y_{i}<Q_{l,h}\}\right] \tag{8b}\] \[\text{SHARP}_{obj}=\mathbb{I}\{p\leq 0.5\}\times\frac{1}{N}\sum_{i=1}^{N}\left(Q_{l}(x_{i})-Q_{h}(x_{i})\right)+\mathbb{I}\{p>0.5\}\times\frac{1}{N}\sum_{i=1}^{N}\left(Q_{h}(x_{i})-Q_{l}(x_{i})\right) \tag{8c}\] In the equations, \(x\) represents input samples, \(y\) represents 3D bounding box labels, \(\hat{y}\) represents predictions, \(U\) represents a singular uncertainty distance metric calculated by averaging the prediction interval length of each output dimension per training batch, \(Q_{h}\) and \(Q_{l}\) are each dimension's upper and lower quantile estimates used to calculate \(U\), \(\alpha_{h}\) and \(\alpha_{l}\) are the 95\({}^{th}\) and 5\({}^{th}\) percentile coverage bounds that assert \(Q_{h}\) and \(Q_{l}\), \(p\) is the chosen marginal coverage rate (\(\alpha_{h}-\alpha_{l}=\) 90%), \(\mathbb{I}\) is the indicator function, \(Tr()\) is the trace function, and \(p_{avg}^{cov}\) is the estimated probability that the label values lie within \([Q_{l},\,Q_{h}]\), averaged over the randomized training batches. \(Q_{l,h}\) is meant to indicate that \(\text{CAL}_{obj}\) is computed separately for both \(Q_{l}\) and \(Q_{h}\) and then added together. Focusing on the two lesser-known loss components: \(\text{INTSCORE}_{loss}\) is used to influence the model to maintain centered quantile intervals, while \(\text{COMCAL}_{loss}\) is used to control the balance between minimizing the uncertainty intervals and increasing marginal coverage, as reflected in the sub-objectives \(\text{CAL}_{obj}\) and \(\text{SHARP}_{obj}\). Notably, we insert NMI from Section III-B, averaged in each training batch, into \(\text{COMCAL}_{loss}\) to dynamically control the calibration balance during training as opposed to setting a static value. 
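A minimal PyTorch-style sketch of how the interval and calibration terms in (5)-(8c) could be assembled is shown below. This is an illustrative reading of the equations rather than the released implementation: the tensor shapes, the per-tail miscoverage `alpha`, and the treatment of `p_cov_avg` as a running batch statistic are assumptions on our part.

```python
import torch
import torch.nn.functional as F

def interval_score_loss(y, q_l, q_h, alpha=0.10):
    """Interval score of Eq. (7): interval width plus penalties for labels outside [q_l, q_h]."""
    width = q_h - q_l
    below = (2.0 / alpha) * (q_l - y) * (y < q_l).float()
    above = (2.0 / alpha) * (y - q_h) * (y > q_h).float()
    return (width + below + above).mean()

def comcal_loss(y, q_l, q_h, p_cov_avg, nmi, p=0.90):
    """Eq. (8a): NMI balances the coverage objective (8b) against the sharpness objective (8c)."""
    cal = 0.0
    for q in (q_l, q_h):  # Eq. (8b) is evaluated for both quantile heads and summed
        cal = cal + float(p_cov_avg < p) * ((y - q) * (y > q).float()).mean()
        cal = cal + float(p_cov_avg > p) * ((q - y) * (y < q).float()).mean()
    # Eq. (8c): penalize interval width, with the sign selected by the target coverage rate p.
    sharp = (q_l - q_h).mean() if p <= 0.5 else (q_h - q_l).mean()
    return (1.0 - nmi) * cal + nmi * sharp

def total_loss(y, y_hat, q_l, q_h, mu_joint, v_joint, p_cov_avg, nmi):
    """Eq. (5): uncertainty-weighted reconstruction + KL of Eq. (6) + interval terms."""
    u = (q_h - q_l).abs().mean()                       # batch-averaged interval length U
    recon = F.smooth_l1_loss(y_hat, y) * (1.0 + 0.01 * u)
    kl = 0.5 * (torch.trace(v_joint) + mu_joint @ mu_joint - 4.0   # Eq. (6) for the 4D latent
                - torch.logdet(v_joint))
    return recon + kl + interval_score_loss(y, q_l, q_h) + comcal_loss(y, q_l, q_h, p_cov_avg, nmi)
```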
Intuitively, the model is influenced to be less uncertain when the MI between the RGB camera and LiDAR features is high. Therefore, uncertainty and MI should be inversely correlated. ## IV Results and Discussions This section details our observations from applying the framework presented in Section III to 3D object detection involving RGB cameras and LiDAR point cloud inputs. Towards this, the primary goal of the proposed framework is to enable lightweight, conformalized uncertainty awareness while including principles of entropy, MI, and feature fusion based on a multivariate Gaussian product. This combination of theories is used to improve the model's uncertainty estimates through CI while operating under edge device constraints. Notably, uncertainty estimation can become difficult and unstable for a task such as 3D object detection, where there is a varied number of multivariate objects to be assessed per input sample. Therefore, taking an approach deeply rooted in information and statistical theory is imperative. As projected in Section III-B, it is shown in Fig. 3 that the average uncertainty and normalized mutual information (NMI) obtained via conformalized feature fusion are inversely correlated over the duration of training the model described in Fig. 2. It is important to note that while mutual information is static given a discrete input feature space, here we are deriving it from artificial latent representations that are optimized during training. Hence, the value of NMI can change during training as the embedded information is better understood. While the estimated NMI increased between the camera and LiDAR data, the overall uncertainty in the predictions decreased. This uncertainty metric \(U\) is used to weight the SmoothL1 reconstruction loss, while the NMI is used to calibrate this uncertainty in (8a). To the best of our knowledge, this is the first work demonstrating a stable combination of explicitly translatable uncertainty weighting and mutual information in a multitask loss function where both influence each other. Table I quantitatively compares our proposed framework to similar works predicting 3D bounding boxes for cars in the seminal KITTI 3D detection dataset. The easy, moderate, and hard percentage scores are of average precision in 3D bounding box regression (\(AP_{3D}\)), which is based on precision-recall calculations with an intersection-over-union (IoU) threshold of 0.7 in various scene conditions. Most metrics are taken directly from the KITTI source website, where the various referenced models have been submitted for result reproduction. From the table, our model is approximately 38% faster than PointPillars without suffering an equal accuracy loss, making it suitable for edge robotics. The runtime metrics we provide are adjusted for hardware performance differences, given PointPillars used an NVIDIA GTX 1080 Ti desktop, and we used an NVIDIA RTX 4090 laptop. Unlike the other works, the model maintains a relatively consistent accuracy across each benchmark case, a unique result of using a VAE and conformal inference. Furthermore, by factoring the marginal coverage of the upper- and lower-bound uncertainty boxes calibrated with NMI into the IoU calculations, the average precision increased by at least 39%. Accordingly, we propose an entirely new evaluation metric, _mean average uncertainty_ (MAU), to track the combined average uncertainty in predicting each corner of \(K\) 3D bounding boxes (i.e., \(\frac{1}{K}\sum_{i=1}^{K}u_{i}\)). 
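The MAU metric itself is straightforward to evaluate; assuming the lower- and upper-bound boxes are given corner-wise as arrays of shape \((K, 8, 3)\) (matching the \([8,3]\) output in Fig. 2), a short sketch is given below. The array shapes and names are assumptions made for illustration.

```python
import numpy as np

def mean_average_uncertainty(q_low, q_high):
    """MAU: per-box uncertainty u_i averaged over the 8 corners x 3 coordinates, then over K boxes."""
    u = np.abs(q_high - q_low).mean(axis=(1, 2))   # shape (K,): one uncertainty value per detection
    return u.mean()                                # (1/K) * sum_i u_i
```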
A key observation here is that, with uncertainty included, the average precision slightly increased with more difficult predictions while MAU also increased. This indicates that the model appropriately prioritized predictions where uncertainty was greater. Fig. 3: **Average Uncertainty and Normalized Mutual Information (NMI) vs. Epoch:** Explicit uncertainty and NMI obtained from performing conformal inference and intermediate feature fusion are averaged across training batches in each epoch. Uncertainty and NMI are inversely correlated, influencing the model to be more confident in predictions when mutual information is high and _vice versa_. However, it is worth noting that there are fewer annotations in the moderate and hard cases, so the model has fewer chances of being imprecise compared to the easy case. Overall, we show that robust uncertainty-awareness can improve the reliability of a model's predictions in making critical decisions and considerably improve accuracy. By maintaining a generalizable methodology, our work can be integrated to improve metrics in other models for various tasks. Fig. 4 provides a qualitative assessment of the uncertainty in predicting the 3D bounding boxes. We display the ground truth box in black, the predicted box in green, and a combined uncertainty box in purple that encompasses the upper- and lower-bound boxes. This level of accuracy and precision in estimating and visualizing uncertainty in 3D object detection has not been demonstrated previously. A primary benefit of such assessment is that even if the model appears to be predicting well, a large uncertainty estimate can direct it to assert caution appropriately, such as when the sensors are not performing well or are impaired externally. ## V Conclusion We presented a novel framework for quantifying, calibrating, and leveraging true uncertainty in multimodal inference at the edge. The proposed methodology, applied to 3D object detection, includes conformal inference, elements of information theory, and Gaussian feature fusion. Our research demonstrates that integrating uncertainty awareness not only increases reliability of data-driven deep learning, but also improves prediction accuracy and precision. The approach is both generalizable and scalable, allowing it to be adapted to any task or dataset where uncertainty awareness should be considered, especially when under considerable resource constraints such as in edge robotics. The integration of information theory and conformal inference offers benefits that extend beyond individual results in the deep learning domain. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline **Reference** & **Modality** & **LiDAR Rep.** & **Fusion Type** & **Uncertainty** & **Runtime (ms)** & **Easy (\%)** & **Mod. (\%)** & **Hard (\%)** \\ \hline PointFusion [9] & Cam+LiDAR & Points & Intermediate & No & – & 74.71 & 61.24 & 50.55 \\ ContFuse [40] & Cam+LiDAR & Grid (BEV) & Intermediate & No & 60 & 83.68 & 68.78 & 61.67 \\ MVX-Net [10] & Cam+LiDAR & Grid (Voxels) & Intermediate & No & – & 83.2 & 72.7 & 65.2 \\ EPNet [41] & Cam+LiDAR & Points & Intermediate & No & 100 & 89.81 & 79.28 & 76.40 \\ MMF [42] & Cam+LiDAR & Grid (BEV) & Intermediate & No & 100 & 89.05 & 82.50 & 77.59 \\ MV3D [11] & Cam+LiDAR & Multiple & Intermediate & No & 360 & 74.97 & 63.63 & 54.00 \\ 3D-CVF [43] & Cam+LiDAR & Grid (BEV) & Intermediate & No & 60 & 89.20 & 80.05 & 73.11 \\ AVOD [12] & Cam+LiDAR & Grid (BEV) & Intermediate & No & 100 & 83.07 & 71.76 & 65.73 \\ CLOCs [13] & Cam+LiDAR & Multiple & Late & No & 100 & 89.16 & 82.28 & 77.23 \\ PointPillars [19] & LiDAR & Grid (Pillars) & Intermediate & No & 16 & 82.58 & 74.31 & 68.99 \\ \hline **Ours** & Cam+LiDAR & Points & Intermediate & Yes & **9.87\({}^{*}\)** & 62.84 & 58.66 & 60.89 \\ **Ours w/ NMI-calibrated Uncertainty** & & & & & & 87.64 (MAU=3.52) & **89.83** (MAU=3.59) & **92.26** (MAU=3.62) \\ \hline \hline \multicolumn{9}{l}{\({}^{*}\)These models were characterized on an NVIDIA RTX 4090 laptop; the metrics are adjusted for an NVIDIA GTX 1080 Ti desktop.} \\ \end{tabular} \end{table} Table I: Comparison of Proposed 3D Object Detection Framework to Similar Work on KITTI Cars (\(AP_{3D}\)) Fig. 4: **Uncertainty in 3D Bounding Box Regression:** Ground truth (black), predicted (green), and uncertainty (purple) 3D bounding boxes are visualized in a sample KITTI image of 8 cars with various occlusion and truncation status. As described in Section III-C, the uncertainty boxes represent a combination of upper- and lower-bound conditional quantiles obtained via conformal inference.
2309.06878
Performance of a plastic scintillator developed using styrene monomer polymerization
This paper presents a newly developed plastic scintillator produced in collaboration with Turkiye Energy, Nuclear and Mineral Research Agency (TENMAK). The scintillator is manufactured using thermal polymerization of commercially available styrene monomer. The absorption spectrum of the scintillator exhibited two absorption bands at 225 nm and 340 nm, with an absorption edge observed at 410 nm. The wavelength of the emitted light was measured in the range of 400-800 nm, with a maximum intensity at 427 nm. Monoenergetic electrons from the 137Cs source were used to evaluate the characteristics of the new scintillator, particularly its light yield. As the light readout the MAPD-3NM type silicon photomultiplier array (4 x 4) with an active area of 15 x 15 mm2, assembled using single MAPDs with an active area of 3.7 x 3.7 mm2, was used. The light yield of the scintillator was determined to be 6134 photons/MeV. In addition, the efficiency of the scintillator for gamma rays with an energy of 662 keV was found to be approximately 1.8 %. A CmBe neutron source was employed to evaluate its fast neutron detection performance. However, neutron/gamma discrimination using pulse shape discrimination (charge integration) method was not observed. The results demonstrate the potential of a newly produced plastic scintillator for various applications, particularly in radiation monitoring and detection systems.
A. Sadigov, F. Ahmadov, G. Ahmadov, E. Aksu, D. Berikov, S. Nuruyev, R. Akbarov, M. Holik, J. Nagiyev, S. Gurbuz Guner, A. Mammadli, N. Suleymanova, C. Abbasova, S. Melikova, E. Yilmaz, O. Tagiyev, S. Lyubchyk, Z. Sadygov
2023-09-13T10:58:10Z
http://arxiv.org/abs/2309.06878v1
# Performance of a plastic scintillator developed using styrene monomer polymerization ###### Abstract This paper presents a newly developed plastic scintillator produced in collaboration with Turkiye Energy, Nuclear and Mineral Research Agency (TENMAK). The scintillator is manufactured using thermal polymerization of commercially available styrene monomer. The absorption spectrum of the scintillator exhibited two absorption bands at 225 nm and 340 nm, with an absorption edge observed at 410 nm. The wavelength of the emitted light was measured in the range of 400-800 nm, with a maximum intensity at 427 nm. Monoenergetic electrons from the \({}^{137}\)Cs source were used to evaluate the characteristics of the new scintillator, particularly its light yield. As the light readout the MAPD-3NM type silicon photomultiplier array (4 \(\times\) 4) with an active area of 15 \(\times\) 15 mm\({}^{2}\), assembled using single MAPDs with an active area of 3.7 \(\times\) 3.7 mm\({}^{2}\), was used. The light yield of the scintillator was determined to be 6134 photons/MeV. In addition, the efficiency of the scintillator for gamma rays with an energy of 662 keV was found out to be approximately 1.8 %. A CmBe neutron source was employed to evaluate its fast neutron detection performance. However, neutron/gamma discrimination using pulse shape discrimination (charge integration) method was not observed. The results demonstrate the potential of a newly produced plastic scintillator for various applications, particularly in radiation monitoring and detection systems. keywords: Micropixel avalanche photodiode, SiPM, plastic scintillator, gamma source, CmBe neutron source, styrene monomer, styrene monomer polymerization + Footnote †: journal: Radiation Measurements ## 1 Introduction Plastic scintillators with photo sensors have found wide application in high-energy physics, space exploration, medical diagnostics, and security systems [1; 2; 3; 4; 5]. The widespread use of plastic scintillators can be explained by advances in their manufacturing technology as well as a number of valuable properties of the scintillator itself, such as short decay time, high radiation resistance, operating temperature, resistance to mechanical stress, etc. Moreover, the ability to produce plastic scintillators in various shapes and sizes makes them suitable converters of ionizing energy into visible light, rendering them suitable for numerous experiments [4; 5; 6]. In light of these factors, the production of plastic scintillators continues to be of utmost significance, driving ongoing research and development efforts aimed at further improving their performance, cost-effectiveness, and compatibility with emerging de tection systems. One notable advantage of plastic scintillators is their exceptionally short decay time, which makes them indispensable for experiments that require precise time-of-flight measurements. Additionally, due to the low atomic number, they are effective in detecting charged particles and neutrons [6]. The interaction of gamma rays with plastic scintillators is primarily related to their atomic number (Z) and energy of gamma ray (\(E_{\gamma}\)), resulting in a minimal probability of the photoelectric effect (\(\sigma_{pe.}\sim\frac{Z^{*}}{E_{\gamma}^{3/2}}\)). 
Consequently, the energy of gamma rays in plastic scintillators is typically determined by the Compton edge (\(\sigma_{comp.}\sim const.\cdot Z\)), and the maximum energy of a Compton electron can be calculated using the following formula: \[E_{e}=\frac{2\cdot E_{\gamma}^{2}}{2\cdot E_{\gamma}+m_{e}\cdot c^{2}} \tag{1}\] where \(E_{e}\) is the maximum energy of a Compton electron, \(m_{e}\) is the mass of an electron and \(c\) is the speed of light [7; 8]. Table 1 lists the maximum energy of Compton electrons produced by various gamma rays. When ionizing radiation interacts with the scintillator material, the scintillator molecules become excited and then return to their ground state by emitting scintillation photons. Not all of the energy absorbed from ionizing radiation in the scintillator is converted into visible scintillation light. Ideally, the number of photons generated should increase linearly with the incident radiation energy, although deviations from linearity can occur depending on the specific characteristics of the scintillation material. The light yield of the scintillator is calculated as follows [4]: \[Y=\frac{E_{dep.}\cdot\eta}{E(\lambda)} \tag{2}\] where \(E_{dep.}\) is the deposited energy of ionizing radiation, \(\eta\) is the efficiency of the scintillator and \(E(\lambda)\) is the energy of an emitted photon. The light yield of the scintillator depends on the scintillation material, the type of incident particles, the energy of the particles and the temperature [5]. In scintillation detectors, the phenomenon of quenching plays a significant role in the detection and measurement of ionizing radiation. Quenching occurs due to various mechanisms associated with the interaction of radiation with the scintillator material. Scintillator quenching is generally described by different models (Birks, Yoshida, Voltz) [9; 10]. These models take into account effects such as electronic stopping, nuclear stopping, singlet quenching, triplet quenching and others. The luminescence yield per unit length in an organic scintillator (\(\frac{dY}{dx}\)) can be calculated given the ionization density (\(\frac{dE}{dx}\)), the energy transfer probability (k), a normalization factor (s), the constant of proportionality (B) to determine the number of damaged molecules, the energy emitted as light (Y), and the particle energy dissipated in the scintillator (E): \[\frac{dY}{dx}=\frac{s\cdot\frac{dE}{dx}}{1+kB\cdot\frac{dE}{dx}}. \tag{3}\] Different types of particles, such as electrons, alpha particles and heavy ions, can exhibit distinct quenching behaviors. Quenching of the light yield is related to the energy transfer to damaged molecules, which do not convert the received energy into scintillation photons. These effects lead to variations in the shape of the detected pulses, which can be employed to distinguish different types of ionizing radiation [5; 9]. It is thus possible to discriminate particle types by analyzing pulse shapes, thereby enhancing the capabilities of radiation detection systems. This property of plastic scintillation detectors allows them to be used in pulse shape discrimination (PSD) experiments [4]. The light yield of commonly used plastic scintillators typically lies in the range of 3000 - 20000 photons/MeV, e.g.: EJ-232-0.5% (2900 photons/MeV), EJ-256-5% (5200 photons/MeV), EJ-240G (6300 photons/MeV), EJ-254-5% (7500 photons/MeV), EJ-276G (8000 photons/MeV), EJ-290 (9000 photons/MeV), EJ-212 (10000 photons/MeV) and others [11; 12]. 
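For concreteness, the kinematic relation (1) and the Birks form (3) can be evaluated numerically; the short Python sketch below reproduces the Compton-edge values of Table 1 to within rounding, while the Birks parameters `s` and `kB` are generic placeholder values rather than measured properties of the scintillator studied here.

```python
import numpy as np

M_E_C2 = 511.0  # electron rest energy, keV

def compton_edge_kev(e_gamma_kev):
    """Maximum Compton-electron energy of Eq. (1) for a gamma ray of energy E_gamma (keV)."""
    return 2.0 * e_gamma_kev**2 / (2.0 * e_gamma_kev + M_E_C2)

for e_gamma in (59.6, 81.0, 356.0, 661.6, 1332.0):   # compare with Table 1
    print(f"E_gamma = {e_gamma:7.1f} keV -> Compton edge = {compton_edge_kev(e_gamma):7.1f} keV")

def birks_yield_per_length(dE_dx, s=1.0, kB=0.0126):
    """Birks-type luminescence yield per unit length, Eq. (3); s and kB are illustrative only."""
    return s * dE_dx / (1.0 + kB * dE_dx)
```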
The main objective of this study is to investigate the parameters and performance of the plastic scintillator produced by TENMAK. The absorption and photoluminescence performance of this scintillator were measured, and the light yield was determined using the light output of the commercial standard \(LaBr_{3}(Ce)\) scintillator by Epic Crystal. To gain insight into the scintillation mechanism, the optical performance, scintillation responses, and pulse shape discrimination (PSD) performance on \(\gamma\)-rays, neutrons, and beta particles were investigated using MAPD-3NM type silicon photomultiplier as a light readout and various sources including \({}^{133}\)Ba, \({}^{137}\)Cs, and \({}^{60}\)Co, \({}^{90}\)Sr, and \({}^{244}\)CmBe neutron source. ## 2 Experimental In this study, we used a polystyrene plastic scintillator with a rectangular shape and dimensions of 15 \(\times\) 15 \(\times\) 50 mm\({}^{3}\) to convert the energy of ionizing radiation (gamma ray and beta particle) to visible light pulse. The plastic scintillator was produced through thermal polymerization of commercially available styrene monomer. The production process involved several steps. Firstly, purified styrene monomer was heated to 120 \({}^{\circ}\)C over a period of 5 - 10 hours. It was maintained at this temperature for 3 days to ensure complete polymerization. Afterward, the material was slowly cooled to a temperature below the glass transition temperature of polystyrene (T\({}_{g}\) = 100 \({}^{\circ}\)C) and underwent annealing to relieve internal stresses and avoid cracking. Finally, the scintillator was cut into the desired shape from the fabricated piece, and the surfaces were polished to achieve a mirror-like finish. To assess the properties of the scintillator, including scintillation properties and density, measurements were conducted. Silicon photomultiplier (SiPM) array (type MAPD-3NM) was used as a detector to read out scintillation photons produced by ionizing radiation in the scintillator (figure 1 left). MAPD-type SiPMs have been designed and developed by our research group, employing a deeply buried pixel design. This design incorporates a double n-p-n-p junction with micro-well structures located below the surface. The structure of these SiPMs does not include conventional quenching resistors, in contrast to SiPMs produced using standard surface-pixel technology. Triggered avalanche is quenched through the utilization of specially designed potential barriers. For a comprehensive understanding of the structure and operational principles of MAPD-type SiPMs, refer to [13; 14; 15; 16; 17; 18]. Additional insights into the performance of MAPD as both an individual device and an array, with various scintillation materials, can be found in references [19; 20; 21; 22; 23]. The plastic scintillator was covered with teflon layers and connected to the photodiode via optical glue (figure 1 right). The SiPM array used consisted of 16 elements, with each element being connected in parallel. A voltage was applied to SiPM array via red wire, which was linked to the cathode and the signal was taken from the black wire, connected to the anode (figure 1 left). The SiPM array has the following parameters: size - 15 \(\times\) 15 mm\({}^{2}\), operation voltage - 55.2 V, capacitance - 155 pF, breakdown voltage - 51 V, average gain - 2 \(\times\) 10\({}^{5}\), photon detection efficiency (PDE)- 30-35 %. The ionizing radiation measurements were carried out with the Spectrig MAPD device. 
Detailed information about the Spectrig MAPD device can be found in [22; 25; 26]. During measurement with the Spectrig MAPD, the following parameters were selected: gate width - 180 ns, variable gain - 1 - 15 dB, threshold - 43 mV, bias voltage - 55.2 V and measurement time - 200 sec and 500 sec. All measurements were carried out at a temperature of 22 \({}^{\circ}\)C. Figure 2 presents a photo of the experimental setup. \begin{table} \begin{tabular}{c c c} \hline \hline Radionuclide & Energy of gamma-rays (keV) & Maximum energy of Compton electrons (keV) \\ \hline \({}^{241}\)Am & 59.6 & 11.27 \\ \hline \({}^{133}\)Ba & 81, 276, 303, 356, 384 & 19.5, 143.3, 164.4, 207.2, 230.6 \\ \hline \({}^{137}\)Cs & 661.6 & 477.65 \\ \hline \({}^{60}\)Co & 1173, 1332 & 963.2, 1117.6 \\ \hline \hline \end{tabular} \end{table} Table 1: The maximum energy of Compton electrons produced by various gamma rays. Figure 1: Photo of the electronic system and the detector based on MAPD and plastic scintillator. Figure 2: Photo of the experimental setup. ## 3 Result and discussion Figure 3 shows the absorption and photoluminescence spectra of the plastic scintillators. The absorption and photoluminescence spectra were recorded by LS 55 Fluorescence Spectrometer (Perkin Elmer) [23] and Cary 50Scan UV-Vis Spectrophotometer (Varian) [24], respectively. The absorption spectrum was measured in the wavelength range of 200-800 nm. The absorption spectrum exhibits two absorption bands at 225 and 340 nm, with an absorption edge observed at 410 nm (figure 3). As shown in figure, the transmission of light through the scintillators decreases in the region of 400-800 nm, with the transmission loss increasing as the radiation wavelength increases up to 550 nm. Beyond 550 nm, no significant changes are observed in the spectra. The photoluminescence spectrum of the scintillator was measured using a xenon lamp with a continuous spectrum ranging from 230 to 50 nm at room temperature. The spectrum exhibits a wide band covering the wavelength range of 350-550 nm, with a maximum corresponding to 427 nm (figure 3). Energy spectra of \({}^{137}\)Cs measured with the SiPM array + \(LaBr_{3}(Ce)\)[25; 26] and the SiPM array + plastic scintillator were shown in figure 4. The purpose of using the \(LaBr_{3}(Ce)\) scintillator was to calculate the light yield of the plastic scintillator. It is well known that \({}^{137}\)Cs decays by beta emission to a metastable state (\({}^{137m}\)Ba), and directly the ground state of \({}^{137}\)Ba, with both cases resulting in the emission of gamma rays (\(\sim\) 662 keV). In the first case, there is approximately a 9.6 % probability that the emitted gamma rays are captured by K- shell electrons of \({}^{137}\)Ba, generating a monoenergetic beta particle with an energy of 626 keV. The gamma ray losses about 36 keV of energy due to the binding energy of \({}^{137}\)Ba's K-shell electrons [3]. The amplitude of the signal can be calculated as \(A=PDE\times M\times N\), where N (related to scintillator) is the number of scintillation photons produced by 1 MeV of ionization radiation (or light yield of scintillator), M is the gain of SiPM, and PDE (related to SiPM) is the photon detection efficiency, which depends on the over-voltage and wavelength of the photon [5]. Considering that the photon detection efficiency and gain of the SiPM are about the same for both emission wavelengths of scintillators, the amplitude of the signal will depend on the light yield of the scintillators. 
The light yield of the \(LaBr_{3}(Ce)\) scintillator is determined to be 68000 photons per MeV of incident energy, corresponding to approximately 45016 photons for the incident gamma ray with an energy of 662 keV [25; 26]. The detected gamma ray signal with an energy of 662 keV, using the \(LaBr_{3}(Ce)\) scintillator, has an amplitude of 2122 ADC channels. In this case, an energy of 626 keV corresponds approximately to the 2006.6 ADC channel, and the number of scintillation photons will be approximately 42568 photons. Conversely, when using the plastic scintillator, the amplitude of the detected mono-energetic beta particle signal with an energy of 626 keV is measured to be 181 ADC channels. Considering this information, one can determine the light yield of the plastic scintillator using the following ratio: \[\frac{A_{LaBr_{3}(Ce)}(626\,keV)}{N_{LaBr_{3}(Ce)}(626\,keV)}=\frac{A_{pl}(626\,keV)}{N_{pl}(626\,keV)} \tag{4}\] \[N_{pl}\left(626\,keV\right)=N_{LaBr_{3}(Ce)}\left(626\,keV\right)\times\frac{A_{pl}\left(626\,keV\right)}{A_{LaBr_{3}(Ce)}\left(626\,keV\right)}=42568\times\frac{181}{2006.6}\approx 3840\,. \tag{5}\] In the case of the plastic scintillator, the number of scintillation photons corresponding to the incident gamma ray with an energy of 626 keV was found to be 3840. The light yield of the plastic scintillator for 1 MeV of incident energy was determined to be 6134 photons. Figure 4: Energy spectra of \({}^{137}\)Cs measured with the SiPM array + \(LaBr_{3}(Ce)\) and the SiPM array + plastic scintillator (the variable gain was selected 1 dB). Figure 3: The absorption and photoluminescence spectra of the plastic scintillator. It is important to note that in the given calculation, factors such as light loss due to reflector materials, the linearity of the scintillators and the dependence of the PDE on wavelength were not taken into account. However, studies of the non-proportionality of the electron response in scintillators [27; 28] show that, for some plastic scintillators, the electron response and relative light yield in the region of 626 keV are approximately 100 %. The efficiency (\(\eta\)) of the scintillator can be calculated with the following formula [4]: \[\eta=\frac{E_{ph}(\lambda)\times N_{ph}}{E_{\gamma}} \tag{6}\] where \(E_{ph}(\lambda)\) is the energy corresponding to a scintillation photon, \(N_{ph}\) is the number of scintillation photons produced by 1 MeV of ionizing radiation, and \(E_{\gamma}\) is the energy of the absorbed gamma ray. In this experiment, the energy corresponding to 427 nm was measured to be 2.9 eV, and the light yield was determined to be 6134 photons/MeV. The obtained efficiency of the scintillator was approximately 1.8 %. The energy resolution of the monoenergetic electron (626 keV) measured with the plastic scintillator was 33.9 %. On the other hand, in the case of the measurement with \(LaBr_{3}(Ce)\), the energy resolution for 662 keV gamma rays was 3.3 %. In these measurements, the variable gain of the Spectrig MAPD was set to 1 dB. Figure 5 shows the energy spectra of \({}^{133}\)Ba, \({}^{137}\)Cs and \({}^{60}\)Co measured using the Spectrig MAPD with the variable gain of 15 dB. To distinguish beta particles from gamma rays emitted from \({}^{137}\)Cs, a copper foil with a thickness of 1 mm was placed between the scintillator and the source. Consequently, the monoenergetic (770-1200 ADC channel) and low-energy parts (200-730 ADC channel) of the beta particles were effectively absorbed by the foil, resulting in the observation of the Compton continuum and edge in the spectrum. 
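Returning to the light-yield estimate, the arithmetic behind (4)-(6) can be checked directly; the snippet below only reproduces the numbers quoted above (the variable names are ours).

```python
# Numerical check of Eqs. (4)-(6) using the values quoted in the text.
N_LaBr3_per_MeV = 68000.0                    # photons/MeV for LaBr3(Ce)
A_LaBr3_662, A_LaBr3_626 = 2122.0, 2006.6    # ADC channels at 662 keV and (scaled) 626 keV
A_pl_626 = 181.0                             # ADC channel of the 626 keV electron with the plastic

N_LaBr3_626 = N_LaBr3_per_MeV * 0.662 * (A_LaBr3_626 / A_LaBr3_662)   # ~42568 photons
N_pl_626 = N_LaBr3_626 * A_pl_626 / A_LaBr3_626                        # Eq. (5): ~3840 photons
light_yield = N_pl_626 / 0.626                                          # ~6134 photons/MeV

E_photon_eV = 2.9                            # photon energy at the 427 nm emission maximum
efficiency = E_photon_eV * light_yield / 1.0e6                          # Eq. (6): ~0.018, i.e. ~1.8 %
print(round(N_pl_626), round(light_yield), round(efficiency, 3))
```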
The measurement time was selected based on the activity of the sources. The measurement time was selected 200 seconds for \({}^{137}\)Cs source, while 500 seconds for \({}^{133}\)Ba, \({}^{137}\)Cs with the copper foil and \({}^{60}\)Co. Energy resolution of the monoenergetic electrons with an energy of 626 keV was measured to be 25% at 15 dB variable gain. Notably, the Compton edge of gamma rays was observed to increase with the energy of the incident gamma rays. Figure 6 shows the energy spectrum obtained from \({}^{90}\)Sr. \({}^{90}\)Sr undergoes decay, emitting an electron with a maximum energy of 546 keV, transforming into \({}^{90}\)Y, which further decays, emitting an electron with a maximum energy of 2274 keV, and finally resulting in \({}^{90}\)Zr [29]. When a copper foil was placed between the scintillator and the source, the beta particles were completely absorbed by the foil, and only gamma and x-ray events were observed in the spectrum. The obtained results were found to be in agreement with the data reported in other works [29]. tensity of gamma rays. This arrangement enabled the observation of the effect of fast neutrons on the plastic scintillator. However, due to the limited size of the scintillator, it was insufficient to fully detect the high-energy part of neutrons, resulting in only a few events being visible in the high-energy region of the spectra. The implementation of the pulse shape and zero crossing discrimination methods [30] proved ineffective in distinguishing between neutrons and gamma rays. Despite efforts to utilize this technique, the response of the scintillator did not exhibit the required differentiation between the two types of radiation. As a result, the pulse shape and zero crossing discrimination methods were not effective for distinguishing neutrons from gamma rays in this particular scintillator setup. Further investigations and alternative methods may be necessary to achieve the desired discrimination capability in future studies. ## 4 Conclusion The gamma ray, beta particle and neutron detection performance of the newly fabricated plastic scintillator by TENMAK was investigated. The plastic scintillator exhibits two absorption bands at 225 and 340 nm, with an absorption edge observed at 410 nm. The wavelength of maximum emission for the plastic scintillator is \(\sim\) 427 nm, with the light yield of 6134 photons for 1 MeV incident radiation. The obtained results showed that the Compton edge of detected gamma rays was observed to increase linearly with increasing the energy of gamma rays. The plastic scintillator demonstrated effective detection of beta particles emitted by \({}^{137}\)Cs and \({}^{90}\)Sr sources, with an energy resolution of 25 % for monoenergetic (626 keV) electrons. Furthermore, this type of plastic scintillator exhibited sensitivity to gamma rays and fast neutrons. The scintillator did not demonstrate the capability to discriminate between neutrons and gamma rays using the pulse shape discrimination method. Considering these advantageous characteristics, the combination of this plastic scintillator with SiPM technology makes it well-suited for detecting beta particles, gamma rays and neutrons in radiation monitoring and applications related to public safety. Further investigations are essential to attain optimal light yields and enhance the properties of the scintillator. 
## Acknowledgments This project has received funding from the European Union's Horizon 2021 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement 101086178. Figure 6: Energy spectra of \({}^{90}\)Sr measured with the SiPM array + plastic scintillators (the variable gain was selected 15 dB). Figure 7: Energy spectra of CmBe measured with the SiPM array + plastic scintillators (the variable gain was selected 15 dB).
2310.20704
Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked Autoencoders
Vision Transformers (ViTs) have become ubiquitous in computer vision. Despite their success, ViTs lack inductive biases, which can make it difficult to train them with limited data. To address this challenge, prior studies suggest training ViTs with self-supervised learning (SSL) and fine-tuning sequentially. However, we observe that jointly optimizing ViTs for the primary task and a Self-Supervised Auxiliary Task (SSAT) is surprisingly beneficial when the amount of training data is limited. We explore the appropriate SSL tasks that can be optimized alongside the primary task, the training schemes for these tasks, and the data scale at which they can be most effective. Our findings reveal that SSAT is a powerful technique that enables ViTs to leverage the unique characteristics of both the self-supervised and primary tasks, achieving better performance than typical ViTs pre-training with SSL and fine-tuning sequentially. Our experiments, conducted on 10 datasets, demonstrate that SSAT significantly improves ViT performance while reducing carbon footprint. We also confirm the effectiveness of SSAT in the video domain for deepfake detection, showcasing its generalizability. Our code is available at https://github.com/dominickrei/Limited-data-vits.
Srijan Das, Tanmay Jain, Dominick Reilly, Pranav Balaji, Soumyajit Karmakar, Shyam Marjit, Xiang Li, Abhijit Das, Michael S. Ryoo
2023-10-31T17:59:07Z
http://arxiv.org/abs/2310.20704v2
# Limited Data, Unlimited Potential: ###### Abstract Vision Transformers (ViTs) have become ubiquitous in computer vision. Despite their success, ViTs lack inductive biases, which can make it difficult to train them with limited data. To address this challenge, prior studies suggest training ViTs with self-supervised learning (SSL) and fine-tuning sequentially. However, we observe that jointly optimizing ViTs for the primary task and a Self-Supervised Auxiliary Task (SSAT) is surprisingly beneficial when the amount of training data is limited. We explore the appropriate SSL tasks that can be optimized alongside the primary task, the training schemes for these tasks, and the data scale at which they can be most effective. Our findings reveal that SSAT is a powerful technique that enables ViTs to leverage the unique characteristics of both the self-supervised and primary tasks, achieving better performance than typical ViTs pre-training with SSL and fine-tuning sequentially. Our experiments, conducted on 10 datasets, demonstrate that SSAT significantly improves ViT performance while reducing carbon footprint. We also confirm the effectiveness of SSAT in the video domain for deepfake detection, showcasing its generalizability. Our code is available at [https://github.com/dominickrei/Limited-data-vits](https://github.com/dominickrei/Limited-data-vits). ## 1 Introduction Vision Transformers (ViTs) have become a common sight in computer vision owing to their success across various visual tasks, and are now considered a viable alternative to Convolutional Neural Networks (CNNs). Despite this, ViTs are structurally deficient in inductive bias compared to CNNs, which necessitates training them with large-scale datasets to achieve acceptable visual representation, as noted by Dosovitskiy et al. [13]. As a result, when dealing with small-scale datasets, it is essential to utilize a ViT pre-trained on a large-scale dataset such as ImageNet [12] or JFT-300M [50]. However, in domains such as medical datasets, pre-training ViTs on ImageNet or JFT-300M may not result in an optimal model for fine-tuning on those datasets due to a significant domain gap. Thus, the aim of this research is to address the following question: _how can ViTs be trained effectively in domains with limited data_? Following the introduction of ViTs, second-generation vision transformers have emerged with two different approaches. The first approach is to use a hierarchical structure to introduce inductive bias in ViTs [18, 34, 54]. The second approach involves using hybrid architectures, such as introducing convolutional blocks within ViTs [39, 57]. However, both approaches primarily benefit medium-sized datasets and not small-scale datasets. Several efforts have been made to enhance their locality inductive bias, as reported in literature [15, 28, 29, 33]. Among these methods, SSL has demonstrated exceptional efficacy in training transformers from scratch on small datasets [5, 6, 15, 24, 33, 49, 56]. These methods typically involve sequentially conducting SSL and fine-tuning on the same small dataset to enhance ViT performance. Meanwhile, another straightforward approach that takes Figure 1: Relative classification accuracy on three datasets with different sizes: (i) Oxford Flower [37] (2K samples), (ii) CIFAR [25] (50K samples), and (iii) ImageNet-1K [12] (IN-1K, 1.2M samples). SSAT consistently outperforms others on all three datasets with two backbones. 
On the other hand, given the same SSL method, SSL+FT achieves a compromised performance than SSAT, especially on the tiny Oxford Flower dataset (even worse than training from scratch). advantage of SSL is to jointly optimize the self-supervised task along with the primary task like classification or segmentation. We name such SSL tasks as **S**elf-**S**upervised **A**uxiliary **T**ask (**SSAT**). Although SSAT has been explored in the vision community [29, 33, 42] and robotics community [27, 30], there are still many open questions, especially when the size of the dataset is limited. This paper empirically analyzes the aforementioned joint learning approach with SSAT, as an alternative to sequentially performing SSL and fine-tuning (SSL+FT) on the same dataset. Through an extensive amount of experiments on _ten_ image classification datasets of various sizes as well as _two_ video classification datasets, surprisingly, we observe that SSAT works significantly better than other baselines like SSL+FT and training from scratch, especially for ViT on small datasets (see Figure 1). Further experiments empirically show that it is most effective when the auxiliary task is image reconstruction from missing pixels among the well known SSL methods we tested. Finally, we perform a detailed model and feature analysis to highlight the unique properties of SSAT-driven models in comparison to other representative baselines. This distinction is particularly notable when comparing with the SSL+FT models which are trained with similar loss functions. We reveal that the advantages of SSAT in a limited-data regime come from better semantic richness, a distinct attention distribution, and an increased capability for feature transformation, which results in higher feature variance. ## 2 Related Work **Vision Transformers.** Several vision transformers [1, 2, 8, 13, 45, 54, 43, 58, 62, 64] have been introduced in recent times for a wide range of tasks. However, these models require large-scale pre-training to be effective on different datasets. In an effort to reduce their reliance on extensive training, DeiT [52] introduced extensive data augmentation, regularization, and distillation tokens from convolutions in ViTs. T2T [61], in a similar vein, employed a tokenization technique that flattened overlapping patches and applied a transformer to allow for learning local structural information around a token. Meanwhile, some ViT models [10, 23, 57] have introduced inductive bias into the transformers through the use of convolutional filters. Hierarchical transformers [34, 32, 14, 55] have introduced inductive bias by reducing the number of tokens through patch merging and thus operating at different scales. However, these architectures do not overcome the limitation of ViTs, which require at least a medium-sized dataset for pre-training [39]. **Self-supervised Learning.** Self-Supervised Learning (SSL) aims to learn visual representations through pre-text tasks. Contrastive methods, such as SimCLR [7] and MoCo [21], minimize the distance between differently augmented views of the same image (positive pairs) while maximizing it for dissimilar images (negative pairs). On the other hand, non-contrastive methods like BYOL [17] and DINO [4] only impose minimization between the positive pairs. In contrast, reconstruction based methods [51, 59, 16, 16] have shown to be effective self-supervised learners for various downstream computer vision tasks. 
In these methods, an encoder operates on a small portion of an image to learn a latent representation, and a decoder decodes the latent representation to reconstruct the original image in the pixel space. These SSL methods are commonly used for large-scale pre-training of ViTs to enhance their effectiveness in various downstream tasks. **ViTs for small datasets.** Liu et al. [33] proposed an auxiliary self-supervised task that improves the robustness of ViT training on smaller datasets. The task involves predicting relative distances among tokens and is jointly trained with primary tasks. On the other hand, Li et al. [29] conducted distillation in the hidden layers of ViT from a lightweight CNN-trained model. To address the lack of locality inductive bias, Lee et al. [28] introduced a ViT architecture with shifted patch tokenization and locality self-attention. Gani et al. [15] proposed an SSL+Fine-tuning methodology where the SSL is similar to the pretext task in DINO [4]. These methods eliminate the need for large-scale pre-training and allow ViTs to learn meaningful representations with limited data. In contrast to these methods, we propose SSAT akin to [33], but with an approach that combines the functionality of self-attention and MLPs through image reconstruction. ## 3 Preliminaries ViT utilizes a non-overlapping grid of image patches to process a given image \(X\), where each patch is linearly projected into a set of input tokens. ViT consists of a stack of multi-head attention and linear layers as in [13]. The transformer attention layers model the pairwise relationship between the input tokens [53]. For generalizability, We denote the transformer encoder as \(f\). For brevity, we have omitted the parameters of the encoder. In practice, \(f\) operates on an augmented version of the input image \(X\) to output a discriminative representation \(f(A(X))\) where \(A\) is the set of image augmentation. This representation is subsequently classified into class labels using a classifier \(h\). A class-wise cross-entropy loss \(L_{cls}\) is used to train the transformer encoder. ## 4 Self-supervised Auxiliary Task (SSAT) Our objective is to improve the ViT training on the dataset with limited samples. Consequently, we propose to jointly train the primary classification task of ViT alongside a self-supervised auxiliary task (SSAT). The joint optimization of the SSAT and classification task allows the ViT to capture inductive biases from the data without requiring any additional labels. An overview of our framework is depicted in Figure 2. In our joint optimization framework for SSAT, we have utilized the widely adopted Masked Autoencoder (MAE) approach [19] for reconstructing the missing pixels. Nonetheless, it is worth noting that any SSL method can be integrated into our framework, given its generic nature. Our decision to use MAE was based on its superior performance, as evidenced by our experimental analysis (Table 4). To the existing ViT frameworks, where the Transformer encoder \(f\) and Classifier \(h\) process the full image patches \(A(X)\) to compute the classification loss \(L_{cls}\). we introduce an augmentation set \(\tilde{A}=M(A(X))\), where operation \(M\) randomly masks out patches in the input image \(X\). The transformer encoder \(f\) also operates on the unmasked tokens, generating latent representation \(f(\tilde{A}(x))\) for these tokens. 
In parallel to the classifier \(h\), SSAT employs a shallow decoder \(g\) to reconstruct the unseen image pixels from the latent representation of the visible tokens \(f(\tilde{A}(X))\). Following [19], the decoder takes as input the latent representation of the visible tokens \(f(\tilde{A}(X))\) and a learnable mask token. Each token representation at the decoder's output is linearly projected to a vector of pixel values representing a patch. The output \(g(f(\tilde{A}(X)))\) is reshaped to form the reconstructed image, after which the normalized mean squared error (MSE) loss \(L_{SSAT}\) is computed between the original and reconstructed images. In practice, the MSE is computed only for the masked patches, as in [20]. Thus, the entire framework performs a primary task, i.e., _classification_, and a self-supervised auxiliary task, i.e., _reconstruction_. The framework is jointly optimized using a convex combination of the losses from the primary task and SSAT, so the total loss is \[L=\lambda\,L_{cls}+(1-\lambda)\,L_{SSAT} \tag{1}\] where \(\lambda\) is the loss scaling factor. During inference, the decoder is discarded and the encoder \(f\) processes all input patches to generate the classification output only. Our framework supports the training of any ViT model and SSAT variant. 
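To make the joint objective in Eq. (1) concrete, a minimal PyTorch-style sketch of one SSAT training step is given below. The callables `encoder`, `classifier`, `decoder`, and `random_mask` stand in for \(f\), \(h\), \(g\), and the masking operation \(M\); their interfaces and the `patchify` helper are illustrative assumptions, not the exact API of our implementation.

```python
import torch
import torch.nn.functional as F

def patchify(imgs, patch_size=16):
    # (B, C, H, W) -> (B, num_patches, patch_size * patch_size * C) pixel targets
    B, C, H, W = imgs.shape
    p = patch_size
    x = imgs.reshape(B, C, H // p, p, W // p, p)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(B, (H // p) * (W // p), p * p * C)

def ssat_step(encoder, classifier, decoder, random_mask, x_aug, labels, lam=0.1):
    """One joint SSAT step: classification on the full view, masked reconstruction
    on the masked view, combined as in Eq. (1)."""
    # Primary task: classify the fully visible (augmented) image A(X).
    loss_cls = F.cross_entropy(classifier(encoder(x_aug)), labels)

    # Auxiliary task: reconstruct the pixels of masked patches from visible tokens.
    visible_tokens, mask = random_mask(x_aug, mask_ratio=0.75)  # mask: (B, N) float, 1 = masked
    pred = decoder(encoder(visible_tokens), mask)               # (B, N, patch_dim)
    target = patchify(x_aug)                                    # per-patch target normalization omitted for brevity
    per_patch_mse = ((pred - target) ** 2).mean(dim=-1)         # (B, N)
    loss_ssat = (per_patch_mse * mask).sum() / mask.sum()       # masked patches only

    return lam * loss_cls + (1.0 - lam) * loss_ssat
```

The defaults mirror the settings used in our experiments (\(\lambda=0.1\), 75% masking ratio); at inference only the classification branch is executed.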
Figure 2: An overview of **ViT training with SSAT**. The input \(X\) to the ViT undergoes data augmentations, \(A(X)\) and \(\tilde{A}(X)=M(A(X))\), using a mask operation **M**. These augmented inputs are then fed to a Transformer Encoder \(f\), resulting in two latent representations, \(f(A(X))\) and \(f(\tilde{A}(X))\), corresponding to the full and the masked image, respectively. The latent representation of the full image is utilized for the image classification task, while the masked image's representation is used for the image reconstruction task. ViT training involves joint optimization of the losses from both tasks. ## 5 Experimental Analysis In this section, we present the advantage of using SSAT while training any vision transformer. Our experiments are based on image and video classification tasks. We use 12 different datasets: (i) 4 small-sized datasets: CIFAR-10 [25], CIFAR-100 [25], Oxford Flowers102 [37] (Flowers), and SVHN [36]; (ii) 1 medium-sized dataset: ImageNet-1K [12] (IN-1K); (iii) 2 medical datasets: Chaoyang [63] and PMNIST [60]; (iv) 3 datasets of DomainNet [41]: ClipArt, Infograph, and Sketch; and (v) 2 video datasets for deepfake detection: DFDC [46] and FaceForensics++ [44]. Our experiments for image reconstruction in the context of SSAT generally follow the procedure outlined in [20], unless otherwise stated. In particular, we employ the decoder design from [20] for ViT, and the decoder designs from ConvMAE [16] and SimMIM [59] for hierarchical encoders such as CVT and Swin, respectively. To optimize the decoder hyper-parameters, we conduct our experiments with the ViT encoder. For the augmentation \(\tilde{A}\), we use random masking with a 75% masking ratio. Our decoder has a depth of 2 (i.e., 2 transformer layers) and an embedding dimension of 128. We provide ablations on the choice of these hyper-parameters in Appendix C. It is worth noting that our decoder is shallower than that in MAE [20]. The loss scaling factor \(\lambda\) is set to \(0.1\) for all the datasets. Our ViT encoders (\(f\)) are trained using the training recipe of DeiT [52], unless otherwise specified. The configuration of ViT-T, ViT-S, and ViT-B is identical to that described in [52]. We borrow the network architectures for CVT-13 and Swin from the official code of [57] and [34], respectively. Training is conducted for 100 epochs, unless otherwise specified, using 8 A5000 24GB GPUs for IN-1K and one A5000 24GB GPU for all other datasets. Additional training details for each dataset can be found in Appendix B. ### Main Results **SSAT on small-sized datasets**: In Table 1, we present the classification accuracy on the small-sized datasets with different variants of vision transformers: ViT-T, ViT-S, CVT-13, and Swin-T. The table shows the impact of using SSAT while training the transformers to learn the class labels. All models have been trained for 100 epochs from scratch. Although the models with SSAT have more training parameters, they perform identical operations during inference. SSAT improves the classification accuracy on all the datasets for all the transformer encoders. It is worth noting that ViT-T, with 5.4M parameters, outperforms ViT-S, with 21.4M parameters, when trained with SSAT. The highest classification accuracy is achieved with CVT-13 (20M parameters) due to the introduction of convolutions that infuse inductive bias into the transformers. Although convolutions are generally more effective than transformers on small datasets, our experiments demonstrate that the most effective convolutional network for these datasets (ResNet-50 [22]) underperforms most of the transformers trained with SSAT, except for ViT-T on the CIFAR-10 and CIFAR-100 datasets. **SSAT on a medium-sized dataset**: In Table 2, we present the impact of SSAT on ViTs trained on a medium-sized dataset, IN-1K [12]. Our results demonstrate that SSAT consistently enhances the classification accuracy of ViTs, even as the number of training samples increases. Notably, this improvement is more pronounced for smaller models (5.4M parameters) than for larger ones. Specifically, we observe a relative performance improvement of 11.8% for ViT-T with SSAT, compared to only 2.9% for ViT-S+SSAT. These findings suggest that SSAT can be effectively utilized to train lighter transformers that can be deployed on edge devices. **Does SSAT promote overfitting?** In Table 2, we also analyse the robustness of ViTs to natural corruptions. Given that we recommend the use of SSAT to enhance representation learning in transformer training, it is reasonable to ask whether this approach can lead to overfitting on small training samples. To address this concern, we evaluate the performance of our trained models on perturbed versions of the data, specifically CIFAR-100-p and IN-1K-p, which are obtained by applying random perspective transformations to the images following [48]. Our results demonstrate that ViTs trained with SSAT exhibit greater robustness to these natural corruptions than the baseline ViTs. We observe notable improvements in performance for tiny ViTs, as evidenced by the results for ViT-T+SSAT in Table 2, as well as for smaller datasets such as CIFAR-100-p. **Comparison of SSAT with SSL+FT**: In Table 3, we present the superiority of jointly training the SSL loss with the classification loss over the two-step sequential approach, where the model is first trained with SSL and then fine-tuned (FT) for classification. Our empirical analysis is conducted on ViT-T, where we compare the performance of ViT trained from scratch and ViT + SSAT, both trained for 100 epochs. 
To establish baselines for our SSL+FT model, we conducted experiments using four different training protocols: (1) 50 epochs of SSL training followed by 50 epochs of fine-tuning, (2) 50 epochs of SSL training followed by 100 epochs of fine-tuning, (2) 100 epochs of SSL training followed by 50 epochs of fine-tuning, and (4) 100 epochs of SSL training followed by 100 epochs of fine-tuning. Additionally, we quantify the carbon emission of the models trained using different methods with the help of a tool provided by [26]. Note that the GFLOPs, training time, and Kg CO\({}_{2}\) eq. are specified for the model trained on IN-1K for better generalizability. Our empirical results show that all models incorporating SSL outperform those trained from scratch, highlighting the importance of self-supervised learning when training transformers on small datasets. Moreover, even when requiring an additional 4 hours of training time and resulting in approximately 0.6 Kg CO\({}_{2}\) equivalent of additional carbon emissions, our SSAT models demonstrate superior performance compared to the SSL+FT model (50 epoch \begin{table} \begin{tabular}{c|c|c c c c} \hline \hline **Method** & **\# params. (M)** & **CIFAR-10** & **CIFAR-100** & **Flowers102** & **SVHN** \\ \hline ViT-T [52] & 5.4 & 79.47 & 55.11 & 45.41 & 92.04 \\ **+SSAT** & 5.8 & **91.65** (+12.18) & **69.64** (+14.53) & **57.2** (+11.79) & **97.52** (+5.48) \\ \hline ViT-S [52] & 21.4 & 79.93 & 54.08 & 56.17 & 94.45 \\ **+SSAT** & 21.8 & **94.05** (+14.12) & **73.37** (+19.29) & **61.15** (+4.98) & **97.87** (+3.42) \\ \hline CVT-13 [57] & 20 & 89.02 & 73.50 & 54.29 & 91.47 \\ **+SSAT** & 20.3 & **95.93** (+6.91) & **75.16** (+1.66) & **68.82** (+14.53) & **97** (+5.53) \\ \hline Swin-T [34] & 29 & 59.47 & 53.28 & 34.51 & 71.60 \\ **+SSAT** & 29.3 & **83.12** (+23.65) & **60.68** (+7.4) & **54.72** (+20.21) & **85.83** (+14.23) \\ \hline ResNet-50 [22] & 25.6 & 91.78 & 72.80 & 46.92 & 96.45 \\ \hline \hline \end{tabular} \end{table} Table 1: Top-1 classification accuracy (%) of different ViT variants with and without SSAT on CIFAR-10, CIFAR-100, Flowers102, and SVHN datasets. All models were trained for 100 epochs. SSL + 100 epoch FT). Although accuracy improves when SSL+FT models are trained on CIFAR-10 and CIFAR-100 for 104 GPU hours, our SSAT approach remains superior, requiring 26 GPU hours less training time and burning approximately 2.8 Kg CO\({}_{2}\) equivalent. However, the SSL+FT model outperforms SSAT when a large amount of training data is available. **Appropriate SSL for joint training**: Table 4 presents a comparison of the performance of the SSAT approach, implemented with different SSL strategies, namely, contrastive (SimCLR [7]), non-contrastive (DINO [4]), and reconstruction based (MAE [20]), on the ViT model. Our analysis reveals that the use of SimCLR results in a decrease in the ViT's performance, which can be attributed to the conflicting losses that arise while optimizing the cross-entropy loss to learn class labels and the contrastive loss. However, DINO and MAE both enhance the ViT's performance when jointly trained with cross-entropy. Notably, the improvement observed with MAE is more significant than that with DINO. The superior performance of MAE can be attributed to the centering and sharpening technique employed in DINO, which impedes the learning of class labels while only facilitating the SSL. On the other hand, as mentioned in [40], MAE encourages MLPs in ViTs to be more representative. 
While the cross-entropy loss primarily contributes more to the self-attention blocks. Thus, SSAT implemented with reconstruction based SSL harmonizes the impact of both tasks, thus improving the ViT's learning capabilities. **Superiority of SSAT over Large-scale pre-training**: In situations where training samples are limited and the data distribution differs from that of natural images, large-scale pretraining can be challenging. The main obstacle is the lack of data that accurately represents the downstream data distribution. Consequently, we conducted experiments using ViTs on medical and domain adaptation datasets (Tables 5 and 6) where data is scarce. In Table 5, we demonstrate how SSAT significantly enhances the classification performance of ViT-T on the Chaoyang and PMNIST datasets. The resulting model not only surpasses a comparable ViT model that was pre-trained on ImageNet [12], but also outperforms its larger ViT-S model when trained without SSAT. We observed similar trends of improvement on three datasets from DomainNet [41] in Table 6. It is worth mentioning that our CVT model, when trained using SSAT, outperforms \(\mathcal{L}_{drloc}\)[33], which is another state-of-the-art self-supervised loss designed to enhance transformer performance on small datasets. **Loss scaling factor**: In Figure 3 we perform an empirical analysis to determine the optimal value for the loss scaling factor \(\lambda\). Our analysis focused on CIFAR datasets show that the choice of \(\lambda=0.1\) is an optimal choice when SSAT positively impacts the primary classification task. **Extended training**: In this experiment, we extend the training schedules of both the scratch and SSAT model as illustrated in Figure 4. Our findings indicate that the performance enhancement of our SSAT model, relative to the ViT \begin{table} \begin{tabular}{c|c c c} \hline \hline **Method** & **ClipArt** & **Infograph** & **Sketch** \\ \hline ViT-T & 29.66 & 11.77 & 18.95 \\ **+SSAT** & 47.95 & 16.37 & 46.22 \\ \hline CVT-13 & 60.34 & 19.39 & 56.98 \\ **+\(\mathcal{L}_{drloc}\)[33]** & 60.64 & 20.05 & 57.56 \\ **+SSAT** & **60.66** & **21.27** & **57.71** \\ \hline \hline \end{tabular} \end{table} Table 6: Top-1 accuracy on DomainNet datasets. All models are trained for 100 epochs \begin{table} \begin{tabular}{c|c c c} \hline \hline **Method** & **IN-1K** & **CIFAR-100-\(p\)** & **IN-1K-\(p\)** \\ \hline ViT-T & 65.0 & 25.1 & 48.3 \\ \hline **+SSAT** & **72.7** & **37.6** & **59.6** \\ \hline ViT-S & 74.2 & 22.5 & 62.7 \\ **+SSAT** & **76.4** & **43.9** & **64.5** \\ \hline \hline \end{tabular} \end{table} Table 2: Top-1 classification accuracy (%) on ImageNet-1K (IN-1K), perturbed CIFAR-100 (CIFAR-100-\(p\)), and perturbed ImageNet-1K (IN1K-\(p\)) \begin{table} \begin{tabular}{c|c|c} \hline \hline **SSAT (SSL)** & **CIFAR-10** & **CIFAR-100** \\ \hline SimCLR [7] & 55.21 & 36.49 \\ DINO [4] & 80.07 & 60.6 \\ MAE [20] & **91.65** & **69.64** \\ \hline \hline \end{tabular} \end{table} Table 4: Top-1 accuracy of existing SSL strategies used as SSAT. MAE as the SSAT achieves the best result on both CIFAR-10 and CIFAR-100. 
\begin{table} \begin{tabular}{c|c|c|c|c|c|c} \hline \hline **Method** & **GFLOPs** & **CIFAR-10** & **CIFAR-100** & **IN-1K** & \begin{tabular}{c} **Train** \\ **time** & **Kg CO\({}_{2}\)** \\ \end{tabular} \\ \hline Scratch & 1.26 & 79.47 & 55.11 & 65.0 & 60 & 5.96 \\ (1) SSL+FT & 0.43+1.26 & 85.33 & 60.43 & 70.09 & 55 & 5.46 \\ (2) SSL+FT & 0.43+1.26 & 86.48 & 63.28 & 71.1 & 82 & 8.15 \\ (3) SSL+FT & 0.43+1.26 & 85.3 & 60.3 & 70.5 & 74 & 7.35 \\ \hline (4) SSL+FT & 0.43+1.26 & 88.72 & 67.53 & **74.07** & 104 & 10.33 \\ Ours & 1.67 & **91.65** & **69.64** & 72.69 & 78 & 7.55 \\ \hline \hline \end{tabular} \end{table} Table 3: Top-1 accuracy and efficiency of ViT-T trained from scratch, with SSL+FT, and with SSAT. We provide the GFLOPs, training time (GPU hours), and CO\({}_{2}\) emissions (kg eq) for IN-1K. \begin{table} \begin{tabular}{c c|c c} \hline \hline **Method** & **Cinaoyang** & **PMNIST** \\ \hline \multirow{4}{*}{ViT-T} & Scratch & 77.37 & 90.22 \\ & IN-1K pretrained + FT & 78.78 & 91.99 \\ & Scratch + **SSAT** & **82.52** & **93.11** \\ \hline \multirow{4}{*}{ViT-T} & Scratch & 80.04 & 91.19 \\ & IN-1K pretrained + FT & 80.18 & 92.63 \\ \cline{1-1} & Scratch + **SSAT** & **81.25** & **93.27** \\ \hline \hline \end{tabular} \end{table} Table 5: Top-1 accuracy on medical image datasets. All models are trained for 100 epochs. baseline, remains consistent throughout the entire training period. These results suggest that the improvement in the SSAT model's performance is not due to a faster convergence rate, but rather to superior optimization capabilities. **Training for different subsets of IN-1K**: Figure 5 presents our analysis of the performance of the ViT baseline and SSAT model for varying training sample sizes, specifically on subsets of IN-1K. Our results demonstrate that the performance enhancement of the SSAT model, relative to the baseline model, is consistent across all subsets (i.e., different sizes of the training data). These findings substantiate that models with low training parameters, such as ViT-T, can benefit from SSAT at all scales of training data. ### Diagnosis of features learned by SSAT In this section, we differentiate the properties of ViTs learned from scratch, SSL+FT, and our SSAT method. We investigate the learned ViT properties by analyzing their attention weights, token representation, feature transformation, and loss landscape. We answer the following key questions: **How are the attention weights distributed?** The objective of this experiment is to examine the mean attention weights received from other tokens in a sample in the data distribution. As outlined in [53], the sum of all values in a column of an \(n\times n\) self-attention matrix, where \(n\) denotes the number of tokens, represents the aggregated attention associated with a token. Figure 6 displays the attention weight distribution across the \(n\) tokens for various ViT blocks on both Flower (top row) and CIFAR-100 (bottom row) datasets. The attention weights are uniformly distributed in the first transformer block of the scratch model on both datasets, implying an equal focus on all image regions. However, this distribution changes slightly in the deeper layers. Intriguingly, SSL+FT and SSAT models display sharply peaked attention distributions in the initial and middle transformer layers, but the distributions do not necessarily align with each other. 
Specifically, in the first transformer block, the attention weight distributions of SSL+FT and SSAT models complement each other, indicating that lower-level features learned by these models are complementary. Moreover, the SSL+FT models exhibit sharp peaks in the final layers, whereas the peaks in SSAT models have a lower magnitude, possibly because the latter model has a better inductive bias. Therefore, although both models are trained on the same set of losses, they use different mechanisms to learn attention weights that differ in the initial layers, and the attention weights learned by the SSAT model are smoother in the final layers, indicating a better inductive bias of the model. **What is the quality of the learned tokens?** In this study, we investigated the average distance between tokens within a sample across different transformer blocks. Our analysis involves plotting the average Euclidean distance between tokens in images from the Flower and CIFAR-100 datasets at the output of the transformer layers, as shown in Figure 7. Our results indicate that the scratch model yields a lower inter-token distance than the other models, implying homogeneous token representation. We also observe that SSL+FT models yield higher inter-token distances than SSAT models at the middle transformer layers, but this distance diminishes as we go deeper into the ViTs. Consequently, the SSL+FT models suffer from homogeneous token representation, which adversely affects the ViT training, leading to sub-optimal classification accuracy. In contrast, the inter-token distance of SSAT models increases with ViT depth, indicating that the token representations are discriminative and are semantically rich. **How are representations transformed?** The aim of our experiment is to showcase the variation in feature map evolution between ViTs that are trained using different mechanisms. We conducted feature variance measurements across the ViT layers on Flower and CIFAR-100 datasets, and the results are presented in Figure 8. Our analysis confirms the findings of previous studies that the feature variance across ViTs trained from scratch remains constant. However, we observed that the SSL+FT models exhibit an increase in feature variance until a certain layer, after which the rate of an increase either decreases (in Flower dataset) or begins to fall (in CIFAR-100 dataset). Conversely, the feature variance in our SSAT models accumulates with each ViT layer and tends to increase as the depth increases. Consequently, as we go deeper in the SSAT models, the feature map uncertainty decreases, which facilitates optimization through ensembling and stabilizing the transformed feature maps [38]. **Why is SSAT better than SSL+FT?** In this study, we investigated the loss landscapes of ViT models trained using different training mechanisms. We follow [39] to display the Eigenvalue Spectral Density of Hessian for the different ViT models trained (see Figure 9). Our results indicate that the scratch ViT model exhibits a wide range of negative Hessian eigenvalues, implying non-convex loss landscapes. Interestingly, the number of negative Hessian eigenvalues is slightly higher in the SSL+FT ViT model than in the scratch model (9622 vs 9667). However, the lower magnitude of some of the negative Hessian eigenvalues in the SSL+FT model makes their qualitative visualization difficult. In contrast, SSAT reduces the number of negative Hessian eigenvalues by 12% in comparison to the SSL+FT model. 
This finding suggests that the SSL approach convexifies losses and suppresses negative eigenvalues in the small data regime. Additionally, the SSAT ViT model reduces the average magnitude of negative Hessian eigenvalues by 70% compared to the SSL+FT models. Therefore, SSAT effectively reduces the magnitude of large Hessian eigenvalues and enhances the ViTs' ability to learn better representations. ### Comparison with the state-of-the-art Table 7 presents a comparison of SSAT with state-of-the-art (SOTA) methods. To ensure a fair evaluation, we implemented SSAT with the ViT encoder used in the respective methods. We find that MAE as SSAT outperforms Drloc [33] which takes predicting relative distance between patches as SSAT. This shows that the choice of SSAT plays a crucial role in the effective training of ViTs. Moreover, we find that SSAT outperforms SL-ViT [28] and [15] when trained for an equal number of epochs. This indicates that SSAT, without any architectural modifications, can surpass SOTA methods through its joint training strategy. Additionally, we trained a ViT with SSAT and feature-level distillation from a light-weight CNN as described in [29]. The improvement over the baseline [29], which involves training ViT with feature-level distillation only, demonstrates the complementary nature of the representations learned by ViT when trained with SSAT. In Figure 10, we present the Grad-CAM visualizations [47]. SSL+FT (3rd col) focuses on few specific pixelwise regions, while our method (4th col) focuses on areas corresponding to the entire primary object. We also provide the attention visualization of ViTs trained using different strategies in Appendix E (see Figure 12). \begin{table} \begin{tabular}{c|c|c|c|c} \hline **Method** & **\# enc.params.** & **epochs** & **CIFAR-10** & **CIFAR-100** \\ \hline CVT-13+\(\mathcal{L}_{Disc}\)[33] & 20M & 100 & 90.30 & 74.51 \\ CVT-13+SSAT & & & **95.93** & **75.16** \\ \hline ViT (search) & & & 93.58 & 73.81 \\ SL-ViT [28] & & & 94.53 & 76.92 \\ ViT (SSL+FT) [15] & & & 94.2 & 76.08 \\ ViT + SSL & & & **95.1** & **77.8** \\ \hline DeiT-Th\(\mathcal{L}_{Disc}\)[29] & & & & 78.15 \\ DeiT-Th\(\mathcal{L}_{Disc}\) + SSL & & & & **79.46** \\ \hline \end{tabular} \end{table} Table 7: Comparison of our SSAT to existing state-of-the-art approaches on small datasets. \({}^{\dagger}\) indicates that [15] is replicated with 300 epochs. Results of [29] is not reported on CIFAR-10. Figure 8: **Feature Map Variance** of ViTs trained from scratch, using SSL+FT and using SSAT, for two different datasets: Oxford Flowers (on the left) and CIFAR-100 (on the right). Figure 6: The **distribution of attention weights** across the \(n\) tokens for different ViT-T blocks on two datasets: Oxford Flower (top row) and CIFAR-100 (bottom row). The first, second, and third columns correspond to the attention distributions of the first, sixth, and twelfth ViT-T blocks, respectively. Figure 7: **Average Euclidean Inter-token Distance** of ViTs trained from scratch, using SSL+FT and using SSAT, for two different datasets: Oxford Flowers (on the left) and CIFAR-100 (on the right). ### Performance of SSAT in video domain We have also assessed the efficacy of the SSAT within the video domain for the task of deepfake detection. In this experiment, the model's generalization capabilities for deepfake detection is validated, as presented in Table 8. For video encoding, we employ ViT as in [51]. 
Our VideoMAE + SSAT model is a direct extension of the MAE+SSAT model designed for image data; the only modification lies in the choice of encoder. The primary task involves binary classification to distinguish between real and manipulated videos. Notably, we experimented with two masking ratios, 0.75 and 0.95, during the training of VideoMAE + SSAT. To assess model generalizability, we conducted cross-manipulation training based on the FaceForensics++ dataset [11]. We trained the model using videos generated by all possible combinations of three manipulation techniques (Deepfakes, Face2Face, FaceSwap, and NeuralTextures) plus original videos, and then evaluated its performance on the videos generated by the remaining technique. This approach simulates real-world scenarios where multiple manipulation techniques might be encountered post-training. All models, except the scratch model, are pre-trained on the DFDC dataset [46] before being evaluated on the FaceForensics++ dataset. To enable a fair comparison with VideoMAE + SSAT, the baseline VideoMAE SSL models are first pretrained and fine-tuned on DFDC, and are subsequently employed for deepfake classification task. The evaluation involved both (1) cross-dataset fine-tuning on FaceForensics++ and (2) zero-shot transfer assessment where pre-trained models are evaluated on FaceForensics++ without additional training. Our findings, as detailed in Table 8, reveal that the VideoMAE+SSAT models demonstrate a superior generalized capability than the other baselines to distinguish between real and manipulated videos. Note that the scratch model outperforms all models on detecting videos generated using NeuralTextures without any pretraining but it is not suitable for zeros-shot transfer. Interestingly, the VideoMAE models exhibit complementary behavior when subjected to different masking ratios, which warrants a future investigation. More details including implementation and training details of these experiments can be found in Appendix D. ## 6 Conclusion The main focus of this paper was on the use of self-supervised learning (SSL) to effectively train ViTs on domains with limited data. We demonstrate that by jointly optimizing the primary task of a ViT encoder with SSL as an auxiliary task, we can achieve discriminative representations for the primary task. This simple and easy-to-implement method called SSAT outperforms the traditional approach of sequentially training with SSL followed by fine-tuning on the same data. Our joint training framework learns features that are different from those learned by the dissociated framework, even when using the same losses. These results highlight the potential of SSAT as an effective training strategy with a lower carbon footprint. We anticipate that SSAT will become a standard norm for training vision transformers on small datasets. ## Acknowledgments We thank the members of the Charlotte Machine Learning Lab at UNC Charlotte for valuable discussion. This work is supported by the National Science Foundation (IIS-2245652). 
Figure 10: **GradCAM visualizations of our SSAT model and the representative baselines.** \begin{table} \begin{tabular}{c|c c c c|c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{4}{c}{**cross-training evaluation**} & \multicolumn{4}{c}{**zero-shot transfer**} \\ & **Deepfakes** & **Face2Face** & **FaceSwap** & **NeuralTextures** & **Deepfakes** & **FaceFace** & **FaceSwap** & **NeuralTextures** \\ \hline Scratch & 84.48 & 79.21 & 56.63 & **82.08** & - & - & - \\ \hline Cross-efficient-v4 [9] & 82.67 & 69.89 & 79.93 & 64.87 & - & - & - \\ DPDC winner [46] & 96.43 & 73.93 & 86.07 & 58.57 & 85.70 & 80.36 & 54.64 \\ \hline Videowalk SSL (0.95) & 82.07 & 64.16 & 58.42 & 63.44 & 86.28 & 49.82 & 69.18 & 51.97 \\ VideoMAE SSL (0.75) & 78.34 & 65.59 & 57.35 & 61.65 & 82.67 & 48.39 & 65.23 & 51.97 \\ \hline **VideoMAE (0.95) + SSAT** & 92.42 & 79.21 & 89.61 & 81.36 & **92.42** & **61.65** & **92.83** & **62.37** \\ **VideoMAE (0.75) + SSAT** & **96.75** & **80.65** & **91.40** & **72.76** & 87.73 & 60.57 & 88.17 & **61.65** \\ \hline \hline \end{tabular} \end{table} Table 8: Cross training evaluation and zero-shot transfer results of DeepFake detection on FaceForensics++ with SSAT. [9] is trained on both DFDC and FaceForensics++, thus zero-shot transfer results have not been provided. Figure 9: **Hessian max eigenvalues spectra** of ViTs trained from scratch (on the left), SSL + FT (in the middle), and SSAT (on the right).
2309.11492
Visible in the laboratory and invisible in cosmology: decaying sterile neutrinos
The expansion history and thermal physical process that happened in the early Universe before big bang nucleosynthesis (BBN) remains relatively unconstrained by observations. Low reheating temperature universes with normalcy temperatures of $T_\mathrm{RH}\sim 2\,\mathrm{MeV}$ remain consistent with primordial nucleosynthesis, and accommodate several new physics scenarios that would normally be constrained by high-temperature reheating models, including massive sterile neutrinos. We explore such scenarios' production of keV scale sterile neutrinos and their resulting constraints from cosmological observations. The parameter space for massive sterile neutrinos is much less constrained than in high-$T_\mathrm{RH}$ thermal histories, though several cosmological constraints remain. Such parameter space is the target of several current and upcoming laboratory experiments such as TRISTAN (KATRIN), HUNTER, MAGNETO-$\nu$, and PTOLEMY. Cosmological constraints remain stringent for stable keV-scale sterile neutrinos. However, we show that sterile neutrinos with a dark decay to radiation through a $Z^\prime$ or a new scalar are largely unconstrained by cosmology. In addition, this mechanism of sterile neutrinos with large mixing may provide a solution to the Hubble tension. We find that keV-scale sterile neutrinos are therefore one of the best probes of the untested pre-BBN era in the early Universe and could be seen in upcoming laboratory experiments.
Kevork N. Abazajian, Helena García Escudero
2023-09-20T17:47:15Z
http://arxiv.org/abs/2309.11492v2
# Visible in the laboratory and invisible in cosmology: decaying sterile neutrinos ###### Abstract The expansion history and thermal physical process that happened in the early Universe before big bang nucleosynthesis (BBN) remains relatively unconstrained by observations. Low reheating temperature universes with normalcy temperatures of \(T_{\rm RH}\sim 2\,{\rm MeV}\) remain consistent with primordial nucleosynthesis, and accommodate several new physics scenarios that would normally be constrained by high-temperature reheating models, including massive sterile neutrinos. We explore such scenarios' production of keV scale sterile neutrinos and their resulting constraints from cosmological observations. The parameter space for massive sterile neutrinos is much less constrained than in high-\(T_{\rm RH}\) thermal histories, though several cosmological constraints remain. Such parameter space is the target of several current and upcoming laboratory experiments such as TRISTAN (KATRIN), HUNTER, MAGNETO-\(\nu\), and PTOLEMY. Cosmological constraints remain stringent for stable keV-scale sterile neutrinos. However, we show that sterile neutrinos with a dark decay to radiation through a \(Z^{\prime}\) or a new scalar are largely unconstrained by cosmology. In addition, this mechanism of sterile neutrinos with large mixing may provide a solution to the Hubble tension. We find that keV-scale sterile neutrinos are therefore one of the best probes of the untested pre-BBN era in the early Universe and could be seen in upcoming laboratory experiments. + Footnote †: preprint: UCI-HEP-TR-2023-09 ## I Introduction Neutrino oscillations provide strong evidence for non-zero neutrino masses and are one of the most clear pieces of evidence of physics beyond the Standard Model (SM) [1]. By measuring the fluxes and energy spectra of neutrinos coming from various sources, such as the Sun, nuclear reactors, and cosmic rays interacting with the Earth's atmosphere, numerous experiments have provided compelling evidence for neutrino oscillations and, by extension, non-zero neutrino masses [2]. In near unanimity, models for neutrino mass generation require the presence of new sterile neutrino states through either Majorana or Dirac neutrino mass mechanisms [1]. Specifically, the addition of two sterile neutrinos can explain both solar and atmospheric neutrino oscillations, while a third massive sterile neutrino has considerable freedom as to its mass and mixing properties [3; 4; 5], and can be a natural dark matter candidate [6]. In this scenario, the sterile neutrino would be a neutral particle that does not participate in the Weak interactions, but could be produced by neutrino oscillations or other mechanisms, and could survive from the early Universe to the present day as a dark matter candidate [7]. The mass scale of the sterile neutrino would need to be in the range of a few to tens of keV in order to be consistent with the observed properties of dark matter [8]. Sterile neutrinos can also affect neutrino oscillation experiments by introducing additional oscillation channels and modifying the observed oscillation patterns. Although these particles have not been definitively detected, on-going experiments have reported anomalies that could potentially be explained by these particles [9; 10; 11; 12; 13; 14]. Further experimental investigations are ongoing to explore the possibility of sterile neutrinos and their role in neutrino physics [15]. 
The predominant model for the early Universe postulates that it underwent inflation, which diluted any prior constituents to cosmological irrelevance, assured cosmological flatness, and created the primordial density perturbations. When inflation comes to an end, its potential steepens, violating the slow roll, leading to the beginning of the reheating. During this phase, all particles that are kinematically permitted are directly created or generated through the thermal bath that the inflaton decay creates. The reheating temperature, \(T_{\rm RH}\), refers to the temperature of the Universe after the period of inflation when particle decays transfer their energy into SM thermalized particles, establishing the initial hot and dense state. Radiation domination evolution leading into the required era of big bang nucleosynthesis (BBN) places the lower limit of the cosmological reheating temperature to be as low as \(T_{\rm RH}=1.8\,{\rm MeV}\) and be consistent with primordial nucleosynthesis, and with new physics that adds relativistic energy density, can be consistent with all observations [16; 17]. There are a variety of histories prior to reheating (e.g., kination and scalar-tensor cosmologies) [18; 19; 20], and even the lowest \(T_{\rm RH}\) models can accommodate other key important components required of the early Universe, _viz._ baryogenesis and dark matter production [21]. Cosmology, therefore, permits \(T_{\rm RH}\) to be anywhere from above the Grand Unified Theory scale \(T_{\rm RH}\gtrsim 10^{15}\) GeV and the weak freezeout or BBN scale of \(2\,\mathrm{MeV}\), so that \(T_{\mathrm{RH}}\) remains a frontier for cosmology. In the case of a high scale, the long period of weak scattering at high \(T\) allows for the thermalization of sterile neutrinos that oscillate with the active neutrinos for much of the parameter space of interest for neutrino oscillations at the eV to sub-eV sterile neutrino mass scale [22]. For high \(T_{\mathrm{RH}}\), \(\sin^{2}(2\theta)\) is tightly constrained, typically \(\sin^{2}(2\theta)<10^{-7}\), to ensure that the production of sterile neutrinos in the early Universe is suppressed. However, for sufficiently low \(T_{\mathrm{RH}}\) universes, the scattering epoch is significantly reduced, so that even short-baseline-motivated eV-scale sterile neutrinos are allowed [23]. In such low reheating temperature (LRT) universes, keV-scale sterile neutrinos with larger mixing angles are also allowed for much of their parameter space [18; 19; 24]. Importantly, sterile neutrinos at the keV-scale in LRT models cannot be the dark matter (see discussion in Sec. III). However, sterile neutrinos in LRT universes still undergo radiative decay, which can be detected by astronomical X-ray telescopes, as in the case of high reheating temperature (HRT) universes [25; 26], and could even be responsible for the unidentified X-ray line at \(\sim\)3.5 keV [27; 28]. In Ref. [29], the authors consider several mechanisms to decouple astrophysical and cosmological constraints on large-mixing angle keV-scale sterile neutrinos, including cancellation of the \(\nu_{s}\) decay rate with new particles, new particles that mediate \(\beta\)-decay differently than \(\nu_{s}\) decay, CPT violation, lepton number suppression, as well as suppression of production of sterile neutrinos in LRT universes with an additional reduction of their contribution to the dark matter density with no specified mechanism (that paper's "cocktail" model). 
However, if they are not associated with any other new beyond the Standard Model (BSM) physics, sterile neutrinos in LRT universes remain significantly constrained from radiative decay and structure formation, which we show in Fig. 1. Sterile neutrinos are a BSM extension that may be embedded in a richer phenomenology of their dark sector. It is known that cosmological constraints on active neutrino masses can be alleviated if the active neutrinos annihilate [30] or decay [31]. Similarly, sterile neutrinos that are partially or fully thermalized in the early Universe may decay into lighter states through a new \(Z^{\prime}\), \(\nu_{s}\to\nu_{\varkappa^{\prime}}+\bar{\nu}_{\varkappa^{\prime}}+\nu_{ \varkappa^{\prime}}\), or through a new scalar \(\nu_{s}\to\nu_{\varkappa^{\prime}}+\phi\)[32], altering their cosmological impact and related constraints. In this paper, we study the decay of keV-scale sterile neutrinos to a dark sector, which can allow for their presence at larger mixing angles. Interestingly, the disparate redshifting of sterile neutrinos when they are relativistic vs. non-relativistic can augment low cosmological relativistic energy density, \(N_{\mathrm{eff}}\), resulting from LRT models to match that inferred from cosmic microwave background (CMB) and large-scale structure observations. (\(N_{\mathrm{eff}}\) is defined in Sec. IV.) In addition, this mechanism can provide a higher \(N_{\mathrm{eff}}\) to match that preferred by the Hubble tension (e.g., see [33; 34]). Further, we show how dark decay of the sterile neutrino, when combined with LRT, opens up all of the parameter space of interest for nuclear decay searches for keV-scale sterile neutrinos, including HUNTER [35; 36], TRISTAN [37], MAGNETO-\(\nu\)[38; 39], and PTOLEMY [40; 41]. If a keV-scale sterile neutrino is detected in the parameter space in which these experiments are sensitive, it would be a new probe of the pre-BBN epoch and indicate new physics in the early Universe. In Sec. III, we briefly review LRT models and the constraints on keV-scale sterile neutrinos that mix with the active neutrinos. We introduce two dark decay models and show how dark decay can enhance the cosmological consistency of LRT models in Sec. IV as well as provide a solution to the Hubble tension. We conclude in Sec. V. ## II Neutrinos in a low-reheating temperature universe The active neutrinos remain in contact with the plasma through weak interactions until temperatures of \(\sim\!2\) to 5 MeV--the so-called temperature of weak decoupling. For a standard lepton-number symmetric background, sterile neutrinos are produced through oscillation-based scattering production at the highest rates at \(T\approx 130\,\mathrm{MeV}(m_{s}/1\,\mathrm{keV})^{1/3}\)[6]. However, sterile neutrinos can still be produced in the epoch of weak decoupling when their mixing is sufficiently large. Therefore, keV-scale sterile neutrinos are still subject to cosmological constraints at the largest mixing angles [19; 24]. In LRT cases, the cosmological relativistic energy density, \(N_{\mathrm{eff}}\), in active neutrinos is reduced. Even though LRT uni Figure 1: Shown here is the parameter space for a possible low reheating temperature universe with \(T_{\mathrm{RH}}=5\,\mathrm{MeV}\), for the case of \(\nu_{s}\leftrightarrow\nu_{e}\) mixing. The regions of this figure are described in the beginning of Sect. III. 
verses of \(T_{\rm RH}=1.8\,\)MeV are consistent with BBN [17], they produce \(N_{\rm eff}=1.0\), which is highly discrepant with \(N_{\rm eff}\) determined from Planck's observations of the CMB and galaxy surveys' baryon acoustic oscillations, which find \(N_{\rm eff}=2.99^{+0.34}_{-0.33}\) (95% CL) [42]. For reheating temperatures close to \(T_{\rm RH}=5\) MeV, \(N_{\rm eff}\) can be within 10% of its canonical value, so that it remains consistent with current constraints from the CMB and large scale structure. LRT universes with \(T_{\rm RH}=1.8\,\)MeV are still possible when there is another source for relativistic energy density. In this and other scenarios we explore below, extra relativistic energy density contributing to \(N_{\rm eff}\) is from the decay of massive sterile neutrinos. For the case of LRT universes, the production of sterile neutrinos proceeds via partial thermalization due to low temperatures, which accommodates larger mixing angles than in HRT universes. Following the notation of Ref. [24], the \(\nu_{s}\) distribution function produced in the early Universe turns out to be \[f_{s}(E,T)\approx 3.2\,d_{\alpha}\left(\frac{T_{\rm RH}}{5\,\text{MeV}}\right) ^{3}\sin^{2}2\theta\left(\frac{E}{T}\right)f_{\alpha}(E,T)\,, \tag{1}\] where \(\sin^{2}2\theta\) is the mixing angle between active and sterile neutrino states, \(d_{\alpha}=1.13\) for \(\nu_{\alpha}=\nu_{e}\), and \(d_{\alpha}=0.79\) for \(\nu_{\alpha}=\nu_{\mu,\tau}\). The fraction of the sterile neutrino distribution produced is then \[f\equiv\frac{n_{\nu_{s}}}{n_{\nu_{\alpha}}}\approx 10\,d_{\alpha}\sin^{2}2\theta \left(\frac{T_{\rm RH}}{5\,\text{MeV}}\right)^{3}\,. \tag{2}\] This scattering-based non-resonant production mechanism for the sterile neutrinos is the minimal case we consider in this work, and represents the conservative level of sterile neutrinos in LRT universes. ## III Current constraints In this section, we discuss constraints, regions of interest in the mass-mixing plane, and potential signals for sterile neutrinos in the case of a LRT universe. At very high mixing angles, \(\sin^{2}2\theta\sim 0.1\), BBN is affected due to thermalization of the sterile neutrinos and their contribution to the relativistic energy density through BBN [24; 43]. These constraints are above (weaker) than the other constraints we consider. A fundamental cosmological constraint comes from the exclusion of \(\Omega_{s}>\Omega_{\rm DM}\)1 at large mixing angles that overproduces the sterile neutrinos to be above the dark matter density, and this is shown in the blue region in Fig. 1. The edge of this region represents where \(\Omega_{s}=\Omega_{\rm DM}\). However, this line is excluded by hot dark matter (HDM) constraints [44], up to masses at which the sterile neutrinos act as warm dark matter (WDM) (\(m_{s}\sim 0.1\,\)keV). Above the 0.1 keV mass scale, pure WDM constraints exclude the possibility of sterile neutrinos as the totality of dark matter: WDM constraints on Dodelson-Widrow nonresonantly-produced sterile neutrino dark matter are at the level of \(\gtrsim\)80 keV, from combined lensing plus galaxy counts constraints [45; 8]. Since LRT scattering-produced sterile neutrinos are kinematically more energetic (i.e., "hotter") [43], then constraints on LRT sterile neutrinos are more stringent than 80 keV, and well into the diffuse extragalactic background limit at 1 keV along the \(\Omega_{s}=\Omega_{\rm DM}\) line. This exclusion is independent of \(T_{\rm RH}\) within LRT models (\(T_{\rm RH}\lesssim 7\,\)MeV). 
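For orientation, the sterile abundance entering these constraint regions follows directly from Eq. (2) and can be evaluated in a few lines; the sketch below is illustrative only and applies to the non-resonant, lepton-number-symmetric production channel discussed above.

```python
def sterile_fraction(sin2_2theta, T_RH_MeV, flavor="e"):
    """Produced sterile-to-active number density ratio f = n_nus / n_nualpha, Eq. (2)."""
    d_alpha = 1.13 if flavor == "e" else 0.79   # nu_e versus nu_mu / nu_tau mixing
    return 10.0 * d_alpha * sin2_2theta * (T_RH_MeV / 5.0) ** 3

# nu_s <-> nu_e mixing with sin^2(2 theta) = 1e-3:
print(sterile_fraction(1e-3, 5.0))   # ~0.011: about 1% of an active species
print(sterile_fraction(1e-3, 1.8))   # ~5e-4: production falls as T_RH^3
```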
The combined HDM and WDM constraints therefore exclude LRT models from producing sterile neutrinos as all of the dark matter, when combined with the diffuse extragalactic background radiation constraints (discussed below) [43]. When \(\Omega_{s}<\Omega_{\rm DM}\), mixed cold plus warm dark matter (CWDM) constraints are relevant [46]. Sterile neutrinos as fractions of the dark matter are therefore constrained by HDM and CWDM considerations, and these are shown in Fig. 1. Note that sometimes the HDM constraint is extended to masses \(m_{s}>0.1\,\)keV, up to even 10 keV [23], but that is inaccurate as the sterile neutrinos are considered to be WDM above approximately \(m_{s}\sim 0.1\,\)keV, and either WDM or mixed CWDM limits become appropriate. Footnote 1: Here, \(\Omega_{i}\equiv\rho_{i}/\rho_{\rm crit}\), where \(\rho_{\rm crit}\) is the critical density of the Universe Below the edge where \(\Omega_{s}=\Omega_{\rm DM}\), sterile neutrinos comprise a fraction of the dark matter such that \(f_{\rm DM}\equiv\Omega_{s}/\Omega_{\rm DM}\). We show two representative cases of \(f_{\rm DM}=0.1\) and \(f_{\rm DM}=7\times 10^{-4}\) in Fig. 1. The lower fraction is commensurate with central values of the candidate signals of an X-ray line at approximately \(3.55\,\)keV, seen in the Perseus galaxy cluster, stacked galaxy clusters [27], and M31 [28]. We calculate X-ray limits in the LRT parameter space using the fraction of dark matter as \(\nu_{s}\) at each point in the parameter space. In Fig. 1, we show five X-ray constraints using the commensurate fractional dark matter in the parameter space: 1. An analysis of 51 Msec of _Chandra X-ray Space Telescope_ deep sky observations across the entirety of the sky, sensitive to the Milky Way halo signal, by Sicilian et al. [47], are shown in magenta and labeled S21; 2. M31 _Chandra_ observations analyzed by Horiuchi et al. [48] are shown in purple and labeled H14, which we adopt because of their wider energy range than the first _Chandra_ constraints; 3. NuSTAR observations toward the Milky Way Galactic Bulge for higher masses [49], are shown in brown and labeled NuSTAR; 4. NuSTAR observations of the full sky, sensitive to the Milky Way halo, are complementary to the prior NuSTAR constraints [50], and are also shown in brown and labeled NuSTAR; and, 5. The conservative but broad-band constraints on excess electromagnetic diffuse emission is labeled as the diffuse extragalactic background radiation (DEBRA) limit [51]. We do not show constraints from Ref. [52] as the limits are a factor of \(\sim\)20 weaker than claimed, which was acknowledged within Ref. [52], and in subsequent comments [53; 54]. And, we do not show limits from Ref. [55] as that work does not include instrumental and on-sky lines present at 3.3 and 3.7 keV in their stated limits. Another astrophysical consideration comes into play in the orange vertically hatched region, where sterile neutrinos deplete energy in the core of a Type II supernova [56; 57; 58; 25], though portions of this region may also be responsible for supernova shock enhancement [56] or the origination of pulsar kicks [59]. We also show regions that are constrained by laboratory experiments, independent of any astrophysical or cosmological models, in Fig. 1. Constraints exist from neutrinoless double-beta decay searches in the hatched region labeled \(0\nu\beta\beta\)[60], though a cancellation may exist that alleviates this constraint [61; 62; 63]. 
We also show the constraints from a collection of nuclear beta decay kink searches in the solid black region labeled \(\beta\)-decay [64]. Results from the \(\beta\)-decay search by BeEST are also shown in golden yellow [65]. If photons are produced in the decay of sterile neutrinos before recombination but after the thermalization time \(t_{\rm th}\simeq 10^{6}\) sec, they can distort the thermal nature of the CMB spectrum [66; 67] (see, e.g., the discussion in Ref. [68]). The COBE FIRAS limit [69] on distortions of the thermal CMB rejects lifetimes \(t_{\rm rec}>\tau>t_{\rm th}\), where \(t_{\rm rec}\) is the recombination time. The red region in the upper right corner of Fig. 1 shows the CMB distortion limits. Several current and upcoming laboratory experiments are sensitive to the parameter space of sterile neutrinos we are considering here. In Fig. 1, the black dot-dashed line is the forecast \(1\sigma\) sensitivity of time-of-flight measurements from the TRISTAN detector on the KATRIN \(\beta\)-decay experiment, with the lower line showing their statistical limit [37]. The three dashed black lines show the sensitivity of the three stages of MAGNETO-\(\nu\) [38; 39]. The solid lines are the forecast sensitivity for the upcoming K-capture experiment HUNTER (Heavy Unseen Neutrinos by Total Energy-Momentum Reconstruction), in its three stages [35; 36]. PTOLEMY is a tritium \(\beta\)-decay experiment aimed at detecting the cosmological relic neutrino background, which is expected to start collecting data within a few years and may have sensitivity to this parameter space [40; 41]. Ref. [41] provides event rates for this parameter space, but no sensitivity curve is available, so we do not show one for PTOLEMY. We presented constraints at \(T_{\rm RH}=5\,\)MeV in this section. Other values for \(T_{\rm RH}\) would change the constraint considerations to some degree. For lower \(T_{\rm RH}\), the constraint regions shift upward in \(\sin^{2}2\theta\), as less thermalization occurs at a given \(\sin^{2}2\theta\). Conversely, for higher \(T_{\rm RH}\), as \(T_{\rm RH}\) approaches the peak of sterile neutrino production at \(T\approx 130\,\mathrm{MeV}\,(m_{s}/1\,\mathrm{keV})^{1/3}\), the constraints of an HRT universe apply. As discussed in the introduction, LRT universes with \(T_{\rm RH}=1.8\,\)MeV are allowed when there is a new source of relativistic energy density, such as sterile neutrinos with a dark decay mode, which we now explore. ## IV Dark Decay Model The population of partially to fully thermalized sterile neutrinos may not be cosmologically long-lived. In the cases of relatively large mixing that we consider, the sterile neutrinos may decay more rapidly into another sterile neutrino, \(\nu_{s}^{\prime}\), plus other dark sector particles [70; 32]. Such decays are known to alleviate constraints when they occur for the active neutrinos (e.g., [31]), and they can likewise alleviate constraints on sterile neutrinos. In one class of such models, a generic scalar, \(\phi\), is introduced with an interaction Lagrangian associated with the decay of the keV-scale \(\nu_{s}\) to an arbitrarily lighter \(\nu_{s}^{\prime}\), \(\nu_{s}\to\nu_{s}^{\prime}\phi\): \[\mathcal{L}\supset\frac{g_{i,j}}{2}\bar{\nu}_{j}\nu_{i}\phi+\frac{g^{\prime}_{i,j}}{2}\bar{\nu}_{j}i\gamma_{5}\nu_{i}\phi+{\rm h.c.}, \tag{3}\] where \(\nu_{i}\) and \(\nu_{j}\) are the largely-sterile neutrino mass eigenstates, and \(g^{(\prime)}_{i,j}\) are the scalar (pseudoscalar) couplings. 
Decays of keV-scale sterile neutrinos induced by this coupling are unconstrained except for the cosmological considerations we present below. Another possible channel for sterile neutrino decay is \(\nu_{s}\to\nu_{s}^{\prime}\bar{\nu_{s}^{\prime}}\nu_{s}^{\prime}\), mediated by a new \(Z^{\prime}\) boson: \[\mathcal{L}_{Z^{\prime}}^{\nu}=g\sum_{\alpha}(\bar{\nu}_{\alpha,L}\gamma^{\mu} \nu_{\alpha,L})Z^{\prime}_{\mu}\,, \tag{4}\] where \(g\) is the coupling constant associated with the new SU(2) interaction, and \(\alpha\) goes over the sterile neutrino states, which in our case is the minimal case of two. Both of these models in Eq. (3) & Eq. (4) introduce a new mechanism of sterile neutrino decay within a dark sector. For our interests in this work, only the lifetime of the decay associated with these new interactions, \(\tau\), is important, as well as the requirement that the decay products are arbitrarily light, so as to act as dark radiation for all of cosmological history. The decay products of \(\phi\), \(\nu_{s}^{\prime}\) therefore act as pure dark radiation in contribution to \(N_{\rm eff}\), which we define generally as a combination of the radiation energy density in the active neutrinos \(N_{\rm eff,act}\), plus the sterile neutrinos \(N_{\rm eff,ster}\), plus any relativistic decay products of the sterile neutrinos, \(N_{\rm eff,*}\), \[N_{\rm eff}=N_{\rm eff,act}+N_{\rm eff,ster}+N_{\rm eff,*}\,, \tag{5}\] where \[N_{\rm eff,act}\equiv \frac{1}{\rho_{\nu}}\sum_{i}\frac{1}{4\pi^{3}}\int E(p)f_{i}(p){\rm d }^{3}p\,, \tag{6}\] \[N_{\rm eff,ster}\equiv \frac{1}{\rho_{\nu}}\sum_{j}\frac{1}{4\pi^{3}}\int E(p)f_{j}(p){ \rm d}^{3}p\,. \tag{7}\] Here, \(\rho_{\nu}\) is the relativistic energy density in a thermal single neutrino species, \(i\) sums over the partial or fully thermalized active neutrino species with energy distributions \(f_{i}\), and \(j\) sums over the energy densities of the relic stable \(\nu_{s}^{\prime}\) and \(\phi\). In general, only one of \(N_{\rm eff,ster}\) and \(N_{\rm eff,*}\) will be nonzero as the sterile neutrinos become nonrelativistic and then decay into dark radiation. In the case of the lowest LRT models, e.g. \(T_{\rm RH}\approx 1.8\,\)MeV, \(N_{\rm eff,act}\approx 1\)[17], so that \(N_{\rm eff}\) is predominantly dark radiation, while in higher LRT models, e.g. \(T_{\rm RH}\approx 7\,\)MeV, \(N_{\rm eff}\) is predominantly active neutrinos (\(N_{\rm eff,act}\)), with dark sector particles (\(N_{\rm eff,*}\)) contributing a small perturbation. ### Evolution of the Abundance of Decaying Sterile Neutrinos in an LRT universe In the case of sterile neutrinos with appreciably mixing, their abundance in an LRT universe is initially set by their approach to equilibrium after \(T_{\rm RH}\). This process is described by the Boltzmann equation. Their subsequent evolution is set by their redshifting as radiation and, when \(T\lesssim m_{s}\), as matter components, followed by their subsequent decay. With no direct coupling to the reheating mechanism, sterile neutrinos are not present at \(T_{\rm RH}\), and are never in thermal equilibrium in the early Universe [71; 72]. Nevertheless, there are different mechanisms by which the relic population of sterile neutrinos could have been produced subsequent to reheating [59; 73; 6; 74]. 
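As a quick numerical check on the normalization of Eqs. (6)-(7) before turning to the production calculation: a single fully thermalized massless species integrates to \(N_{\rm eff}=1\), while a distribution suppressed by an overall factor contributes proportionally less, as happens for both active and sterile species after low-temperature reheating. A minimal sketch, using an overall suppression factor as an illustrative stand-in for the detailed distributions and taking \(\rho_{\nu}=7\pi^{2}T^{4}/120\) for a single fully thermalized (\(\nu\) plus \(\bar{\nu}\)) species:

```python
import numpy as np
from scipy.integrate import quad

def neff_of_species(suppression=1.0, T=1.0):
    """N_eff of one species with occupation f(p) = suppression / (exp(p/T) + 1),
    using the (1/4 pi^3) Int E f d^3p normalization of Eqs. (6)-(7)."""
    integrand = lambda p: p**3 / (np.exp(p / T) + 1.0)   # E(p) = p for massless states
    integral, _ = quad(integrand, 0.0, 50.0 * T)
    rho = suppression * 4.0 * np.pi / (4.0 * np.pi**3) * integral
    rho_nu = 7.0 * np.pi**2 / 120.0 * T**4               # one fully thermal species
    return rho / rho_nu

print(round(neff_of_species(), 3))       # 1.0 for a fully thermalized species
print(round(neff_of_species(0.33), 3))   # 0.33 for a partially thermalized species
```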
In this paper, we focus on the minimal model in which the production of sterile neutrinos requires no new physics other than neutrino mass and mixing, and production arises from non-resonant flavor oscillations between the active neutrinos \(\nu_{\alpha}\) of the SM and the sterile neutrino \(\nu_{s}\), as originally proposed by Dodelson and Widrow (DW) [6]. In the LRT model, prior \(T_{\rm RH}\), the entropy in radiation and matter is not conserved and consequently, the \(T\) dependence on the scale factor \(a\) is different than the usual \(T\propto a^{-1}\). In this scenario, prior to the radiation dominated standard epoch, a scalar field oscillates coherently around its true minimum and dominates the energy density of the Universe. The decay of this scalar leads to nonthermal decay products that subsequently thermalize to a \(T=T_{\rm RH}\), followed by standard radiation domination and evolution (see e.g. Refs. [75; 76; 77] ). Interactions of active neutrinos with the surrounding plasma during the oscillations act as measurements and force the propagating neutrino energy eigenstates into determinate flavor states, which with some probability results in a sterile neutrino. For the parameter space of interest here, the production rate is usually not fast enough for sterile neutrinos to thermally equilibrate, and the process is a freeze-in of the final abundance. Assuming that only two neutrinos mix, \(\nu_{s}\) and one active neutrino \(\nu_{\alpha}\) (\(\nu_{e}\) in all of the figures we present in this paper), the time evolution of the phase-space density distribution function of sterile neutrinos \(f_{\nu_{s}}(p,t)\) with respect to the density function of active neutrinos \(f_{\nu_{\alpha}}(p,t)\) is given by the following Boltzmann equation [78; 25] \[\frac{d}{dt}f_{\nu_{s}}(p,t) = \frac{\partial}{\partial t}f_{\nu_{s}}(p,t)-Hp\frac{\partial}{ \partial p}f_{\nu_{s}}(p,t) \tag{8}\] \[= \Gamma(p,t)\Big{[}f_{\nu_{\alpha}}(1-f_{\nu_{s}})-f_{\nu_{s}}(1-f _{\nu_{\alpha}})\Big{]}\.\] Here \(H\) is the expansion rate of the Universe, \(p\) is the magnitude of the neutrino momentum and \(\Gamma(p,t)\) is the conversion rate of active to sterile neutrinos. The active neutrinos are assumed to have a suppressed to full Fermi-Dirac distribution, depending on their thermalization state determined by \(T_{\rm RH}\). Since \(f_{\nu_{s}}\ll 1\) and \(f_{\nu_{s}}\ll f_{\nu_{\alpha}}\), we take \((1-f_{\nu_{s}})=1\), and the second term in brackets on the right-hand side of Eq. (8) can be neglected. Thus, changing variables, Eq. (8) can be rewritten as [18; 25] \[-HT\left(\frac{\partial f_{\nu_{s}}(E,T)}{\partial T}\right)_{E/T}\simeq\ \Gamma(E,T)f_{\nu_{\alpha}}(E,T)\, \tag{9}\] Figure 2: Shown here is the calculated production and energy density evolution for massless standard neutrinos (black line) and an example of \(m_{s}=100\,\)keV sterile neutrinos. We show five different \(\sin^{2}2\theta\) cases of sterile neutrino energy density evolution, \(\rho_{\nu_{s}}\). The sterile neutrinos decay at different times (temperatures) ranging from a case that matches the neutrino’s mass \(T_{\rm decay}=0.1\,\)MeV (red line) down to the temperature of matter radiation equality (green line). Note that sterile neutrinos with very different initial production densities can all match the same density relative to the active neutrinos at their decay. 
Therefore, a wide range of sterile neutrino masses and \(\sin^{2}2\theta\) can match a designated \(N_{\rm eff}\), when combined with either a nearly fully thermalized or non-thermalized \(\rho_{\nu_{\alpha}}\) in LRT cosmologies. The reheating temperature in this example is chosen to be 5 MeV. The moments of \(T=m_{s}\) and \(T_{\rm{rms}}\) are shown with the dotted and dashed vertical lines, respectively. where the derivative on the left-hand side is computed at constant \(E/T\). The conversion rate \(\Gamma\) is the total interaction rate \(\Gamma_{\alpha}=d_{\alpha}G_{F}^{2}\epsilon T^{5}\) of the active neutrinos with the surrounding plasma weighted by the average active-sterile oscillation probability \(\langle P_{m}\rangle\) in matter (see Eq. (6.5) of Ref. [25]). We obtain the sterile neutrino density distributions by integrating the Boltzmann Eq. (9). We show the resulting production of sterile neutrino density evolution at the far left of Fig. 2, where the density rises from zero. Here, we recover the results of Ref. [19]. Following production, the \(\nu_{s}\) energy density dilutes as radiation, \(\rho_{\nu_{s}}\propto a^{-4}\), as long as \(T\gg m_{s}\). As the Universe cools to \(T\sim m_{s}\), there is a transition of the decrease of the \(\nu_{s}\) energy density to redshifting as matter, \(\rho_{\nu_{s}}\propto a^{-3}\). For the example shown in Fig. 2, \(m_{s}=100\,\mathrm{keV}\), but onset of pure matter-like redshifting occurs somewhat later than \(T\approx m_{s}\), as LRT-produced sterile neutrinos are slightly "hotter" than thermal, with \(\langle p\rangle\approx 4.11T\)[19]. The presence of sterile neutrinos with masses between the epoch of BBN and the photon last-scattering time allows the \(\nu_{s}\) to augment their energy density with respect to the active neutrinos' by becoming nonrelativistic and redshifting more slowly. They then deposit their energy density back into relativistic dark decay products (\(\nu_{s^{\prime}}\) and/or \(\phi\)) denominated as \(N_{\mathrm{eff,*}}\). Therefore, the massive \(\nu_{s}\) can boost \(N_{\mathrm{eff}}\) above \(N_{\mathrm{eff,act}}\) produced by reheating alone in an LRT model, Eq. (5). The amount of relativistic energy deposited can be approximated by matching the density of the nonrelativistic sterile neutrinos with the targeted boost in dark radiation, \[m_{s}n_{\nu_{s}}=N_{\mathrm{eff,*}}\rho_{\nu_{\alpha}}\,. \tag{10}\] Using Eq. 2, we solve for the fraction of sterile neutrino production for a given \(N_{\mathrm{eff,*}}\) as \[f=\frac{N_{\mathrm{eff,*}}\rho_{\nu_{\alpha}}}{m_{s}n_{\nu_{\alpha}}}\,. \tag{11}\] This relation then stipulates what production of \(\nu_{s}\) is needed to hit the target \(N_{\mathrm{eff,*}}\). The energy density boost needed for smaller levels of production at smaller mixing angles requires longer matter-like redshifting before decay. We take the maximum decay time to be that corresponding to matter radiation equality, \(T_{\mathrm{rm}}\), so that the decays do not directly affect the photon decoupling epoch. The relativistic energy density contribution of sterile neutrino decays can range from a majority of \(N_{\mathrm{eff}}\) (\(N_{\mathrm{eff,*}}\gtrsim N_{\mathrm{eff,act}}\)) to a small perturbation onto \(N_{\mathrm{eff}}\) (\(N_{\mathrm{eff,*}}<N_{\mathrm{eff,act}}\)), depending on the \(T_{\mathrm{RH}}\), \(m_{s}\), \(\sin^{2}2\theta\), and \(\tau\). 
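To make the matching in Eqs. (10)–(11) concrete, the sketch below (our own illustration) estimates the production fraction \(f\) required to deposit a target \(N_{\rm eff,*}\); it assumes thermal Fermi–Dirac expressions for \(\rho_{\nu_{\alpha}}\) and \(n_{\nu_{\alpha}}\) evaluated at the neutrino temperature when the decay occurs, which is a simplification of the full calculation.

```python
import numpy as np

ZETA3 = 1.2020569  # Riemann zeta(3)

def required_fraction(m_s_keV, N_eff_star, T_dec_eV):
    """Schematic Eq. (11): f = N_eff,* * rho_nu_alpha / (m_s * n_nu_alpha),
    with one thermal active species at temperature T_dec:
      rho = 7 pi^2 T^4 / 120,   n = 3 zeta(3) T^3 / (2 pi^2),
    so rho/n = <E> ~ 3.15 T_dec."""
    mean_E_eV = (7.0 * np.pi**4 / (180.0 * ZETA3)) * T_dec_eV
    return N_eff_star * mean_E_eV / (m_s_keV * 1.0e3)

# e.g. a 100 keV sterile neutrino decaying near matter-radiation equality
print(required_fraction(m_s_keV=100.0, N_eff_star=2.0, T_dec_eV=0.5))  # ~3e-5
```

Later decays (smaller \(T_{\rm dec}\)) or larger \(m_{s}\) require correspondingly smaller production fractions, which is why small mixing angles remain viable in Fig. 3.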
Since \(N_{\mathrm{eff}}\) can be augmented by this mechanism, it may be responsible for any potential evidence for \(N_{\mathrm{eff}}\) above its standard value. For higher \(T_{\mathrm{RH}}\), more production occurs for a given \(\sin^{2}2\theta\) and \(m_{s}\), so that higher \(T_{\mathrm{RH}}\) models probe smaller mixing angles (see Fig. 3). We go through several examples in the following section. In summary, the sterile neutrinos can decay at a wide range of time scales, as shown in Fig. 2. As an example, we illustrate five decay timescale scenarios. The most rapid decay time is commensurate with \(T=m_{s}\) (this case's outcome would be similar to any decay timescale at \(T<m_{s}\), as the relativistic energy density in \(\nu_{s}\) is simply transferred to the dark states). The slowest decay we consider occurs at the time of radiation-matter equality \(T_{\mathrm{rm}}\). We also show three intermediate cases. In addition, we plot the energy density evolution of the massless active neutrinos (black line). ### Decaying Sterile Neutrino Parameter Space in LRT Cosmologies In an LRT cosmology, sterile neutrinos are produced just after \(T_{\mathrm{RH}}\), then redshift as radiation, and potentially as matter, before ultimately decaying to dark radiation species. The amount of decay products' contribution to \(N_{\mathrm{eff}}\) varies with several parameters, as determined by Eq. (11). We consider three potential final values for \(N_{\mathrm{eff}}\) (Eq. (5)): * \(N_{\mathrm{eff,Std}}=3.044\), the standard value with a standard enhancement from electron-positron annihilation [79; 80]; * \(N_{\mathrm{eff,Upper}}=3.33\), the 95% CL upper bound from Planck 2018 [Eq. (67b)] in Ref. [42]]; * \(N_{\mathrm{eff,H0}}=3.48\), the central value preferred by solutions to the Hubble, \(H_{0}\), tension [81; 34]. The precise value of \(N_{\mathrm{eff,act}}\) in LRT models depends on \(T_{\mathrm{RH}}\). We use the bottom panel of Fig. 1 in Ref. [17], for the relation between \(T_{\mathrm{RH}}\) and \(N_{\mathrm{eff,act}}\). For the highest temperature LRT cosmology we consider, \(T_{\mathrm{RH}}=7\,\mathrm{MeV}\), the active neutrinos are almost fully thermalized, and \(N_{\mathrm{eff,act}}=3.0\). Therefore, to match the standard \(N_{\mathrm{eff}}\), we require \(N_{\mathrm{eff,*}}=0.044\), while matching the \(H_{0}\) tension requires \(N_{\mathrm{eff,*}}=0.48\), respectively. For a \(T_{\mathrm{RH}}=1.8\,\mathrm{MeV}\), \(N_{\mathrm{eff,act}}=1.0\), therefore, the standard density requires \(N_{\mathrm{eff,*}}=2.044\) and solving the \(H_{0}\) tension requires \(N_{\mathrm{eff,*}}=2.48\), respectively. For the case where there is no matter-dominated evolution of the sterile neutrino before it decays, there is no energy boost from differential redshifting of active and sterile neutrinos, and all density must come from oscillation production. Using Eqs. (2) & (11), this corresponds to a value of \(\sin^{2}2\theta>0.1\), above the parameter space we consider in Fig. 3. Another limiting case is when the decay occurs at radiation-matter equality \(T_{\mathrm{rm}}\). We calculate \(T_{\mathrm{rm}}\) decay contours and plot them in Fig. 3, for the reheating temperatures of 1.8 MeV (left panel) and 7 MeV (right panel). 
We show two lines: one where the combined density of active neutrinos and sterile neutrino decay products matches the standard \(N_{\mathrm{eff,Std}}=3.044\) (darker, lower line) [42], and one where it matches the \(H_{0}\)-tension-alleviating value of \(N_{\mathrm{eff,H0}}=3.48\) (lighter, upper line) [81; 34]. In the right panel's lighter shaded area, \(N_{\rm eff}\) spans from its standard value to the value favoring the mitigation of the Hubble tension, denoted as \(N_{\rm eff,H0}\). Above the lighter diagonal line, the parameter space is capable of supporting either a resolution to the Hubble tension with \(N_{\rm eff,H0}\) or maintaining the standard density, \(N_{\rm eff,Std}\). That is, sterile neutrinos with mass and mixing above the lighter curves are consistent with cosmology at the specified \(N_{\rm eff}\) values. This is achieved because there is a cancellation between enhanced production at higher \(\sin^{2}2\theta\) and earlier decay providing less matter-redshift boost (see Fig. 2). Because the sterile neutrinos mix with active neutrinos, they have a loop-induced radiative decay to a lighter, predominantly active neutrino mass eigenstate and a photon [82, 83]. Since these decays necessarily take place during the photon-coupled era, the effect of the decay photons is to distort the thermal spectrum of the CMB. Hu & Silk [67] calculated spectral distortions to the CMB radiation originating from the decay of unstable relic particles during the thermalization epoch. The appropriate constraints are from Fig. 1 in Ref. [67], where the limits are most stringent for the largest masses in late-decay scenarios (maximizing the coefficient on the \(y\)-axis in Fig. 1 of Ref. [67]). For the models plotted in Fig. 3, the decay happens at \(T_{\rm rm}\), corresponding to an age of the Universe of \(t_{\rm rm}\approx 1.6\times 10^{12}\,\)sec. The limits in Ref. [67] are presented in terms of \(y\equiv m_{X}(bn_{X}/n_{\gamma})\), where \(m_{X}\) is the decaying particle mass, \(n_{X}/n_{\gamma}\) is its abundance relative to photons, and \(b\) is the branching ratio to photons for the decay. For the highest mass in our parameter space, \(m_{s}=400\,\)keV, \(y=4.6\times 10^{-15}\,\)GeV for \(T_{\rm RH}=7\,\)MeV. However, for \(T_{\rm RH}=1.8\,\)MeV, the abundance of \(\nu_{s}\) is much greater at late times, and \(y=1.3\times 10^{-11}\,\)GeV. Thus, no radiative decay constraint arises for the case of \(T_{\rm RH}=7\,\)MeV. For the \(T_{\rm RH}=1.8\,\)MeV curve, we find that the highest compatible \(m_{s}\) is 120 keV, independent of \(\sin^{2}2\theta\). We show this CMB thermal constraint in Fig. 3; it applies to \(T_{\rm RH}=1.8\,\)MeV cosmologies and is absent from our parameter space for cases approaching \(T_{\rm RH}=7\,\)MeV. The only other constraints present in this parameter space are those from \(\beta\)-decay (black contour) [64] and BeEST (yellow contour) [65]. As shown in Fig. 3, laboratory experiments such as HUNTER, TRISTAN, or MAGNETO-\(\nu\) can detect the signal of sterile neutrinos in much of the allowed regions of this parameter space. ## V Discussion & Conclusions The reheating temperature of the Universe is unknown beyond the requirement that \(T_{\rm RH}>1.8\,\)MeV if there is a new source of relativistic energy density in addition to the active neutrinos, or \(T_{\rm RH}\gtrsim 5\,\)MeV in the case of no new physics [17]. 
Therefore, \(T_{\rm RH}\) is a free parameter in studies of the energy content arising from the hot big bang. Baryogenesis and dark matter production can be accommodated in the reheating process [84, 21]. In LRT cosmologies, the weak-coupling epoch is significantly reduced, suppressing active-sterile neutrino oscillations. As a result, regions of keV-scale sterile neutrinos' parameter space that were previously forbidden by cosmological or astrophysical constraints can become Figure 3: Shown are the updated parameter spaces of dark decay in an LRT Universe for two cases, \(T_{\rm RH}=1.8\,\)MeV and \(7\,\)MeV, left and right, respectively. The diagonal reddish and blueish lines correspond to the cases when the decay happens at the temperature of matter-radiation equality, and match different \(N_{\rm eff}\) values. For each pair, the darker color (lower) is associated with a value of \(N_{\rm eff,Std}=3.044\), and the lighter color one \(N_{\rm eff,H0}=3.48\)[34]. For the darker shaded region, \(N_{\rm eff}\) ranges from the minimal provided by \(N_{\rm eff,vac}\) to the standard value with a contribution from sterile neutrino decay. In the lighter shaded region on the right panel, \(N_{\rm eff}\) ranges from the standard value to that preferred to alleviate the Hubble tension, \(N_{\rm eff,H0}\). Above the lighter diagonal, the parameter space can accommodate an alleviation of the Hubble tension with \(N_{\rm eff,H0}\), or provide the standard density, \(N_{\rm eff,Std}\). For the case of \(T_{\rm RH}=1.8\,\)MeV, the thermal nature of the CMB [67] constrains the red hatched portion. This constraint does not apply for \(T_{\rm RH}=7\,\)MeV in this parameter space. viable [24]. If the dark sector in which the sterile neutrino participates includes dark decay channels, we have shown here that the parameter space in LRT cosmologies is even more significantly alleviated. The energy density described by \(N_{\rm eff}\) in such dark-decay LRT cosmologies can have both pure, decay-produced, non-thermal, radiation components, as well as massive active neutrino components. High sensitivity to \(N_{\rm eff}\) will be provided by current and upcoming CMB experiments such as CMB-S4 [85]. The discovery of a sterile neutrino with parameters in this region could indicate a rich LRT thermal history. Several current and upcoming laboratory experiments are sensitive to keV-scale sterile neutrino parameter space, including HUNTER [35, 36], TRISTAN [37], MAGNETO-\(\nu\)[38, 39], and PTOLEMY [40, 41]. Laboratory direct dark matter detection experiments employing xenon, including LZ [86] and XENONnT [87], could be sensitive to our considered parameter space [88]. However, only the case of all of the dark matter being sterile neutrinos has been considered, and the constraints proportionately alleviate for fractional dark matter models, with all cases lying above the DEBRA constraints in Fig. 1, but they could considerably improve. The appreciable mixing between active and sterile neutrinos we consider here may also arise from non-standard interaction (NSI) searches. Currently, the constraints in this NSI parameter space are largely the ones we have shown: \(\beta\)-decay and \(0\nu\beta\beta\)-decay [89]. The next few years could provide the potential discovery of laboratory-accessible sterile neutrinos, whose existence is in conflict with HRT cosmologies, with the aforementioned \(\beta\)-decay, K-capture, and neutrino capture experiments. 
In this work, we showed that the presence of decaying keV-scale sterile neutrinos could also be indicated by the \(H_{0}\) tension. The discovery of keV-scale sterile neutrinos with appreciable mixing would be an important finding for particle physics, astrophysics, and cosmology, not only in its own right, but also because it would alter the usual assumptions about the early Universe and provide a new paradigm. ## VI Acknowledgements We would especially like to thank Graciela Gelmini for detailed discussions, as well as James Alvey, Z. Chacko, Philip Lu, Alex Kusenko, and Tim Tait for helpful discussions. We also thank the referee for helpful comments. KNA is partially supported by the U.S. National Science Foundation (NSF) Theoretical Physics Program, Grants PHY-1915005 and PHY-2210283. HGE was supported in part by the UC Southern California Hub, with funding from the UC National Laboratories division of the University of California Office of the President. HGE was partially supported by a fellowship from the "La Caixa" Foundation (ID 100010434). The fellowship code is LCF / BG / AA19 / 11720045.
2309.10954
In-Context Learning for Text Classification with Many Labels
In-context learning (ICL) using large language models for tasks with many labels is challenging due to the limited context window, which makes it difficult to fit a sufficient number of examples in the prompt. In this paper, we use a pre-trained dense retrieval model to bypass this limitation, giving the model only a partial view of the full label space for each inference call. Testing with recent open-source LLMs (OPT, LLaMA), we set new state of the art performance in few-shot settings for three common intent classification datasets, with no finetuning. We also surpass fine-tuned performance on fine-grained sentiment classification in certain cases. We analyze the performance across number of in-context examples and different model scales, showing that larger models are necessary to effectively and consistently make use of larger context lengths for ICL. By running several ablations, we analyze the model's use of: a) the similarity of the in-context examples to the current input, b) the semantic content of the class names, and c) the correct correspondence between examples and labels. We demonstrate that all three are needed to varying degrees depending on the domain, contrary to certain recent works.
Aristides Milios, Siva Reddy, Dzmitry Bahdanau
2023-09-19T22:41:44Z
http://arxiv.org/abs/2309.10954v2
# In-Context Learning for Text Classification with Many Labels ###### Abstract In-context learning (ICL) using large language models for tasks with many labels is challenging due to the limited context window, which makes it difficult to fit a sufficient number of examples in the prompt. In this paper, we use a pre-trained dense retrieval model to bypass this limitation, giving the model only a partial view of the full label space for each inference call. Testing with recent open-source LLMs (OPT, LLaMA), we set new state of the art performance in few-shot settings for three common intent classification datasets, with no fine-tuning. We also surpass fine-tuned performance on fine-grained sentiment classification in certain cases. We analyze the performance across number of in-context examples and different model scales, showing that larger models are necessary to effectively and consistently make use of larger context lengths for ICL. By running several ablations, we analyze the model's use of: a) the similarity of the in-context examples to the current input, b) the semantic content of the class names, and c) the correct correspondence between examples and labels. We demonstrate that all three are needed to varying degrees depending on the domain, contrary to certain recent works. ## 1 Introduction In-context learning (ICL) using large language models (LLMs) has recently exploded in popularity. Models pre-trained on massive amounts of textual data are able to reach reasonable performance on a wide variety of tasks with only a few examples of input and output for a given task provided in the model's input prompt in natural language Brown et al. (2020); Rae et al. (2021); Chowdhery et al. (2023). In this work, we study whether ICL can handle challenging classification tasks with many possible labels, by augmenting the LM with a secondary pre-trained retrieval model. The main problem with applying ICL to tasks involving classification with many labels is the limited context window these models have. Ordinarily with ICL, at minimum one example from each class is provided in-context to allow the model to make a choice between all the labels of the task. Because of this limitation, ICL has not been directly applied to these sorts of problems. In this work we relax this requirement, allowing the model to see only a subset of the most relevant labels for the given datapoint we are performing inference on. By testing on intent classification (upwards of 50 classes) and fine-grained sentiment analysis (upwards of 25 classes), we demonstrate that the resulting performance with this method can reach SoTA. By coupling the LLM with an external pre-trained dense retriever model Reimers and Gurevych (2019); Karpukhin et al. (2020), we can dynamically retrieve a set of examples to provide to the LM in-context, that reflects only the most relevant labels to the current example in the label space. Most existing work on augmenting LMs with retrieval models Ram et al. (2023); Shi et al. (2023) focuses on tuning the retrieval and/or LM. We demonstrate that even without tuning either, when the pre-trained models are strong enough we can still achieve SoTA across various tasks using ICL. We evaluate LLMs in this setting with three intent classification datasets: BANKING77 Casanueva et al. (2020), HWU64 Liu et al. (2019), and CLINC150 Larson et al. (2019), as well as one fine-grained sentiment classification dataset: GoEmotions Demszky et al. (2020). Experiments are done using the LLaMA models Touvron et al. 
(2023) and the OPT models Zhang et al. (2022) as LLMs. We compare the performance achieved against adapter-based fine-tuning of MLM models (DeBERTa-v2-XXLarge with the "Pfeiffer" bottleneck-style adapter Pfeiffer et al. (2020) implemented with AdapterHub Pfeiffer et al. (2020)) and the previous SoTA for intent detection ConvFit; Vulic et al. (2021), as well as comparing against SetFit (Tunstall et al., 2022), a recent lightweight method involving contrastive training of small MLM models. The contributions of this work are: 1. We show that retrieval-augmented ICL is an effective way to tackle text classification tasks with many labels without additional tuning of either the retriever or the LM, either matching or outperforming fine-tuned adapter-based and contrastive-pre-training-based methods. Notably, truncating the dataset by showing only a subset to the LM at a time does not prevent us from achieving SoTA performance, and allows us to apply LLMs to problems that they have not been applied to before, 2. We analyze ICL performance over different numbers of examples and demonstrate that larger models better are able to take advantage of more examples in-context than smaller models, which mostly plateau and/or see decreasing performance, 3. We perform several ablation studies to determine what aspects of the inputs and outputs the model is using for ICL. Certain recent works investigating ICL (Min et al., 2022; Razeghi et al., 2022) have recently called into question how much models are actually "learning" with ICL and what they are learning from. We ablate three different elements (semantic label names, correct input-output correspondences, and semantically similar demonstrations to the current input). Contrary to this emerging literature, our experiments demonstrate that they are all used to varying degrees, depending on the dataset and domain. ## 2 Method Retrieval-Augmented ICL:Our setup assumes \(N\) classes (unique labels) with \(K\) examples in each class. Each example is composed of an (input, label) tuple. We assume that we have a limited number of examples \(M\) to fit in the prompt, based on the model's context length. \(M\) can be fixed or based on "saturating" the prompt greedily by selecting examples until we run out of room in the context window. From our total pool of examples of size \(N\times K\), we retrieve the \(M\) examples using the cosine similarity values given by our retrieval model. Having retrieved our \(M\) examples, we then produce the prompt by concatenating the (input, label) tuples in a set prompt format (see Figure 1), similar to existing in-context learning setups. The final prediction is then taken from the LM by having it produce a continuation based on our prompt. A full visual description of the retrieval process is visible in Figure 1. Retrieval model:The retrieval model used is a Sentence-BERT model trained in a Siamese dual-network setup to be able to retrieve text based on cosine similarity of the embedding vectors it produces, described in Reimers and Gurevych (2019). The model we use is a contrastively trained model which has been pre-trained on a massive generic dataset of text pairs. We use the retrieval model as-is in all experiments. Cosine similarity is used to retrieve examples from the retrieval pool of examples (tested in 5-shot and 10-shot scenarios, signifying the number of examples from each class in the retrieval pool). 
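A minimal sketch of this pipeline is shown below, using the same off-the-shelf retriever as in our experiments; the prompt template, the generation length, and the specific LM checkpoint are illustrative choices rather than the exact settings behind the reported numbers.

```python
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoModelForCausalLM, AutoTokenizer

retriever = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
lm = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16, device_map="auto")

def classify(query, pool, class_names, M=20):
    """pool: list of (text, label) few-shot examples (N classes x K shots).
    Retrieve the M most similar examples, order them least-to-most similar,
    prompt the LM, then snap the free-form generation onto the label set."""
    texts = [t for t, _ in pool]
    sims = util.cos_sim(retriever.encode(query, convert_to_tensor=True),
                        retriever.encode(texts, convert_to_tensor=True))[0]
    top = sims.topk(M).indices.tolist()      # most-to-least similar
    top = top[::-1]                          # least-to-most similar in the prompt
    demos = "".join(f"sentence: {pool[i][0]}\nlabel: {pool[i][1]}\n\n" for i in top)
    prompt = demos + f"sentence: {query}\nlabel:"
    inputs = tok(prompt, return_tensors="pt").to(lm.device)
    out = lm.generate(**inputs, max_new_tokens=10, do_sample=False)
    pred = tok.decode(out[0, inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True).strip()
    # map the generated string to the nearest class name with the same retriever
    label_sims = util.cos_sim(retriever.encode(pred, convert_to_tensor=True),
                              retriever.encode(class_names, convert_to_tensor=True))[0]
    return class_names[int(label_sims.argmax())]
```

The last two lines implement the output-restriction step described below ("Restricting model output"): the LM generates freely, and the prediction is the class name closest to the generation in the retriever's embedding space.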
Figure 1: Complete pipeline for intent detection with retrieval-augmented in-context learning Experimental Setup Specific retrieval model:For our sentence encoder/retriever, we use the SentenceTransformers library Reimers and Gurevych (2019), and use the pre-trained "all-mpnet-base-v2" model (a 110M parameter model pre-trained on over 1 billion training pairs). The SetFit results are based on contrastively tuning the same pre-trained model trained by Microsoft through the Setfit library1. Footnote 1: [https://github.com/huggingface/setfit](https://github.com/huggingface/setfit) Prompt saturation:The number of examples that fit in-context when greedily filling the context window depends on the specific dataset. For the intent detection datasets, this number was around 110 examples. For GoEmotions, this number was around 70 (140 using the full 4K context length of the LLaMA-2 models). Splits:For the intent detection experiments, to allow for direct comparison with previous works, we use the same 5-shot and 10-shot sets as DialoGLUE Mehri et al. (2020). Experiments are run 3 times and the accuracies are averaged, except the zero-training LLM setups, which are deterministic. For the GoEmotions experiments we average the results across 3 different random 10 and 5-shot splits, as no pre-existing few-shot splits exist. The GoEmotions experiments are composed of the subset of GoEmotions data (84% of training set, 85% of testing set) where the there is only one emotion label, to avoid issues of enforcing an ordering on a linearized version of multiple labels in sequence, as well as to mimic the single-label intent detection datasets setup more closely. Default library parameters were used. Computing Hardware and model differences:All experiments were performed on a single A100 80GB GPU, except those with OPT 175B, which were performed with 8 A100 GPUs. For LLaMA 65B and 70B 8-bit quantization was used. The main difference between the OPT and LLaMA models is the amount of pre-training data used. The LLaMA models were trained on 1T-1.4T tokens, while the OPT models were only trained on 180B tokens (see Zhang et al. (2022) and Touvron et al. (2023) for more details). LLaMA-2 models were trained on 2T tokens. Restricting model output:To reduce computational load and make inference easier, instead of using the logits of the LLM to rank our many classes (requiring multiple forward passes, as class names consist of multiple tokens), we let the LLM generate freely. Having generated an output text, we then use the retrieval model (SBERT) to retrieve the most similar class label from our set of classes. This allows us to restrict the model output to the set of classes we want without incurring additional inference cost. Instances of generated predictions that do not match our class list are few regardless, and shrink proportionately to the number of examples provided in-context. Baselines:Several baselines are provided. The baseline "Pre-trained SBERT 1-NN" refers to using the SBERT retrieval model to retrieve the most similar example in the retrieval pool and use its label directly as the prediction (1-nearest-neighbor). The ConvFit baseline is taken from the reported numbers in the ConvFit paper directly. The baseline "DeBERTa (Pfeiffer)" is the DeBERTa-XXL model released by Microsoft, trained via AdapterHub with the Pfeiffer-style bottleneck adapters Pfeiffer et al. (2020, 2020). Preliminary results with other adapter types (LoRA, IA3, etc.) 
showed that the Pfeiffer-style adapters were the most effective in this particular use-case. The DeBERTa-XXL model was fine-tuned until performance saturation (early stopping). SetFit Tunstall et al. (2022) results are also provided, a method involving contrastive fine-tuning of a retriever model with a classification head, as it is also a competitive and lightweight baseline in this setup. The selection of baselines was done based on recent strong progress on few-shot classification using parameter-efficient fine-tuning, in certain cases having been shown to perform better than full fine-tuning Liu et al. (2022). Footnote 3: [https://github.com/huggingface/setfit](https://github.com/huggingface/setfit) ## 4 Results Example ordering:We provide a brief study regarding how to order examples in-prompt by similarity, since previous work has been inconclusive on this front, suggesting that the ideal ordering is dataset dependent Liu et al. (2022). As seen from Table 3, least-to-most (LTM) similar was the most effective ordering across all datasets. Larger models are significantly less sensitive to ordering. SoTA performance:Tables 1 and 2 shows the performance comparison of all methods. Performance of the retrieval+ICL pipeline on BANKING, HWU and CLINC is state of the art in both the 5 and 10-shot settings. Not only this, but to significantly surpass the previous state of the art for all three intent classification datasets only LLaMA-2 7B is necessary, which with 8-bit quantization can be run on consumer hardware. In the most challenging evaluation setting (the highly-specialized intent classes of the BANKING dataset in the most data-scarce 5-shot setting), the margin between DeBERTa and LLaMA-2 70B is 7.49%. In general the DeBERTa model showed lower performance in the 5-shot scenarios, likely due to the extremely limited data. In the case of GoEmotions (Table 2), when using the neutral category, the Retrieval+ICL pipeline manages to clearly win against the strongest baseline (SetFit) only in the 5-shot case. In the 10-shot case, we can see that Retrieval+ICL performs at least on par, but more likely better than SetFit. Table 4 shows the difficulty of the GoEmotions task, specifically with regards to how granular the classes are. Performance degredation:We also provide a study of how performance changes given the number of examples provided in-context. Figure 2 shows this variation for the HWU64 dataset. The x-axis value of 110 indicates a fully saturated context window, which is on average this number of examples. In the case of LLaMA-7B, performance somewhat degrades after a certain number of demonstrations. 
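For reference, the "Pre-trained SBERT 1-NN" baseline in Tables 1 and 2 reduces to a few lines; the sketch below is our own minimal version of it.

```python
from sentence_transformers import SentenceTransformer, util

def sbert_1nn(queries, pool_texts, pool_labels,
              model_name="sentence-transformers/all-mpnet-base-v2"):
    """1-nearest-neighbor classification: embed the few-shot pool and the
    queries, and copy the label of the most similar pool example."""
    sbert = SentenceTransformer(model_name)
    pool_emb = sbert.encode(pool_texts, convert_to_tensor=True, normalize_embeddings=True)
    query_emb = sbert.encode(queries, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(query_emb, pool_emb)   # (num_queries, pool_size)
    return [pool_labels[i] for i in sims.argmax(dim=1).tolist()]
```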
Looking at Tables 1 and 2, comparing \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**BANKING 77**} & \multicolumn{2}{c}{**HWU 64**} & \multicolumn{2}{c}{**CLINC 150**} \\ \cline{2-7} & **5-shot** & **10-shot** & **5-shot** & **10-shot** & **5-shot** & **10-shot** \\ \hline Pre-trained SBERT 1-NN & 78.41 & 85.39 & 69.89 & 75.46 & 82.51 & 84.84 \\ ConvFit (reported) & - & 87.38 & - & 85.32 & - & 92.89 \\ SetFit & 79.89 \(\pm\) 0.14 & 84.51 \(\pm\) 0.60 & 78.38 \(\pm\) 0.73 & 83.35 \(\pm\) 0.57 & 88.68 \(\pm\) 0.20 & 90.67 \(\pm\) 0.29 \\ DeBERTa (Pfeiffer) & 81.47 \(\pm\) 1.6 & 88.41 \(\pm\) 0.19 & 79.80 \(\pm\) 0.81 & 86.93 \(\pm\) 0.052 & 91.86 \(\pm\) 0.66 & 95.05 \(\pm\) 0.33 \\ \hline OPT 13B & 81.23 & 85.65 & 78.90 & 83.64 & 85.27 & 89.24 \\ OPT 175B & 81.30 & 86.14 & 83.74 & 84.94 & 90.96 & 93.09 \\ LLaMA 7B & 84.42 & 87.63 & 85.87 & 87.55 & 88.58 & 91.73 \\ LLaMA 65B & 87.73 & 90.71 & 89.03 & 90.06 & 91.89 & 94.47 \\ \hline LLaMA 2 7B & 86.40 & 89.45 & 87.55 & 87.82 & 94.13 & 95.20 \\ LLaMA 2 7B 4K & 85.91 & 89.48 & 87.17 & 90.33 & 95.35 & 96.02 \\ LLaMA 2 70B & 87.56 & 90.58 & 88.20 & 89.77 & 96.42 & 97.13 \\ LLaMA 2 70B 4K & **88.96** & **92.11** & **90.61** & **91.73** & **97.56** & **98.18** \\ \hline \hline \end{tabular} \end{table} Table 1: Intent classification accuracy for retrieval+ICL and baseline methods. All retrieval+ICL results are with 20 in-prompt examples unless otherwise specified. The retrieval/training dataset size is given by the second row of the header (10-shot is 10 examples per class, 5-shot is 5). \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**GoEmotions**} \\ \cline{2-5} & **5-shot** & **10-shot** & **5-shot** +**Neut** & **10-shot** +**Neut** \\ \hline Pre-trained SBERT 1-NN & 9.48 \(\pm\) 0.58 & 11.02 \(\pm\) 1.0 & 7.55 \(\pm\) 0.79 & 8.38 \(\pm\) 0.48 \\ SetFit & 25.44 \(\pm\) 4.5 & 34.69 \(\pm\) 3.6 & 21.40 \(\pm\) 3.18 & 27.78 \(\pm\) 0.73 \\ DeBERTa (Pfeiffer) & 18.43 \(\pm\) 2.9 & 32.33 \(\pm\) 0.77 & 13.86 \(\pm\) 1.49 & 25.42 \(\pm\) 1.9 \\ \hline LLaMA 7B & - & - & 22.99 \(\pm\) 0.64 & 24.61 \(\pm\) 0.47 \\ LLaMA 65B & - & - & 24.31 \(\pm\) 0.73 & 25.63 \(\pm\) 0.86 \\ \hline LLaMA 2 7B & 29.60 \(\pm\) 1.5 & 31.40 \(\pm\) 0.83 & 23.78 \(\pm\) 1.1 & 24.75 \(\pm\) 0.43 \\ LLaMA 2 7B 4K & 28.01 \(\pm\) 1.2 & 30.33 \(\pm\) 1.64 & 23.79 \(\pm\) 1.9 & 23.57 \(\pm\) 0.52 \\ LLaMA 2 70B & **36.14** \(\pm\) 1.7 & **37.81** \(\pm\) 1.3 & 24.20 \(\pm\) 0.13 & 25.29 \(\pm\) 0.42 \\ LLaMA 2 70B 4K & - & 37.17 \(\pm\) 0.37 & **28.26** \(\pm\) 0.19 & **29.10** \(\pm\) 0.68 \\ LLaMA 2 70B 4K Retrieval w/o Neutral & - & - & - & 28.95 \(\pm\) 0.52 \\ \hline \hline \end{tabular} \end{table} Table 2: Sentiment classification macro F1 score (following prior work) over 3 random splits for retrieval+ICL and baseline methods. All retrieval+ICL results are from saturating the prompt with in-prompt examples (with a 2K prompt length unless otherwise specified). The retrieval/training dataset size is given by the second row of the header (10-shot is 10 examples per class, 5-shot is 5). \(+\)Neut refers to the case where the “neutral” class (lack of emotion) is included in the dataset. LLaMA-2-7B and LLaMA-2-70B in the regular and 4K context window scenarios, we see very clearly that only the 70B model is able to continually improve with the full 4K context. The 7B model instead sees matching (no improvement) or degraded performance in most cases. 
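The saturated-prompt settings referenced above (the "4K" rows in Tables 1 and 2 and the fully saturated context window in Figure 2) greedily pack retrieved demonstrations until the context budget is used up; a minimal sketch of that selection step is given below, with the token budget and template as illustrative placeholders.

```python
def saturate_prompt(tokenizer, query, ranked_demos, max_tokens=2048, reserve=32):
    """Greedily keep retrieved (text, label) demonstrations, most similar first,
    until the token budget is exhausted, then order the kept ones
    least-to-most similar in the final prompt.
    `ranked_demos` is assumed sorted most-to-least similar to the query;
    `reserve` leaves headroom for the generated label."""
    suffix = f"sentence: {query}\nlabel:"
    used = reserve + len(tokenizer(suffix)["input_ids"])
    kept = []
    for text, label in ranked_demos:
        block = f"sentence: {text}\nlabel: {label}\n\n"
        n = len(tokenizer(block)["input_ids"])
        if used + n > max_tokens:
            break
        kept.append(block)
        used += n
    kept.reverse()  # least-to-most similar, most similar closest to the query
    return "".join(kept) + suffix
```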
Impact of "Neutral" on GoEmotions:From the results in Table 2, by comparing the results with and without the "neutral" category, we see that the difference between the baselines and Retrieval+ICL grows, implying that "neutral" disproportionately hurts the Retrieval+ICL performance. We note that correctly predicting the neural class is challenging for the LM. We demonstrate that removing "neutral" from the retrieval pool does not harm performance ("Retrieval without Neutral" in Table 2). Analyzing the results for one of the runs, we see that out of the 1605 examples of the "neutral" class in the test set, "neutral" only appears in the top 3 classes retrieved by the retriever (by number of examples) only 9% of the time (in the top 5 classes 18%). This suggests that the retriever may be limiting the performance. ## 5 Ablation Studies Several ablations studies are done to test what aspects of the retrieved examples the LLM is using to make the predictions. The ablation studies were done on a random split of the HWU dataset and the GoEmotions dataset. Ablation results for HWU are shown visually in Figure 3 and for GoEmotions in Figure 4. 1. **Obfuscated labels:** We change all the class names to randomly set enumerated names ("Class 1", "Class 2", etc.). The intent is to disentangle the model's use of prior (pre-training) knowledge to perform the task (based on the semantic content of the label names) from the input-output provided in the prompt. 2. **Resampled in-context examples:** To test if similarity between the demonstrations provided in the prompt and the current input example is actually necessary for effective performance. By resampling from the classes initially retrieved by the retriever model, we preserve the distribution of labels but change the input demonstrations themselves so that they are no longer the nearest in the embedding space for each class. 3. **Shuffled labels:** Similarly to Min et al. (2022), after the retrieval step we shuffle the correspondence between the inputs and labels of the retrieved examples, such that inputs are matched randomly from the set of labels the inputs originally belonged to. The intent of this ablation is to examine if the model requires correct input-label correspondences (something that Min et al. (2022) calls into question), or if the model is simply using structural (e.g. prompt format) and distributional (e.g. the distribution of labels in the prompt) elements to produce a prediction. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**BANKING**} & \multicolumn{2}{c}{**HWU**} & \multicolumn{2}{c}{**CLINC**} & \multicolumn{2}{c}{**GoEmotions**} \\ \cline{2-9} & **MTL** & **LTM** & **MTL** & **LTM** & **MTL** & **LTM** & **MTL** & **Random** & **LTM** \\ \hline OPT 13B & 73.64 & **85.65** & 76.39 & **83.64** & 81.11 & **89.24** & - & - & - \\ LLaMA 7B & 83.64 & **87.63** & 86.99 & **87.55** & 90.20 & **91.73** & 15.91 & 20.89 \(\pm\) 0.85 & **23.58** \\ LLaMA 65B & 88.08 & **90.71** & 89.03 & **90.06** & 93.47 & **94.47** & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of LLaMA 7B and OPT 13B model prompt orderings on intent detection datasets (20 examples in prompt, 10-shot), random split. MTL is most-to-least similar and LTM is the inverse. Figure 2: HWU performance as a function of the number of examples in prompt. The x-axis scale is non-linear, meaning that there are diminishing returns with more examples. 
“Sat” (saturated) indicates filling the prompt greedily until the max length is reached. ## 6 Discussion ### Small models cannot use long contexts as effectively as large models One trend noticeable from the performance graph as a function of the number of examples for HWU (see Figure 2) is that small models seem to be unable to use more examples as effectively as large models. The smaller OPT model is unable to effectively make use of the entire context window when it is filled and remains at relatively low performance. In contrast, OPT 175B shows continual improvement when more examples are added. A similar trend is visible for the LLaMA models, where the performance of the 7B model does not change significantly (see 2), but the 65B model is able to continuously improve. The smaller models either level off (OPT-13B) or lose performance (LLaMA-7B). In the 4K full context window settings for LLaMA-2, the difference between model scales is even more apparent (Tables 1 and 2). We see the small model showing inconsistent use of the longer contexts; sometimes improving, but mostly staying the same or worsening performance. Meanwhile, the large model consistently improves with the full context in almost all cases. ### Similarity to current datapoint matters for intent classification In the resampling ablation for HWU (see Figure 3) we see that resampling from the initial class distribution provided by the retriever model damages the performance across both OPT 175B and LLaMA 7B. This supports the strong performance numbers of the LLMs, showing that the similarity between in-context demonstrations and the current input matters. This implies that the LM is doing more than just selecting the most common class or just using the shortlist of class labels from the full set of classes to select in a more zero-shot fashion. One interesting difference to note is that OPT 175B, the larger model, shows a larger drop from the resampling as the number of in-context demonstrations increases, compared to LLaMA-7B, whose performance stays roughly constant (but lower than non-resampled). This may indicate that the LLaMA models with their additional training data are more robust to the resampling process, due to stronger pre-training knowledge and/or more robust performance overall. In the case of GoEmotions, we see almost no variation with resampling, showing that similarity to the input example is less influential, though the ordering of the examples relative to each other does seem to make a difference for the 7B model (Table 3). ### Semantically significant label names matter greatly for sentiment classification In the obfuscation ablation (see Figure 3), we see that all models are hurt by obfuscating label names. We see however that models are still able to learn to perform the task effectively, and in fact show similar improvement curves with increasing number of examples, just with a lower starting performance. This demonstrates that the semantic content of the Figure 4: Classification accuracy for three ablations for GoEmotions: obfuscated labels (left), resampled in-context examples (center), shuffled labels (right). Figure 3: Classification accuracy for three ablations for HWU64: obfuscated labels (left), resampled in-context examples (center), shuffled labels (right). labels is significantly useful to the models but simultaneously it is not integral to performing the task, which can also be done without semantically significant labels. 
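The three ablations of Section 5 only change how the retrieved demonstrations are presented to the LM; a sketch of the corresponding transformations (our own illustration of the described setup) is:

```python
import random

def obfuscate_labels(demos, label_set):
    """Ablation 1: replace class names with enumerated placeholders."""
    mapping = {lab: f"Class {i + 1}" for i, lab in enumerate(sorted(label_set))}
    return [(text, mapping[lab]) for text, lab in demos]

def resample_demos(demos, pool_by_label, rng=random):
    """Ablation 2: keep the retrieved label distribution, but replace each
    demonstration with a random example of the same class from the pool."""
    return [(rng.choice(pool_by_label[lab]), lab) for _, lab in demos]

def shuffle_labels(demos, rng=random):
    """Ablation 3: permute the labels among the retrieved examples, breaking
    the input-label correspondence while keeping the label multiset."""
    labels = [lab for _, lab in demos]
    rng.shuffle(labels)
    return [(text, lab) for (text, _), lab in zip(demos, labels)]
```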
In the case of GoEmotions, we see that the obfuscated labels particularly hurt the model, bringing it down significantly.It seems to be the case that the class names are integral to performance, but at the same time more examples are still helpful to the model, as in the 4K context window it still sees improved performance. ### Input-label correspondence matters for all datasets Shuffling the input-label correspondence is the ablation in which we see the performance of all the models decrease the most in the intent detection case (see Figure 3). Specifically, we see that the performance drop is proportional to the number of examples (more shuffled examples brings a larger drop). That being said, it is noteworthy that the performance of both models in this shuffled regime is still significantly above random chance for every number of demonstrations shown, implying perhaps that the LM's prior knowledge based on the label names is still contributing significantly to performance. In all 4 datasets (intent classification and GoEmotions), shuffling the labels hurts the large model more in particular. This aligns with the results of Wei et al. (2023), whose authors show that larger models are more able to learn perturbed input correspondences than smaller models, which manifests in this experiment as lower performance. In other words, the larger model is trying to learn the perturbed input correspondence, and thus losing more and more performance with more examples, while the smaller model is able to more effectively ignore the perturbation. ## 7 Retriever and LM Generalization One interesting result from our experiments is the fact that generic retrievers seem to be able to quite effectively generalize across domains and tasks. Using the same exact retriever model across 3 different intent detection datasets (which according to the taxonomy of Hupkes et al. (2022) constitutes cross-task generalization) as well as a sentiment classification dataset (according to the previous taxonomy, a cross-domain generalization) demonstrates SoTA or better performance in almost all cases. The distribution shift locus, for both the retriever and the language model generating the final prediction, is from pretraining to testing time. This is because they are both pre-trained on massive generic data before being tested in a zero-shot setting. ## 8 Related Work Nearest neighbor selection of in-context examples:One of the earliest studies of the role of example selection in ICL is "KATE" (Liu et al., 2022). In this paper, the authors probe the performance of GPT-3 on NLP tasks using KNN retrieval (RoBERTa) for example selection. They compare this method against random selection and using the retrieval model directly (plain KNN). They also examine the effect of example ordering on performance and conclude that the most performant ordering (least-to-most and most-to-least similar orderings are tested) depends on the dataset. In our work, we also experiment with example ordering, and conclude that least-to-most ordering is the \begin{table} \begin{tabular}{p{113.8pt}|p{113.8pt}|p{113.8pt}} \hline \hline **Text** & **Prediction** **LLaMA-2-70B** & **Gold label** \\ \hline Lmao the brigading is real & amusement & amusement \\ Enjoy the void & neutral & neutral \\ I really relate to this. & realization & approval \\ This is the only viable way out of Brexit. & optimism & approval \\ want* a source on that, sorry. & desire & remorse \\ I didn’t know that, thank you for teaching me something today! 
& gratitude & gratitude \\ Well it obviously helps you rationalize your total unwillingness to take action to make the world a better place. I hope that you grow past that. & sadness & admiration \\ Damm, we need healthy PGs. & sadness & annoyance \\ Welcome to The Church of Jesus Christ of Latter Day Saints, where families can be SEPARATED forever & sadness & gratitude \\ \hline \hline \end{tabular} \end{table} Table 4: Sample datapoints from GoEmotions most effective across all datasets tested. Works demonstrating order instability:Several recent works have demonstrated that the order of in-context examples makes a larger difference in performance, including Lu et al. (2022); Zhao et al. (2021). These works demonstrate such order instability that certain permutations bring near SoTA performance on tasks while others perform at near random guessing. Fine-tuned retrieval:Several works employ the use of fine-tuned retrievers, re-rankers, and/or LMs, including Rubin et al. (2022); Ram et al. (2023); Shi et al. (2023). Some, like REPLUG (Shi et al., 2023), use LM feedback in the form of using the LM to score documents to train the retriever. The goal of both Ram et al. (2023) and Shi et al. (2023) is to improve language modeling and not ICL ability. Rubin et al. (2022) uses a similar LM-score-based feedback to train a retriever (like REPLUG) but for ICL. The difference between all of these works and this work is that we demonstrate that an off-the-shelf retriever is sufficient out-of-the-box for SoTA performance with no additional tuning. Works calling into question efficacy of ICL:Certain recent works have called into question the efficacy of ICL and models' ability to learn tasks they were not exposed to during pre-training (Min et al., 2022; Razeghi et al., 2022). In Min et al. (2022) authors show that randomly perturbing input-label pairings for some tasks can still lead to reasonably good performance, calling into question whether any "learning" is happening at all with ICL. The work in Razeghi et al. (2022) demonstrates that models perform better on data instances they have seen frequently during pre-training, implying that models are primarily memorizing and that their generalization capabilities in terms of ICL remain limited. Xie et al. (2022) suggests that ICL ability emerges due to the specific structure of the training data, specifically long-range dependencies. Use of long contexts:Several works have demonstrated that long contexts are difficult for LMs to handle and show certain peculiarities. Kazemnejad et al. (2023) investigates the relationship between length generalization and positional embedding types, showing that in certain cases no positional embeddings can perform better. This work is closely related to use of long contexts for ICL, as it demonstrates the difficulty involved in generalizing to long context lengths, as well as providing an explanation for LMs' sensitivity to ordering (positional embeddings). In Liu et al. (2023), the authors investigate the impact of long contexts on document question answering, finding that the positions of the answers within the context matter greatly for performance, and generally demonstrating that longer contexts cause lower performance. In this work we show that larger models are needed to effectively take advantage of long contexts for ICL. Few-shot intent detection:The current state of the art in few-shot intent detection is the ConvFit method (Vulic et al., 2021). 
ConvFit uses a pre-trained LM in a dual-encoder configuration (e.g. BERT or RoBERTa) with two training stages. The first stage is a conversational fine-tuning stage using a generic conversational corpus with a retrieval task (using tuples of (context, response) retrieve the correct response for each context). The second stage is fine-tuning on the specific intent classification dataset with a contrastive loss, allowing the resulting LM to be used in a KNN fashion. ## 9 Conclusion In this work, we show that ICL with off-the-shelf frozen pre-trained retriever models can provide strong performance for text classification tasks with many labels. We show state of the art performance across three different intent classification datasets, and competitive performance with fine-grained sentiment classification. We also show that larger models are necessary to make use of more in-context examples, whereas small models mostly plateau or even show decreasing performance after a point. Through several ablation experiments, we demonstrate that LMs make use of all aspects of the input examples: semantically significant label names, correct input-label correspondences, as well as the similarity between the in-context demonstrations and the current input point, however to varying degrees depending on the dataset and domain. ## 10 Acknowledgement SR is supported by the Canada CIFAR AI Chairs program and the NSERC Discovery Grant program. AM is supported by an IVADO Excellence Scholarship. ## 11 Limitations One limitation of the research in this paper is that the experiments of this paper use the pre-existing DialoGLUE few-shot splits for each dataset, following the example of prior works and to remain comparable to them (with the exception of the ablation study, which uses a separate split). However, since experiments were done only on this split, it is not necessarily the case that the results/model rankings are transferable to other splits as well (although it is worth noting from Figure 3 that performance on the random ablation split is very similar to the DialoGLUE split, and the model ranking remains the same). This limitation is not the case with GoEmotions, whose results are given as averages across three random splits. Another limitation is the relatively small number of runs/seeds (only 3) due to limitations on compute. One further limitation is that the experiments are all performed on English-language data.
2309.12825
OmniDrones: An Efficient and Flexible Platform for Reinforcement Learning in Drone Control
In this work, we introduce OmniDrones, an efficient and flexible platform tailored for reinforcement learning in drone control, built on Nvidia's Omniverse Isaac Sim. It employs a bottom-up design approach that allows users to easily design and experiment with various application scenarios on top of GPU-parallelized simulations. It also offers a range of benchmark tasks, presenting challenges ranging from single-drone hovering to over-actuated system tracking. In summary, we propose an open-sourced drone simulation platform, equipped with an extensive suite of tools for drone learning. It includes 4 drone models, 5 sensor modalities, 4 control modes, over 10 benchmark tasks, and a selection of widely used RL baselines. To showcase the capabilities of OmniDrones and to support future research, we also provide preliminary results on these benchmark tasks. We hope this platform will encourage further studies on applying RL to practical drone systems.
Botian Xu, Feng Gao, Chao Yu, Ruize Zhang, Yi Wu, Yu Wang
2023-09-22T12:26:36Z
http://arxiv.org/abs/2309.12825v1
# _OmniDrones_: An Efficient and Flexible Platform for Reinforcement Learning in Drone Control ###### Abstract In this work, we introduce _OmniDrones_, an efficient and flexible platform tailored for reinforcement learning in drone control, built on Nvidia's Omniverse Isaac Sim. It employs a bottom-up design approach that allows users to easily design and experiment with various application scenarios on top of GPU-parallelized simulations. It also offers a range of benchmark tasks, presenting challenges ranging from single-drone hovering to over-actuated system tracking. In summary, we propose an open-sourced drone simulation platform, equipped with an extensive suite of tools for drone learning. It includes 4 drone models, 5 sensor modalities, 4 control modes, over 10 benchmark tasks, and a selection of widely used RL baselines. To showcase the capabilities of _OmniDrones_ and to support future research, we also provide preliminary results on these benchmark tasks. We hope this platform will encourage further studies on applying RL to practical drone systems. For more resources including documentation and code, please visit: [https://omindrones.readthedocs.io/](https://omindrones.readthedocs.io/). ## I Introduction Multi-rotor drones and multi-drone systems are receiving increasing attention from both industry and academia due to their remarkable agility and versatility. The ability to maneuver in complex environments and the flexibility in configuration empower these systems to efficiently and effectively perform a wide range of tasks across various industries, such as agriculture, construction, delivery, and surveillance [1]. Recently, deep reinforcement learning (RL) has made impressive progress in robotics applications such as locomotion and manipulation. It has also been successfully applied to drone control and decision-making [2, 3, 4, 5], improving the computational efficiency, agility, and robustness of drone controllers. Compared to classic optimization-based methods, RL-based solutions circumvent the need for explicit dynamics modeling and planning and allow us to approach these challenging problems without accurately knowing the underlying dynamics. Moreover, for multi-drone systems, we can further leverage Multi-Agent RL (MARL), which is shown to be effective in addressing the complex coordination problems that arise in multi-agent tasks [6, 7, 8]. Efficient and flexible simulated environments play a central role in RL research. They should allow researchers to conveniently build up the problem of interest and effectively evaluate their algorithms. Extensive efforts have been made to develop simulators and benchmarks for commonly studied robot models like quadrupedals and dexterous arms [9, 10, 11, 12]. However, although a range of drone simulators already exists, they suffer from issues such as relatively low sampling efficiency and difficult customization. To help better explore the potential of RL in building powerful and intelligent drone systems, we introduce _OmniDrones_, a platform featuring: * Efficiency. Based on Nvidia Isaac Sim [13, 14], _OmniDrones_ can notably achieve over \(10^{5}\) steps per second in terms of data collection, which is crucial for applying RL-based methods at scale. * Flexibility. By default, we provide 4 drone models commonly used in related research, along with 4 control modes and 5 sensor modalities, all being easy to extend. We also make it straightforward for users to import their own models and add customized dynamics. * RL-support. 
_OmniDrones_ includes a diverse suite of 10+ single- and multi-agent tasks, presenting different challenges and difficulty levels. The tasks can be easily extended and seamlessly integrated with modern RL libraries. To demonstrate the features and functionalities of _OmniDrones_ while also providing some baseline results, we implement and benchmark a spectrum of popular single- and Fig. 1: A visualization of the various drone systems in _OmniDrones_, for which we offer highly efficient simulation, reinforcement learning environments, and benchmarking of baselines. multi-agent RL algorithms on the proposed tasks. ## II Related Work Simulated environments play a crucial role in the RL literature. We highlight the motivation of our work by reviewing the solutions developed out of various considerations, and related research in RL-based control of drones. Simulated Environments for dronesA common option in the control literature is to use Matlab to perform numerical simulations. This approach enjoys simplicity but has difficulty building complex and realistic tasks and is less friendly to reinforcement learning. Flightmare [15] and Airsim [16] leverage game engines such as Unity and Unreal Engine that enable visually realistic simulation. Flightmare's efficient C++ implementation can notably achieve \(10^{6}\) FPS but at the cost of being inflexible to extend. Simulators based on the Robot Operating System (ROS) and Gazebo [17] have also been widely used [18, 19] as they provide the ecosystem closest to real-world deployment. For example, RotorS [18] provides very fine-grained simulation of sensors and actuators and built-in controllers for the included drone models, enabling sim-to-real transfer of control policies with less effort. However, Gazebo suffers from poor scalability and sample efficiency. Additionally, the working mechanism of ROS makes environment interaction asynchronous, which violates the common implementation practice in RL. To provide an RL-friendly environment, PyBullet-Drones [20] introduced an OpenAI Gym-like environment for quadrotors based on PyBullet physics engine [21]. However, it relies on CPU multiprocessing for parallel simulation, which limits its scalability and leads to fewer steps per second. Our platform aims firstly for efficiency and a friendly workflow for RL. While the highly-parallelized GPU-based simulation ensures a high sampling performance, it is also convenient to customize and extend the environment at Python level and seamlessly work with modern RL libraries such as TorchRL. Reinforcement learning of drone controlReinforcement learning is seen as a potential approach for control and decision-making for multi-rotor drones. Prior works explored end-to-end training of visual-motor control policies [22, 23, 24, 2] to avoid the need for explicit dynamics modeling and hand-engineered control stack. Model-based reinforcement learning can combine learned forward dynamics models with planning methods, such as model predictive control (MPC), and has been investigated in [25, 3]. Applications to agile drone racing [26, 4] also demonstrated RL-based policies' ability to cope with highly dynamic tasks, generating smooth and near-time-optimal trajectories in real-time. [27] benchmarked different choices of action spaces and control levels regarding learning performance and robustness. 
More recently, [5] trains a single adaptive policy that can control vastly different quadcopters, showing the potential of reinforcement learning in terms of generalization and adaptation capabilities. To fully uncover what possibilities RL brings to drones, a flexible and versatile platform that supports various research purposes is highly desirable. In light of that, _OmniDrones_ aims to be suitable for a range of challenging topics, such as multi-agent coordination, adaptive control, design of modular drones, etc. ## III OmniDrones Platform At a high level, _OmniDrones_ consists of the following main components: (1) A simulation framework featuring GPU parallelism and flexible extension; (2) Utilities to manipulate and extend the drone models and simulation for various purposes; (3) A suite of benchmark task scenarios built from (1) and (2), serving as examples and starting points for customization. An overview of _OmniDrones_ is presented in Fig. 2. For comparison, Tab. I contrasts _OmniDrones_ with existing drone simulators, highlighting the advantages of our platform. In the following subsections, we describe the details of these components and provide examples to demonstrate the overall workflow. ### _Simulation Framework_ Drones have garnered significant attention from both industry and academia due to their remarkable agility and versatility. For example, a single drone can execute acrobatics or deliver lightweight items independently, while multiple drones can work together to aid in rescue operations in dense forests or transport bulky cargo collaboratively. Our simulation framework employs a bottom-up modular design approach to cater to the diverse needs of drone applications. This approach begins by setting up all basic modules of a drone system. Afterward, these modules can be integrated procedurally to simulate complex task scenarios. Following this strategy, our simulation includes a range of basic modules: (1) drone models, (2) sensor stacks, (3) control modes, (4) system configurations, and (5) task specifications. Regarding the multi-rotor dynamics, we use the general model given by: \[\dot{\mathbf{x}}_{W}=\mathbf{v}_{W} \dot{\mathbf{v}}_{W}=\mathbf{R}_{WB}\mathbf{f}+\mathbf{g}+\mathbf{ F} \tag{1}\] \[\dot{\mathbf{q}}=\frac{1}{2}\mathbf{q}\otimes\omega \dot{\mathbf{o}}=\mathbf{J}^{-1}(\eta-\omega\times J\omega) \tag{2}\] where \(\mathbf{x}_{W}\) and \(\mathbf{v}_{W}\) indicate the position and velocity of the drone in the world frame. \(R_{WB}\) is the rotation matrix from the body frame to the world frame. \(\mathbf{J}\) is the diagonal inertia matrix, and \(\mathbf{g}\) denotes Earth's gravity. \(\mathbf{q}\) is the orientation represented with quaternion, and \(\omega\) is the angular velocity. \(\otimes\) denotes the quaternion multiplication. \(\mathbf{F}\) includes other external forces, e.g., those introduced by the drag and downwash effects. The collective thrust \(\mathbf{f}\) and body torque \(\eta\) are derived from single rotor thrusts \(\mathbf{f}_{i}\) as: \[\mathbf{f}=\sum_{i}\mathbf{R}_{B}^{(i)}\mathbf{f}_{i} \tag{3}\] \[\eta=\sum_{i}\mathbf{T}_{B}^{(i)}\times\mathbf{f}_{i}+k_{i} \mathbf{f}_{i} \tag{4}\] where \(\mathbf{T}_{B}^{(l)}\) and \(\mathbf{R}_{B}^{(l)}\) are the local translation and orientation (tilt) of the \(i\)-th rotor, \(k_{i}\) the force-to-moment ratio, represented in the body frame. We offer a range of popular drone models for various applications. 
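Before turning to the specific models, the following minimal NumPy sketch makes the dynamics above concrete by advancing the state of Eqs. (1)-(4) by one explicit-Euler step. The division by mass is written out explicitly, the external force \(\mathbf{F}\) is omitted, and all function names and numerical values (arm length, mass, inertia, rotor constants) are illustrative placeholders rather than the platform's actual API or parameters.

```python
import numpy as np

def quat_mul(q, p):
    # Hamilton product of quaternions given as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_to_rot(q):
    # Rotation matrix R_WB (body to world) from a unit quaternion (w, x, y, z).
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def step(x, v, q, omega, rotor_thrusts, dt, rotor_pos, rotor_axes, k_m, mass, J,
         g=np.array([0.0, 0.0, -9.81])):
    """One explicit-Euler step of Eqs. (1)-(4); the external force F is omitted."""
    f_body, torque = np.zeros(3), np.zeros(3)
    for f_i, t_i, axis_i, k_i in zip(rotor_thrusts, rotor_pos, rotor_axes, k_m):
        f_vec = f_i * axis_i                              # rotor thrust in the body frame
        f_body += f_vec                                   # Eq. (3)
        torque += np.cross(t_i, f_vec) + k_i * f_vec      # Eq. (4): arm torque + drag torque
    v_dot = quat_to_rot(q) @ f_body / mass + g            # Eq. (1)
    omega_dot = np.linalg.solve(J, torque - np.cross(omega, J @ omega))   # Eq. (2)
    q_dot = 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))             # Eq. (2)
    x, v, omega = x + v * dt, v + v_dot * dt, omega + omega_dot * dt
    q = q + q_dot * dt
    return x, v, q / np.linalg.norm(q), omega             # re-normalise the quaternion

# Example: a symmetric X-quadrotor near hover (all numbers are illustrative).
arm = 0.17
rotor_pos = [np.array([ arm,  arm, 0.0]), np.array([-arm,  arm, 0.0]),
             np.array([-arm, -arm, 0.0]), np.array([ arm, -arm, 0.0])]
state = step(np.zeros(3), np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3),
             rotor_thrusts=[0.7 * 9.81 / 4] * 4, dt=0.016, rotor_pos=rotor_pos,
             rotor_axes=[np.array([0.0, 0.0, 1.0])] * 4,
             k_m=[0.016, -0.016, 0.016, -0.016], mass=0.7,
             J=np.diag([3e-3, 3e-3, 5e-3]))
```

The alternating signs of the force-to-moment ratios \(k_i\) encode the opposite spin directions of neighbouring rotors; tilting the entries of `rotor_axes` away from the body z-axis reproduces the over-actuated configurations such as the _Omav_.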
We detail four representative drones in this paper, including the _Crazyflie_, a small X-configuration quadrotor; the _Hummingbird_, an H-configuration quadrotor; the _Firefly_, a hexacopter; and the _Omav_, an omnidirectional drone with tiltable rotors. These models vary in size and design, from compact quadrotors to larger omnidirectional drones, each with unique dynamical features. Moreover, our simulator provides an array of sensors such as IMUs, RGB-D cameras, segmentation sensors, force sensors, and contact sensors. This range ensures drones can be easily tailored with the preferred sensor combinations, addressing specific requirements for state estimation and perception. We also implement for most drone models three PD controllers acting on different levels of commands, including position/velocity, body rate, and attitude. Before delving into the rest of the paper, we outline the primary features of the simulation framework based on the designs mentioned earlier: Multi-rotor drone dynamics: OmniDrones supports drone simulations with variable rotor numbers through a general implementation of drone dynamics, as described above. We also account for external forces in the dynamics, expanding the range of potential tasks. Parallelism and scalabilitySimilar to other GPU-based simulators, _OmniDrones_ also benefits from the high parallelism and subsequent near-linear scalability of Isaac Sim [30]. This enables us to achieve a high-performing policy within a short amount of time. Physical configuration and rigid dynamicsThe physical configuration of a drone model is specified by a Universal Scene Description (USD) file, which can be converted from the URDF format commonly used in Gazebo-based simulations. That means that _OmniDrones_ is compatible with drone models that have been used in the community. \begin{table} \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c} \hline & **Papiss Engine** & **Renderer** & \begin{tabular}{c} **Vendorcasting** \\ **OmniDrones** \\ **Om Notably, with Isaac Sim, it is possible to programmatically modify the physical configuration, e.g., changing its physical properties and assembling with other drones to form multi-drone systems as shown in Fig. 2. ### _Extending the Drone Models_ Certain applications may require additional payloads to be attached. Also, it might be desirable to create multi-drone systems to cope with tasks beyond a single drone's capability. With the flexible simulation framework, one feature of our platform is the ability to procedurally build and extend a drone system's physical/logical **configuration** for diverse interests. Notably, they can be generated programmatically from existing drone models and a set of primitives in a highly parameterizable fashion. Here, we introduce examples of interesting configurations provided in _OminiDrones_. The formed configurations may cause considerable changes in the drone's dynamics and thus present challenges for conventional controller design. * Payload & InvPendulum: A single drone is connected to a weight through a rigid link. The attached weight will alter and destabilize the drone's dynamics. The arrangement with the payload at the bottom is called Payload, while the arrangement with the payload on top is called InvPendulum. * Over-actuated Platform (Over): An over-actuated platform consists of multiple drones connected through rigid connections and 2-DoF passive gimbal joints, similar to [31]. Each drone functions as a tiltable thrust generator. 
By coordinating the movements of the drones, it becomes possible to control their positions and attitudes independently, allowing for more complex platform maneuvers. * Transport: A transportation system comprises multiple drones connected by rigid links. This setup allows them to transport loads that exceed the capacity of a single drone. Drones need to engage in coordinated control and collaboration for stable and efficient transportation. * Dragon: A multi-link transformable drone as described in [32]. Each link has a dual-rotor gimbal module. The links are connected via 2-Dof joint units sequentially. The ability to transform enables highly agile maneuvers and poses a challenging control problem. ### _Randomization_ Since there are unavoidable gaps between the simulated dynamics and reality, randomization is an important and necessary technique for obtaining robust control policies that can be easily transferred and deployed to real-world robots. One particular advantage of having a large number of parallel environments is that we can collect a large volume of diverse data from the randomized distribution, making _OminiDrones_ appealing for research regarding Sim2Real transfer and adaptation. We list example factors that users can manipulate in Tab. II. ### _Benchmarking Tasks_ Based on the simulation framework and utilities introduced above, 15 tasks of varying complexity and characteristics are developed for benchmarking. They are formulated as decentralized partially observable Markov Decision Process (Dec-POMDP) [33], where partial observability comes from the fact that only a limited part of the system state is known or measured by the sensors and that agents do not have full access to states about each other in a decentralized multi-agent setting. A **task** specifies the POMDP on top of a certain **configuration**, similar to DMControl [34]. For example, InvPendulum-Hover is a task in which the agent (drone) is required to hover an inverted pendulum system introduced before at a desired state. For those that do not have a special configuration, we omit the first part. According to their formulations and challenges, we divided the task specifications into categories that each might apply to a set of configurations. Here, we list and introduce several representative examples: * Hover: The drone(s) need to drive the system to reach and maintain a target state. This basic task is simple for most configurations except the inherently unstable ones, e.g., InvPendulum. * Track: The drone(s) are required to track a reference trajectory of states. The ability to (maybe not explicitly) predict how the trajectory would evolve and plan for a longer horizon is needed for accurate tracking. * FlyThrough: The drone(s) must fly the system through certain obstacles in a skillful manner, avoiding any critical collision. The obstacles are placed such that a long sequence of coherent actions is needed. Such a task often challenges the RL algorithm in exploration. * Formation: A group of drones needs to fly in a specific spatial pattern. This task examines the ability to deal with coordination and credit assignment issues. For detailed specifications on these tasks, please refer to the code. Generally, each drone observes kinematic information such as relative position, orientation (expressed in quaternions), and linear and angular velocities. Additional sensors can be attached or mounted to RGB-D images if needed. 
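As a purely illustrative example of how a task can be assembled from these kinematic quantities, the snippet below sketches a possible Hover-style observation vector and shaped reward. The weights and terms are assumptions made for exposition only and do not reproduce the exact observation layouts or reward functions used by the benchmark tasks.

```python
import numpy as np

def hover_observation(pos, quat, lin_vel, ang_vel, target_pos):
    # Kinematic observation: position error, orientation (quaternion), velocities.
    return np.concatenate([target_pos - pos, quat, lin_vel, ang_vel])

def hover_reward(pos, quat, lin_vel, ang_vel, target_pos,
                 w_pos=1.0, w_up=0.3, w_vel=0.02, w_spin=0.05):
    pos_err = np.linalg.norm(target_pos - pos)
    w, x, y, z = quat                        # "uprightness": world-z component of the body z-axis
    up = 1.0 - 2.0 * (x * x + y * y)
    return (w_pos / (1.0 + pos_err ** 2) + w_up * up
            - w_vel * np.linalg.norm(lin_vel) - w_spin * np.linalg.norm(ang_vel))
```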
Regarding the action space, the drones are commanded target throttles for each motor, which the underlying motors strive to attain during the control process. Additionally, by integrating given with controllers, we can transform the action space to allow for the usage of higher-level control commands. We provide 4 control modes (rotor, velocity, rate, and attitude) for ordinary multi-rotor drones. \begin{table} \begin{tabular}{l l l l} \hline \hline Aspect & Examples & Startup & Runtime \\ \hline Physical config. & rigid connection, object scale & \(\checkmark\) & X \\ Inertial prop. & mass, inertia & \(\checkmark\) & \(\checkmark\) \\ Rotor param. & force constant, motor gain & \(\checkmark\) & \(\checkmark\) \\ External forces & wind, drag & \(\checkmark\) & \(\checkmark\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Randomizable Simulation Aspects ### _Reinforcement Learning with OmniDrones_ It is common in robotics to have RL tasks with complex input and output structures. For example, we might have sensory data from different modalities or want to adopt the teacher-student training scheme where some privileged observation is only visible to a part of the policy. The presence of multiple and potentially heterogeneous agents could introduce further complications. Therefore, to have a flexible interface that conveniently handles tensors in batches, we follow TorchRL's environment specification and use TensorFlowDict as the data carrier, both initially proposed by [35]. We also provide utilities to transform the observation and action space for common purposes, such as discretizing action space, wrapping a controller, and recording state-action history. With that, we implement and evaluate various algorithms to provide preliminary results and serve as baselines for subsequent research. They include PPO [36], SAC [37], DDPG [38], and DQN for single-agent tasks and MAPPO [8], HAPPO [39], MADDPG [40], and QMIX [41] for multi-agent ones. ## IV Experiments Leveraging the simulation framework and benchmark tasks, our platform provides a fair comparative basis for different RL algorithms, serving as a starting point for subsequent investigations. In this section, we showcase the features and functionalities of _OmniDrones_ through experiments and evaluate a range of popular RL algorithms on the proposed tasks. In all the following experiments, we use a simulation time step \(dt=0.016\), i.e., the control policy operates at around \(60\)Hz. ### _Simulation Performance_ We select a single-agent (Track) and a multi-agent (Over-Hover) task, respectively, to demonstrate the efficient simulation capabilities of our simulator under different numbers of environments. As shown in Tab. III, the efficient PyTorch dynamics implementation and Isaac Sim's parallel simulation capability allow _OmniDrones_ to achieve near-linear scalability with over \(10^{5}\) frames per second (FPS) during rollout collection. The results were obtained on a desktop workstation with NVIDIA RTX4090, Isaac Sim 2022.2.0. The control policy is a 3-layer MLP with 256 hidden units per layer implemented with PyTorch. Note that there are additional computations for the observations/rewards and logging logic besides simulation. ### _Benchmarking RL baselines_ The algorithms are adapted following open-source implementations and modified to be compatible with large-scale training. All runs follow a default set of hyper-parameters without dedicated tuning. Note that the experiments in this part all use direct rotor control. 
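To illustrate the batched data flow described above, the following sketch builds a TensorDict (TorchRL's data carrier) shaped like a batch of transitions from many parallel environments and manipulates it by key. The key names, dimensions, and batch layout are placeholders chosen for illustration and may differ from the platform's exact environment specifications.

```python
import torch
from tensordict import TensorDict

num_envs, obs_dim, act_dim = 4096, 23, 4

# A mock batch of transitions, shaped like what a parallel drone environment would emit.
td = TensorDict(
    {
        "observation": torch.randn(num_envs, obs_dim),
        "action": torch.zeros(num_envs, act_dim),
        "next": TensorDict(
            {
                "observation": torch.randn(num_envs, obs_dim),
                "reward": torch.zeros(num_envs, 1),
                "done": torch.zeros(num_envs, 1, dtype=torch.bool),
            },
            batch_size=[num_envs],
        ),
    },
    batch_size=[num_envs],
)

# Policies and losses operate on the whole batch by indexing keys...
actions = torch.tanh(td["observation"] @ torch.randn(obs_dim, act_dim))
td.set("action", actions)

# ...while transforms can slice or reshape the batch without touching its contents.
minibatch = td[:1024]                 # first 1024 environments
flat = td.reshape(-1)                 # flatten any extra batch dimensions
print(minibatch.batch_size, flat["next", "reward"].shape)
```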
For single-agent tasks, we evaluate PPO, SAC, DDPG, and DQN using two drone models, namely Hummingbird and Firefly. The two drone models have 4 and 6 action Fig. 3: Benchmarking results. \begin{table} \begin{tabular}{c c c} \hline \hline \#Ens & \begin{tabular}{c} Track \\ (1 agents) \\ \end{tabular} & \begin{tabular}{c} Over-Hover \\ (4 agents) \\ \end{tabular} \\ \hline 1024 Envs & \(196074\pm 3754\) & \(115244\pm 1973\) \\ 2048 Envs & \(385027\pm 6688\) & \(204556\pm 7511\) \\ 4096 Envs & \(732109\pm 10362\) & \(310027\pm 12233\) \\ \hline \hline \end{tabular} \end{table} TABLE III: Simulation performance (FPS) of _OmniDrones_. dimensions, respectively, and differ in many inertial properties. For DQN, we discretize the action space by quantizing each dimension into its lower and upper bounds. We train each algorithm in 4096 parallel environments for 125 Million steps. The results are shown in Fig. 2(a). It can be observed that PPO, SAC, and DDPG are all good baselines for most tasks. However, various failures are observed in some tasks that require substantial exploration to discover the optimal behavior, i.e., FlyThrough. DQN fails to make progress in all tasks. Notably, PPO-based agents can be trained within 10-20 minutes. On the other hand, SAC and DDPG generally exhibit better sample efficiency. However, they require a longer wall time, since they need a significantly higher number of gradient steps with more data for each update. For the more challenging multi-agent coordination tasks, we evaluate MAPPO, HAPPO, MADDPG, and QMIX using _Hummingbird_. We train all algorithms for 150M steps. The results are shown in Fig. 2(b). The two PPO-based approaches are similar, and both achieve reasonable performance. The failure of MADDPG is potentially due to its exploration strategy being insufficient in multi-agent settings without careful tuning of the exploration noise. To apply the value-decomposition method, QMIX, we discretize the action space as we did for DQN. The results suggest that PPO-based algorithms may serve as strong and robust baselines for obtaining cooperative control policies, which would otherwise require involved analysis of the multi-agent system dynamics. ### _Drone Models and Controllers_ Different drone models render different properties and hence flight performance. It also decides the difficulty of the fundamental aspect of each learning task. The comparison of 4 drone models is shown in Fig. 4. Interestingly, although being the most complex (with 12 rotors and 6 tilt units), _Omav_ can be trained to achieve comparable or even better performance on the same budget. This reveals the potential of RL in quickly obtaining a control policy for unusual drone models. The choice of action space can have a vital impact on the performance and robustness of learned policies [27]. Considering the usage of a controller as a transform of the action space, we verify this point by comparing the following four approaches using _Firefly_ and the implemented controllers: (1) Direct control, i.e., the policy directly commands the target throttle for individual rotors; (2) Velocity control, where the policy outputs the target velocity and yaw angle; (3) Rate control, where the policy outputs the target body rates and collective thrust; (4) Attitude control, where the policy outputs the target attitude and collective thrust. The actions are scaled and shifted to a proper range for each approach. As shown in Fig. 
5, direct and rate control consistently give the best performance, while velocity control appears to be insufficient for tasks that demand more fine-grained control. We remark that tuning the controller parameters and carefully shaping the action space might give a considerable performance boost. Nonetheless, the results suggest that a relatively low-level action space, despite being more subtle to transfer, is still necessary for agile and accurate maneuvers when dynamic changes are present. ## V Conclusion and Future Work In this paper, we presented _OmniDrones_, a platform for conducting RL research on multi-rotor drone control. Leveraging the parallel simulation capabilities of GPUs, _OmniDrones_ provides efficient and flexible simulation and a suite of RL tasks for multi-rotor drones. Through experiments, we demonstrate the features of the proposed platform and offer initial results on the tasks. We hope _OmniDrones_ serves as a good starting point toward building more powerful drone systems regarding control and system design with reinforcement learning. In the future, we will provide long-term support and continue our development to provide utilities for sim-to-real deployment. Current limitations, such as the bottlenecked rendering performance, should be addressed. While this work focuses more on low-level control in an end-to-end setting, more complex and realistic scenarios and higher-level tasks will be incorporated to complete the picture. ## Acknowledgment This research was supported by the National Natural Science Foundation of China (No. 62325405, U19B2019, M-0248), Tsinghua University Initiative Scientific Research Program, Tsinghua-Meituan Joint Institute for Digital Life, Beijing National Research Center for Information Science and Technology (BNRist), and Beijing Innovation Center for Future Chips. The abstractions and implementation of _OmniDrones_ were inspired by Isaac Orbit [14]. Some of the drone models (assets) and controllers are adopted from or heavily based on the RotorS [18] simulator. We also thank Eric Kuang from NVIDIA for valuable tips on working with the standalone workflow of Isaac Sim. Fig. 4: Comparison of different drone models on three selected tasks. Fig. 5: Comparison of different choices of action space.
2303.17906
Environmental path-entropy and collective motion
Inspired by the swarming or flocking of animal systems we study groups of agents moving in unbounded 2D space. Individual trajectories derive from a ``bottom-up'' principle: individuals reorient to maximise their future path entropy over environmental states. This can be seen as a proxy for keeping options open, a principle that may confer evolutionary fitness in an uncertain world. We find an ordered (co-aligned) state naturally emerges, as well as disordered states or rotating clusters; similar phenotypes are observed in birds, insects and fish, respectively. The ordered state exhibits an order-disorder transition under two forms of noise: (i) standard additive orientational noise, applied to the post-decision orientations (ii) ``cognitive'' noise, overlaid onto each individual's model of the future paths of other agents. Unusually, the order increases at low noise, before later decreasing through the order-disorder transition as the noise increases further.
Harvey L. Devereux, Matthew S. Turner
2023-03-31T09:05:05Z
http://arxiv.org/abs/2303.17906v2
# Environmental path-entropy and collective motion ###### Abstract Inspired by the swarming or flocking of animal systems we study groups of agents moving in unbounded 2D space. Individual trajectories derive from a "bottom-up" principle: individuals reorient to maximise their future path entropy over environmental states. This can be seen as a proxy for _keeping options open_, a principle that may confer evolutionary fitness in an uncertain world. We find an ordered (co-aligned) state naturally emerges, as well as disordered states or rotating clusters; similar phenotypes are observed in birds, insects and fish, respectively. The ordered state exhibits an order-disorder transition under two forms of noise: (i) standard additive orientational noise, applied to the post-decision orientations (ii) "cognitive" noise, overlaid onto each individual's model of the future paths of other agents. Unusually, the order increases at low noise, before later decreasing through the order-disorder transition as the noise increases further. Collective motion occurs in both living and synthetic systems. In living systems this arises in a wide variety of species over different length scales, e.g. micro-organisms, cells, insects, fish, birds [1; 2; 3; 4; 5; 6] and even dinosaurs [7]. Interest in the physics community often lies in developing models of collective motion that are analogous to living systems, many of which exhibit ordered (coaligned) motion and support a noise-induced transition to disorder [8; 9; 10; 11; 12; 13; 14; 15]. Long-ranged behavioural interactions may arise in nature and there have been some attempts to analyse such interactions [16; 17; 18; 19; 20]. These can naturally be traced to the nature of information transfer between agents [21; 22], noting that senses like vision are long ranged. Other models of swarming behaviour incorporate explicit alignment, cohesion, and/or collision avoidance rules directly into an agent-based model [17; 13; 23]. However, such models cannot easily explain the underlying reason for the emergence of properties like cohesion and coalignment as these are essentially incorporated into the models at the outset. One recent alternative approaches is to utilise machine learning based on using a simple form of perception to maintain cohesion directly [24]. Another involves the study of large deviations of non-aligning active particles that are biased, e.g. by effective alignment of self-propulsion with particle velocity [25; 26; 27; 28]. While it is possible neural circuitry of animals encodes an algorithm that _directly_ targets coalignment and cohesion in the same mathematical manner as in these models it, seems much more likely that some lower-level principle is involved. This principle, almost certainly associated with evolutionary fitness in some way, might then be the origin of cohesion and coalignment. We argue that more satisfactory explanations for the phenomenon of swarming may be offered by testing candidates for this lower level principle. In this letter we analyse one such model. There is a small but growing literature focussing on the causal understanding of complex behaviour, cast as an entropy or state maximisation approach. Here some measure of variation across _future_ paths accessible from a particular system configuration is computed and an action that maximises this variation is selected, e.g. [29; 30; 31; 32; 33]. 
It is argued that agents that can retain access to the most varied future environments can better select from these to satisfy any immediate requirements or objectives, e.g. resource acquisition or predator evasion. For these reasons such strategies are expected to generally confer evolutionary fitness in an uncertain world. The present work shares similar motivation to [32] but provides a rigorous mathematical model based on path-entropies and focuses on the emergence of order. We believe that such models offer clear advantages in terms of their conceptual clarity and prospects for future development. To realise such a model here, agents are treated as oriented unit disks that move in discrete time \(t\), defining our length and time units, respectively. The position of the \(i^{\text{th}}\) agent in the next time step is \[\mathbf{x}_{i}^{t+1}=\mathbf{x}_{i}^{t}+\mathbf{v}_{i}^{t+1}. \tag{1}\] At each discrete time step \(t\) agents choose from \(z=5\) velocities: either to move along their current orientation with one of three speeds: nominal \(v_{0}\), fast \(v_{0}+\Delta v\) or slow \(v_{0}-\Delta v\); or to reorientate by an angle \(\pm\Delta\theta\) while moving at the nominal speed \(v_{0}\). Unless noted otherwise we take \(v_{0}=10\), \(\Delta v=2\) and \(\Delta\theta=\pi/12=15^{\circ}\). The agent's velocity is updated by an operator \(A_{\alpha_{i}^{t}}\) acting on its previous velocity \(\mathbf{v}_{i}^{t}\) \[\mathbf{v}_{i}^{t+1}=A_{\alpha_{i}^{t}}[\mathbf{v}_{i}^{t}]. \tag{2}\] Actions \(\alpha\) change the velocity according to \[A_{\alpha}[\mathbf{v}]=v_{\alpha}R(\theta_{\alpha})\mathbf{\hat{v}}. \tag{3}\] The index \(\alpha\in[1,z]\) labels possible actions, here with indices dropped for clarity. Hat accents denote unit vectors according to \(\mathbf{\hat{v}}=\mathbf{v}/|\mathbf{v}|\) throughout, with \(|\cdot|\) the Euclidean norm. The action chosen at each time step determines the corresponding speed of the agent \(v_{1}=v_{4}=v_{5}=v_{0}\), \(v_{2}=v_{0}+\Delta v\), \(v_{3}=v_{0}-\Delta v\) in that time step. Where \(R\) generates a rotation \[R(\theta)=\left(\begin{array}{cc}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right), \tag{4}\] with rotation angles \(\theta_{1}=\theta_{2}=\theta_{3}=0\), \(\theta_{4}=\Delta\theta\) and \(\theta_{5}=-\Delta\theta\). The sequence of such actions realised by this agent \(\alpha_{i}^{t}\) over time \(t\) completely determine the dynamics. In order to select actions, i.e compute hypothetical path entropy over future states, this model requires that agents model positions of themselves and other agents into the future. Therefore we adopt the notation \(\tilde{\mathbf{x}}_{k}^{\prime}\), \(\tilde{\alpha}_{k}^{\prime}\), \(v_{\tilde{\alpha}_{k}^{\prime}}\) and \(\theta_{\tilde{\alpha}_{k}^{\prime}}\), involving a tilde accent, to indicate virtual positions, actions, speeds and rotation angles of all agents \(k\) at time \(t^{\prime}\). Hence \[\tilde{\mathbf{x}}_{k}^{t+s}=\mathbf{x}_{k}^{t}+\sum_{t^{\prime}=t}^{t-1+s} \tilde{v}_{\tilde{\alpha}_{k}^{\prime}}\prod_{t^{\prime\prime}=t}^{t^{\prime} }R(\theta_{\tilde{\alpha}_{k}^{\prime\prime}})\hat{\mathbf{v}}_{k}^{t}, \tag{5}\] with \(1\leq s\leq\tau\) reflecting the time horizon \(\tau\). Equation (5) generates the hypothetical position of both the \(k=i\) (self) and \(k=j\neq i\) (other) agents. However, we make a simplifying assumption for the motion of the \(j\neq i\) (other) agents. 
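A minimal sketch of the kinematics defined in Eqs. (1)-(5) is given below; the helper names are illustrative, and the `ballistic_futures` helper corresponds to the default assumption for the other agents described next.

```python
import numpy as np

V0, DV, DTHETA = 10.0, 2.0, np.pi / 12      # nominal speed, speed change, turn angle
ACTIONS = [(V0, 0.0), (V0 + DV, 0.0), (V0 - DV, 0.0), (V0, DTHETA), (V0, -DTHETA)]

def R(theta):
    # Rotation operator of Eq. (4).
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

def apply_action(x, v, alpha):
    """v_{t+1} = A_alpha[v_t] (Eqs. 2-3), then x_{t+1} = x_t + v_{t+1} (Eq. 1)."""
    speed, theta = ACTIONS[alpha]
    v_new = speed * (R(theta) @ (v / np.linalg.norm(v)))
    return x + v_new, v_new

def ballistic_futures(x, v, tau):
    """Virtual positions of another agent under the ballistic assumption (Eq. 5)."""
    vhat = v / np.linalg.norm(v)
    return [x + V0 * vhat * s for s in range(1, tau + 1)]
```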
Here our default model corresponds to "ballistic" translation of the \(j\neq i\) agents in which \(v_{\tilde{\alpha}_{i}^{\prime\prime}}=v_{0}\) and \(\theta_{\tilde{\alpha}_{i}^{\prime\prime}}=0\), \(\forall t^{\prime}\geq t\). The speeds and rotations depend neither on the particle index \(j\) nor the future time index and so they can be stated in more condensed form simply as \(\tilde{v}=v_{0}\) and \(\tilde{\theta}=\theta_{\tilde{\alpha}}=0\). The ballistic assumption can often be rather good, in the sense that the trajectories that are realised can have a very high degree of orientational order and so the assumption is broadly self-consistent [34]. Later in this article we consider models that generate different virtual actions for the \(j\neq i\) agents that incorporate noise. See Fig 1(b) for a sketch of this dynamical scheme. The environmental state of an agent is assumed to be perceived using only vision, see Fig 1(a). This state encodes information on the relative positions of the other agents in a manner that is broadly consistent with animal vision, abstracted to \(d=2\) dimensions: visual sensing involves a radial projection of all other agents onto a circular sensor array at each agent. Loosely speaking, the radial projection registers \(0\) "white" along lines of sight not intersecting agents, and \(1\) "black" along those that do. We discretise this into an \(n_{s}\)-dimensional visual state vector \(\mathbf{\psi}_{i}\), for angular sub regions of size \(2\pi/n_{s}\). This then resembles a spin state, e.g. \((0,1,0,0,1\cdots)\). Mathematically we use two indicator functions, first the distance of shortest approach along a line-of-sight \(\hat{\mathbf{n}}_{i}=R(\chi)\hat{\mathbf{v}}_{i}\) originating from the \(i^{\text{th}}\) agent, \[I_{ij}=\Theta[1-|\tilde{\mathbf{x}}_{ij}\times\hat{\mathbf{n}}_{i}(\chi)|]. \tag{6}\] Where the Heaviside function \(\Theta[x]=1\) for \(x\geq 0\) and \(0\) otherwise, \(\tilde{\mathbf{x}}_{ij}\) is the separation vector \(\tilde{\mathbf{x}}_{j}-\tilde{\mathbf{x}}_{i}\), with \(|\cdot|\) the Euclidean norm. Equation (6) indicates an agent is visible along this line of sight in _either_ direction from the \(i^{\text{th}}\) agent, i.e. along \(\chi\) or \(\chi+\pi\). We restrict to \(\chi\) using the second indicator \[I^{\prime}_{ij}=\Theta[\tilde{\mathbf{x}}_{ij}\cdot\hat{\mathbf{n}}_{i}(\chi)]. \tag{7}\] The \(n^{\text{th}}\) component of the visual state vector \(\mathbf{\psi}_{i}\) is then \[\psi_{i}^{n}=\Theta\Bigg{[}\int_{\sigma_{n}}\Theta\Big{[}\sum_{j}I_{ij}(\chi)I^ {\prime}_{ij}(\chi)\Big{]}d\chi-\frac{\pi}{n_{s}}\Bigg{]}, \tag{8}\] where the \(n^{\text{th}}\) sensor covers the angular domain \(\sigma_{n}=[2\pi(n-1)/n_{s},2\pi n/n_{s}]\). The inner Heaviside function registers \(1\) ("black") if at least one agent intersects line-of-sight \(\chi\), the integral then measures the coverage of \(\sigma_{n}\) by "black" regions. The outermost Heaviside function is a further threshold that at least half the sensor must be "black" to activate the \(n^{\text{th}}\) visual state component. For a virtual action \(\tilde{\alpha}_{i}^{t}\) the entropy of the state distribution over (all nodes on) all virtual paths for the \(i^{\text{th}}\) agent following action \(\tilde{\alpha}_{i}^{t}\) is \[S(\tilde{\alpha}_{i}^{t})=-\sum_{\mathbf{\psi}}p_{i}(\tilde{\alpha}_{i}^{t},\mathbf{ \psi})\log p_{i}(\tilde{\alpha}_{i}^{t},\mathbf{\psi}). 
\tag{9}\] Where \(p_{i}(\tilde{\alpha}_{i}^{t},\mathbf{\psi})\) is the count of occurences of a state \(\mathbf{\psi}\) on these virtual paths, normalised by the count of states on all branches. In this way each action-branch \(\tilde{\alpha}_{i}^{t}\) is associated with an environmental path entropy \(S(\tilde{\alpha}_{i}^{t})\) Figure 1: Snapshot of a system configuration and sketch explaining our model. (a) \(N=50\) agents that take actions to maximise a future path entropy over environmental states (see text); axes show _x-y_ coordinates in units of the agent radius. Overlaid (broken circle) is a representation of the visual state perceived by the red individual. Obtained from a simulation with parameters \(\tau=6\), \(\Delta\theta=\pi/12=15^{\circ}\), \(v_{0}=10\), and \(\Delta v=2\) (see text for details). (b) In red the tree of hypothetical future actions the agent examines, starting from the present at the root on the far-left. Shown in blue and green (with dashed circles) are ballistic and noisey motion assumptions of other agents. The key step in the decision making process is that each agent then executes the action \[\alpha_{i}^{t}=\operatorname*{arg\,max}_{\tilde{\alpha}_{i}^{t}}S(\tilde{\alpha}_ {i}^{t}), \tag{10}\] thereby choosing the branch that maximises the entropy of future visual states. This process is carried out simultaneously for each agent and repeated, from scratch, at each time step. Degenerate options are selected at random, the only randomness in the baseline algorithm that is otherwise deterministic. Our model supports various phenotypes. In Fig 2 we report on the effect of varying the turning rate \(\Delta\theta\), the nominal speed \(v_{0}\) and its variation \(\Delta v\). We find that a highly ordered and cohesive phenotype is commonly achieved when the agents move relatively fast with moderate turning. Resembling those seen in flocks of social birds [35, 36], noting that these birds also have relatively fast speed, do not slow significantly and have limited turning ability relative to an insect. We also find cohesive disordered groups, some showing circulation. The most important conditions for the emergence of cohesive swarms are (i) \(\tau\gtrsim 3\), (ii) \(10\lesssim n_{s}\lesssim 100\) to avoid the visual states becoming largely degenerate (see SI for details). We report the visual opacity as the average sensor state \(\Theta=\langle\frac{1}{n_{s}}\sum_{n=1}^{n_{s}}\psi_{i}^{n}\rangle\), density \(\rho=\langle\frac{N\mathbf{\tau}\mathbf{r}^{2}}{\mathcal{A}^{t}}\rangle\) with the convex hull area \(\mathcal{A}^{t}\), global order \(\phi=\langle\left|\frac{1}{N}\sum_{i=1}^{N}\hat{\mathbf{v}}_{i}^{t}\right|\rangle\), and quantify rotation using a normalised mean squared vorticity \(\nu^{2}=\left\langle\left(\frac{1}{N}\sum_{i}\tilde{\mathbf{r}}_{i}^{t}\times\hat {\mathbf{v}}_{i}^{t}\right)^{2}\right\rangle\), with \(\mathbf{r}_{i}^{t}=\mathbf{x}_{i}^{t}-\langle\mathbf{x}_{k}^{t}\rangle_{k}\) and \(\mathbf{v}_{i}^{t}\) the \(i^{\text{th}}\) agent's position relative to the geometric centre and velocity respectively. In each case we average over agents \(i\) and times \(t\). We also use a measure of spatial clustering using DBScan [37] (SI section S2 for details) to both detect fragmentations and measure the quantities above on clusters. We denote the average fraction of agents in the largest cluster as \(\mathcal{C}\), and by \(\phi_{\mathcal{C}}\) denote the order of the largest cluster. 
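Putting Eqs. (6)-(10) together, the following compact and unoptimised sketch performs one decision step for a single agent. It samples a few lines of sight within each sensor to approximate the coverage integral of Eq. (8), enumerates complete action sequences of depth \(\tau\) (so shared branch prefixes are revisited, a simplification of the node counting behind Eq. (9)), and returns the entropy-maximising first action of Eq. (10).

```python
import itertools
from collections import Counter
import numpy as np

V0 = 10.0
ACTIONS = [(10.0, 0.0), (12.0, 0.0), (8.0, 0.0),
           (10.0, np.pi / 12), (10.0, -np.pi / 12)]        # (speed, turn angle)

def step_self(x, v, alpha):
    """Apply action alpha to the agent's own state (Eqs. 1-4)."""
    speed, th = ACTIONS[alpha]
    c, s = np.cos(th), np.sin(th)
    vhat = v / np.linalg.norm(v)
    v_new = speed * np.array([c * vhat[0] + s * vhat[1], -s * vhat[0] + c * vhat[1]])
    return x + v_new, v_new

def visual_state(xi, vi, others, n_s=16, rays_per_sector=8):
    """Discretised visual state psi_i (Eqs. 6-8), sampling lines of sight per sector."""
    heading = np.arctan2(vi[1], vi[0])
    psi = []
    for n in range(n_s):
        blocked = 0
        for k in range(rays_per_sector):
            chi = heading + 2 * np.pi * (n + (k + 0.5) / rays_per_sector) / n_s
            ray = np.array([np.cos(chi), np.sin(chi)])
            for xj in others:
                d = xj - xi
                # Eq. (6): line of sight passes within one body radius; Eq. (7): in front.
                if abs(d[0] * ray[1] - d[1] * ray[0]) < 1.0 and d @ ray > 0:
                    blocked += 1
                    break
        psi.append(int(blocked >= rays_per_sector / 2))      # half-coverage threshold
    return tuple(psi)

def best_action(xi, vi, others_x, others_v, tau=3):
    """Eq. (10): the first action that maximises the path entropy S of Eq. (9)."""
    entropies = []
    for first in range(len(ACTIONS)):
        counts = Counter()
        for tail in itertools.product(range(len(ACTIONS)), repeat=tau - 1):
            x, v = xi.copy(), vi.copy()
            ox = [o.copy() for o in others_x]
            for alpha in (first,) + tail:
                x, v = step_self(x, v, alpha)
                # ballistic assumption for the other agents
                ox = [o + V0 * ov / np.linalg.norm(ov) for o, ov in zip(ox, others_v)]
                counts[visual_state(x, v, ox)] += 1
        p = np.array(list(counts.values()), float)
        p /= p.sum()
        entropies.append(-(p * np.log(p)).sum())
    return int(np.argmax(entropies))
```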
We have established the emergence of co-aligned, cohesive states under environmental path entropy maximising trajectories, see Fig 2(a). It is therefore natural to ask about the effect of noise on these dynamics. In this way we will investigate to what extent this model supports an order-disorder transition similar to those extensively studied in other models of collective motion [8, 9, 10, 11, 12, 13, 14, 15]. By "cognitive noise" we mean some imprecision in an agent's model of the others. We therefore define a stochastic process for the virtual speeds \(\tilde{v}^{t^{\prime}}=v_{0}+\mu_{v}^{t^{\prime}}\) and rotations \(\tilde{\theta}^{t^{\prime}}=\mu_{\theta}^{t^{\prime}}\) of all \(j\neq i\) agents. Here both \(\mu\) variables (subscript \(v\), \(\theta\) omitted for clarity) are drawn from zero mean \(\langle\mu_{j}^{t^{\prime}}\rangle=0\) Gaussian distributions that are uncorrelated according to \(\langle\mu_{j}^{t^{\prime}}\mu_{j^{\prime}}^{t^{\prime\prime}}\rangle=\eta^{2 }\delta_{jj^{\prime}}\delta_{t^{\prime}t^{\prime\prime}}\) with \(j,j^{\prime}\neq i\) here the particle index of the (other) agents and \(t^{\prime},t^{\prime\prime}\geq t\). The root-mean-squared noise amplitude Figure 2: Agents that maximise environmental path-entropy naturally adopt different dynamical modes, or “phenotypes”. Each panel shows the agent’s trajectories, together with the time-averaged mean order \(\phi\), root-mean-squared vorticity \(\nu\), density \(\rho\) and opacity \(\Theta\) (see text): (a) the ordered, dense (“bird”) phenotype, (b) translation combined with significant rotation (similar to “fish” or “insects”). Averages computed over 10 replicates. Figure 3: Ordering transitions in the presence of (a) cognitive noise and (b) post-decision orientational noise: (a1-3) each agent approximates the future trajectories of others in the presence of **cognitive** noise as a sequence of random rotations and speed changes from their current heading and speed \(v_{0}\). The noise strengths \(\eta_{\theta}\) and \(\eta_{v}\) characterise the magnitude of the rotations (degrees) and speed changes (body radii per time step), respectively. (a1) A transition from high order to a disordered phase occurs with increasing cognitive rotational noise \(\eta_{b}\). Insets (a2) and (a3) focus on small \(\eta_{\theta}\) with \(\eta_{v}=0\). They show the order of the largest cluster \(\phi_{\mathcal{C}}\) and the overall system order \(\phi\), respectively; note the maximum in order appears at non-zero noise. The red horizontal line shows the order averaged over all runs \(0<\eta\leq 10^{\circ}\). In (a1-3) the future time horizon is \(\tau=6\). (b1) shows the effect of **post-decision** orientational noise on global order \(\phi\). Here a random rotation with root-mean squared angle \(\eta\) (degrees) is applied directly to the velocity, before the movement update. (b2) shows a statistically significant local maximum in \(\phi_{\mathcal{C}}\) for non-zero noise \(\eta\) whereas (b3) now shows no significant maximum in \(\phi\). The red horizontal line shows the order averaged over all runs \(0<\eta\leq 1.5^{\circ}\). All systems contain \(N=250\); all error bars are 1 standard error in the mean; the dashed lines represent the mean order \(\phi=1/\sqrt{N}\) of randomly orientated agents. In all statistical tests additional repeats (n=16) were computed for the zero noise case, see text for further details. 
of the speed and orientation are written, with subscripts restored, as \(\eta_{v}\) and \(\eta_{\theta}\) respectively. An example is shown in the sequence of positions shown in green in Fig 1(b). All else proceeds as before, without any additional noise applied to realised agent actions. The most striking feature from Fig 3a is that the order initially _increases_ with the addition of small levels of noise, before later decreasing again. An upper tailed t-test (with unequal variances) on the difference of the mean order in the noise-free case (\(\eta_{\theta}=0\)) and the mean of simulations with non-zero noise \(0<\eta\leq 10^{\circ}\), rejects the null hypothesis that the mean order \(\phi\) is the same at the level of \(p<10^{-13}\). The same t-test for \(\phi_{\mathbb{C}}\), the order computed only for agents that are members of the largest cluster, is rejected at \(p<10^{-50}\). To understand why a small amount of noise might actually increase order we compare the noise level at which the order is maximal, roughly \(\eta_{\theta}=5-7^{\circ}\), to intrinsic variation in the realised dynamics in a low noise state \(\phi\approx 0.98\). There are several ways to achieve this. (i) approximating the order as the mean component of the normalised velocities of the agents along the average direction of motion, \(\phi=\langle\cos\delta\theta\rangle\approx 1-\frac{1}{2}\langle\delta\theta^{ 2}\rangle\), leading to \(\delta\theta_{\text{rms}}=11^{\circ}\) (ii) Crudely assuming moves are uncorrelated extending for \(\tau\) steps into the future and asking what angular noise amplitude _per time step_, analogous to \(\eta_{\theta}\) would be required to give the realised order \(\phi\) at the _end_ of this sequence, leading to \(11^{\circ}/\sqrt{\tau}=4.7^{\circ}\) (iii) using the velocity auto-correlation function \(C_{vv}(t)=\langle\hat{\mathbf{v}}_{i}^{t^{\prime}}\cdot\hat{\mathbf{v}}_{i}^{ t^{\prime}+t}\rangle\) (see SI Fig 4) and extracting an angular noise per time step by either writing \(V_{vv}(1)=0.987=\langle\cos\delta\theta\rangle\) or by using \(V_{vv}(\tau)\approx 0.968\) leading to \(\delta\theta\approx 9^{\circ}\) and \(6^{\circ}\) respectively. All are similar to the observed value of \(\eta_{\theta}\) at which order is maximal. Thus the realised order is maximal at a value of cognitive noise \(\eta_{\theta}\) that is _self-consistent_ with the variation in the realised trajectories that arises in the dynamics. We argue that this is the noise level at which the predictive model of the trajectories of other agents will be more accurate, at least in a statistical sense. We propose that this represents the fundamental reason for the increase of order at small noise levels. To apply post-decision noise, the rotation associated with each action that appears in Eq (3) is modified to include noise according to \(\theta_{1}=\theta_{2}=\theta_{3}=\zeta_{i}^{t}\), \(\theta_{4}=\Delta\theta+\zeta_{i}^{t}\) and \(\theta_{5}=-\Delta\theta+\zeta_{i}^{t}\) with the random rotation angle \(\zeta_{i}^{t}\) drawn from a zero mean \(\langle\zeta_{i}^{t}\rangle=0\) Gaussian distribution satisfying \(\langle\zeta_{i}^{t}\zeta_{j}^{t^{\prime}}\rangle=\eta^{2}\delta_{ij}\delta _{tt^{\prime}}\). This noise can be interpreted as arising from imperfect implementation of the target velocity, ubiquitous in physical or living systems. Figure 3b shows the effect of this post-decision orientational noise. 
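Both noise channels are straightforward to state in code. The sketch below, with illustrative helper names, contrasts where each one enters: cognitive noise perturbs only the agent's internal model of the other agents' virtual moves, whereas post-decision noise rotates the realised velocity after the action has already been chosen.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_virtual_move(x_other, v_other, v0=10.0, eta_theta_deg=5.0, eta_v=0.0):
    """One virtual step of another agent with cognitive noise on heading and speed."""
    dtheta = np.deg2rad(rng.normal(0.0, eta_theta_deg))
    speed = v0 + rng.normal(0.0, eta_v)
    c, s = np.cos(dtheta), np.sin(dtheta)
    vhat = v_other / np.linalg.norm(v_other)
    v_new = speed * np.array([c * vhat[0] + s * vhat[1], -s * vhat[0] + c * vhat[1]])
    return x_other + v_new, v_new

def post_decision(v_chosen, eta_deg=1.0):
    """Random rotation zeta applied to the realised velocity (imperfect actuation)."""
    zeta = np.deg2rad(rng.normal(0.0, eta_deg))
    c, s = np.cos(zeta), np.sin(zeta)
    return np.array([c * v_chosen[0] + s * v_chosen[1],
                     -s * v_chosen[0] + c * v_chosen[1]])
```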
At large noise amplitude \(\eta\) the order approaches a value \(\sim 1/\sqrt{N}\), expected for \(N\) randomly orientated agents. This corresponds to a complete loss of orientational order. We find the order-disorder transition occurs around \(\eta=12^{\circ}\). This is a significantly smaller noise level than for the case of cognitive noise, see Figure 3 (a1-a3), where the transition occurs around \(\eta_{\theta}=45^{\circ}\). This indicates that cognitive noise has a much weaker disordering effect than post-decision noise and could even be seen as providing robustness, by anticipating the possibility of varied trajectories in the future. In contrast, post-decision noise plays no such role. A one-tailed t-test, to test whether the order at non-zero noise values are significantly different from the zero noise case, was performed for both \(\phi\) and \(\phi_{\mathbb{C}}\). The result being significant for the mean order for agents in the largest cluster \(\phi_{\mathbb{C}}\) (\(p<10^{-6}\)) but insignificant for the global order \(\phi\) (p=0.086). The difference between the two is likely due to rare fragmentations, which we see in large groups \(\gtrsim 100\), noting also that \(\phi\) is systematically lower than \(\phi_{\mathbb{C}}\). The fact that there is a significant increase in \(\phi_{\mathbb{C}}\) is perhaps even more surprising than the similar effect apparent in Fig 3(a3). The magnitude of the increase in order \(\phi_{\mathbb{C}}\) from \(\eta_{\theta}=0\) to \(\eta_{\theta}\sim 5^{\circ}\) that is apparent in Fig 3(a2) is about 1% (a relatively large difference: the mis-ordering halves). However, the corresponding increase in Fig 3(b2) is nearly an order of magnitude weaker and occurs at much smaller noise \(\eta\sim 0.5^{\circ}\). This signifies a different mechanism for the much weaker increase in order that occurs under such post-decision noise. We speculate that this might be due to subtle changes in the swarm structure resulting from the addition of noise, noting that the density is systematically lower in the presence of weak post-decision noise (see SI for details). Such changes could plausibly affect path-entropy maximising trajectories in such a way that they generate a higher order. Although there is no obvious intuitive explanation for this it could be related to the fact that the agents have more information on the global organisation at lower densities, where there are fewer particle overlaps in the visual state. To conclude, we analyse a simple model that could underly evolutionary fitness and hence intelligent behaviour. This model involves agents that seek to maximise the path entropy of their future trajectories, analogous to keeping future options open. The entropy is here computed over visual states, such as would be perceived by animals that rely primarily on vision to sense and navigate the world around them. Such path-entropy maximisation strategies could be of broader interest within biology, e.g. in the biochemical state space accessible to micro-organisms or cells. However, we believe that it will be easier to test these ideas in higher animals that exhibit swarming motion where the state space is lower dimensional and the dynamics of inertial flying (or swimming) agents is much more simple and well understood than the nonlinear chemical kinetics of cellular biochemistry. 
We find that the "bottom-up" principle of maximisation of path entropy is a promising candidate to understand the emergence of properties like co-alignment and cohesion observed in typical swarming phenotypes. This principle also leads to flocks with opacity values close to 0.5, in agreement with observations on some bird flocks [19]. Although the algorithm is highly computationally de manding it involves a simple mapping from an observed visual state to an action. Heuristics that mimic this process and that could operate under animal cognition in real time are easy to develop. For example, an artificial neural network could be trained on simulation data to choose actions from sensory input. Similar algorithms could also find use in novel forms of active, information-processing ("intelligent") matter that may soon form part of the experimental landscape. ###### Acknowledgements. Funding was provided by UK Engineering and Physical Sciences Research Council though the Mathematics for Real World Systems Centre for Doctoral Training grant no. EP/L015374/1 (H.L.D.). All numerical work was carried out using the Scientific Computing Research Technology Platform of the University of Warwick. M.S.T. acknowledges the support of a long-term fellowship from the Japan Society for the Promotion of Science, a Leverhulme Trust visiting fellowship and the peerless hospitality of Prof. Ryoichi Yamamoto (Kyoto).
2309.05381
Hazards in Deep Learning Testing: Prevalence, Impact and Recommendations
Much research on Machine Learning testing relies on empirical studies that evaluate and show their potential. However, in this context empirical results are sensitive to a number of parameters that can adversely impact the results of the experiments and potentially lead to wrong conclusions (Type I errors, i.e., incorrectly rejecting the Null Hypothesis). To this end, we survey the related literature and identify 10 commonly adopted empirical evaluation hazards that may significantly impact experimental results. We then perform a sensitivity analysis on 30 influential studies that were published in top-tier SE venues, against our hazard set and demonstrate their criticality. Our findings indicate that all 10 hazards we identify have the potential to invalidate experimental findings, such as those made by the related literature, and should be handled properly. Going a step further, we propose a point set of 10 good empirical practices that has the potential to mitigate the impact of the hazards. We believe our work forms the first step towards raising awareness of the common pitfalls and good practices within the software engineering community and hopefully contribute towards setting particular expectations for empirical research in the field of deep learning testing.
Salah Ghamizi, Maxime Cordy, Yuejun Guo, Mike Papadakis, And Yves Le Traon
2023-09-11T11:05:34Z
http://arxiv.org/abs/2309.05381v1
# Hazards in Deep Learning Testing: Prevalence, Impact and Recommendations ###### Abstract Much research on Machine Learning testing relies on empirical studies that evaluate and show their potential. However, in this context empirical results are sensitive to a number of parameters that can adversely impact the results of the experiments and potentially lead to wrong conclusions (Type I errors, i.e., incorrectly rejecting the Null Hypothesis). To this end, we survey the related literature and identify 10 commonly adopted empirical evaluation hazards that may significantly impact experimental results. We then perform a sensitivity analysis on 30 influential studies that were published in top-tier SE venues, against our hazard set and demonstrate their criticality. Our findings indicate that all 10 hazards we identify have the potential to invalidate experimental findings, such as those made by the related literature, and should be handled properly. Going a step further, we propose a point set of 10 good empirical practices that has the potential to mitigate the impact of the hazards. We believe our work forms the first step towards raising awareness of the common pitfalls and good practices within the software engineering community and hopefully contribute towards setting particular expectations for empirical research in the field of deep learning testing. Keywords: Computing methodologies; Supervised learning by classification.
2309.15192
Simple Analytical Model for Optimizing Integrating Sphere Port Sizes
The integrating sphere (IS) is an indispensable tool for measuring transmission and scattering of materials and their colorimetry, as well as other photometric tasks. The accuracy of its data depends critically on port sizes used for measurement and control, usually defined by trial and error or brute-force optical simulations. To find the optimal port sizes of this powerful tool, a sample visibility function is defined and optimized using the energy conservation principle. This yields an analytical expression that should be useful in a variety of applications, especially those where signal is rather small (low-haze materials).
A. M. Bratkovsky
2023-09-26T18:51:48Z
http://arxiv.org/abs/2309.15192v1
## Simple Analytical Model for Optimizing Integrating Sphere Port Sizes ## Abstract The integrating sphere (IS) is an indispensable tool for measuring transmission and scattering of materials and their colorimetry, as well as other photometric tasks. The accuracy of its data depends critically on port sizes used for measurement and control, usually defined by trial and error or brute-force optical simulations. To find the optimal port sizes of this powerful tool, a _sample visibility_ function is defined and optimized using the energy conservation principle. This yields an analytical expression that should be useful in a variety of applications, especially those where signal is rather small (low-haze materials). ## Introduction The Integrating Sphere (IS) is a powerful tool used extensively in photometry and colorimetry invented by R. Ulbricht over a century ago [1, 2]. We shall recall below the main features of this ingenious device that is now available from multiple commercial sources for use in photometry, colorimetry, haze, and other measurements [2]. The applications area grows to this day, now including illumination [3] and communication [4]. In spite of the apparent simplicity of the device, theoretical understanding of its operation was slowly developing over decades with multiple approaches applied (cf. Ref. [6]). Still, issues related to optimal configuration like the port sizes are left to the 'rule of thumb' [5, 7] or direct simulation (see [8-10] and references therein). One important use of IS is characterization of diffuse scattering from transparent low-haze samples. In this, as well as all other cases, it would help to have a simple analytical formula for an optimal sample port size. Such a formula is derived below from the energy conservation principle. ## Irradiance and sample 'visibility' in integrating sphere Consider a radiation exchange between small surface elements dA and dA', Fig.1(a). If the element dA' has radiance \(L\) [W.m\({}^{-2}\).sr\({}^{-1}\)], i.e. it radiates \(L\) Watts per unit area into a unit solid angle (steradian, sr) then the element dA a distance \(s\) away from dA' will receive energy of _irradiance_ E(P) [W.m\({}^{-2}\)] times dA: \[E(P)dA=L(P^{\prime}\to P)dA^{\prime}\cdot d\Omega_{s}=L(P^{\prime}\to P)dA^{ \prime}\cdot\frac{dA\cos\vartheta}{s^{2}}\ [W] \tag{1}\] One has \(\vartheta^{\prime}\)=\(9,\ s=2R\ \cos\vartheta\) for a sphere [see Fig. 1(b). Assuming the element dA' on such a sphere is the Lambertian source with \(L(P^{\prime}\to P)=L\ \cos\vartheta^{\prime}\), it will receive the power \[E(P)dA=L\cos\vartheta^{\prime}\ dA^{\prime}\cdot\frac{dA\cos\vartheta}{s^{2}} =\frac{L\ dA^{\prime}dA}{4R^{2}}=\frac{\pi L}{A_{s}}dA^{\prime}dA\ [W] \tag{2}\] Here, \(A_{S}=4\pi R^{2}\) is the surface area of the sphere. One sees that sphere with diffuse coating has an amazing property that its every radiant element produces the same irradiance at all elements of the sphere regardless of their relative positions. Therefore, the first ray hitting the sphere would spread its luminous energy evenly over the entire sphere and every subsequent ray would too. A sphere is likely to be the only surface with a property \(s^{-2}\cos\vartheta^{\prime}\cos\vartheta=\)const, Fig.1(b). The sphere could be used in both reflection (Fig.2) and transmission configurations. The fast low-haze sample characterization requires optimizing the collection of light diffusely scattered by the sample. The low-haze samples would find very wide usage in e.g. mobile devices. 
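The geometric identity underlying Eq. (2) is easy to verify numerically: for any two points on a sphere of radius \(R\), \(\cos\vartheta\,\cos\vartheta^{\prime}/s^{2}=1/(2R)^{2}\), so every diffusing wall element irradiates every other element equally. The short check below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
R = 0.5                                   # sphere radius, arbitrary units

def random_point_on_sphere():
    p = rng.normal(size=3)
    return R * p / np.linalg.norm(p)

for _ in range(5):
    p1, p2 = random_point_on_sphere(), random_point_on_sphere()
    d = p2 - p1
    s = np.linalg.norm(d)
    cos_emit = np.dot(-p1 / R, d / s)     # angle at the emitting element
    cos_recv = np.dot(p2 / R, d / s)      # angle at the receiving element
    print(cos_emit * cos_recv / s**2, "should equal", 1.0 / (2.0 * R) ** 2)
```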
The question is then: what is the Figure 1: (a) Schematic of the radiation power exchange between two Lambertian (ideally light diffusing) elements, (b) only in a sphere, every radiant element produces the _same_ irradiance at all elements of the sphere regardless of their relative positions (see text.) Cumulative irradiance of every element of the sphere after multiple scattering events would much exceed the very first irradiance of the wall by inserted light before it receives additional light scattered by the sphere. port size for a sample window that maximizes the reading by the detector? To answer this question, we apply the standard theory of Integrating Sphere (IS) [2]. We shall use the following notations with comments: * \(P\), the total energy inserted by light source into the IS * Sphere has \(N\) ports occupying areas \(A_{i}=f_{i}A\), where \(A=4\pi R^{2}\) is the total area of the sphere with radius \(R\), \(D=2R\) its diameter, \(i=0\div N\). Here, \(f_{i}\) is the _fractional area_ occupied by the \(i\)th port. * 'Port' \(i=\)0 is the area first hit by the light from the source. In the present case, its reflectance is therefore the same as the wall, \(m_{0}=m_{w}\) (Fig.2.) The spherical geometry guarantees that the initial beam will scatter diffusely like all subsequent and _every_ scattering event will distribute power Figure 2: Schematic of the Integrating sphere setup for measuring diffuse scattering from sample in specular light _excluded_ (SPEX) geometry. The baffled light source L introduces power \(P\) into the sphere that diffusely scatters off the walls with high _reflectance_\(m_{w}\) close to unity, with the rest (\(1-m_{w}\)) absorbed. The sample gets homogeneously illuminated from top half space and the light diffusely scattered by the sample gets collected by the detector D. The detector port diameter is \(d_{D}\). Open port P with the same diameter removes light specularly reflected from the sample. The detector and SPEX ports are positioned symmetrically with respect to the North pole of the sphere. uniformly over the whole sphere. This fact is of _cardinal importance_ for the use of the Integrating Spheres in photometry. * Ports have total (integrated over all angles back into the IS) reflectances \(m_{i}\). For transparent samples, \(m_{s}\ll 1\) is the'scatter ratio' for reflection. We will use \(m_{s}=0.0025\) as an example. * Steady power hitting the \(i\)th port is \(P_{i}=f_{i}P\). * Sphere walls have reflectance \(m_{w}\). We shall use a typical \(m_{w}=0.98\) (Spectralon). The wall absorbance is obviously small, \(1-m_{w}\ll 1\). * Fraction of the surface occupied by the detector and the SPEX port is \(2f_{d}\). (explained in the caption to Fig.2). We shall use, for illustrative purposes, \(f_{d}=0.002504\) (corresponding ratio of port diameter over diameter of the sphere \(d_{D}/D=0.1\)) and \(f_{d}=0.005657\) corresponding to \(d_{D}/D=0.15\). We assume that the detector reflectance is small, \(m_{d}\ll 1\), i.e. it absorbs most of the light hitting it. Apply energy conservation to find _steady flux_\(F\) [W/m\({}^{2}\)] on the surface of the sphere after multiple reflections. Power \(P\) [W] inserted into the sphere illuminates area \(A_{0}\) [m\({}^{2}\)], Fig.2, and a fraction \(m_{0}\) of that power gets reflected diffusely and uniformly illuminates the entire sphere interior with flux \(F_{0}=m_{0}P/A\) [W/m\({}^{2}\)]. Note that the ports remove power \[F\sum_{i=0}^{N}(1-m_{i})A_{i}=FA\sum_{i=0}^{N}(1-m_{i})f_{i}. 
\tag{3}\] The walls absorb power \[F(1-m_{w})\big{(}A-\sum_{i=0}^{N}A_{i}\big{)}=FA(1-m_{w})\big{(}1-\sum_{i=0}^{ N}f_{i}\big{)}. \tag{4}\] Balance of power yields \[m_{0}P=FA\sum_{i=0}^{N}(1-m_{i})f_{i}+FA(1-m_{w})\big{(}1-\sum_{i=0}^{N}f_{i} \big{)}, \tag{5}\] Hence, the average steady flux is equal to \[F=M\frac{P}{A}\, \tag{6}\] where \[M=\frac{m_{0}}{1-m}\gg 1, \tag{7}\] is the sphere multiplication factor and we have defined the average reflectance, \[\bar{m}=m_{w}\big{(}1-\sum_{i=0}^{N}f_{i}\big{)}+\sum_{i=0}^{N}m_{i}f_{i}. \tag{8}\] Notice that \(F\gg F_{0}\), the latter being the light power flux inserted by the lamp (Fig.2.) The fraction of steady power (after multiple reflections) hitting port \(i\) is, from a total energy balance \[p_{i}=\frac{P_{i}}{P}=\frac{FA_{i}}{P}=Mf_{i}\gg f_{i}. \tag{9}\] Now, we could find a fraction of power hitting the detector D in the situation shown in Fig. 2 (i.e. having three openings and remembering that in our case \(m_{0}=m_{w}\) and \(m_{d}\approx 0\)) as \[p_{d}=\frac{P_{d}}{P}=Mf_{d}=\frac{m_{w}f_{d}}{1-m_{w}(1-2f_{d})+(m_{w}-m_{s}) f_{s}}, \tag{10}\] where \[\overline{m}=m_{w}(1-f_{0}-2f_{d}-f_{s})+m_{0}f_{0}+2m_{d}f_{d}+m_{s}f_{s}\] \[\approx 1-m_{w}(1-2f_{d})+(m_{w}-m_{s})f_{s} \tag{11}\] Now, introduce _Sample Visibility_\(V\) of the sample scattering _by the Detector_, which is a difference in Detector (D) reading with sample off (the sample port is a hole reflecting very little back into the IS) and with sample on: \[V=p_{d}(m_{s})-p_{d}(m_{s}=0)\] \[= \frac{m_{w}f_{d}m_{s}f_{s}}{[1-m_{w}(1-2f_{d}-f_{s})][1-m_{w}(1-2 f_{d}-f_{s})-f_{s}m_{s}]}\] \[\approx \frac{m_{w}f_{d}m_{s}f_{s}}{[1-m_{w}(1-2f_{d}-f_{s})]^{2}}\,, \tag{12}\] since \(f_{s}m_{s}\ll 1\). The _sample visibility_ increases linearly with \(f_{s}\) with a large prefactor as \(\frac{m_{w}f_{d}m_{s}f_{s}}{(1-m_{w})^{2}}\!\sim\!10^{4}\,f_{d}m_{s}f_{s}\) and has a peak as a function of the sample fractional area at \[f_{s,max}=\frac{1-m_{w}}{m_{w}}+2f_{d}, \tag{13}\] see Fig.3. Above, the term \(2f_{d}\) is the fractional area occupied by the open ports. More generally, the _optimal_ fractional area of a sample port is \[f_{s,max}=\frac{1-m_{w}}{m_{w}}+\sum f_{\rm open\ ports}\,. \tag{14}\] The above Eq.(14) is the main result of the paper. Fig.3. The _Sample Visibility_ V of the diffuse scattering of the sample versus the fractional area of the sphere occupied by the sample port \(f_{S}\). Arrow marks the optimal visibility that is reached for the fraction of sample area versus sphere area \(f_{S,max}=0.025-0.030\) (i.e.\(2.5-3\%\)), when the ratio of sample port diameter \(d_{\text{sample}}\) to the sphere diameter \(D\) is \(\frac{d_{\text{Sample}}}{D}=\frac{5}{16}\div\frac{1}{3}\). Apparently, the sample visibility peak position at \(\frac{d_{\text{Sample}}}{D}\approx\frac{1}{3}\) is not very sensitive to the detector fractional size. In fact, it slightly increases for a smaller detector size. This will reduce the absolute amount of power registered by the Detector, so further optimization step might account for threshold power required by the given detector at a given power of the light source L, Fig.2. ## Conclusions The Integrating Sphere is a simple yet powerful tool for measuring scattering power by transparent materials, diffusers, etc. It could be used for fast characterization of important class of low-haze samples and its sensitivity could be significantly optimized by using sample port size that occupies about \(f_{S}=2.5-3\%\) of the sphere surface area, i.e. 
a sample port diameter of about one third of the sphere diameter. Further optimization of the tool is possible by accounting for the threshold power required by the detector. The above analysis should help in designing new spheres without the customarily used 'rule of thumb' [5, 6, 7] or numerical optimization via ray-tracing simulations. It is interesting to note that commercially available spheres feature port sizes quite close to those predicted above (Fig. 3) [11], complying with the empirical 'rule of thumb' that the ports occupy no more than 5% of the sphere surface [5, 7]. The author would like to acknowledge useful comments by Dr. Michal Mlejnek.
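As a quick numerical cross-check of Eqs. (10)-(14), the short Python sketch below evaluates the sample visibility of Eq. (12) and the optimal port fraction of Eq. (14); the parameter values \(m_{w}=0.98\), \(m_{s}=0.0025\) and \(f_{d}\approx 0.0025\) are the illustrative ones used in the text, and the function and variable names are chosen here for illustration only.

```python
import numpy as np

# Illustrative values used in the text (assumed, not prescribed by the model itself)
m_w = 0.98      # wall reflectance (Spectralon)
m_s = 0.0025    # sample 'scatter ratio' for reflection
f_d = 0.0025    # fractional area of the detector port (d_D/D = 0.1); the SPEX port has the same size

def visibility(f_s):
    """Sample visibility V(f_s) of Eq. (12): change in the detector reading
    between sample-on and sample-off, for a sample port of fractional area f_s."""
    denom = 1.0 - m_w * (1.0 - 2.0 * f_d - f_s)
    return m_w * f_d * m_s * f_s / (denom * (denom - f_s * m_s))

# Optimal sample-port fraction, Eq. (14), with the two open ports of area f_d each
f_s_max = (1.0 - m_w) / m_w + 2.0 * f_d
print(f"f_s,max = {f_s_max:.4f}")                       # about 0.025, i.e. 2.5% of the sphere area
print(f"d_sample / D = {2.0 * np.sqrt(f_s_max):.3f}")   # about 1/3, as in Fig. 3

# Brute-force check: the grid maximum of V(f_s) should coincide with f_s,max
f_s = np.linspace(1e-4, 0.2, 2000)
print(f"argmax of V on the grid = {f_s[np.argmax(visibility(f_s))]:.4f}")
```

The grid maximum coincides with the analytical optimum of Eq. (13) because the \(f_{s}m_{s}\) term in the second bracket of Eq. (12) is negligible for low-haze samples.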
2309.07596
Quantum toroidal algebras and solvable structures in gauge/string theory
This is a review article on the quantum toroidal algebras, focusing on their roles in various solvable structures of 2d conformal field theory, supersymmetric gauge theory, and string theory. Using $\mathcal{W}$-algebras as our starting point, we elucidate the interconnection of affine Yangians, quantum toroidal algebras, and double affine Hecke algebras. Our exploration delves into the representation theory of the quantum toroidal algebra of $\mathfrak{gl}_1$ in full detail, highlighting its connections to partitions, $\mathcal{W}$-algebras, Macdonald functions, and the notion of intertwiners. Further, we also discuss integrable models constructed on Fock spaces and associated $\mathcal{R}$-matrices, both for the affine Yangian and the quantum toroidal algebra of $\mathfrak{gl}_1$. The article then demonstrates how quantum toroidal algebras serve as a unifying algebraic framework that bridges different areas in physics. Notably, we cover topological string theory and supersymmetric gauge theories with eight supercharges, incorporating the AGT duality. Drawing upon the representation theory of the quantum toroidal algebra of $\mathfrak{gl}_1$, we provide a rather detailed review of its role in the algebraic formulations of topological vertex and $qq$-characters. Additionally, we briefly touch upon the corner vertex operator algebras and quiver quantum toroidal algebras.
Yutaka Matsuo, Satoshi Nawata, Go Noshita, Rui-Dong Zhu
2023-09-14T10:57:37Z
http://arxiv.org/abs/2309.07596v2
# Quantum toroidal algebras and ###### Abstract This is a review article on the quantum toroidal algebras, focusing on their roles in various solvable structures of 2d conformal field theory, supersymmetric gauge theory, and string theory. Using \(W\)-algebras as our starting point, we elucidate the interconnection of affine Yangians, quantum toroidal algebras, and double affine Hecke algebras. Our exploration delves into the representation theory of the quantum toroidal algebra of \(\mathfrak{gl}_{1}\) in full detail, highlighting its connections to partitions, \(W\)-algebras, Macdonald functions, and the notion of intertwiners. Further, we also discuss integrable models constructed on Fock spaces and associated \(\mathcal{R}\)-matrices, both for the affine Yangian and the quantum toroidal algebra of \(\mathfrak{gl}_{1}\). The article then demonstrates how quantum toroidal algebras serve as a unifying algebraic framework that bridges different areas in physics. Notably, we cover topological string theory and supersymmetric gauge theories with eight supercharges, incorporating the AGT duality. Drawing upon the representation theory of the quantum toroidal algebra of \(\mathfrak{gl}_{1}\), we provide a rather detailed review of its role in the algebraic formulations of topological vertex and \(qq\)-characters. Additionally, we briefly touch upon the corner vertex operator algebras and quiver quantum toroidal algebras.
2309.06850
Low-complexity hardware and algorithm for joint communication and sensing
Joint Communication and Sensing (JCAS) is foreseen as one very distinctive feature of the emerging 6G systems providing, in addition to fast and reliable communication, the ability to obtain an accurate perception of the physical environment. In this paper, we propose a JCAS algorithm that exploits a novel beamforming architecture, which features a combination of wideband analog and narrowband digital beamforming. This allows accurate estimation of the Time of Arrival (ToA), exploiting the large bandwidth, and of the Angle of Arrival (AoA), exploiting the high-rank digital beamforming. In our proposal, we separately estimate the ToA and AoA. The association between ToA and AoA is solved by acquiring multiple non-coherent frames and adding up the signal from each frame such that a specific component is combined coherently before the AoA estimation. Consequently, this removes the need to use 2D and 3D joint estimation methods, thus significantly lowering complexity. The resolution performance of the method is compared with that of the 2D MUltiple SIgnal Classification (2D-MUSIC) algorithm, using a fully-digital wideband beamforming architecture. The results show that the proposed method can achieve performance similar to a fully-digital high-bandwidth system, while requiring a fraction of the total aggregate sampling rate and having much lower complexity.
Andrea Bedin, Shaghayegh Shahcheraghi, Traian E. Abrudan, Arash Asadi
2023-09-13T09:53:52Z
http://arxiv.org/abs/2309.06850v1
# Low-complexity hardware and algorithm for joint communication and sensing ###### Abstract Joint Communication and Sensing (JCAS) is foreseen as one very distinctive feature of the emerging 6G systems providing, in addition to fast end reliable communication, the ability to obtain an accurate perception of the physical environment. In this paper, we propose a JCAS algorithm that exploits a novel beamforming architecture, which features a combination of wideband analog and narrowband digital beamforming. This allows accurate estimation of Time of Arrival (ToA), exploiting the large bandwidth and Angle of Arrival (AoA), exploiting the high-rank digital beamforming. In our proposal, we separately estimate the ToA and AoA. The association between ToA and AoA is solved by acquiring multiple non-coherent frames and adding up the signal from each frame such that a specific component is combined coherently before the AoA estimation. Consequently, this removes the need to use 2D and 3D joint estimation methods, thus significantly lowering complexity. The resolution performance of the method is compared with that of 2D MUltiple Signal Classification (2D-MUSC) algorithm, using a fully-digital wideband beamforming architecture. The results show that the proposed method can achieve performance similar to a fully-digital high-bandwidth system, while requiring a fraction of the total aggregate sampling rate and having much lower complexity. Joint communication and sensing, mmWave ## I Introduction The primary differentiation of 6G compared to 5G is Joint Communication and Sensing (JCAS). There are two key enablers of JCAS. First, _the large bandwidth_ available in the millimeter-wave spectrum enables not only higher data rates [1], but also higher ranging resolution. Second, the shorter wavelengths allow for very compact _large aperture antenna arrays_, thus enabling high-resolution beamforming and angular estimation. Sensing information can be used standalone, e.g., for user localization and navigation, and in imaging applications [2, 3, 4]. In addition, it can be used to enhance communication through better beam selection and preventing disruptions caused by blockages which are persistent issues of millimeter-wave communication systems. 6G is expected to support extreme Ultra Reliable Low Latency Communications (eURLLC) applications [5, 6], where best-effort service does not suffice anymore. An example of such traffic is cooperative robots, which according to [5], despite the low data rate on the order of kbps, may require failure rates as low as \(10^{-9}\). Another notable use case is self-driving vehicles, which pose specific requirements including: low throughput eURLLC for vehicle coordination and safety features, e.g., emergency braking and collision avoidance; JCAS for obstacle detection and environment mapping; and Massive data rates for entertainment and cooperative sensing. Since the key sensing parameters are the estimated Time of Arrival (ToA) and Angle of Arrival (AoA) corresponding to the targets of interest, we ideally need a high-rank and high-bandwidth full-MIMO systems. Although this is certainly possible theoretically, it poses major technical challenges in terms of power consumption and cost. 
Considering that a multi-GHz Analog to Digital Converter (ADC) can have a power consumption of over 2 watts [7, 8], large arrays using an individual ADC for each antenna become power hungry and expensive devices which are deemed unfeasible for commercial cellular devices, especially on the User Equipment (UE) side. For example, A UE with a \(16\) element array connected to \(2\)W ADCs would consume \(32\)W only for analog to digital conversion, which is impractical. Another relevant implementation challenge is the computational complexity of the sensing algorithms, which is a major practical limitation in the state-of-the-art sensing algorithms. Methods like 2D MUltiple SIgnal Classification (2D-MUSIC) are extremely complex due to the size of the covariance matrix and the search space. Finally, mono-static radar for sensing is practically infeasible, especially in mobile handsets, because they require very complex in-band full-duplex receivers [9]. ### Contributions We note that bandwidth is not critical for AoA estimation, and the eURLLC traffic which can benefit from large rank is typically low-throughput. Similarly, the array size has very little impact on ToA estimation, and massive data rates are also achievable with a low-rank system with analog beamforming. Therefore, we do not need to digitalize the signal from all antenna elements on the full bandwidth. We propose a novel hardware architecture that adds, on top of the usual analog beamforming, an individual narrowband RF chain for each antenna. The signal from the individual RF chains is then multiplexed into a single ADC and digitalized using a fraction of the sampling rate of the ADC for each antenna. More precisely, in this article, we make the following technical contributions: 1. In Section IV-A, we propose _a novel hardware architecture_ for low-complexity high-resolution sensing. The proposed architecture significantly reduces the power consumption and cost of the mmWave JCAS system by combining an equivalent network of low-bandwidth ADCs (realized by multiplexing multiple signals into a single ADC) for digital beamforming with a high-bandwidth ADC for analog beamforming. In addition, we propose a modulation scheme and beam design mechanism that is suitable for both communication and sensing, and provide a mathematical description of the output of the channel estimate that will be used later for AoA and ToA estimation. 2. In Section VI, we propose a novel JCAS algorithm exploiting the proposed architecture. First, using the wideband analog beamformer of the architecture, ToA is accurately estimated (e.g. using the MUltiple SIgnal Classification (MUSIC) algorithm). Then, Maximum Ratio Combining (MRC) is applied in the digital beamforming domain on multiple non-coherent frames to amplify the path component associated to each ToA. Finally, the corresponding AoA is estimated exploiting the combined digital beamforming signal (e.g. using Matrix Pencil). This way, AoA and ToA of the multipath components are estimated with high resolution and low complexity. The theoretical advantages of the proposed architecture and method are explored in Section V. 3. In Section VII, we evaluate the performance of the proposed method, both in terms of parameter estimation error and close target resolution capabilities, and compare its performance with the performance of 2D-MUSIC. 
We show that, despite the dramatically lower hardware and software complexity and the reduced power consumption, the proposed system has comparable performance to state-of-the-art solutions. ### Advantages of the Proposed Architecture and Method The proposed architecture possesses several advantages, both for communication and sensing, as explained below. 1. _Versatility:_ The proposed reconfigurable architecture comprising both a high-bandwidth analog beamformer and a low-bandwidth digital beamformer can support the vast range of requirements of modern standards much better than a classic analog or hybrid beamforming solution, while being considerably less expensive than a fully digital MIMO system. 2. _One-shot beam scanning:_ Our architecture overcomes the limitations of codebook-based beamforming approaches used in communication [10, 11] by designing the beam in real-time using MRC. Such a beam design procedure, in fact, requires individual knowledge of the channel for each antenna. Acquiring such information in a classical analog beamforming system imposes a significant overhead, as we are required to sweep through the codebook every time an updated channel state information is needed. This limits the frequency of the beam update, which in a very dynamic environment could lead to momentary disruptions in communication. As in the case of sensing though, it might not be necessary to sample all the antennas in the full bandwidth to obtain the channel state information required to design the beam. In our architecture, the beam can be thus optimized very often based on the narrowband digital beamforming subsystem channel estimates, while removing the necessity of time-consuming beam sweep procedures. It should be also noted that this beam update can be done _without interrupting the communication_, meaning that we remove any overhead related to beam training, at the expense of a slightly higher power consumption at the front end. 3. _Robust Low-SNR Sensing:_ The use of analog beamformer for the ToA estimate provides enhanced SNR, thus preventing the SNR collapse [12] of the MUSIC algorithm, i.e. the situation where a noise eigenvector is selected over a signal one due to random fluctuations of the eigenvalues. 4. _Low Computational Complexity:_ Finally, the method relies on subsequent 1D parameter estimation, rather than joint 2d estimations. This greatly improves its computational complexity. ## II related work Most of the recent literature integrated radar sensing with communication where a single waveform is used to extract both target parameters and communication symbols. The choice of waveform has a significant effect on the performance of JCAS. The existing JCAS waveforms may be classified into radar-based waveforms, communication-based waveforms and optimised JCAS waveforms [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. In the following, we provide an overview of these works. **JCAS using radar waveforms.** Some prior works embed communication information in typical radar waveforms such as, frequency modulated continuous wave (FMCW) [24, 25], phase modulated continuous wave (PMCW) [25, 26, 27], linear frequency modulated (LFM) wave [28], and pulse position modulated (PPM) wave [29]. The general drawback of these radar-based JCAS approach is that they cannot provide high data-rate transmission. 
In another approach, [30] proposes a PMCW with a direct sequence spread-spectrum technique to achieve high throughput for JCAS, where there is a trade-off between the length of the code sequence and data rate. Also, since it uses large code sequences, it suffers from high complexity and low energy efficiency. **JCAS using communication waveforms.** OFDM is one of the most common communication waveforms, and therefore is also widely used for JCAS [13, 14, 15, 16, 20, 21, 22]. Some works use standard compliant communication waveforms for JCAS [15, 31, 32, 33, 34, 35, 36, 37]. For example, [15] uses IEEE 802.11p car-to-cart communication standard for automotive radar, and [37] uses 802.11ax Wi-Fi-based for indoor human tracking, but due to the small bandwidth at sub-6GHz frequencies, the communication and sensing performance are limited. To overcome this issue, in [31], IEEE 802.11ad mmWave wireless local area network (WLAN) standard is exploited to realize a joint waveform for long-range radar applications. However, its performance is limited by the channel estimation methods implemented in the communication protocol. Moreover, a full-duplex radar operation is assumed, which makes the hardware implementation infeasible due to its complex self-interference cancellation requirement. Spread spectrum [17] and noise-OFDM [19] are other communication waveforms that have been used for JCAS. **JCAS using optimised waveforms.** Another option is to design a waveform that is suitable for JCAS by optimizing the metrics of radar and communication performance [38, 39]. In general, finding the optimal waveform can be computationally expensive. Contrary to these works, we propose a novel architecture that enables JCAS with high resolution, low complexity and low power consumption. In this paper, the received signals that are reflected from different vehicles are processed in a bistatic OFDM-based scenario, and the sensing parameters are estimated using the proposed method. ## III System model In this paper, we consider a system as depicted in Fig. 1, where a Base Station (BS) transmits data to a UE in a multipath environment. The UE receives the signal transmitted by the BS, estimates the Channel State Information (CSI) and estimate the AoA and ToA of the received multipath components. Although the BS could be using MU-MIMO, for simplicity, in this analysis, we only consider a single beam-formed stream directed to the UE of interest. Consequently, we can use a SIMO channel model, and consider any interference from other MU-MIMO streams as part of the SNR. Moreover, we assume that either the UE or the environment are not static (e.g., vehicular scenarios), and therefore the phases of the multipath components are not constant over the acquisition of multiple frames due to the Doppler shift. For example, at a frame rate of \(0.1\)ms, the position of a car moving at \(15\)m/s (\(54\) kmh) is displaced by \(1.5\)mm. Although negligible in terms of range, at \(60\)GHz carrier frequency, this displacement introduces a phase shift larger than \(90^{\circ}\). We assume that the received signal for frame \(i\) can be decomposed in a sum of \(K\) plane waves, each of which has an associated complex amplitude \(\alpha_{(k,i)}\), delay \(\tau_{k}\) and azimuth incidence angle \(\theta_{k}\). We consider a Uniform Linear Array (ULA) with \(N\) antennas spaced \(\frac{\lambda}{2}\), where \(\lambda\) is the wavelength of the received signal, and therefore do not consider the elevation angle. 
Nevertheless, the work can be adapted for more complex geometries. Finally, we assume that the channel is measured at subsequent time instants, and between those instants the geometry stays the same (i.e. \(\tau_{k}\) and \(\theta_{k}\) are constant) but the channel coefficients \(\alpha_{(k,i)}\) can change. With the above, the Channel Impulse Response (CIR) between the transmitter and antenna \(n\) can be written as \[h_{n}(i,t)=\sum_{k=0}^{K-1}\alpha_{(k,i)}e^{jn\pi\cos(\theta_{k})}\delta(t- \tau_{k}). \tag{1}\] This definition leads to a baseband Channel Frequency Response (CFR) of \[H_{n}(i,f)=\sum_{k=0}^{K-1}\alpha_{(k,i)}e^{jn\pi\cos(\theta_{k})}e^{-j2\pi f \tau_{k}}. \tag{2}\] with \(f\in(-\frac{B_{A}}{2},\frac{B_{A}}{2})\) and \(B_{A}\) is the total bandwidth of the system. ## IV Hybrid JCAS architecture In this section, we discuss the hardware architecture of the proposed system and briefly elaborate on its potential of enhancing communication. In addition, we propose a modulation scheme and beam design mechanism that is suitable for both communication and sensing, and provide a mathematical description of the output of the channel estimate that will be used later for AoA and ToA estimation. ### _Receiver hardware architecture_ To obtain good estimates for both the ToA and AoA, we require a system that is wideband and high rank, while maintaining practical costs and power consumption. While fully digital MIMO satisfies the former constraints, it fails on the last: its implementation is inherently costly and power-hungry due to the need for multiple high-speed ADCs. Our proposed architecture, depicted in Fig. 2, can achieve these goals. It comprises of \(N\) antennas connected to a classic analog beamforming chain (marked in yellow in the figure) that operates on the full bandwidth \(B_{A}\) of the system. In Fig. 1: System overview. Fig. 2: Hardware architecture. The wideband analog beamforming and the narrowband fully-digital beamforming blocks are highlighted in yellow and blue, respectively. particular, the signal from each antenna is amplified, phase shifted and added together in the analog domain. Subsequently, the combined signal is downconverted and digitalized by a single high-speed ADC. Calling the sampling period of the ADC \(T\leq\frac{1}{B_{A}}\), the transmitted and received signal \(x(t)\) and \(y_{A}(t)\) respectively, and \(\beta_{n}\) a complex beamforming coefficient with the amplitude determined by the LNA gain and the phase determined by the phase shifter, the output of the ADC would therefore be of the form \[y_{A}(sT)=\sum_{n=0}^{N-1}\beta_{n}(h_{n}*x)(sT),\;s\in\mathbb{Z}, \tag{3}\] where "*" denotes the convolution operator, and its Discrete Time Fourier Transform (DTFT), assuming that the signal is band-limited with bandwidth \(B_{A}\), is \[Y_{A}(f)=\sum_{n=0}^{N-1}\beta_{n}H_{n}X(f) \tag{4}\] On top of the calssical system, for each antenna, we implement an additional dedicated RF chain (marked in light blue) that has a narrower bandwidth. In particular, the signal from each antenna can be extracted after the low noise fronted amplifiers, individually downconverted to baseband and filtered with a Low Pass Filter (LPF). We now re-purpose one ADC and multiplex all the low bandwidth individual antenna signals into that it to obtain a low bandwidth digital beamforming chain. Let us define the set of antennas \(\mathcal{M}=\{m_{0},...,m_{M-1}\},M\leq N\) for which we are enabling the individual RF chain. 
At the \(s\)-th sample of the ADC, we connect the multiplexer to the RF chain of antenna \(m_{(s\mod M)}\). With this assumption, and defining the filter impulse response as \(g(t)\) and its transfer function as \(G(f)\), the signal \(y_{D}\) at the ADC output can be expressed as \[y_{D}(kT)=(h_{m_{(s\mod M)}}*g*x)(sT),\;s\in\mathbb{Z}. \tag{5}\] From this signals, we can extract the individual signals \[y_{j}(MsT) =y_{D}((Ms+j)T) \tag{6}\] \[=(h_{m_{((M+j)\mod M)}}*g*x)((Ms+j)T)\] (7) \[=(h_{m_{j}}*g*x)(MsT+jT),\] (8) \[s\in\mathbb{Z},\;j\in\{0,...,M-1\}\] Notably, the signal \(y_{j}(MsT)\) includes all the contribution of antenna \(m_{j}\) only, which is sampled regularly with a sampling interval \(MT\). The signal is also delayed by a time \(jT\), however, we ignore this shift as it can be compensated for in the digital domain. To satisfy the Nyquist-Shannon sampling theorem each of the signals must have a bandwidth of \(B_{D}\leq\frac{B_{A}}{M}\). If we further assume the frequency response of the filter is \[G(f)=\begin{cases}1&\text{if }f\in\left(-\frac{B_{D}}{2},\frac{B_{D}}{2} \right)\\ 0&\text{elsewhere}\end{cases}, \tag{9}\] we can write the DTFT of the received signal as \[Y_{j}(f)=H_{m_{j+1}}(f)X(f) \tag{10}\] for \(f\in\left(-\frac{B_{D}}{2},\frac{B_{D}}{2}\right)\). The key fact there is that with this architecture, the total sampling rate required is twice as much as a purely analog system, but still \(\frac{2}{N}\) times lower than a fully digital beamforming system, while maintaining both a full rank (though in a narrower band) and the full bandwidth. As an example, if we consider a mmWave 5G system with \(400\)MHz of bandwidth and 16 antennas, the total sampling rate required for classic analog beamforming would be \(400\)MS/s. By contrast, for the proposed architecture it would be \(800\)MS/s and for the fully digital architecture \(6.4\)GS/s. Scaling the power consumption of the ADC described in [7] by the required bandwidth would correspond to \(150\)mW for the analog beamforming, \(300\)mW for the proposed architecture and \(2.5\)W for the fully digital MIMO. Therefore, by replacing the fully digital MIMO system with the proposed architecture we could achieve a power consumption reduction of \(87.5\%\), which corresponds to roughly 2W. This reduction is quite significant in a battery-operated device, as a \(2\)W power consumption alone (i.e. without considering the CPU, screen, etc..) could drain a typical smartphone battery (e.g. with a \(15\)Wh capacity) in \(7.5\) hours. ### _JCAS Key Aspects_ As mentioned in the introduction, despite this paper being focused on sensing, we want to briefly elaborate on the potential of this architecture in communications. In particular, we want to highlight its ability to support both eURLLC traffic and high data rates at the same time. Some possible strategies to achieve this goal are listed in the following: * Using the digital beamforming part to obtain CSI and improve the analog beam design. This application is particularly relevant in situations where the channel varies quickly. In fact, classical beam sweep imposes major overhead in such situations, whereas when measuring the CSI with narrowband digital beamforming there is no overhead. * Using the analog beamforming part for high-throughput non-critical communications, such as video streaming, and the digital beamforming for low throughput critical time-sensitive transmissions, such as packets used for closed-loop control of machinery. 
The idea behind this implementation is to avoid packet loss due to beam failure: despite being a rare event, in an ultra-reliable setting with, e.g., a \(99.9999\%\) reliability requirement, beam failure can become a likely enough event to prevent the system to meet its specifications. We assume that CSI is inherently acquired by the communication system, e.g., for channel equalization and beamforming, so that this information is already available, and can be exploited for sensing. More specifically, we expect the sensing to have zero overhead on the radio channel, and do not cause any disruption to the communication. ### _Modulation and analog beamforming_ For this analysis, we assume that the system uses OFDM with \(S\) subcarriers1 and a subcarrier spacing of \(\Delta_{f}\), thus the bandwidth is \(B_{A}=S\Delta_{f}\). Such configuration is often found, e.g., in 4G and 5G commercial systems, and is foreseen to be maintained in 6G. Footnote 1: Since the FFT size is usually even (a power of two), we assume that the subcarrier with index -5/2 is a null subcarrier, such that the zero index subcarrier is located in the middle of the band. We assume that the transmitter includes some reference symbols in the transmitted signal. For each frame \(i\), we obtain a noisy channel estimate by demodulating the signal and extracting such reference symbols. For the analog beamformer, as we are observing the signal \(y_{A}(sT)\), we have the estimate of the channel after the phase shift and addition,which can be expressed as: \[\hat{H}_{A}(i,s\Delta_{f})=\sum_{n=0}^{N-1}\beta_{n,i}H_{n}(i,s \Delta_{f})+w, \tag{11}\] \[s\in\left\{-\left\lceil\frac{S}{2}\right\rceil+1,...,\left\lfloor \frac{S}{2}\right\rfloor-1\right\}, \tag{12}\] where \(\beta_{n,i}\) is the complex beamforming coefficient for antenna \(n\) and \(w\sim\mathcal{N}(0,\sigma_{n})\) is a white Gaussian noise. Similarly, for the digital beamforming we have the channel estimate for each antenna, obtained by demodulating the signals \(y_{D}(MsT)\), which is: \[\hat{H}_{m}(i,s\Delta_{f})=H_{m}(i,s\Delta_{f})+w, \tag{13}\] \[s\in\left\{-\left\lceil\frac{S}{2M}\right\rceil+1,...,\left\lfloor \frac{S}{2M}\right\rfloor-1\right\},m\in\mathcal{M}. \tag{14}\] If we assume that the beam is generated based on the available CSI, and is designed according to MRC with an Infinite Impulse Response (IIR) filter to mitigate the noise of the estimate, at every frame the beamforming coefficients for the antennas in \(\mathcal{M}\) are updated to: \[\beta_{m,i+1}=\mu\beta_{m,i}+(1-\mu)\hat{H}_{m}^{*}(i,0), \tag{15}\] where \(\mu\) is the IIR filter coefficient. Assuming that the channel at every antenna is updated regularly, after the initial beam alignment, this method allows to maintain a MRC beam without requiring any beam sweep or other procedures. ## V Theoretical advantage In order to evaluate the potential AoA and ToA resolution capabilities of the aforementioned architecture, we use the analysis proposed in [40]. We assume to acquire \(F\)=10 channel measurements on \(N=16\) antennas with a subcarrier spacing of \(\Delta_{f}\) = \(400\) kHz, and we have a SNR of \(\Gamma_{0}\) = 10 dB when transmitting on \(S\)=1000 subcarriers. The SNR \(\Gamma\) will be scaled accordingly for systems that use less bandwidth to maintain the same transmitted power. With these definitions, we can rewrite the equations in [40] to compute the ToA resolution \(\delta_{T}\) as \[\delta_{T}=\frac{1}{\pi S\Delta_{f}}\sqrt[4]{\frac{360(S-2)}{\Gamma SFN}}. 
\tag{16}\] To evaluate the AoA resolution, we consider two paths with distinct angles \(\vartheta_{1}\) and \(\vartheta_{2}\). In order to reuse the expression in Eq. (16) for angular resolution, we define the spatial frequencies \(\omega_{1}=\frac{\cos(\vartheta_{1})}{2}\) and \(\omega_{2}=\frac{\cos(\vartheta_{2})}{2}\). In the spatial domain, the antenna aperture plays the role of the bandwidth in the time domain. We thus have that the received phases associated with each component, as a function of the antenna number, are \(2n\pi\omega_{1}\) and \(2n\pi\omega_{2}\). Since our antenna spacing is \(\frac{\lambda}{2}\), we can write \[\delta_{\omega}=\frac{2}{\pi N}\sqrt[4]{\frac{360(N-2)}{\Gamma NFS}}. \tag{17}\] In order to quantify the gain that could be obtained with the proposed architecture, we compare the following systems: **Proposed system** uses the proposed architecture with \(N\)=16 antennas, \(|\mathcal{M}|=N\), \(B_{A}\) = \(400\) MHz and \(B_{D}\) = \(25\) MHz. For this system, the transmission takes place across \(S=1000\) subcarriers so the SNR is \(\Gamma=\Gamma_{0}\). **Equivalent classical system** A conventional digital MIMO architecture with \(N^{\prime}\in[4,16]\) antennas and a bandwidth of \(\frac{2B_{A}}{N^{\prime}}\) GHz. This system has a total sampling rate equivalent to the proposed system, but it only transmits on a fraction of the bandwidth, so its SNR is \(\Gamma=\frac{N^{\prime}}{2}\Gamma_{0}\). **Full MIMO system** A conventional digital MIMO architecture with \(N=16\) antennas and \(400\) MHz of bandwidth. The same SNR consideration mentioned for the proposed system holds. It should be noted that the Full MIMO system is clearly much more expensive than the others as it requires a total of \(16\) ADCs with \(400\) MHz bandwidth each. As such, it will be used as a baseline for comparison. For the proposed system we use the analog beamforming part to compute the ToA and the digital beamforming to compute the AoA. Fig. 3 illustrates the ToA and AoA resolution as a function of the number of antennas for these three systems. In particular, Fig. (a)a shows the resolution of the ToA, whereas in Fig. (b)b, we can see the AoA resolution in terms of spatial frequency difference (i.e. \(\omega_{1}-\omega_{2}\)). We can observe that our proposed architecture has roughly half the resolution of the full MIMO architecture in both ToA and AoA, while requiring a significantly lower sampling rate. Moreover, it matches or outperforms all the equivalent architectures for ToA estimation. It only falls short in AoA estimation with respect to the equivalent classical system for \(N^{\prime}\geq 12\) antennas. However, the difference in resolution is up to \(15\%\) for \(N^{\prime}=16\) antennas, and it corresponds to a ToA resolution that is \(4\) times worse. This is due to the narrower bandwidth of the equivalent classical system, which allows it to obtain an SNR 16 times higher with the same transmitted power. It should be also noted that this analysis applies for individual ToA and AoA estimation, whereas the method we propose in the following exploits the ToA information to improve the AoA estimation. For this reason, we expect the gap in AoA estimation resolution to be even higher in the final implementation. Next, we analyzed AoA and ToA as two disjoint parameters. However, real applications often rely on joint AoA and ToA estimation to correctly predict the location of the targets without ambiguity. 
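For reference, Eqs. (16) and (17) can be evaluated directly. The minimal sketch below uses the parameter values listed above; how the formulas are instantiated for the two branches of the proposed architecture (analog beam treated as \(N=1\) with the SNR boosted by a factor \(N\) for the ToA, roughly \(S/M\) subcarriers per antenna at the reference SNR for the AoA) follows the discussion above but is an assumption of this sketch.

```python
import numpy as np

def toa_resolution(S, N, F, snr, df=400e3):
    """ToA resolution of Eq. (16); snr is linear, df is the subcarrier spacing."""
    return (1.0 / (np.pi * S * df)) * (360.0 * (S - 2) / (snr * S * F * N)) ** 0.25

def aoa_resolution(S, N, F, snr):
    """Spatial-frequency resolution of Eq. (17)."""
    return (2.0 / (np.pi * N)) * (360.0 * (N - 2) / (snr * N * F * S)) ** 0.25

F, S, N, M = 10, 1000, 16, 16
gamma0 = 10.0 ** (10.0 / 10.0)          # 10 dB reference SNR when transmitting on S = 1000 subcarriers

# Proposed architecture: ToA from the analog beam (N = 1, SNR boosted by N),
# AoA from the narrowband digital chain (about S/M subcarriers per antenna).
print("proposed  dT [s]:", toa_resolution(S, 1, F, gamma0 * N))
print("proposed  dw    :", aoa_resolution(S // M, N, F, gamma0))

# Full MIMO baseline: all N antennas on the full bandwidth at the reference SNR.
print("full MIMO dT [s]:", toa_resolution(S, N, F, gamma0))
print("full MIMO dw    :", aoa_resolution(S, N, F, gamma0))
```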
While there are computationally heavy methods for joint estimation, such as 2D-MUSIC, to the best of our knowledge, there are no known solutions to the problem of matching separately estimated AoA and ToA that comes from heterogeneous beamforming architectures. In the following, we will provide a method to overcome this issue, and obtain joint ToA and AoA estimates, while still performing the estimation of each parameter independently. This is a key feature to support the rank-bandwidth trade-off that is provided by the proposed architecture, as 2D-MUSIC only works with fully digital MIMO. ## VI Sensing algorithm In this section, we provide an in-depth explanation of the proposed sensing technique, describe its computational complexity and compare it with the complexity of 2D-MUSIC. Fig. 4 illustrates our proposed approach for accurate ToA and AoA estimation. First, we estimate the ToA of the multipath components by applying a super-resolution algorithm, e.g., MUSIC, on the CSI obtained from the wideband analog beamforming. Given the large bandwidth, the resulting ToA estimation is highly accurate. In the second step, for each estimated ToA, we process the channel estimates from the narrowband digital beamforming to amplify the component related to the ToA of interest. In particular, we use MRC over frequency and frame to enhance the component of interest and suppress the other components and the noise. Finally, using matrix pencil, the AoA of each path is estimated from the amplified CSI. In the following, these steps are explained in detail. Without loss of generality, let us assume that we use a set of \(F\) consecutive frames starting from frame \(0\). The frames are non-coherent, i.e. the phase of each component changes between frames, and \(F\) should be chosen such that the targets do not move significantly between the first and last frame (i.e. its position change is much smaller than \(\frac{c}{H_{A}}\)). As described above, the first step to the algorithm is to estimate the ToA from the analog beamforming channel estimate. While this method is agnostic to the ToA estimation applied, for clarity, we use the MUSIC algorithm, described in appendix A. We assume the ToA estimation returns a list of peak times \(\mathcal{T}=\{\hat{\tau}_{0},...,\hat{\tau}_{L-1}\}\) For each \(\hat{\tau}_{t}\in\mathcal{T}\) we estimate the channel coefficient for all frames as \[\hat{\alpha}_{(\ell,i)}=\sum_{s=-\left\lceil\frac{\theta}{2}\right\rceil+1}^{ \left\lfloor\frac{\theta}{2}\right\rfloor-1}e^{j2\pi\tau_{I}s}H_{A}(i,s\Delta_ {f}). \tag{18}\] With this information, we can combine the digital beamforming channel estimates of all frames to amplify a specific component. The digital combined channel responses for \(m\in\mathcal{M}\) and path \(\ell\) is computed as: \[\bar{H}_{d}(\hat{\tau}_{t},m)=\sum_{i=0}^{F-1}\sum_{s=-\left\lceil\frac{\theta }{2\pi}\right\rceil+1}^{\left\lfloor\frac{\theta}{2\pi}\right\rfloor-1}\hat{ \alpha}_{(\ell,i)}^{*}e^{j2\pi\hat{\tau}_{\ell}s\Delta_{f}}\hat{H}_{m}(i,s \Delta_{f}). \tag{19}\] This is the compensated channel that will be used later for the AoA estimation. To show that it indeed this amplifies the component associated with the given ToA, we replace the channel model in the expression. Assuming a correct ToA and channel coefficient estimation, i.e. \(\hat{\tau}_{\ell}=\tau_{\hat{k}}\) and \(\hat{\alpha}_{(\ell,i)}=\alpha_{(\hat{k},i)}\) Fig. 4: Block diagram of the proposed method Fig. 
3: Theoretical resolution achievable by the proposed architecture and some classical fully digital beamforming architectures using MUSIC for some \(\hat{k}\), and a constant beam \(\beta_{n,i}=\beta_{n}\), we obtain2 Footnote 2: We omit the limits of \(s\) and \(i\) for compactness \[\bar{H}_{d}(\hat{\tau}_{\ell},m)=\sum_{i,s}\left(\sum_{n=0}^{N-1} \beta_{n}\sum_{k=0}^{K-1}\alpha_{(\hat{k},i)}e^{jn\pi\cos(\theta_{k})}\delta( \tau_{\hat{k}}-\tau_{k})\right)^{*}.\] \[e^{j2\pi\tau_{\hat{k}}s\Delta_{f}}\left(\sum_{k=0}^{K-1}\alpha_{ (k,i)}e^{jn\pi\cos(\theta_{k})}e^{-j2\pi s\Delta_{f}\tau_{\hat{k}}}+w\right). \tag{20}\] We now define \(\Xi_{n}=\sum_{n=0}^{N-1}\beta_{n}e^{jn\pi\cos(\theta_{k})}\) for compactness. We also note that the noise statistic is unaffected by a rotation in the complex plane, hence we can rewrite the expression as \[\bar{H}_{d}(\hat{\tau}_{\ell},m)=\Xi_{n}\sum_{i,s}\alpha_{(\hat{k},i)}^{*}\sum_{k=0}^{K-1}\alpha_{(k,i)}e^{jn\pi\cos(\theta_{k})}.\] \[e^{j2\pi s\Delta_{f}(\tau_{\hat{k}}-\tau_{\hat{k}})}+w. \tag{21}\] Without loss of generality, we assume \(\hat{k}=0\) and \(\Xi_{n}=0\). Then Eq. (21) can expressed as: \[\bar{H}_{d}(\hat{\tau}_{\ell},m)=\sum_{i,s}|\alpha_{(1,i)}|^{2}e^ {jn\pi\cos(\theta_{1})} \tag{22}\] \[\quad+\sum_{k=1}^{K-1}\sum_{i,s}\alpha_{(\hat{k},i)}^{*}\alpha_{ (k,i)}e^{jn\pi\cos(\theta_{k})}e^{j2\pi s\Delta_{f}(\tau_{\hat{k}}-\tau_{\hat {k}})}\] (23) \[\quad+\sum_{i,s}\alpha_{(\hat{k},i)}^{*}w. \tag{24}\] Here we can see that the component with \(k=0\) is combined coherently, whereas the other components, as well as the noise, are combined incoherently. This ensures that the component of interest is amplified. This operation is the core novelty of the proposed method, as it allows the ToA and AoA association that would be otherwise impossible because the narrowband part does not have enough bandwidth. The key observation here is that while the wideband analog and narrowband digital domains do not share a common AoA or ToA information that can be reliably used for the association, they do share the doppler domain, so the phase change over multiple frames is the only possible mean to connect the ToA and AoA results. After computing the digital combined channel for each path, we perform AoA estimation on each of the digitally combined channels. Again, the proposed scheme can work with different AoA estimation methods. In this case though, we note that the combination produces a single vector, therefore methods that relies on the singular value decomposition like MUSIC are not suitable to detect more than one angle. For this reason, we use the Matrix Pencil method, described in appendix B. We apply the such method to the vector \(\mathbf{\bar{H}}_{d}(\hat{\tau}_{\ell})=\left(\bar{H}_{d}(\hat{\tau}_{\ell},0 ),...,\bar{H}_{d}(\hat{\tau}_{\ell},M-1)\right)\) with a pencil parameter \(P\) to obtain the estimated spatial frequencies and amplitude of the components. For the generic \(\ell^{\text{th}}\) component, we obtain a set of spatial frequencies \(\mathbf{\Omega}_{\ell}=\{\omega_{\ell,\mathbf{0}},...,\omega_{\ell,\mathbf{M }-\mathbf{P}-\mathbf{1}}\}\). For each spatial frequency, thus obtaining the set of amplitudes \(\mathcal{A}_{\ell}=\{\hat{a}_{\ell,0},...,\hat{a}_{\ell,M-P-1}\}\), where: \[\hat{a}_{\ell,q}=\sum_{d=0}^{M-1}\bar{H}_{d}(\hat{\tau}_{\ell},q)e^{-j\frac{2 \pi d}{M}\omega_{\ell,q}}. 
\tag{25}\] Assuming that the number of components associated with the time domain sample \(\hat{\tau}_{\ell}\) are less than \(M-P\), some of the components reported will be a result of noise. Therefore, we apply a threshold to the amplitudes \(\mathcal{A}\) to remove such components. In particular, the threshold is defined as: \[\text{Thr}=\rho\sqrt{\frac{1}{M}\sum_{m=0}^{M-1}|\bar{H}_{d}(\hat{\tau}_{\ell},m)|^{2}}, \tag{26}\] where \(\rho\) is a configurable parameter.3 The components \(\{q:|\hat{a}_{\ell,q}|>\text{Thr}\}\) are kept, and their angle is estimated as \(\hat{\theta}_{\ell,q}=cos^{-1}\left(\frac{2\omega_{\ell,q}}{M}\right)\). This will generate the set \(\mathcal{Z}=\left\{Z_{1},...,Z_{O}\right\}\) of output range-angle pairs of the form \(Z_{o}=\left(\hat{\tau}_{\ell},\hat{\theta}_{\ell,q}\right)\). For the sake of a lighter notation, we define \(\tau(Z_{o})=\hat{\tau}_{\ell}\) and \(\theta(Z_{o})=\hat{\theta}_{\ell,q}\). The complete procedure is summarized in Algorithm 1. Footnote 3: Minimum Description Length Criterion or Akaike Information Criterion could also be used. ``` 1:\(\mathcal{Z}\leftarrow\{\}\) 2:\(\mathcal{T}\gets MUSIC(\hat{H}_{A}(i,s))\) 3:for each\(\ell\in\{0,...,L-1\}\)do 4:\(\hat{\alpha}_{\ell,i}\leftarrow\sum_{s=-\left\lceil\frac{\pi}{2}\right\rceil+1}^{- 1}e^{j2\pi\hat{\tau}_{\ell}s}H_{A}(i,s\Delta_{f})\) 5:\(\bar{H}_{d}(\ell,m)\leftarrow\sum_{i,s}\hat{\alpha}_{(\ell,i)}^{*}e^{j2\pi \hat{\tau}_{\ell}s\Delta_{f}}\hat{H}_{m}(i,s\Delta_{f})\) 6:\(\text{Thr}\leftarrow\rho\sqrt{\frac{1}{M}\sum_{m=0}^{M-1}|\bar{H}_{d}(\hat{\tau}_{ \ell},m)|^{2}}\) 7:\(\mathbf{\Omega}_{\ell}\leftarrow\mathbf{MatrixPencil}(\bar{\mathbf{H}}_{\mathbf{d}} (\hat{\tau}_{\ell}))\) 8:for each\(\omega_{\ell,q}\in\mathbf{\Omega}_{\ell}\)do 9:\(\hat{a}_{\ell,q}\leftarrow\sum_{d=1}^{M}\bar{H}_{d}(\hat{\tau}_{\ell},q)e^{-j \frac{2\pi d}{M}\omega_{\ell,q}}\) 10:if\(|\hat{a}_{\ell,q}|\geq\text{Thr}\)then 11:\(\mathcal{Z}\leftarrow\mathcal{Z}\cup\left\{\tau_{\ell},cos^{-1}\left(\frac{2 \omega_{\ell,q}}{M}\right)\right\}\) 12:endif 13:endfor 14:endfor ``` **Algorithm 1** Sensing algorithm ### _Computational complexity_ One of the main advantages of our proposed method is its low computational complexity. The improvement in performance is mainly due to breaking down the problem into two different 1-D estimation stages, as opposed to the joint 2D estimation used in, e.g., 2D-MUSIC, thus removing the need to search in a 2D parameter space. In particular, if we consider 2-D MUSIC, the computational complexity4 is \(\mathcal{O}((SN)^{3}+(SN)^{2}n_{\theta}n_{\tau})\), where \(n_{\theta}\) and \(n_{\tau}\) are the sizes of the search grid in angle and range, respectively. By contrast, our method reduces the complexity to \(\mathcal{O}(S^{3}+S^{2}n_{\theta}+KFS+KN^{3})\). Where the terms of the summation refer respectively to the matrix decomposition for the 1-D MUSIC (\(S^{3}\)), the spectrum computation for the 1-D MUSIC (\(S^{2}n_{\tau}\)), the \(K\) path component amplification steps (\(KFS\)) and the execution of Matrix Pencil (\(KN^{3}\)). It should be noted that the matrix pencil contribution only depends on the number of antennas and components, which are typically much smaller than the number of subcarriers. Therefore it often has a negligible contribution and can be ignored in most practical situations. 
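To make the scaling argument concrete, the toy comparison below evaluates the two operation counts with all constants hidden by the O(·) notation set to one; the grid sizes and the choice of scaling factors are illustrative, and the delay-grid size \(n_{\tau}\) is used for the 1-D MUSIC spectrum term, as in the breakdown above.

```python
def ops_2d_music(S, N, n_theta, n_tau):
    # O((SN)^3 + (SN)^2 * n_theta * n_tau), constants dropped
    return (S * N) ** 3 + (S * N) ** 2 * n_theta * n_tau

def ops_proposed(S, N, n_tau, K, F):
    # O(S^3 + S^2 * n_tau + K*F*S + K*N^3): eigendecomposition, 1-D MUSIC spectrum,
    # per-path amplification over F frames, and K matrix-pencil solves
    return S ** 3 + S ** 2 * n_tau + K * F * S + K * N ** 3

S, N, n_theta, n_tau, K, F = 128, 16, 180, 512, 4, 10   # illustrative sizes
for a in (1, 2, 4):                                     # scale S, N, n_theta, n_tau, K by alpha
    ratio = ops_2d_music(a*S, a*N, a*n_theta, a*n_tau) / ops_proposed(a*S, a*N, a*n_tau, a*K, F)
    print(f"alpha = {a}:  2D-MUSIC cost / proposed cost ~ {ratio:.2e}")
```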
Moreover, we note that the complexity of the path component amplification is not dependent on the number of antennas used since using more antennas results in a lower number of subcarriers for the digital beamforming. In order to better understand the difference in performance, we study the effect of scaling all the parameters (i.e. \(S\), \(N\), \(n_{\tau}\), \(n_{\theta}\) and \(K\)) by a factor \(\alpha\). In the case of 2D-MUSIC the complexity becomes \(\mathcal{O}(\alpha^{6}\left[(SN)^{3}+(SN)^{2}n_{\theta}n_{\tau}\right])\), whereas the proposed method has complexity \(\mathcal{O}(\alpha^{3}\left[S^{3}+S^{2}n_{\tau}+KFS+KN^{3}\right])\). For example, if \(\alpha=2\), i.e., we double all the parameters in the system, the proposed method will take \(8\) times longer to execute, whereas 2D-MUSIC will take \(64\) times longer. Indeed, from this example we can understand how the proposed method is much more scalable. ## VII Results ### _Numerical analysis_ For this analysis, we generate a channel according to the model described in III, with a bandwidth \(B_{A}\) = 400 MHz, \(S\) = 128 subcarriers, \(F\) = 10 frames and an SNR of 10 dB on each antenna. We use a 2 paths channel with \(\theta_{0}=15^{\circ}\), \(\tau_{0}=\frac{10\,\text{m}}{c}\), and \(\theta_{1}=30^{\circ}+\Delta\theta\)\(\tau_{1}=\frac{10\,\text{m}+\Delta L}{c}\), respectively. The complex amplitudes \(\alpha_{0}\) and \(\alpha_{1}\) have the same statistics, corresponding to a complex normal distribution of zero mean and the identity as covariance matrix. Moreover, we use \(\rho=0.3\) for the threshold in Eq. (26). We run the algorithm for \((\Delta\theta,\Delta L)\in\{0\,\text{m},0.0125\,\text{m},\ldots,0.5\,\text{m} \}\times\{0^{\circ},0.25^{\circ},\ldots,10^{\circ}\}\). For each \((\Delta\theta,\Delta L)\) pair, we generate \(100\) realizations of the complex channel coefficients, as well as the noise. For each realization, we compare the output \(\{Z_{1},\ldots,Z_{O}\}\) with the real channel parameters as followings. For each estimated component \(Z_{o}\), we find the closest true component according to the distance function \[d\left(Z_{o},(\tau_{k},\theta_{k})\right)=\sqrt{\frac{1}{\sigma_{\tau}^{2}}( \tau(Z_{o})-\tau_{k})^{2}+\frac{1}{\sigma_{\theta}^{2}}(\theta(Z_{o})-\theta_ {k})^{2}}. \tag{27}\] Without loss of generality, let us assume that the closest component is the first one. In case \(d\left(Z_{o},(\tau_{0},\theta_{0})\right)<1\), we associate the estimated component \(Z_{o}\) with the first component, otherwise it is considered a spurious component. If both real components have an associated estimated component, we say that the components have been resolved. Moreover, the said component is used to determine the Root Mean Square (RMS) error for the angle and delay estimation. For this evaluation, we use \(\sigma_{\tau}=\frac{30\,\text{cm}}{c}\) and \(\sigma_{\Theta}=3^{\circ}\). The resulting resolution probability for the proposed Fig. 5: Resolution probability of the proposed method, showing similar performance to full bandwidth fully digital MIMO and widely outperforming MIMO systems with equivalent sampling rate. method, as well as 2-D MUSIC on the full MIMO system is shown in Fig. 4(a) and 4(b), respectively. It should be noted that for 2D-MUSIC the correct number of components, and thus the size of the noise subspace, is _assumed to be know exactly_ and is provided as an input of the algorithm. 
In a real deployment, this parameter would need to be estimated as well, potentially further reducing the resolution capabilities of the method. We can see that the proposed method, despite using only \(25\%\) the total sampling rate and having much lower complexity, performs comparably to the classical 2D-MUSIC in terms of resolution. At least for the time domain, this is not unexpected: According to Eq. (16) in fact, the dependence on antenna and SNR of the resolution is proportional to \(\sqrt{4}\frac{1}{\Gamma N}\). Despite having \(N=1\) in the proposed architecture, we have that the SNR is \(\Gamma=\Gamma_{0}N\) thanks to the beamforming, so the product \(\Gamma N\) stays constant. In Fig. 4(c), 4(d) and 4(e) we can observe the resolution for 2D-MUSIC for a digital beamforming system with the same aggregate sampling rate as the proposed architecture. Here, the sampling rate is spent for either digitalizing multiple antennas or a larger bandwidth. We may easily notice that the proposed method exhibits a far superior resolution to 2D-MUSIC, either in range, angle or both, based on the allocation of the sampling rate. Finally, In Fig. 6, we can see the resolution achieved by the proposed method when provided with digital beamforming for the full bandwidth. The result is very close to the one show in Fig. 4(a). This clearly show that, at least from a resolution standpoint, fully digital MIMO on the whole bandwidth is not providing a huge advantage, and certainly does not justify the huge increase in cost and power consumption. In Fig. 7 we show the accuracy of the parameters estimated by the proposed method, as well as the one estimated by 2D-MUSIC for the full MIMO system and for the equivalent MIMO systems with \(N^{\prime}=\{2,4,8\}\) antennas. We generated a channel composed of a single component with a random angle and range, and ran the algorithm on a such channel to evaluate the error. We repeated the process for \(1000\) channel realizations to obtain the estimated Root Mean Square Error (RMSE), which is plotted as a function of the full bandwidth SNR \(\Gamma_{0}\). The actual SNR, however, has been adjusted for the equivalent MIMO systems to \(\gamma=\gamma_{0}\frac{2}{N^{\prime}}\). As it can be seen in the figure, at high SNR the performance of the proposed method is comparable with the fully-digital MIMO system, while they slightly degrade towards the low SNR regime. This degradation is likely due to the fact that in the simulation we use only one frame to design the beamforming coefficients, thus, the beam is affected by noise. In terms of angle, the system with \(N^{\prime}=8\) antennas seems to largely outperform the proposed method. It should be noted however that this advantage is caused to the higher SNR due to the lower total bandwidth, which also implies a large degradation in terms of communication due to the low bandwidth. ## VIII Discussion ### _Leakage issue_ Due to imperfections in the coefficient estimation, when two targets are very close we often observe that some of the Fig. 6: Resolution probability of the proposed method with a fully digital MIMO architecture, showing similar performance to the proposed architecture while requiring a much larger aggregate sampling rate. Fig. 7: Accuracy of the proposed method compared to 2D-MUSIC, showing comparable performance despite the much lower aggregate sampling rate. components are also visible at different times. An example is shown in Fig. 
8, where we may notice the algorithm estimation result in the case where the channel has two components (depicted in orange) at \((10\text{ m},30^{\circ})\) and \((10.25\text{ m},35^{\circ})\). As we can see, the two components are correctly identified, but the algorithm also detects two leaked components at \((10\text{ m},35^{\circ})\) and \((10.25\text{ m},30^{\circ})\). When the distances get closer, these components may have a significant amplitude, and sometimes they are not filtered out with a simple threshold. Moreover, if the amplitude of one component is significantly larger than the others, we may even observe that at a specific time, the real component has a lower amplitude than the leaked one. This unfortunately means that it is possible to misidentify some components when using the simple threshold strategy proposed in VII-A. Over the whole resolution experiment, we recorded an overall probability of observing at least a leaked component of \(25\%\), i.e., we reported at least one spurious component over at least one-quarter of the channel realizations. On average, we observed \(0.41\) leaked components per realization. This suggests that the leakage problem is significant, at least for close targets. It should be noted though, that for more spread targets the issue is a lot less relevant. For example, repeating the experiment with \((\Delta\theta,\Delta L)\in\{0\text{ m},1\text{ m},\ldots,5\text{ m}\}\times\{0^{ \circ},1^{\circ},\ldots,10^{\circ}\}\), the average fraction of observations showing leakage reduces to \(0.98\%\), with an average number of leaked component of \(0.017\). Moreover, it is always true that a real component is larger than the leaked component with the same angle, so it is possible to replace the threshold a better classification algorithm to identify the real components after the estimation. However, we will address this solution in future work. ### _Target illumination_ Naturally, in order to detect an object that should be both illuminated by the transmitter and fall within the receiver beam. In order to verify if this is the case, we can compute the Array Factor (AF) of the MRC beam, which is \[\text{AF}(\phi) =\sum_{n=0}^{N-1}\left(\sum_{k=0}^{K-1}\alpha_{k}e^{jn\pi\cos( \theta_{k})}\right)^{*}e^{jn\pi\cos(\phi)} \tag{28}\] \[=\sum_{k=0}^{K-1}\alpha_{k}^{k}\sum_{n=0}^{N-1}e^{-jn\pi\cos( \theta_{k})}e^{jn\pi\cos(\phi)}. \tag{29}\] This shows that the AF generated by the MRC can be decomposed in a linear combination of beams with expression \[\text{AF}_{k}(\phi)=\sum_{n=0}^{N-1}e^{-jn\pi\cos(\theta_{k})}e^{jn\pi\cos( \phi)}. \tag{30}\] Notably, \(\text{AF}_{k}(\phi)\) is the array factor of a beam pointed in the direction \(\theta_{k}\). If MRC is performed on both ends, this fact should guarantee both illumination of the target and a suitable receiver beam. Moreover, to improve the SNR of low amplitude components, we propose to add a perturbation to the transmitter beam. We define the new beamforming coefficients as \[\beta_{n}^{\prime}(\xi,\varphi)=\xi e^{jn\pi\cos(\varphi)}+(1-\xi)\beta_{n}. \tag{31}\] Where \(\beta_{n}\) are the beamforming coefficients derived by the MRC, \(\xi\in(0,1)\). This adds a lobe in the direction \(\varphi\) with an amplitude \(\xi\), which could potentially illuminate better some far or low reflective objects. An example of the resulting array factor farfield pattern with and without the additional lobe can be seen in Fig. 
9, where we show in blue the original pattern, and in dashed black four examples with an additional side lobe at \(-60^{\circ}\), \(-50^{\circ}\), \(-40^{\circ}\) and \(-30^{\circ}\), respectively. The side lobe has been added with a relative amplitude of \(\xi=0.2\). As can be clearly seen, the addition of the side lobe has a negligible impact on the gain of the main pattern. It should be noted that better illumination of those objects will cause their component at the receiver to have a higher amplitude, and thus to be amplified even further by the receiver MRC. The direction \(\varphi\) can then be swept to make sure that every possible object is illuminated. Fig. 8: Example of leakage. Fig. 9: Beamformed farfield pattern [dBi] without the additional lobe (blue) and with an additional lobe in different directions (black). Moreover, since we use only a fraction of the energy equal to \((1-\xi)^{2}\) in that beam, the communication should receive a penalty in SNR of roughly \(-20\log_{10}(1-\xi)\) dB, which for small \(\xi\) values should be negligible (e.g., \(\xi=0.2\) will cause a loss of roughly \(2\) dB). Moreover, MRC generates a beam that is suitable for communication, as the presence of major lobes in the direction of the received power ensures a high SNR.

## IX Conclusions

In this paper, we described a low-complexity method to extract ToA and AoA information from a novel hardware architecture that combines a high-bandwidth analog beamforming system with a low-bandwidth digital beamforming system. We have shown how it is possible to match the ToA estimate to the relevant AoA estimate by acquiring multiple non-coherent frames and using the phase differences between different ToAs to isolate a specific component in the narrowband digital beamforming domain. We have also conducted a numerical evaluation of the performance of the method, showing that it matches or outperforms 2D-MUSIC despite the use of significantly fewer ADC samples. Finally, the proposed architecture also allows for the design of a robust communication beam by extracting the channel coefficients from the digital beamforming part and applying MRC. Such a beam turns out to also be well suited for sensing, as it guarantees capturing energy from all possible directions.

## Appendix A MUSIC

MUSIC is a subspace-based super-resolution algorithm that relies on the eigenvalue decomposition of the sample covariance matrix of the received signal, computed as follows: \[\mathbf{R}_{y}=\mathbf{Y}\mathbf{Y}^{H} \tag{32}\] where \(\mathbf{Y}\) is computed as: \[\mathbf{Y}=\begin{bmatrix}\hat{H}_{A}\left(0,-\left\lceil\frac{S}{2}\right\rceil+1 \right)&\cdots&\hat{H}_{A}\left(F-1,-\left\lceil\frac{S}{2}\right\rceil+1 \right)\\ \vdots&&\vdots\\ \hat{H}_{A}\left(0,\left\lfloor\frac{S}{2}\right\rfloor-1\right)&\cdots&\hat {H}_{A}\left(F-1,\left\lfloor\frac{S}{2}\right\rfloor-1\right)\end{bmatrix}. \tag{33}\] The covariance matrix \(\mathbf{R}_{y}\) can be re-written as \[\mathbf{R}_{y}=\mathbf{A}\mathbf{R}_{s}\mathbf{A}^{H}+\sigma_{n}^{2}\mathbf{I} \tag{34}\] where \(\mathbf{A}\) and \(\mathbf{R}_{s}\) are as defined in (35) and (36), respectively, \(\sigma_{n}^{2}\) is the noise power, and \(\mathbf{I}\) is the identity matrix. The intuition behind the method comes from this decomposition: we can notice that the image of \(\mathbf{R}_{y}\) can be decomposed into two subspaces:
* The signal subspace, associated with the first term of Eq. (34), which is the subspace generated by the columns of \(\mathbf{A}\).
* The noise subspace, associated with the second term of Eq. (34), which is the subspace orthogonal to the signal subspace. We can also infer that, given a sufficient SNR, the eigenvalues associated with the signal subspace will be significantly larger than the ones associated with the noise subspace. MUSIC extracts the noise subspace by discarding the eigenvectors corresponding to the largest eigenvalues of \(\mathbf{R}_{y}\), as these are the vectors that span the signal subspace. The remaining eigenvectors, which correspond to the near-zero eigenvalues, constitute the noise matrix \(\mathbf{U}_{n}\). Finally, we can compute the MUSIC spectrum, \(P_{\text{MUSIC}}(\tau)=\frac{1}{\mathbf{a}^{H}(\tau)\mathbf{U}_{n}\mathbf{U}_{n}^{H}\mathbf{a }(\tau)}\), where \(\mathbf{a}(\tau)=[1,e^{-j2\pi\Delta_{f}\tau},...,e^{-j2\pi(S-1)\Delta_{f}\tau}]^{T}\) is the steering vector associated with the delay \(\tau\). By definition of the noise subspace, if \(\tau=\tau_{k}\) for some \(k\), due to the subspace orthogonality, \(\mathbf{a}^{H}(\tau)\mathbf{U}_{n}\mathbf{U}_{n}^{H}\mathbf{a}(\tau)\approx 0\), and therefore the MUSIC spectrum has a large value. We thus detect this condition by performing peak detection on \(P_{\text{MUSIC}}(\tau)\) and report the list of peak times \(\mathcal{T}=\{\hat{\tau}_{0},...,\hat{\tau}_{L-1}\}\).

## Appendix B Matrix Pencil

The Matrix Pencil method works as follows. From the vector \(\bar{\mathbf{H}}_{d}(\hat{\tau}_{\ell})\), we generate a Hankel matrix: \[\mathcal{H}_{\ell,P}=\begin{bmatrix}\bar{H}_{d}(\hat{\tau}_{\ell},0)&...&\bar {H}_{d}(\hat{\tau}_{\ell},P-1)\\ \bar{H}_{d}(\hat{\tau}_{\ell},1)&...&\bar{H}_{d}(\hat{\tau}_{\ell},P)\\ \vdots&\vdots&\vdots\\ \bar{H}_{d}(\hat{\tau}_{\ell},M-P-1)&...&\bar{H}_{d}(\hat{\tau}_{\ell},M-1) \end{bmatrix}. \tag{37}\] From this matrix, we generate the two matrices \(\mathcal{H}_{\ell,P}^{(1)}\) and \(\mathcal{H}_{\ell,P}^{(2)}\) by removing the last and the first column of \(\mathcal{H}_{\ell,P}\), respectively. It can be shown that the generalized eigenvalues of the pair \(\left\{\mathcal{H}_{\ell,P}^{(1)},\mathcal{H}_{\ell,P}^{(2)}\right\}\) are of the form \(e^{j\frac{\pi}{M}\omega_{\ell,q}}\), where \(\omega_{\ell,q}\) is the spatial frequency associated with the \(q\)-th component [41, 42, 43, 44].
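The pencil step above can be prototyped in a few lines of NumPy. The sketch below is illustrative only and is not the authors' implementation: the signal model, the number of samples, the pencil parameter, the noise level, and the heuristic of keeping the eigenvalues closest to the unit circle are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
M, P = 64, 20                                  # samples and pencil parameter (assumed values)
omega_true = [0.8, 1.1]                        # spatial frequencies (rad/sample) to recover
m = np.arange(M)
h = (np.exp(1j * omega_true[0] * m) + 0.7 * np.exp(1j * omega_true[1] * m)
     + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)))

# Hankel matrix analogous to Eq. (37): each row is a length-(P+1) sliding window of h
H = np.array([h[i:i + P + 1] for i in range(M - P)])
H1, H2 = H[:, :-1], H[:, 1:]                   # drop last / first column, as in the appendix

# Generalized eigenvalues of the pencil, obtained via a least-squares pseudo-inverse
lam = np.linalg.eigvals(np.linalg.pinv(H1) @ H2)
lam = sorted(lam, key=lambda z: abs(abs(z) - 1))[:2]   # keep the two closest to the unit circle
print(sorted(np.angle(lam)))                   # ~ [0.8, 1.1]
```

In a real receiver the recovered angles would then be mapped back onto the angle-of-arrival grid used by the rest of the pipeline; that mapping is omitted here to keep the sketch self-contained.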
2309.07582
On Performance of Fluid Antenna System using Maximum Ratio Combining
This letter investigates a fluid antenna system (FAS) where multiple ports can be activated for signal combining for enhanced receiver performance. Given $M$ ports at the FAS, the best $K$ ports out of the $M$ available ports are selected before maximum ratio combining (MRC) is used to combine the received signals from the selected ports. The aim of this letter is to study the achievable performance of FAS when more than one ports can be activated. We do so by analyzing the outage probability of this setup in Rayleigh fading channels through the utilization of Gauss-Chebyshev integration, lower bound estimation, and high signal-to-noise ratio (SNR) asymptotic approximations. Our analytical results demonstrate that FAS can harness rich spatial diversity, which is confirmed by computer simulations.
Xiazhi Lai, Tuo Wu, Junteng Yao, Cunhua Pan, Maged Elkashlan, Kai-Kit Wong
2023-09-14T10:26:59Z
http://arxiv.org/abs/2309.07582v1
# On Performance of Fluid Antenna System using Maximum Ratio Combining ###### Abstract This letter investigates a fluid antenna system (FAS) where multiple ports can be activated for signal combining for enhanced receiver performance. Given \(M\) ports at the FAS, the best \(K\) ports out of the \(M\) available ports are selected before maximum ratio combining (MRC) is used to combine the received signals from the selected ports. The aim of this letter is to study the achievable performance of FAS when more than one ports can be activated. We do so by analyzing the outage probability of this setup in Rayleigh fading channels through the utilization of Gauss-Chebyshev integration, lower bound estimation, and high signal-to-noise ratio (SNR) asymptotic approximations. Our analytical results demonstrate that FAS can harness rich spatial diversity, which is confirmed by computer simulations. Diversity, fluid antenna system (FAS), maximum ratio combining (MRC), outage probability. ## I Introduction Fluid antenna system (FAS) capitalizes upon the inherent spatial diversity by dynamically adjusting the antenna elements to optimal positions, referred to as "ports". This new paradigm stands in contrast to traditional communication methodologies, in which the antenna elements remain in fixed positions, as elucidated by Shojaeifard _et al._ in [1]. The realization of FAS may come in the forms of liquid-metal-based antennas [2] or on-off pixel-based antennas [3]. See [4] for more details. Motivated by the great potential of FAS, recent research has delved into the FAS channel model, deriving the probability density function (PDF) of the received signal-to-noise ratio (SNR) as well as the corresponding outage probability [5, 6, 7]. Remarkably, the outcomes of their investigation unveiled the superiority of the FAS scheme over conventional fixed-position antenna systems, particularly when a considerable multitude of ports is at disposal. Machine learning techniques have also been shown to be effective in port selection for FAS [8]. Most recently, Wong _et al._ has extended the use of FAS for multiple access by taking advantage of the ups and downs of fading channels in the spatial domain, and illustrated the possibility of alternative multiple access schemes using FAS [9, 10, 11]. However, research in FAS is still in an early stage and the majority of the results so far are limited to FAS with only one selected port exhibiting the maximum SNR [5, 6, 7, 8, 9, 10]. The fact that a mobile terminal can actually afford more than one radio frequency (RF) chains, means that it is increasingly probable that FAS can come with multiple activated ports, with better performance [12]. Since maximum ratio combining (MRC) is the optimal mixing scheme without interference, it is therefore of great importance to understand the achievable performance of FAS using MRC if more than one ports can be selected for reception. This is the aim of this letter. Specifically, our contributions are summarized as follows: * First, we consider a \(K\)-port FAS which corresponds to a FAS with \(K\) selected ports, operating in Rayleigh fading channels. The mobile receiver selectively activates \(K\) optimal ports from the available \(M\) ports. Then MRC is employed to combine the \(K\) branches of signals from the activated ports. We derive the outage probability of the proposed \(K\)-port FAS using both Laplace transform (LT) and Gauss-Chebyshev integration methods. 
* Additionally, we present the lower bound and asymptotic expressions for the outage probability.
* The simulation results substantiate the effectiveness of the proposed analytical approach, thereby confirming and validating our insights and discussions.

## II System Model

Consider an end-to-end communication in Rayleigh fading channels, where the source transmits the signal using a conventional fixed-position antenna with transmit power \(P_{S}\), while the receiver is equipped with a FAS with \(K\) fluid antenna elements.1 Each antenna element is connected to one RF chain. Within this particular FAS configuration, a linear space of \(W\lambda\) encompasses a total of \(M\) ports, where \(\lambda\) represents the wavelength [5]. These \(M\) ports are assumed to be evenly distributed, and \(K\) out of the total \(M\) ports can be activated for signal reception. Footnote 1: In our idealized mathematical model, a FAS with multiple single-activated-port fluid antennas is equivalent to a FAS with multiple activated ports although their specific implementation details will differ. Since the ports are closely spaced, their channel parameters are correlated. Building upon the channel model developed in [7] and [10], we introduce a virtual reference port to model the channel correlation. This virtual reference port is characterized by a channel parameter \(h_{0}\sim\mathcal{CN}(0,\alpha)\), following a complex Gaussian distribution with zero mean and variance \(\alpha\). Accordingly, the SNR of \(h_{0}\) can be written as \[\gamma_{0}=\frac{P_{S}|h_{0}|^{2}}{\sigma^{2}}, \tag{1}\] where \(\sigma^{2}\) denotes the noise power level. Considering \(h_{0}\) as a complex Gaussian random variable (RV), the PDF of \(\gamma_{0}\) can be expressed as \[f_{\gamma_{0}}(x)=\frac{1}{\phi}e^{-\frac{x}{\phi}}, \tag{2}\] where \(\phi=P_{S}\alpha/\sigma^{2}\) represents the average received SNR. Now, we proceed to establish the channel parameter linking the source and the \(m\)-th port, denoted as \(h_{m}\), where \(m\in\mathcal{M}=\{1,2,\ldots,M\}\). The expression for \(h_{m}\) takes the form \[h_{m}=\sqrt{\mu}\,h_{0}+\sqrt{1-\mu}\,e_{m}, \tag{3}\] where \(e_{m}\sim\mathcal{CN}(0,\alpha)\) for \(m\in\mathcal{M}\) are independently and identically distributed (i.i.d.) RVs, and \(\alpha\) is the average channel gain from the source to the ports. Additionally, \(\mu\) denotes the correlation factor, which is given by [7] \[\mu= \sqrt{2}\sqrt{{}_{1}F_{2}\Big{(}\frac{1}{2};1,\frac{3}{2};-\pi^{2} W^{2}\Big{)}-\frac{J_{1}(2\pi W)}{2\pi W}}, \tag{4}\] where \({}_{a}F_{b}\) denotes the generalized hypergeometric function and \(J_{1}(\cdot)\) is the first-order Bessel function of the first kind. Conditioned on a fixed channel parameter \(h_{0}\), and in accordance with \(\gamma_{0}\), the corresponding SNR of \(h_{m}\), expressed as \(\gamma_{m}=\frac{P_{S}|h_{m}|^{2}}{\sigma^{2}}\), follows a non-central chi-square distribution. The conditional PDF can be expressed as \[f_{\gamma_{m}|\gamma_{0}=x_{0}}(x)= \omega e^{-\omega(x+\mu x_{0})}I_{0}\big{(}2\omega\sqrt{\mu x_{0}x}\big{)}, \tag{5}\] where \(\omega=\big{(}\phi(1-\mu)\big{)}^{-1}\). Besides, \(I_{0}(u)\) is the modified Bessel function of the first kind with order \(0\), which can be expressed in series representation as [13] \[I_{0}(z)=\sum_{k=0}^{\infty}\frac{z^{2k}}{2^{2k}k!\Gamma(k+1)}.
\tag{6}\] Combining (5) with (6), we further derive \(f_{\gamma_{m}|\gamma_{0}=x_{0}}(x)\) as \[f_{\gamma_{m}|\gamma_{0}=x_{0}}(x)= \sum_{k=0}^{\infty}c_{k}x_{0}^{k}e^{-\omega\mu x_{0}}x^{k}e^{- \omega x}, \tag{7}\] where \[c_{k}=\frac{\omega^{2k+1}\mu^{k}}{(k!)^{2}}. \tag{8}\] In order to receive the signal transmitted from the source, the receiver selects the \(K\) ports with the \(K\) highest received SNR from the available total of \(M\) ports for activation. The set of selected ports is denoted by \[\mathbb{K}=\arg\textit{K}\max_{m\in\mathcal{M}}\gamma_{m}, \tag{9}\] where \(\textit{K}\max_{m\in\mathcal{M}}\gamma_{m}\) denotes to select the \(K\) maximal \(\gamma_{m}\) out of set \(\mathcal{M}\). In addition, to process the received signals from different antenna elements, the MRC technique is utilized to combine the \(K\) branches of signals. Moreover, the channel state information (CSI) is assumed to be not available at the source; hence the transmission data rate is fixed to \(R\). Therefore, the outage of communication occurs when the FAS cannot sustain the data rate \(R\), i.e., \[\mathcal{E}=\left\{\log_{2}\left(1+\sum_{m\in\mathbb{K}}\gamma_{m}\right)\leq R \right\}. \tag{10}\] Thus, the system's outage probability is written as \[P_{\mathrm{out}}=\Pr\left(\mathcal{E}\right). \tag{11}\] ## III Performance Analysis Here, we derive the exact outage probability of the proposed FAS-enabled communications. Subsequently, the lower bound and asymptotic expressions of the outage provability of system are derived. These derivations offer valuable insights for the proposed FAS-enabled communications system. ### _Exact Outage Probability_ Consider the port with the \((K+1)\)-th maximal channel gain, denoted as \(v\). Given \(\gamma_{0}=x_{0}\), the outage probability is expressed as \[\Lambda(z) =\Pr\left(\sum_{m\in\mathbb{K}}\gamma_{m}\leq z|\gamma_{0}=x_{0}\right)\] \[\overset{(a)}{=}\binom{M}{K}(T+1)\int_{0}^{\infty}\Phi(z)\Psi(v) f_{v|\gamma_{0}=x_{0}}(v)dv, \tag{12}\] where \(z=2^{R}-1\) denotes the SNR threshold of outage, \(T=M-K-1\), and \[\Psi(v,x_{0})=\Pr\left(\gamma_{m}\leq v,m\in\mathcal{T}|\gamma_{0}=x_{0} \right), \tag{13}\] is the probability that \(T+1\) ports are idle with maximal channel gain \(v\), and \(\mathcal{T}=\{1,2,\ldots,T\}\). Also, \[\Phi(z,v,x_{0})=\Pr\left(\sum_{m\in\mathcal{K}}\gamma_{m}\leq z,\gamma_{m}>v| \gamma_{0}=x_{0}\right), \tag{14}\] is the probability that \(K\) ports are selected and outage occurs and \(\mathcal{K}=\{1,2,\ldots,K\}\). Step \((a)\) holds since \(\gamma_{m}\) for \(m\in\mathcal{M}\) are i.i.d. RVs, and \(\Psi(v,x_{0})\Phi(z,v,x_{0})\) represents the outage probability related to one of the port selection results. In the following, we derive the expressions of \(\Psi(v,x_{0})\) and \(\Phi(z,v,x_{0})\). Then we obtain the outage probability by taking the expectation of \(\Lambda(z)\) with respect to \(\gamma_{0}\). First, it is important to note that \(\forall m,l\in\mathcal{T}\), \(\gamma_{m}\) and \(\gamma_{l}\) are independent with each other given \(\gamma_{0}=x_{0}\). Furthermore, in accordance with (5), the joint PDF of \(\gamma_{m}\) for \(m\in\mathcal{T}\) can be expressed as \[f_{\gamma_{m},m\in\mathcal{T}|\gamma_{0}=x_{0}}(x_{1},\ldots,x_{ T})\\ =\prod_{m=1}^{T}\omega e^{-\omega(x_{m}+\mu x_{0})}I_{0}\big{(}2 \omega\sqrt{\mu x_{0}x_{m}}\big{)}. 
\tag{15}\] Then, by utilizing (13) and (15), we evaluate \(\Psi(v,x_{0})\) as \[\Psi(v,x_{0}) =\int_{0}^{v}\cdots\int_{0}^{v}f_{\gamma_{m},m\in\mathcal{T}| \gamma_{0}=x_{0}}(x_{1},\ldots,x_{T})dx_{1}\cdots dx_{T}\] \[=\Big{(}1-Q_{1}\big{(}\sqrt{2\omega\mu x_{0}},\sqrt{2\omega v}\big{)} \Big{)}^{T}, \tag{16}\] where \(Q_{1}(\cdot,\cdot)\) is the first order Marcum-\(Q\) function [6]. Next, we proceed to derive the analytical expression of \(\Phi(z,v,x_{0})\) by utilizing the following theorem. **Theorem 1**: _The LT expressions of the following functions_ \[g(x)=x^{a}e^{-bx}u(x-v), \tag{17}\] \[p(x)=(x-a)^{K-1}e^{-bx}u(x-a), \tag{18}\] _are, respectively,_ \[L[g(x);s]=e^{-(s+b)v}\sum_{l=0}^{a}\frac{a!v^{l}}{l!(s+b)^{a+1-l}}, \tag{19}\] \[L[p(x);s]=\frac{(K-1)!e^{-a(s+b)}}{(s+b)^{K}}, \tag{20}\] _where \(\mathrm{Re}(s)\geq-b\), \(\mathrm{Re}(\mathrm{x})\) denotes the real part of \(x\), and \(u(\cdot)\) is the step function._ See Appendix A. From Theorem 1 and (7), the LT of the PDF of \(\gamma_{m}\) with \(\gamma_{m}>v\) is given by \[L\big{[}f_{\gamma_{m}|\gamma_{0}=x_{0}}(x_{m});s\big{]}\\ =e^{-(s+\omega)v-\omega\mu x_{0}}\sum_{m=0}^{\infty}\sum_{l=0}^{m }\frac{d_{m}x_{0}^{m}v^{l}}{l!(s+\omega)^{m+1-l}}, \tag{21}\] where \(\mathrm{Re}(\mathrm{s})\geq-\omega\) and \(d_{m}=c_{m}m!\). Then, by using the fallung theorem in [13], the LT of the PDF of RV \(\bar{\gamma}=\sum_{m=1}^{K}\gamma_{m}\) conditioned on \(\gamma_{m}>v\) can be derived as \[L\big{[}f_{\bar{\gamma}|\gamma_{0}=x_{0}}(x);s\big{]}\\ =\Big{(}L\big{[}f_{\gamma_{m}|\gamma_{0}=x_{0}}(x_{m});s\big{]} \Big{)}^{K}\\ =e^{-Kv(s+\omega)-K\mu x_{0}}\sum_{\genfrac{}{}{0.0pt}{}{r_{m}=0 }{m\in\mathcal{K}}}^{\infty}\rho_{m}x_{0}^{\eta_{m}}\sum_{\genfrac{}{}{0.0pt }{}{m=0}{m\in\mathcal{K}}}^{\tau_{m}}\frac{v^{\epsilon_{m}}q_{m}}{(s+\omega)^{ \chi_{m}}}, \tag{22}\] where \[\left\{\begin{array}{l}\rho_{m}=\prod_{m=1}^{K}d_{m},\\ \eta_{m}=\sum_{m=1}^{K}r_{m},\\ \epsilon_{m}=\sum_{m=1}^{K}l_{m},\\ q_{m}=\prod_{m=1}^{K}\frac{1}{l_{m}!},\\ \chi_{m}=K+\eta_{m}-\epsilon_{m}.\end{array}\right. \tag{23}\] Utilizing Theorem 1, we can obtain the PDF of \(\bar{\gamma}\) conditioned on \(\gamma_{0}=x_{0}\) as \[f_{\bar{\gamma}|\gamma_{0}=x_{0}}(x)=e^{-\omega(x+K\mu x_{0})} \sum_{\genfrac{}{}{0.0pt}{}{r_{m}=0}{m\in\mathcal{K}}}^{\infty}\rho_{m}x_{0}^ {\eta_{m}}\\ \times\sum_{\genfrac{}{}{0.0pt}{}{r_{m}=0}{m\in\mathcal{K}}}^{ \tau_{m}}q_{m}\frac{\big{(}x-Kv\big{)}^{\chi_{m}-1}}{(\chi_{m}-1)!}, \tag{24}\] with \(x\geq Kv\). Based on (24), the computation of \(\Phi(z,v,x_{0})\) can be performed by \[\Phi(z,v,x_{0}) =\int_{Kv}^{z}f_{\bar{\gamma}|\gamma_{0}=x_{0}}(x)dx\] \[=e^{-\omega(Kv+K\mu x_{0})}\sum_{\genfrac{}{}{0.0pt}{}{r_{m}=0}{m \in\mathcal{K}}}^{\infty}\rho_{m}x_{0}^{\eta_{m}}\] \[\times\sum_{\genfrac{}{}{0.0pt}{}{r_{m}=0}{m\in\mathcal{K}}}^{ \tau_{m}}q_{m}\frac{\gamma\big{(}\chi_{m},\omega(z-Kv)\big{)}}{(\chi_{m}-1)! \omega^{\chi_{m}}}, \tag{25}\] in which \(z\geq Kv\) is a necessary condition; otherwise, \(\Phi(z,v,x_{0})=0\). In addition, \(\gamma(\alpha,x)\) is the lower incomplete Gamma function, which can be expressed in integral and serial representations respectively, as \[\gamma(\kappa,x)=\int_{0}^{x}e^{-t}t^{\kappa-1}dt=(\kappa-1)!\left(1-e^{-x} \sum_{m=0}^{\kappa-1}\frac{x^{m}}{m!}\right). 
\tag{26}\] Calculating \(\Lambda(z)\) in (12) with (16) and (25), and then taking the expectation of \(\Lambda(z)\) with respect to \(\gamma_{0}\), the outage probability of the system can be computed as \[P_{\mathrm{out}}=\int_{0}^{\infty}\int_{0}^{z/K}\binom{M}{ K}(T+1)\Phi(z,v,x_{0})\Psi(v,x_{0})\\ \times f_{\gamma_{m}|\gamma_{0}=x_{0}}(v)f_{\gamma_{0}}(x_{0})dvdx_ {0}. \tag{27}\] **Remark 1**: _From (16), it becomes evident that \(\Psi(v,x_{0})\) becomes small for large \(T\), owing to the fact that \(Q_{1}(\cdot,\cdot)\) is bounded by 1 [6]. This observation implies that \(P_{\mathrm{out}}\) in (27), i.e., the outage probability of the system, approaches zero when the total number of ports \(M\to\infty\).2_ Footnote 2: Note that the conclusion may vary depending on how spatial correlation over the ports is modelled. That said, the analysis presented in this letter gives the first-look performance of FAS using MRC. It is noticeable that the integral in (27) presents computational challenges. To address this, we first replace the infinite upper limit of the outer integral in (27) with a sufficiently large value denoted as \(H\). This approximation is valid because the integrand in (27) tends to approach zero as \(x_{0}\) increases. Subsequently, we resort to Gauss-Chebyshev integration to derive an accurate approximation of \(P_{\mathrm{out}}\) in series representation: \[P_{\mathrm{out}}\approx\binom{M}{K}\frac{\pi^{2}Hz(T+1)}{4U_{p} U_{l}}\sum_{p=1}^{U_{p}}\sum_{l=1}^{U_{l}}\Phi(z,y_{l},y_{p})\Psi(y_{l},y_{p})\\ \times\sqrt{1-t_{p}^{2}}\sqrt{1-t_{l}^{2}}f_{\gamma_{m}|\gamma_{0 }=y_{p}}(y_{l})f_{\gamma_{0}}(y_{p}), \tag{28}\] where \(U_{p}\) and \(U_{l}\) are complexity-accuracy tradeoff parameters, and \[\left\{\begin{aligned} t_{p}&=\cos\left(\frac{(2p-1) \pi}{2U_{p}}\right),\\ y_{p}&=\frac{H(t_{p}+1)}{2},\\ t_{l}&=\cos\left(\frac{(2l-1)\pi}{2U_{l}}\right),\\ y_{l}&=\frac{z(t_{l}+1)}{2K}.\end{aligned}\right. \tag{29}\] According to [14], the approximation provided in (28) is tight for large \(U_{p}\) and \(U_{l}\).

### _Lower Bound and Asymptotic Analysis_

To facilitate the computation and analysis of \(P_{\mathrm{out}}\), we derive a lower bound for \(P_{\mathrm{out}}\) in this subsection. Notably, this lower bound closely approximates the exact outage probability, particularly in the high SNR region. Moreover, we analyze the asymptotic behavior of \(P_{\mathrm{out}}\) and discuss the performance bottleneck of the system. First, from (7), it is clear that \(f_{\gamma_{m}|\gamma_{0}=x_{0}}(x)\) is lower-bounded by \[\bar{f}_{\gamma_{m}|\gamma_{0}=x_{0}}(x)=\omega e^{-\omega\mu x_{0}}e^{-\omega x}. \tag{30}\] Based on (30), we can accordingly obtain the lower bounds of \(\Psi(v,x_{0})\) and \(\Phi(z,v,x_{0})\), respectively, as \[\bar{\Psi}(v,x_{0}) =e^{-\omega T\mu x_{0}}\sum_{t=0}^{T}{T\choose t}(-1)^{t}e^{- \omega tv}, \tag{31}\] \[\bar{\Phi}(z,v,x_{0}) =e^{-\omega(Kv+K\mu x_{0})}-e^{-\omega(z+K\mu x_{0})}\] \[\times\sum_{k=0}^{K-1}\frac{\omega^{k}}{k!}\sum_{m=0}^{k}{k \choose m}z^{k-m}(-Kv)^{m}.
\tag{32}\] Applying (30)-(32) into (27), we can obtain \[P_{\mathrm{out}} \geq\bar{P}_{\mathrm{out}}\] \[={M\choose K}\frac{T+1}{M\mu\omega\phi+1}\] \[\times\left(\sum_{t=0}^{T}{T\choose t}\beta_{t}-\sum_{t=0}^{T} \sum_{k=0}^{K-1}\sum_{m=0}^{k}{T\choose t}{k\choose m}\kappa_{t,k,m}\right), \tag{33}\] where \[\beta_{t} =\frac{(-1)^{t}}{(t+K+1)}\big{(}1-e^{-z\omega(t+K+1)}\big{)}, \tag{34}\] \[\kappa_{t,k,m} =\frac{(-1)^{t+m}K^{m}(z\omega)^{k-m}\gamma\left(m+1,\frac{z \omega(t+1)}{K}\right)}{k!(t+1)^{m+1}}. \tag{35}\] From (7), it becomes apparent that the exact value of \(P_{\mathrm{out}}\) approaches the lower bound \(\bar{P}_{\mathrm{out}}\) as the average received SNR becomes large, i.e., the values of \(\alpha\) or \(P_{S}\) are large. Moreover, when the value of \(M\) and \(K\) are large, the calculations of outage probabilities in (27) and (28) become intricate. In contrast, the evaluation of \(\bar{P}_{\mathrm{out}}\) using the expression in (33) remains computationally efficient, aiding in the analysis of the performance of the proposed system. Furthermore, by applying the expansion \(e^{-x}=1-x\) for tiny value of \(|x|\), we can obtain the asymptotic expression of outage probability in the high SNR region as \[P_{\mathrm{out}}\simeq\psi(z\omega)^{M}, \tag{36}\] where \[\psi=\] \[{M\choose K}\frac{(T+1)(1-\mu)}{K!(M\mu+1-\mu)K^{T+1}}\sum_{k=0}^{ K}{K\choose k}\frac{1}{k+T+1}. \tag{37}\] **Remark 2**: _The asymptotic outage probability in (36) indicates that the diversity order of the FAS-aided communication system is \(M\). This means that the proposed system can fully exploit the diversity offered by total available \(M\) ports, regardless of the number of activated ports \(K\). Therefore, enhancing the system's performance by increasing the number of \(K\) ports is feasible; yet the improvement is less significant than the improvement of increasing \(M\)._ ## IV Numerical Results In this section, we present several numerical results for the FAS-aided communications. Following a similar approach to the work in [9], we assume the value of \(W=5\), which is a common choice for 5G networks in the context of handset devices. Moreover, we set the data rate \(R\) to \(5\) bit/s/Hz, leading to an outage SNR threshold \(z\) set at 31. Unless specified otherwise, we refer to the outcomes of our simulations as "Simul". Also, we denote the results obtained from (28), (33), and (36) as "Ana.", "LB", and "Asy.", respectively. Fig. 1 illustrates the variations in outage probability with the average SNR (\(\phi\)), considering different values of \(M\) and \(K\). As observed from Fig. 1, it is evident that the analytical outage probability derived from equation (28) closely aligns with the simulation results. Additionally, the lower bound provided by equation (33) accurately approximates the simulation outcomes, particularly in the high SNR region, which corroborates with the asymptotic result in equation (36). Furthermore, Fig. 1 indicates that the outage probability of the system is predominantly influenced by the total port number \(M\), affirming the analysis in equation (36) that the diversity stemming from all available ports can be maximally exploited. It is worth noting that the gain achieved by increasing \(K\) from \(2\) to \(4\) in the high SNR region is approximately \(3.8\), consistent with the findings presented in equation (36). 
However, the enhancement resulting from increasing the number of activated ports \(K\) is comparatively less pronounced than the gains derived from increasing \(M\). Fig. 1: Outage probability versus average SNR \(\phi\). Fig. 2 provides a visualization of the relationship between the number of activated ports (\(K\)) and the resulting outage probability in the context of the \(K\)-port FAS-aided communications system. The experiment is conducted with \(\phi\) set at \(10\) dB, and two distinct values for the total port count (\(M\)), namely \(10\) and \(20\). Meanwhile, the number of activated ports \(K\) is allowed to vary within the interval of \(1\) to \(8\). Upon examining the results depicted in Fig. 2, it becomes evident that increasing the count of activated ports (\(K\)) contributes significantly to enhancing the overall system's outage performance. However, the most striking insight emerges from the clear trend indicating that the advantages stemming from augmenting the total port count (\(M\)) are even more pronounced. This noteworthy pattern is in concordance with the analytical findings presented in the preceding sections.

## V Conclusion

In this letter, we analyzed the FAS-aided communications system with multiple activated ports, where the MRC technique was utilized to combine the signals from the different activated ports. The outage probability of the proposed system has been derived in Rayleigh fading channels, in the form of an exact expression, a lower bound, and an asymptotic expression. The analysis showed that the diversity order of the system equals the total number of available ports. Simulation results corroborated the effectiveness of the provided analysis.

## Appendix A Proof of Theorem 1

According to the definition of the LT, we can compute the LT expression of \(g(x)\) in (17) as \[L[g(x);s] =\int_{0}^{\infty}g(x)e^{-sx}dx=\int_{v}^{\infty}x^{a}e^{-(s+b)x}dx\] \[=e^{-(s+b)v}\sum_{l=0}^{a}\frac{a!v^{l}}{l!(s+b)^{a+1-l}}, \tag{38}\] where \(\mathrm{Re}(s)\geq-b\), and the last step follows from repeated integration by parts. Similarly, we can compute the LT expression of \(p(x)\) as \[L[p(x);s] =\int_{0}^{\infty}p(x)e^{-sx}dx\] \[=\int_{a}^{\infty}(x-a)^{K-1}e^{-(b+s)x}dx\] \[\stackrel{{(e_{1})}}{{=}}\frac{e^{-a(s+b)}}{(s+b)^{ K}}\int_{0}^{\infty}t^{K-1}e^{-t}dt\] \[\stackrel{{(e_{2})}}{{=}}\frac{(K-1)!e^{-a(s+b)}}{( s+b)^{K}}, \tag{39}\] where \(\mathrm{Re}(s)\geq-b\), step (\(e_{1}\)) follows from the substitution \(t=(x-a)(s+b)\), and step (\(e_{2}\)) follows from the Gamma integral \(\int_{0}^{\infty}t^{K-1}e^{-t}dt=\Gamma(K)=(K-1)!\).
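As a complement to the proof above and to the numerical results in Section IV, the outage event in (10) can also be estimated directly by Monte Carlo simulation. The sketch below is illustrative only and is not the authors' code: it fixes the correlation factor \(\mu\) to a numerical value instead of evaluating (4), sets \(\alpha=1\), and uses the correlation model in (3) together with the port-selection rule in (9).

```python
import numpy as np

rng = np.random.default_rng(1)

def outage_mc(M=10, K=2, mu=0.3, snr_db=10.0, R=5.0, trials=200_000):
    """Empirical outage probability of the K-port FAS with MRC (illustrative sketch)."""
    phi = 10 ** (snr_db / 10)                 # average received SNR, phi = P_S * alpha / sigma^2
    z = 2 ** R - 1                            # outage threshold in (10)
    cn = lambda size: (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)
    h0 = cn(trials)[:, None]                  # virtual reference port, CN(0, 1)
    e = cn((trials, M))                       # i.i.d. port-specific components
    h = np.sqrt(mu) * h0 + np.sqrt(1 - mu) * e
    gam = phi * np.abs(h) ** 2                # per-port SNRs
    best = np.sort(gam, axis=1)[:, -K:]       # select the K strongest ports, as in (9)
    return np.mean(best.sum(axis=1) <= z)     # MRC combining and outage event (10)

print(outage_mc())
```

The empirical value returned by such a simulation can then be compared against the series approximation in (28) or the lower bound in (33).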
2305.19469
Energy coupling in intense laser solid interactions: material properties of gold
In the double-cone ignition inertial confinement fusion scheme, high density DT fuel is rapidly heated with high-flux fast electrons, which are generated by short and intense laser pulses. Gold cone target is usually used to shorten the distance between the critical surface and the compressed high density DT core. The material properties of solid gold may affect the generation and transport of fast electrons significantly, among which the effects of ionization and collision are the main concerns. In this work, the effects of ionization, collision and blow-off plasma on laser energy absorption rate are investigated using the LAPINS code: A three-stage model is adopted to explain the mechanism of fast electron generation and the change in laser energy absorption rate. With the increase of the charge state of Au ions, the laser-plasma interaction transfers to the later stage, resulting in a decrease in laser energy absorption rate. Collision has both beneficial and harmful effects. On one hand, collision provides a thermal pressure that makes it easier for electrons to escape into the potential well in front of the target and be accelerated in the second stage. On the other hand, collision increases stopping power and suppress electron recirculation within the target in the third stage. The vacuum sheath field behind the target enhances the electron circulation inside the target and thus improves the laser energy absorption, however this effect will be suppressed when the blow-off plasma density behind the target increases or collision is considered.
Xu Liu, Dong Wu, Jie Zhang
2023-05-31T00:35:37Z
http://arxiv.org/abs/2305.19469v1
# Energy coupling in intense laser solid interactions: material properties of gold ###### Abstract In the double-cone ignition inertial confinement fusion scheme, high density DT fuel is rapidly heated with high-flux fast electrons, which are generated by short and intense laser pulses. Gold cone target is usually used to shorten the distance between the critical surface and the compressed high density DT core. The material properties of solid gold may affect the generation and transport of fast electrons significantly, among which the effects of ionization and collision are the main concerns. In this work, the effects of ionization, collision and blow-off plasma on laser energy absorption rate are investigated using the LAPINS code: A three-stage model is adopted to explain the mechanism of fast electron generation and the change in laser energy absorption rate. With the increase of the charge state of Au ions, the laser-plasma interaction transfers to the later stage, resulting in a decrease in laser energy absorption rate. Collision has both beneficial and harmful effects. On one hand, collision provides a thermal pressure that makes it easier for electrons to escape into the potential well in front of the target and be accelerated in the second stage. On the other hand, collision increases stopping power and suppress electron recirculation within the target in the third stage. The vacuum sheath field behind the target enhances the electron circulation inside the target and thus improves the laser energy absorption, however this effect will be suppressed when the blow-off plasma density behind the target increases or collision is considered. ## I Introduction Inertial confinement fusion (ICF) has been proposed and studied for decades as one of the two main paths to achieve stable and controllable fusion [1; 2]. In ICF, deuterium-tritium fuel is compressed to a state of high density and high temperature under the action of the driver energy (e.g., superintense laser, ion beams or Z-pinch device), and the plasma is confined long enough by its inertial for the thermonuclear burn to produce copious amounts of fusion energy. In 1972, Nuckolls first came up with the idea of compressing tiny targets with high-power lasers to bring thermonuclear fuel to ignition conditions [3]. Then an approach to ICF, known as the Fast Ignition scheme (FI) is proposed in which precompressed fuel is ignited by an external hot electron source [4; 5; 6; 7]. In principle, fast ignition can contribute to a higher gain than the conventional center ignition scheme. In addition, due to the separation of ignition process and the implosion process, the limits of compression symmetry and hydrodynamic instability in conventional hot-spot ignition scheme could be relaxed in FI [8]. Double-Cone Ignition scheme (DCI) [9] is a newly-proposed method that involves four processes: quasi-isentropic compression, acceleration, collision of fuels and rapid heating. The acceleration and collision process of DCI are able to pre-heat the fuel to provide 20%-30% of the energy required for ignition. As a result, the energy requirement of picosecond heating laser pulses can be significantly reduced. Through the symmetrical collision process of two high-speed fuel targets, the newly generated fuel can reach a temperature of about 1 keV and double its density. 
Finally in the rapid heating process, guided by an applied magnetic field, MeV electron beams generated by the interaction of picosecond heating laser pulses and several gold cones can reach the core area of the fuel, heating the fuel to ignition temperature. To improve the energy coupling and achieve a higher gain, it is of broad significance to study the role of gold cone in the generation and transport process of fast electron beams. The optimal geometric parameters of the cone, such as cone angle and cone structure, have been studied [10; 11; 12] and widely accepted. There are also multidimensional simulations of the interaction between laser and Au cone to study the hot-electron generation and their transport inside the Au cone [13; 14]. However, the mechanism of laser-solid gold cone interaction and how the properties of gold cone affects the generation of fast electrons are not fully understood. For the convenience of simulation, density of solid gold is artificially decreased and the binary collision between electrons and ions is often ignored in past research. In our simulations, the high electron density of the gold cone is the same as the real gold cone, and we take into account collisions between particles which is handled based on Monte Carlo method to bring our results closer to the real situation. By the way, in the simulation considering collision process, the reported results are divergent. Some researchers believe that collision will reduce the electron supply to the laser-plasma interaction (LPI) region therefore is harmful to the laser-target energy coupling [15]. On the contrary, some other researchers believe that collision helps to delay the formation of electron density steepening at the interface so it is beneficial for energy coupling [16]. In this paper, using the recent developed PIC code LAPINS [17; 18; 19], the coupling of multiple physical processes in laser-solid gold interaction is studied. The results show that the process of laser-plasma interaction could be divided into three different stages. Laser energy absorption rate decreases due to the transition of the laser-plasma interaction to the later stage when charge state of Au ions increases. Collision has both beneficial and harmful effects. On the one hand, collision provides a thermal pressure which makes it easier for the electrons to escape into the potential well in front of the target to be accelerated in the second stage. On the other hand, collision increases the stopping power and suppresses the electron recirculation within the target in the third stage. The vacuum sheath field behind the target enhances the electron circulation inside the target and thus improves the laser energy absorption, however this effect will be suppressed when the blow-off plasma density behind the target increases or collision is considered. Our results may provide references for the on-going DCI champaign in the gold cone target design used for rapid electron heating. The paper is structured as follows: Sec II introduces the simulation setup parameters and some information about the LAPINS code. In Sec III, the time evolution of laser-plasma interaction process considering dynamic ionization is analyzed. A three-stage model of laser plasma interaction is given to explain how the absorption rate of laser energy is influenced by ionization and collisional effects. 
In order to verify the three-stage model by controlling the variables, simulations of different fixed charge state with and without collision are also carried out. In Sec IV, the relationship between laser energy absorption rate and target charge state is presented. The role of the collisional effects in laser-target energy coupling is then discussed in Sec V. And the collisional effects on electron recirculation in the presence of blow-off plasma is discussed in Sec VI. Finally, summary of this paper is given in Sec VII. ## II Model description The simulation is carried out using the PIC code LAPINS [17; 18; 19]. In LAPINS, multiple physical effects such as collision [20], ionization [21; 22], radiation [18], QED [23] and nuclear reactions [24], are included and coupled. To simulate the interactions between laser and matter with a large number of particles, the weighted particles technique is used in simulations, which has proven to be more efficient than the uniformly weighted particles in the calculation [25]. The collision model in our PIC code is based on Monte Carlo binary collisions [20], including binary collisions among ion-electron, ion-ion and electron-electron. Contributions of both free and bound electrons are considered in the model. The calculation of the collision process is carried out in three steps: (i) pair of particles are randomly selected from the cell, which may be ion-electron, ion-ion or electron-electron pair; (ii) for the selected pair of particles, we calculate the particle velocity change due to Coulomb collision within the time interval; (iii) replace the velocity of each particle by the newly calculated one. For the ionization module, our code includes field ionization (FI), collision ionization (CI) based on the electron-ion collision cross sections, electron-ion recombination (RE) based on three body recombination, and ionization potential suppression (IPD) model [21; 22]. In addition, a high-order implicit numerical method is used in our LAPINS code to avoid numerical heating and reduce the calculation burden by using large grid size [17; 18]. The 1D simulation box is 35 \(\mu\)m long with grid resolution of 0.01 \(\mu\)m in which 1000 particles per cell are placed. The linearly polarized laser is incident from the left, with a wavelength of \(\lambda=1\)\(\mu\)m and an intensity of \(I=10^{20}\) W/cm\({}^{2}\). The rising time and dropping time of the incident Gaussian profile laser are 100 fs and the flat time is 1 ps. The simulation time is set to 1.2 ps, which is equal to the incidence time of the laser. The absorbing boundary condition of fields and particles are adopted in the direction of laser propagation. The 15 \(\mu\)m Au target is placed at the end of the simulation box, with solid density of 19.32 g/cm\({}^{3}\) and initial temperatures of 100 eV for both ions and electrons. In front of the target, 5 \(\mu\)m pre-plasma is attached in which electron number density increases from 3.5\(\times 10^{19}\) cm\({}^{-3}\) to 5\(\times 10^{21}\) cm\({}^{-3}\) exponentially with the scale length of 1 \(\mu\)m. Two observation planes are set at \(x=21\)\(\mu\)m and \(x=34\)\(\mu\)m respectively to count the number and energy of fast electrons passing through them. The schematic of the simulation is shown in Fig. 1. In cases where the charge state of Au ions is calculated dynamically in the next section, the initial charge state of Au ions is \(Z=5\). Figure 1: Schematic of the simulated situation. 
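Before turning to the interaction dynamics, it is useful to record the derived laser-plasma parameters implied by the setup above. The following back-of-the-envelope sketch (an illustration, not LAPINS output) evaluates the normalized laser amplitude, the relativistic factor, and the ponderomotive (Wilks) electron temperature using the standard scalings also quoted in the next section.

```python
import numpy as np

# Illustrative cross-check of the laser parameters in the setup (not LAPINS output).
I = 1e20          # laser intensity, W/cm^2
lam = 1.0         # wavelength, micron

a0 = np.sqrt(I * lam**2 / 1.37e18)        # normalized laser amplitude, paper's convention
gamma = np.sqrt(1 + a0**2)                # relativistic factor used for gamma * n_c
n_c = 1.1e21 / lam**2                     # classical critical density, cm^-3
T_wilks = (gamma - 1) * 0.511             # Wilks ponderomotive temperature, MeV

print(f"a0 ~ {a0:.1f}, gamma ~ {gamma:.1f}")                      # ~8.5, ~8.6
print(f"relativistic critical density ~ {gamma*n_c:.2e} cm^-3")   # ~ 8.6 n_c
print(f"Wilks temperature ~ {T_wilks:.1f} MeV")                   # ~3.8-3.9 MeV
```

The resulting values, gamma of about 8.6 and a ponderomotive temperature near 3.8-3.9 MeV, match the relativistically corrected critical density and the Wilks scaling temperature quoted in the next section, and the maximum pre-plasma density of 5\(\times 10^{21}\) cm\({}^{-3}\) indeed lies below the relativistic critical density.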
## III Three-stage model of laser plasma interaction In this section, we start with the time evolution of laser energy absorption rate and give a three-stage theoretical model of laser-plasma interaction process. Fig. 2 shows the time evolution of laser energy absorption rate, in which the laser energy absorption rate refers to the ratio of the kinetic energy increment of electrons and ions to the increment of laser energy input during every 0.1 ps. Note that at \(t=0.2\) ps, the laser energy absorption rates in both cases are almost the same (\(\sim 17\%\)), because the incident laser is interacting with the pre-plasma in front of the target which have the same density profile in different cases. During \(t=0.3-0.5\) ps the laser energy absorption rates maintain at a relatively high value (\(\sim 27\%\)). After \(t=0.6\) ps, the energy absorption rates in both cases decrease significantly. Based on the above analysis, it can be inferred that the laser-target interaction process can be divided into three different stages [26; 27], which is the reason for the change of laser energy absorption rate with time. A schematic of the three-stage model of laser-plasma interaction is shown in Fig. 3. We label these three stages as (I) absorption near the critical density, (II) interaction with the shelf plasma, and (III) interaction with the steep surface, respectively. We will illustrate how these three stages affect laser energy absorption. In the first stage, the laser passes through the pre-plasma in front of the target and propagates to the relativistically modified critical density \(n_{c}^{\prime}=\gamma n_{c}\approx 8.6n_{c}\), where \(\gamma=\sqrt{1+I\lambda^{2}/1.37\times 10^{18}}\) is the Lorentz factor. The dominant absorption mechanism in the first stage is stochastic heating and \(J\times B\) heating in the pre-plasma. Fig. 4 shows the time-integral energy spectrum of hot electrons passing through the front observation plane at \(x=21\:\mu\)m for the two dynamic ionization cases until \(t=1.2\) ps. As can be seen from Fig. 4, the electrons in the energy spectrum are divided into different parts, corresponding to the three-stage model. The highest energy part corresponds to the first stage, and its slope (shown by the dotted line) is consistent with the temperature obtained by Wilks scaling [28]\(\epsilon_{T}=(\gamma-1)m_{e}c^{2}\sim 3.8\) MeV. Fig. 2 shows that the laser energy absorption rates of dynamic ionization cases with and without collision are almost the same before \(t=0.4\) ps, which is consistent with the fact that the number of high-energy electrons (\(E>6\) MeV) generated in the two cases are almost the same in Fig. 4. This is due to the low collision frequency in the pre-plasma which has a low electron density and high temperature. The relationship between collision frequency \(\nu\) and electron density \(n_{e}\) and temperature \(T_{e}\) is given by: \(\nu\propto n_{e}T_{e}^{-3/2}\). From the temperature and electron density profile given in Fig. 5 and Fig. 6, it can be estimated that the collision frequency in the pre-plasma is approximately seven orders of magnitude lower than that inside the target. Then, under the action of laser ponderomotive force, pre-plasma is gradually compressed into the target and the ions are ionized to produce more free electrons, symbolizing the second stage of laser-plasma interaction. Fig. Figure 4: Time-integral energy spectrum of hot electrons passing through the front observation plane at \(x=21\:\mu\)m until \(t=1.2\) ps. 
Figure 3: Schematic of the three-stage model of laser-plasma interaction. Stage I: Absorption near the critical density. Stage II: Interaction with shelf plasma. Stage III: Interaction with steep surface. Figure 2: Evolution of laser energy absorption rate calculated every 0.1 ps for dynamic ionization cases. (red) Collisional case, (blue) collisionless case. 5 shows the time evolution of Au ions charge state in the two cases considering dynamic ionization and Fig. 6 shows the time evolution of electron density in the two cases. Due to field ionization, the charge state of the ions in front of the target quickly increases to \(Z=51\) and remains unchanged. For the collisionless case, the charge state of Au ions in the target is relatively uniform. However, for the collisional case, the energy deposition of fast electrons causes the local charge state of several \(\mu\)m depths in the target to be higher. This also results in an overall higher charge state of the target than in the collisionless case. Fig. 7 shows the electron density and the corresponding longitudinal electric field Ex of the two dynamic ionization cases at \(t=400\) fs. Electrons are gathered in front of the target by strong positive-negative field, and are then loaded into the vacuum by another electric field on the left. Fig. 8 shows the electron longitudinal phase plot and corresponding electrostatic potential for the three stages. Note that in the figures, \(-\phi\) is plotted instead of \(\phi\) in order to compare with the energy of electrons. As shown in Fig. 8(c) and (d), an electrostatic potential well is formed before the target in the second stage of laser-plasma interaction. It is known that the electron oscillating coherently with the electric field of a plane wave obtains zero energy in one period. However, due to the electrostatic potential in front of the target, the phase coherence is broken when electrons moving in the electrostatic potential well [27], so that electrons can be accelerated to an energy much higher than the maximum electrostatic potential by the incident laser and reflected laser. The spectrum of the electrons generated in the second stages form a shoulder-like area in Fig. 4. Ionization not only provides an additional energy absorption mechanism, but also generates more electrons for acceleration, leading to an increase in the laser energy absorption rate in this stage. During pre-plasma compression, on the one hand, collision increases the charge state and charge to mass ratio (\(Z/A\)) of Au ions, making the ions and pre-plasma in front of the target easier to be swept away; on the other hand, collision provides a hot pressure \(P=2n_{e}T_{e}/3\) to counteract the laser ponderomotive force. The pre-plasma compression processes with and without collision are almost identical due to the two opposite effects of collision. It can be seen from Fig. 6 that in both collisionless case and collisional case, the pre-plasma is swept away under the action of the laser ponderomotive force at \(t=600\) fs, leading to the third stage of laser-plasma interaction. The laser directly interacts with the target surface and is reflected at the critical density \(\gamma n_{c}\) in the third stage, whose energy absorption rate and the electron energy are the lowest of the three stages. Since the pre-plasma in front of the target is almost completely swept away, electron acceleration in the electrostatic potential well is no longer important in the third stage. As shown in Fig. 
8(e) and (f), only a small number of electrons escape from the front surface of the target. Ignoring the small fraction of electrons escaping from the target surface, there are equilibrium conditions for pressure, electron flux, and energy at the target surface. First, the pressure equilibrium condition for electrons at the laser critical interface can be written as: \[\frac{(1+\eta)I_{in}}{c}=n_{e}e\Delta\phi+\frac{2}{3}n_{e}T_{e} \tag{1}\] In which \(\eta,I_{in}\) are the laser reflectivity and laser intensity, \(n_{e},T_{e}\) are the density and temperature (in the energy unit) of electrons, \(\Delta\phi\) is the potential difference of the charge separation field caused by laser light pressure. For collisionless cases, the last term \(2n_{e}T_{e}/3\) does not exist. Therefore potential difference \(\Delta\phi\) in collisionless cases will be larger than that in collisional cases. As is shown in Fig. 8(e) and (f), the potential difference in collisionless case is 0.1 MV higher than that in collisional case, which is consistent with the interface temperature of about \(0.1-1\) MeV in collisional case shown in Fig. 5(d). The fast electron energy can be written as: \[\epsilon_{e}=(\sqrt{1+\frac{{p_{x}}^{2}}{m_{e}^{2}c^{2}}}-1)m_{e}c^{2}=\frac{( 1+\eta)I_{in}}{n_{e}c} \tag{2}\] And the equilibrium conditions for electron flux and energy flux are given by: \[n_{r}v_{r}=n_{f}v_{f} \tag{3}\] \[I_{in}-I_{re}=\chi I_{in}=\epsilon_{e}v_{f}n_{f} \tag{4}\] Figure 5: Time evolution of charge state of Au ions (a) (b) and energy density of hot electrons (c) (d) in the two cases considering dynamic ionization. (a) (c) Collisionless case, (b) (d) collisional case. In which \(n_{r},v_{r}\) are the density and velocity of electrons which returns to the target surface from inside the target, \(n_{f},v_{f}\) are the density and velocity of fast electrons that gain energy at the surface and are re-injected into the target. \(I_{in}\) and \(I_{re}\) are the intensity of incident laser and reflected laser while \(\chi\) represents the laser energy absorption rate. From equation(2)-(4), the absorption rate is obtained: \[\chi\approx\frac{2}{a}\frac{n_{f}}{n_{c}}(1-\frac{1}{\gamma_{f}})\sqrt{\frac{n _{c}}{n_{e}}} \tag{5}\] Normalized laser amplitude \(a=\sqrt{I\lambda^{2}/1.37\times 10^{18}}\) and factor \(\gamma_{f}\sim 1+a^{2}(n_{c}/n_{e})\). In the collisional cases, the velocity of the return electrons \(v_{r}\) are reduced, resulting in a decrease in the density of the generated fast electrons \(n_{f}\). Meanwhile, due to the decrease in electric potential difference \(\Delta\phi\), the energy of the fast electrons \(\epsilon_{e}\) also decreases in collisional cases. The above are the reasons why the laser energy absorption rate decreases due to collision in the third stage. These conclusions can be clearly seen in Fig 8 (e) and (f). ## IV Effect of target charge state on fast electron generation In this section we discuss how the charge state of Au ions affects laser energy absorption and fast electron generation. To control the variables, we performed simulations with different fixed charge states whose electron density profiles are shown in Fig. 9(a). As can be seen from Fig. 9(b), in both collisional and collisionless cases, laser energy absorption rate decreases with the increasing of charge state Z of Au ions. Here absorption rate refers to the ratio of the total kinetic energy of fast electrons and ions generated in the simulation to the total incident laser energy. 
In order to better understand the reason for the decrease in laser energy absorption rate and verify the three-stage model in the previous section, Fig. 10(a) and (b) show the spatial profile of electron density, electron energy density, and the laser energy density corresponding to the charge state \(Z=5\) and \(Z=40\) at \(t=1\) ps, respectively. When the laser is incident, it will compress the pre-plasma into the target as is discussed in the previous section. During the compression of pre-plasma, electrons in the pre-plasma are pushed into the Au target by laser ponderomotive force, forming an charge separation field between these electrons and remaining Au ions to push the ions into the Au target. For cases with higher charge state Z, the ions have a higher charge to mass ratio Z/A and are easier to move under the electric field. Therefore, in the high-Z cases, the pre-plasma is quickly swept away, allowing the laser to interact with the solid target directly. As is shown in Fig. 10, for \(Z=5\) case there is still pre-plasma with an electron density \(\sim 1n_{c}\) in Figure 6: Time evolution of electron density in the two cases considering dynamic ionization. The first column represents collisionless case, and the second column represents collisional case. (a) (b) The first stage, (c) (d) the second stage, (e) (f) the third stage. Figure 7: Electron density and the corresponding longitudinal electric field Ex of the two cases considering dynamic ionization at \(t=400\) fs. (a) Collisionless case, (b) collisional case. The range of coordinates for the y-axis in (b) is the same as in (a), which we omit here for alignment with other figures. front of the target which means that the laser-plasma interaction still remain in the second stage for \(Z=5\) case, while for \(Z=40\) case the pre-plasma is almost swept away completely which means that the laser-plasma interaction has entered the third stage. Fig. 11 shows the electron longitudinal phase plot and corresponding electrostatic potential at \(t=1.1\) ps for \(Z=5\) cases (a) (c) and \(Z=40\) cases (b) (d). As expected, from the phase plot and potential, the two cases of \(Z=5\) remain in the second stage, while the two \(Z=40\) cases has entered the third stage. Fig. 12 shows the dynamic laser energy absorption rate calculated every \(0.1\) ps. It can be seen that the dynamic laser energy absorption rate of \(Z=40\) case decreases significantly after \(t=0.4\) ps while the dynamic laser energy absorption rate of \(Z=5\) case remains relatively high. Meanwhile less fast electrons are generated in \(Z=40\) case than in \(Z=5\) case as shown in Fig. 12(b). These results indicate that a fixed low charge state of the high-Z target which is usually assumed in the simulations is not valid. This is due to the fact that charge state will affect the evolution into the third stage and that the late stage absorption rate of the fixed low charge state will be very different from the fixed high charge state case and the dynamic ionization case. ## V Collisional effects on laser-target interaction In this section, collisional effects will be discussed, revealing how the presence of collision affects laser-target energy coupling in different stages. The 'collision' here refers to the binary Coulomb collision between charged particles, which is calculated in a natural manner based on the Monte Carlo method in our PIC code. As is shown in Fig. 
9(b), absorption rates in collisional cases are higher than those in collisionless cases except for the cases of \(Z=40\). Our results show that collision has both positive and negative effects on the laser energy absorption rate. The positive effect of collision is to prevent the laser from compressing the pre-plasma and make it easier for the electrons to diffuse into the vacuum in front of the target. Due to the presence of collision, there will be a thermal pressure \(2n_{e}T_{e}/3\) generated by the thermal motion of the electrons and the Coulomb force between the electrons. The coefficient \(2/3\) is due to \(T_{e}=E_{\rm hot}=3k_{B}T/2\), in which \(E_{\rm hot}\) is Figure 8: Electron phase plot and corresponding potential of the three stages. The first column (a) (c) (e) correspond to collisionless case and the second column (b) (d) (f) correspond to collisional case. (a) (b) \(t=200\) fs, (c) (d) \(t=400\) fs, (e) (f) \(t=1100\) fs. The potential difference at the target surface is given in the small windows of (e) (f). Figure 10: Spatial profile of (red) electron density, (green) electron energy density, and (blue) laser energy density for collisionless cases (a) \(Z=5\) and (b) \(Z=40\) at \(t=1\) ps. Figure 9: (a) Initial electron density profile of cases with different charge state of Au ions. (b) The relationship between laser energy absorption rate and charge state Z. The red line and the blue line represent collisional and collisionless cases with different charge state, respectively. electron average kinetic energy, \(k_{B}\) and \(T\) are Boltzmann constant and temperature of electrons, respectively. As is shown in Fig. 11(a) and (b), for the second stage of laser-plasma interaction (\(Z=5\)), it is easier for electrons in the collisional case to escape into the electric potential well and be accelerated, resulting in more fast electron generation and higher energy absorption rate. Therefore, in the second stage of laser-plasma interaction, collision is beneficial for laser-target energy coupling. The negative effect of collision is causing electrons in the return current scattered by ions. As is shown in Fig. 11(c) and (d), in the collisionless case of \(Z=40\), both the forward and return electron beams are more energetic. This is consistent with the result in Fig. 9(b) that shows a higher absorption rate in collisionless case than that in collisional case of \(Z=40\). In the collisionless case, electrons can travel back to the front surface of Au target through the return current and re-reach the LPI region. While in the collisional case, the electrons in the return current are scattered by the ions, which lowers the velocity of return electrons \(v_{r}\) in Eq. 3. So collision is harmful for laser-target energy coupling in the third stage of laser-plasma interaction. In summary, there are abundant electrons in the shelf plasma to be accelerated in the second stage, in which collision helps electrons diffuse into the electric potential well and be accelerated by the laser; however, there is almost no pre-plasma left in front of the target in the third stage so the supply of electrons mainly depends on the return current within the target which will be severely suppressed by collision. Therefore, though collision is beneficial for laser energy absorption in the second stage, it will be harmful in the third stage. And it can be inferred from Fig. 9(b) whether the laser-plasma interaction will enter the third stage is mainly determined by the charge state of Au ions. 
Figure 11: Electron phase plot and corresponding potential at \(t=1.1\) ps for (a) \(Z=5\) collisionless case, (b) \(Z=5\) collisional case, (c) \(Z=40\) collisionless case, (d) \(Z=40\) collisional case.

Figure 14: (a) Laser energy absorption rate in (blue) collisionless cases and (red) collisional cases with different blow-off plasma densities. (b) Fast electron spectrum of two collisionless cases (red) \(10n_{c}\) and (blue) \(100n_{c}\) at \(t=1\) ps.

Figure 12: (a) Time evolution of laser energy absorption rate under different conditions with fixed charge state of Au ions calculated every 0.1 ps. (b) Fast electron spectrum at \(t=1.1\) ps for (solid line) collisional cases and (dashed line) collisionless cases with (blue) \(Z=5\) and (red) \(Z=40\).

Figure 13: Initial electron density profile of cases with different blow-off plasma densities.

## VI Collisional effects on electron recirculation in the presence of blow-off plasma

Notice that the Au cone is typically inserted into plasma, which may influence the generation of fast electrons. Therefore, cases in which there are several micrometers of blow-off plasma behind the target should also be discussed. In order to study the effect of blow-off plasma on laser energy absorption and electron generation, the 5 \(\mu\)m-thick end of the target is replaced by blow-off plasma of different densities in the simulation. The initial density of the target is shown in Fig. 13. Here the target charge state is fixed at \(Z=40\) and only the blow-off plasma density differs between cases. Hereinafter, each case will be referred to by its blow-off plasma density. To verify that the gap of 5 \(\mu\)m behind the target is long enough, we added simulations in which the blow-off plasma density is still \(0n_{c}\) but the gap behind the target is increased to 15 \(\mu\)m. It is found that the laser energy absorption rates of the 5 \(\mu\)m gap and the 15 \(\mu\)m gap are almost the same. This is due to the fact that the 5 \(\mu\)m gap behind the target already prevents most of the electrons from leaving the right boundary. Therefore, it can be considered that the gap of 5 \(\mu\)m behind the target is long enough. It is believed that the blow-off plasma has the effect of avoiding fast electron recirculation [15]. Fig. 14(a) shows the laser energy absorption rate over the whole simulation duration of both collisionless and collisional cases with different blow-off plasma densities. With increasing blow-off plasma density, the absorption rates of the collisionless cases decrease significantly. The energy spectrum in Fig. 14(b) also shows that fewer fast electrons are generated in the collisionless \(100n_{c}\) case than in the \(10n_{c}\) case. In Fig. 15, electron density profiles at \(t=1\) ps in different cases are given. In general, as the blow-off plasma density increases from \(0n_{c}\) to \(1000n_{c}\), the electron density in front of the target declines gradually, which has been discussed in the previous section to be important in both the second and the third stages.

Figure 16: (a) Spectrum of electrons passing forwards through the rear observation plane at \(x=34\,\mu\)m; (b) Spectrum of electrons passing backwards through the front observation plane at \(x=21\) \(\mu\)m.
Figure 17: Electron longitudinal phase plot at \(t=1\) ps for collisionless cases (the first column) and collisional cases (the second column) with different blow-off plasma densities. (a) (b) \(1000n_{c}\); (c) (d) \(100n_{c}\); (e) (f) \(10n_{c}\); (g) (h) \(0n_{c}\).

Figure 15: Electron density profile at \(t=1\) ps for (a) collisionless and (b) collisional cases from \(0n_{c}\) to \(1000n_{c}\).

The difference in electron density between these cases is caused by the electron recirculation inside the target, which is influenced by the blow-off plasma density. Due to the presence of the sheath field behind the target, a large part of the electrons do not leave from the right boundary directly but return to the left until they come back to the target front surface, forming the reflux current and electron recirculation inside the target. Fig. 16(a) and (b) show the energy spectra of the electrons which travel forwards through the observation plane at \(x=34\) \(\mu\)m and travel backwards through the observation plane at \(x=21\) \(\mu\)m, respectively. When the density of the blow-off plasma increases to \(100n_{c}\), the number of electrons reflected back by the sheath field behind the target is reduced, which explains why the laser energy absorption rate decreases significantly in the collisionless \(100n_{c}\) case. The first column of Fig. 17 shows the longitudinal phase plots at \(t=1\) ps for different blow-off plasma densities in the collisionless cases, while the second column shows those for the collisional cases. With increasing blow-off plasma density, electrons in the forward bunches and the reflux current become less energetic. For collisionless cases, the electron density of the blow-off plasma has a significant effect on the electron recirculation. However, when collision is taken into account, the electron recirculation is suppressed and the effect of the blow-off plasma density is not so significant. As shown in Fig. 14, for collisional cases the laser energy absorption rate remains almost the same for different blow-off plasma densities, which means that the electron recirculation inside the target is negligible in the presence of collision. To quantitatively analyze the inhibitory effect of collision on the reflux current, we make an estimate of the stopping power within the target. The stopping power inside the target consists of an ohmic component \(\epsilon_{o}\) and a collisional component \(\epsilon_{c}\) [29]:

\[\begin{split}\frac{dE}{dx}&=\epsilon_{o}+\epsilon_{c}\\ &=-\frac{e^{2}q_{0}}{\sigma m_{e}c^{2}(\gamma_{0}-1)}\left(\frac{\gamma_{0}^{2}}{\gamma_{0}^{2}-1}\right)^{1/2}\frac{1}{\sqrt{\Gamma(E)}}\\ &\quad-\frac{4\pi e^{4}n_{i}}{m_{e}c^{2}R}\Gamma(E)[Z_{i}\Lambda_{fe}+(Z-Z_{i})\Lambda_{be}]\end{split} \tag{6}\]

where \(E\) is the energy of the fast electron, \(\gamma=1+E/m_{e}c^{2}\) is the relativistic factor of the electron, and \(\Gamma(E)=\gamma^{2}/(\gamma^{2}-1)\). \(E_{0}\) and \(\gamma_{0}\) are the initial electron energy and the initial relativistic factor of the electron, respectively. \(q_{0}\) is the initial energy flux density of fast electrons. \(m_{e}\), \(c\) and \(n_{i}\) are the electron rest mass, the speed of light and the ion density, respectively. \(Z_{i}=40\) is the charge state of the Au ions and \(Z=79\) is the maximum charge state of Au ions. \(\Lambda_{fe}\) and \(\Lambda_{be}\) are the Coulomb logarithms for a fast electron that collides with free and bound electrons, respectively.
\(\sigma\) is the plasma conductivity, while \(R\) is the electron-ion scattering factor:

\[R=\left[1-\exp\left(-\frac{\gamma\pi^{2}}{4Z_{i}}\frac{Z_{i}\Lambda_{fe}+(Z-Z_{i})\Lambda_{be}}{\Lambda_{i}}\right)\right]^{1/2} \tag{7}\]

where \(\Lambda_{i}=\ln[2m_{e}c^{3}(\gamma-1)(\gamma^{2}-1)^{1/2}/(Z_{i}e^{2}\omega_{p}\gamma)]\) is the Coulomb logarithm of fast electrons that collide with ions and \(\omega_{p}\) is the plasma frequency. To simplify the calculation, here we directly give the approximate formulas for the ohmic and collisional stopping powers [29]:

\[\epsilon_{o}\left[\text{MeV}/\mu\text{m}\right]\approx 0.125\frac{\chi E}{\lambda^{2}(1+5.5\cdot 10^{2}T^{3/2}Z_{i}^{-1}\Lambda_{ei}^{-1})} \tag{8}\]

\[\epsilon_{c}\left[\text{MeV}/\mu\text{m}\right]\approx 0.32\cdot 10^{-4}\frac{\rho[Z_{i}\Lambda_{fe}+(Z-Z_{i})\Lambda_{be}]}{ZR} \tag{9}\]

Here \(E\) is the electron energy in units of MeV, \(\chi\approx 0.2\) is the laser energy absorption rate, \(\lambda=1\) is the laser wavelength in units of \(\mu\)m, \(T\approx 4\) is the target temperature in units of keV, and \(\rho=19.32\) is the target density in units of g/cc. Taking reasonable approximate values of \(\Lambda_{ei}\approx 10\), \(\Lambda_{fe}\approx 6\), \(\Lambda_{be}\approx 2\) and \(R\approx 0.6\), it can be obtained that for a typical electron generated in the third stage with energy 0.1 MeV, the ohmic stopping power is \(\epsilon_{o}\approx 2\times 10^{-4}\) MeV/\(\mu\)m and the collisional stopping power is \(\epsilon_{c}\approx 4\times 10^{-3}\) MeV/\(\mu\)m, which is an order of magnitude higher than the ohmic stopping power. At this point, the phenomenon observed in Fig. 14(a) can be explained. Fast electrons generated near the target front surface pass through the target and are reflected by the sheath field at the rear surface. The reflected electrons then pass through the target back again to the front surface, forming the electron recirculation. In this process, the distance that the electrons travel is twice the target thickness (20 \(\mu\)m). For the collisional case, the energy loss of these electrons is \(\Delta E\approx 0.084\) MeV. Therefore, the number of low-energy electrons (\(E\sim 0.1\) MeV) in the recirculation is greatly reduced, resulting in relatively low laser energy absorption rates. In addition, collision can change the direction of the electron velocity. As shown in Fig. 18, the transverse momentum of electrons in the collisional case is significantly larger than that in the collisionless case. This effect also inhibits electron recirculation because the longitudinal momentum of the electrons is transferred to the transverse direction.

Figure 18: Electron transverse phase plot at \(t=1\) ps for collisionless cases (the first column) and collisional cases (the second column) with different blow-off plasma densities. (a) (b) \(1000n_{c}\); (c) (d) \(100n_{c}\); (e) (f) \(10n_{c}\); (g) (h) \(0n_{c}\).
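As a rough numerical cross-check of the approximate formulas (8)-(9) and the recirculation energy-loss estimate above, the short Python sketch below evaluates them with the parameter values quoted in the text (the script and its variable names are ours, not part of the original simulation code):

```python
# Numerical check of the approximate stopping-power formulas (8)-(9)
# with the parameter values quoted in the text; all names are ours.

def ohmic_stopping(E, chi=0.2, lam=1.0, T=4.0, Z_i=40, Lambda_ei=10.0):
    """Ohmic stopping power [MeV/um], Eq. (8)."""
    return 0.125 * chi * E / (lam**2 * (1.0 + 5.5e2 * T**1.5 / (Z_i * Lambda_ei)))

def collisional_stopping(rho=19.32, Z_i=40, Z=79,
                         Lambda_fe=6.0, Lambda_be=2.0, R=0.6):
    """Collisional stopping power [MeV/um], Eq. (9)."""
    return 0.32e-4 * rho * (Z_i * Lambda_fe + (Z - Z_i) * Lambda_be) / (Z * R)

E = 0.1                           # typical third-stage electron energy [MeV]
eps_o = ohmic_stopping(E)         # ~2e-4 MeV/um
eps_c = collisional_stopping()    # ~4e-3 MeV/um
path = 20.0                       # round trip through the target [um]
dE = (eps_o + eps_c) * path       # ~0.08-0.09 MeV, consistent with the quoted 0.084 MeV
print(f"eps_o = {eps_o:.2e} MeV/um, eps_c = {eps_c:.2e} MeV/um, dE ~ {dE:.3f} MeV")
```

With these inputs the script returns \(\epsilon_{o}\approx 2.1\times 10^{-4}\) MeV/\(\mu\)m, \(\epsilon_{c}\approx 4.1\times 10^{-3}\) MeV/\(\mu\)m and \(\Delta E\approx 0.09\) MeV over a 20 \(\mu\)m round trip, consistent with the order of magnitude of the estimate quoted above.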
## VII Summary

We have described in detail the three-stage model of laser-plasma interaction, introducing the mechanism of fast electron generation in each stage. Using the three-stage model, the roles of ionization, collision, and blow-off plasma in laser-target interaction and fast electron generation are illustrated. With the increase of the charge state of the Au ions, the laser-plasma interaction moves to the later stage, resulting in a decrease in the laser energy absorption rate. Collision provides a thermal pressure that makes it easier for electrons to escape into the vacuum in front of the target and be accelerated. On the other hand, collision increases the stopping power within the target and decelerates the reflux current of electrons. Therefore, collision is beneficial for laser energy absorption in the second stage but harmful in the third stage. When there is blow-off plasma behind the target, the electron density of the blow-off plasma has a significant impact on the laser absorption rate and electron recirculation in collisionless cases, while the electron recirculation inside the target is suppressed severely in collisional cases. Therefore, the laser energy absorption remains low and almost unchanged for different blow-off plasma densities in collisional cases. The results show that in the presence of collision, the electron recirculation inside the target is negligible. Based on the results of this paper, it is recommended to apply a layer of low-Z material on the inner surface of the gold cone to prevent the laser-plasma interaction from entering the third stage. In addition, the gold target should be thin to reduce the energy loss and the change of momentum direction of the fast electron beam during transport in the gold cone.

## VIII Acknowledgments

This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences (Grant Nos. XDA25010100 and XDA250050500), National Natural Science Foundation of China (Grants No. 12075204), and Shanghai Municipal Science and Technology Key Project (No. 22JC1401500). D. Wu thanks the sponsorship from Yangyang Development Fund.
2302.01281
Redesigning Electronic Health Record Systems to Support Developing Countries
Electronic Health Record (EHR) has become an essential tool in the healthcare ecosystem, providing authorized clinicians with patients' health-related information for better treatment. While most developed countries are taking advantage of EHRs to improve their healthcare system, it remains challenging in developing countries to support clinical decision-making and public health using a computerized patient healthcare information system. This paper proposes a novel EHR architecture suitable for developing countries--an architecture that fosters inclusion and provides solutions tailored to all social classes and socioeconomic statuses. Our architecture foresees an internet-free (offline) solution to allow medical transactions between healthcare organizations, and the storage of EHRs in geographically underserved and rural areas. Moreover, we discuss how artificial intelligence can leverage anonymous health-related information to enable better public health policy and surveillance.
Jean Marie Tshimula, D'Jeff K. Nkashama, Kalonji Kalala, Maximilien V. Dialufuma, Mbuyi Mukendi Didier, Hugues Kanda, Jean Tshibangu Muabila, Christian N. Mayemba
2023-01-31T19:16:38Z
http://arxiv.org/abs/2302.01281v1
# Redesigning Electronic Health Record Systems to Support Developing Countries ###### Abstract Electronic Health Record (EHR) has become an essential tool in the healthcare ecosystem, providing authorized clinicians with patients' health-related information for better treatment. While most developed countries are taking advantage of EHRs to improve their healthcare system, it remains challenging in developing countries to support clinical decision-making and public health using a computerized patient healthcare information system. This paper proposes a novel EHR architecture suitable for developing countries--an architecture that fosters inclusion and provides solutions tailored to all social classes and socioeconomic statuses. Our architecture foresees an internet-free (offline) solution to allow medical transactions between healthcare organizations, and the storage of EHRs in geographically underserved and rural areas. Moreover, we discuss how artificial intelligence can leverage anonymous health-related information to enable better public health policy and surveillance. ## 1 Introduction Electronic health record (EHR) systems provide a secure, integrated collection of patient and population electronically-stored health information in a digital format (Odekunle et al., 2017; Kukafka et al., 2007; Akanbi et al., 2012; Adetoyi and Raji, 2020; Kavuma, 2019; Kohli and Tan, 2016); it provides a comprehensive digital view of a patient's health history with the goals of eliminating legibility problems with handwritten records; enabling remote access of health records; facilitating intervention earlier in the course of the disease, patient care, and outcomes; increasing efficiency and lowering costs; and ameliorating billing procedures (Schmitt and Wofford, 2002; Erstad, 2003). The potential benefits of EHR systems have enabled its wide adoption in developed and some emerging countries (Black et al., 2011). While most developed countries are taking advantage of EHRs to improve their healthcare system, it remains challenging in developing countries to support clinical decision-making and public health using a computerized patient healthcare information system. Some developing countries including sub-Saharan Africa still predominantly use paper-based systems in healthcare delivery, instead of computerized patient management systems (Odekunle et al., 2017; Akanbi et al., 2012; Adetoyi and Raji, 2020; Kavuma, 2019; Kohli and Tan, 2016). The lack of an EHR system may lead to issues in managing patient health data to improve the quality of patient care and safety through decision support to clinicians. For instance, patient P lives in city X, travels to city Y in the same country and falls sick during her stay. Since clinician C in Y does not have more health data about patient P, (i) treatment options provided to P could cause some important problems involving past health issues and (ii) prescription drugs delivered to P could ignore her medical history. Medication errors can result in a substantial economic burden on patients, side effects, and irreversible consequences; there is a huge spectrum of medication errors. Some errors may be minors and others may lead to adverse events causing complications and higher mortality (Bates and Slight, 2014; Forster et al., 2008). However, EHR systems can potentially reduce prescription errors and adverse drug interactions (Chaudhry et al., 2006) and make available medical history data during emergency care (Stiell et al., 2003). 
This data provides vital medical history details and gives clinicians more options to decide which treatment best corresponds to the problem and when it should be administered. We pose the question: _"How could we replace paper-based systems with EHR systems in the context of developing countries?"_. A study identified some factors hindering the widespread adoption of EHR systems in developing countries. The identified factors include, but are not limited to, the high cost of procurement and maintenance, poor electricity supply and internet connectivity (Odekunle et al., 2017). This paper therefore proposes an EHR architecture that addresses the previously mentioned factors. We believe that the implementation of EHR systems in the style of industrialized countries may fail to function and provide solutions in the context of developing countries. To implement an EHR system in developing countries, besides the aforementioned issues, we also address the issues related to social inclusion, discrimination and socioeconomic status in healthcare. Everyone qualifies for health monitoring regardless of personal income or standard of living. We propose a straightforward architecture to implement an EHR system that fosters inclusion and provides solutions tailored to all social classes. The proposed architecture takes into consideration internet coverage, electricity, and infrastructure issues and foresees alternative solutions to skirt these issues. More interestingly, our architecture proposes an internet-free alternative (an offline solution) to allow medical transactions within hospitals and clinics and the storage of EHRs in geographically underserved and rural areas. Note that the offline solution does not require relatively expensive terminals (such as computers, tablets, and smartphones) to establish connections between healthcare organizations. The motivation behind this solution is to bridge inequalities in healthcare and allow healthcare organizations with limited means to access EHR systems with any type of mobile phone that they possess. Additionally, the proposed architecture foresees the utilization of artificial intelligence to enable better public health policy and surveillance in (i) monitoring patterns suggesting disease outbreaks, (ii) predicting disease based on symptoms, medical records, and treatments over time, and (iii) providing drug recommendations. The rest of this paper is organized as follows. A brief outline of some related work is given in §2. Section 3 describes the proposed architecture. We discuss the scope of the proposed architecture, challenges, and opportunities in §4. We describe ethical considerations in §5. Finally, we conclude and present future directions in §6.

## 2 Related work

Researchers investigated qualitative and quantitative methods of storing patient data and reported that an electronic storage and indexing system is a more suitable method for administering medical records (Kohli and Tan, 2012). In order to manage patient data, many studies addressed the problem of the implementation and adoption of EHR systems in the context of developing countries (Adetoyi and Raji, 2020; Odekunle et al., 2017; Akanbi et al., 2012; Kavuma, 2019; Akwaowo et al., 2022; Sood et al., 2008; Fraser et al., 2005; Syzdykova et al., 2017; Kamadjeu et al., 2005; Sabi et al., 2018). For instance, Adetoyi and Raji (2020) proposed a design framework for inclusion of EHRs in medical informatics of sub-Saharan Africa; and Kamadjeu et al.
(2005) experimented with the use of an EHR system in urban primary health care practice in Cameroon. Jawhari et al. (2016) examined EHR deployments in sub-Saharan African slums by considering the systems, people, processes, and product factors that endorse a crucial involvement in the fate of its implementation, also equating difficulties in knowledge and learning opportunities for EHR use in resource-constrained settings. On similar lines, Kavuma (2019) evaluated the implementation of electronic medical record systems in sub-Saharan Africa and then assessed their usability based on a defined set of metrics. While the EHR systems proposed in (Adetoyi and Raji, 2020; Kamadjeu et al., 2005) have implemented good strategies to permit healthcare personnel to quickly access patient data to support healthcare delivery, many factors still affect their adoption. The studies in (Odekunle et al., 2017; Sabi et al., 2018; Akwaowo et al., 2022) highlighted some factors hindering the facilitation of broad adoption of EHR systems in sub-Saharan Africa, including the lack of infrastructure, electricity outages, and internet coverage issues (Odekunle et al., 2017).

Figure 1: Architectural diagram of an EHR system tailored to the context of developing countries.

In this paper, we introduce an architecture that addresses these factors and proposes an internet-free solution for accessing an EHR system with relatively minor dependence on the previously mentioned factors. The rationale behind this is to facilitate the use of EHRs in healthcare delivery, make the EHR system accessible to everyone without exception, including slums and rural areas and regardless of socioeconomic status, and combat inequalities in healthcare. The geographical restrictions of the internet represent an important challenge for the development of Africa (Counted and Arawole, 2016). Research discovered that demographic and socioeconomic factors as well as complementary infrastructure are also important factors in internet adoption (Rodriguez-Castelan et al., 2021). Usually, individuals utilize mobile broadband internet to access the internet in developing countries, in order to get maternal health support, detect fake drugs and access a digital health-financing platform (Holst et al., 2020). In healthcare settings, the internet can help clinicians rapidly access patient data and identify suitable treatment plans. Since the lack of infrastructure slows down EHR adoption, our architecture proposes a simple solution that bridges inequalities in healthcare. The particularity of our solution is that it utilizes technology, such as short code1 and Unstructured Supplementary Service Data (USSD),2 to access EHR systems from mobile phones. USSD is a communication protocol used by mobile devices to communicate with a network service provider. USSD can be used to access various services, such as checking account balances or subscribing to a service. It is possible to use USSD to access electronic health records, but this typically involves the creation of a system in which healthcare providers can input and update information into the EHR system using USSD, and access patient records by sending a USSD request to the system. This could be useful in situations where patients or healthcare providers do not have access to a computer or smartphone, or where internet connectivity is unreliable.
Footnote 1: Understanding the Common Short Code: Its Use, Administration, and Tactical Elements: [https://identitypraxis.com/2006/09/01/understanding-the-common-short-code-its-use-administration-and-tactical-elements](https://identitypraxis.com/2006/09/01/understanding-the-common-short-code-its-use-administration-and-tactical-elements). Accessed 2 January 2023

Footnote 2: USSD (Unstructured Supplementary Service Data): [https://www.techtarget.com/searchetworking/definition/USSD](https://www.techtarget.com/searchetworking/definition/USSD). Accessed 2 January 2023

## 3 Proposed architecture

We propose an EHR system management architecture suitable for developing countries. The proposed architecture improves the accessibility of patient data by clinicians in urban, geographically underserved and rural areas. Figure 1 illustrates the proposed architecture, which includes a centralized database (EHR Database), and a web-based, mobile-based, and USSD-based EHR system. This section presents each sub-system (component) of the architecture and discusses its specific role and the constraints under which it works properly.

Figure 2: Illustration of the proposed architecture at the module level

### Module 1: Web-based EHR system

The web-based EHR system (WES) allows clinicians to access patient medical records via a browser on any device, including a computer and a smartphone. As shown in Figure 1(a), WES could be hosted on a server with uninterrupted internet access. Note that communication between the WES and the EHR database also goes over the internet. Interactions start from the WES; for instance, suppose a clinician wants to access a patient's prescription records. The clinician should first provide credentials for authentication purposes. Once the clinician successfully logs in, the system sends her request to the web server over the internet, and then the web server queries the EHR database to obtain the data requested by the WES. Ultimately, the web server responds to the web application with the requested information. One of the advantages of WES is that it provides a friendlier interface. It can display different types of data, including images such as medical scans. However, we note that the architecture of WES depends strongly on the internet as its sole communication channel. Consequently, WES can only be used at hospitals that can afford computers, internet, and servers. In the context of developing countries, WES seems to be an inadequate solution because it constantly requires the internet and electricity.

### Module 2: Mobile-based EHR system

Unlike WES, the mobile-based EHR system (MES) allows access to EHRs only on mobile devices. Figure 2b shows the functioning of MES. Interactions in MES are similar to those in WES; the major difference resides in the mobility of MES and its capability to work offline. The instability (or lack) of internet access and untimely power cuts in developing countries prevent hospitals from operating normally. Therefore, we propose that MES embeds a lite database called _DB lite_. In the absence of the internet, EHRs can be stored in this database locally. A syncing process is foreseen to transfer locally stored data to the centralized database when the internet connection is established; the syncing process aims to ensure that both DB lite and the centralized database are up to date. Since MES requires smartphones and, partially, the internet to work properly, it could be challenging for healthcare organizations that are unable to afford smartphones and internet subscriptions.
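To make the MES offline mode concrete, the following is a minimal sketch (not the paper's implementation) of how records could be written to DB lite and pushed to the centralized database once connectivity returns; the table name, fields, and transport callback are illustrative assumptions:

```python
# Illustrative offline-first flow for MES: write to a local "DB lite" store,
# then push unsynced records to the centralized EHR database when online.
# All names, fields and callbacks are hypothetical.
import json
import sqlite3

def save_record_locally(db_path, patient_id, record):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS ehr_outbox "
                "(id INTEGER PRIMARY KEY, patient_id TEXT, payload TEXT, synced INTEGER)")
    con.execute("INSERT INTO ehr_outbox (patient_id, payload, synced) VALUES (?, ?, 0)",
                (patient_id, json.dumps(record)))
    con.commit(); con.close()

def sync_when_online(db_path, push_to_central, is_online):
    """Push unsynced records to the centralized database when connectivity returns."""
    if not is_online():
        return 0
    con = sqlite3.connect(db_path)
    rows = con.execute("SELECT id, patient_id, payload FROM ehr_outbox WHERE synced = 0").fetchall()
    for row_id, patient_id, payload in rows:
        push_to_central(patient_id, json.loads(payload))   # e.g., HTTPS call to the web server
        con.execute("UPDATE ehr_outbox SET synced = 1 WHERE id = ?", (row_id,))
    con.commit(); con.close()
    return len(rows)
```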
### Module 3: USSD-based EHR system USSD (Unstructured Supplementary Service Data) is a communication protocol used by GSM cellular networks to send text messages between a mobile phone and an application program in the network. It allows users to access various services, such as banking and information services, by dialing a code on their phone's keypad and following the prompts. USSD messages are transmitted over the same channels as voice calls, but they do not require a dedicated connection to be established, as is the case with SMS (Short Message Service) texts. This makes USSD a faster and more efficient way to send text-based data between a mobile phone and a server (Zhou et al., 2015; Lakshmi et al., 2017). An EHR architecture using USSD could enable healthcare providers to access and manage patient data and interact with the healthcare system, using simple text-based commands sent via USSD, even in areas with limited internet connectivity (see Figure 2c). (i) The USSD gateway acts as the interface between the mobile network and the EHR system. It is responsible for receiving USSD requests from mobile phones, parsing the requests, and sending them to the EHR system for processing. (ii) The EHR server stores and manages the electronic health records of patients. It receives requests from the USSD gateway, processes them, and sends back the appropriate response. (iii) The EHR database stores electronic health records and other relevant data, such as patient demographics, medications, allergies, and medical history. (iv) The USSD menu system provides the interactive menu system that mobile users interact with when using the EHR system via USSD. It allows users to navigate through the different options and select the appropriate one to perform a specific task, such as viewing their medical history or requesting a prescription refill. (v) The security layer is responsible for ensuring the confidentiality and integrity of the data transmitted between mobile phones and the EHR system. It may include measures such as encryption and authentication. One of the advantages of the USSD-based EHR system (UES) over WES and MES is that it can be used on any type of mobile phone, even feature phones that do not have internet access; this means that it can potentially be accessed by a wider range of users, including those in rural or low-income areas where internet access may be limited. USSD is a very simple and user-friendly technology, requiring only a basic understanding of how to use a phone keypad; this makes it easy for users to navigate and use the EHR system, even if they are not technically savvy. USSD communication is conducted over the airwaves and is not stored on the device, which means that it is relatively less vulnerable to hacking or data breaches compared to WES and MES. We believe that UES can be a suitable solution for facilitating the utilization of the EHR system in geographically underserved and rural areas, as it requires fewer resources and infrastructure. ## 4 Discussion This paper presents a novel EHR architecture suitable for developing countries--an architecture organized into three sub-systems. We discuss the benefits of each sub-system and show how the proposed architecture fosters inclusion and provides solutions adapted to all social classes and socioeconomic statuses. 
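To illustrate how the USSD menu system of §3.3 could behave in practice, here is a minimal, hypothetical handler written in the style of some commercial USSD gateways; the "CON"/"END" reply prefixes, menu options, codes, and lookup function are illustrative assumptions rather than the paper's specification:

```python
# Hypothetical sketch of the USSD menu layer (component iv): the gateway forwards
# the dialled string, the handler walks the menu and queries the EHR server.

def handle_ussd_request(session_id, msisdn, text, ehr_lookup):
    """Return (reply, end_session) for a USSD request forwarded by the gateway."""
    steps = text.split("*") if text else []
    if not steps or steps == [""]:
        return ("CON EHR menu:\n1. View medical history\n2. Request prescription refill", False)
    if steps[0] == "1":
        if len(steps) == 1:
            return ("CON Enter patient ID:", False)
        record = ehr_lookup(steps[1])          # query to the EHR server/database
        summary = record.get("summary", "No record found") if record else "No record found"
        return (f"END History: {summary}", True)
    if steps[0] == "2":
        return ("END Refill request submitted to the clinician.", True)
    return ("END Invalid option.", True)

# Example: a clinician dials the short code, chooses 1, then enters a patient ID.
reply, done = handle_ussd_request("s1", "+243000000000", "1*P-0042",
                                  lambda pid: {"summary": "hypertension; amlodipine 5 mg"})
```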
We show that WES and MES depend on the internet and infrastructure such as computers, smartphones, and servers, even if MES has an offline option that allows it to store EHR data locally, and further syncs this data to the centralized database. A limitation of MES is that the centralized database gets completely updated only when locally stored EHRs are synced. This could be critical in certain scenarios. _For instance, suppose that patient P was admitted to hospital H1 and her EHRs were stored locally in a device at H1. For medical reasons, hospital H1 decides to transfer P to hospital H2 for more highly specialized care. Hospital H2 could encounter difficulty accessing the recent EHRs of patient P and providing a medical intervention because of the sync issue._ While WES and MES provide straightforward scenarios, we demonstrated that these systems are inadequate for developing countries because of untimely power cuts and internet issues. We argue that UES bridges inequalities in healthcare in areas with limited internet or technological infrastructure to access their health records and other healthcare information. Similar to WES and MES, UES can help healthcare providers make more informed decisions about treatment, reducing the risk of errors or misdiagnosis. One of the advantages of UES is it provides the possibility to access health records at a lower cost and can improve the quality of care for underserved populations by providing healthcare providers with access to a patient's complete medical history. UES can help reduce healthcare costs for underserved populations by enabling the sharing of medical records and other healthcare information between healthcare providers. This can help prevent duplication of tests and other unnecessary procedures, which can be particularly important for patients who may not have the resources to pay for multiple visits or procedures. There are also some potential drawbacks to using USSD for EHR systems, including the limited amount of data that can be transmitted using USSD and the lack of support for multimedia content. Our architecture proposes EHR systems that respond to the limitations of (Adetoyi and Raji, 2020; Kamadjeu et al., 2005; Jawhari et al., 2016), foster social inclusion, and can facilitate EHR adoption in developing countries. Many healthcare providers in developing countries may not be familiar with using EHR systems, so providing training and support can help ensure that they are able to use the systems effectively (Akwaowo et al., 2022; Odekunle et al., 2017; Fraser et al., 2005). Beyond EHR data storage and manipulation, we can utilize artificial intelligence (AI) to analyze EHR data to enable public health policy and surveillance in a number of ways. One potential use of AI is the analysis of large amounts of data contained in EHRs to identify patterns and trends in the data that may be relevant to public health. This can help public health officials identify potential health threats and take appropriate action to prevent or mitigate them. Another use of AI in the context of public health surveillance is the ability to predict and prevent outbreaks of infectious diseases by analyzing EHR data along with other factors such as population density, travel patterns, and weather conditions (Yang et al., 2022; Schwartz et al., 2019; Ayala Solares et al., 2020; Wong et al., 2018). 
By identifying clusters of patients with similar symptoms or diagnoses, AI models can help public health officials anticipate where and when outbreaks are likely to occur and take steps to prevent them. In addition, AI can be used to optimize the allocation of limited public health resources, such as vaccines and medications, by analyzing EHR data to identify which patients are most in need of these resources. Overall, the use of AI in EHRs can help to improve the effectiveness of public health policy and surveillance efforts, particularly in developing countries where access to data and resources may be limited.

## 5 Ethical considerations

There are important ethical considerations to cover in regard to EHR systems. It is important to ensure that patient information is always kept confidential and secure. This means that access to EHRs should be restricted to authorized healthcare personnel only and that measures such as encryption and secure authentication should be used to protect patient data. Patients have a right to privacy; i.e., personal information should not be shared without the patient's consent, and measures should be in place to prevent unauthorized access to patient data. Patients have the right to know who has access to their personal data, and to control who can view and use it (Ozair et al., 2015; Genes and Appel, 2013). Note that EHR data can contribute to the improvement of public health policies and allow governments or health organizations to make timely decisions. For such a purpose, patient data should be anonymous and aggregated. It is critical to ensure that no reverse engineering could recover patients' personal information. For data analysis by geographical zones, we suggest excluding geographical zones with very low population density from such analysis in order to avoid the disclosure of personal information from aggregates.

## 6 Conclusion

This paper proposes a novel EHR architecture tailored to the context of developing countries. The proposed architecture considers issues related to the internet and electricity and the lack of infrastructure in developing countries and provides solutions adapted to geographically underserved and rural areas. We show how this architecture fosters social inclusion and discuss how the use of AI, on data stemming from the proposed architecture, can help to improve the effectiveness of public health policy and surveillance efforts in developing countries. Additionally, we discuss a few measures and ethical considerations that should be taken while manipulating patient data. In the future, we would like to build AI models that use metadata-induced contrastive learning to (i) provide drug recommendations within an EHR system and (ii) learn patient representations from EHR data to predict dangerous cases of polypharmacy usage and discover sociodemographic biases in the outcomes of polypharmacy usage.

## Acknowledgments

The authors thank Moise Mbikayi, Rene Manasse Galekwa, Senghor Abraham Gihonia, and Cady Nyome Gbomosa for helpful discussions and comments on early drafts.
2309.13291
Reinforcement Learning for Robust Header Compression under Model Uncertainty
Robust header compression (ROHC), critically positioned between the network and the MAC layers, plays an important role in modern wireless communication systems for improving data efficiency. This work investigates bi-directional ROHC (BD-ROHC) integrated with a novel architecture of reinforcement learning (RL). We formulate a partially observable \emph{Markov} decision process (POMDP), in which agent is the compressor, and the environment consists of the decompressor, channel and header source. Our work adopts the well-known deep Q-network (DQN), which takes the history of actions and observations as inputs, and outputs the Q-values of corresponding actions. Compared with the ideal dynamic programming (DP) proposed in the existing works, our method is scalable to the state, action and observation spaces. In contrast, DP often suffers from formidable computational complexity when the number of states becomes large due to long decompressor feedback delay and complex channel models. In addition, our method does not require prior knowledge of the transition dynamics and accurate observation dependency of the model, which are often not available in many practical applications.
Shusen Jing, Songyang Zhang, Zhi Ding
2023-09-23T07:21:47Z
http://arxiv.org/abs/2309.13291v1
# Reinforcement Learning for Robust Header Compression under Model Uncertainty ###### Abstract Robust header compression (ROHC), critically positioned between the network and the MAC layers, plays an important role in modern wireless communication systems for improving data efficiency. This work investigates bi-directional ROHC (BD-ROHC) integrated with a novel architecture of reinforcement learning (RL). We formulate a partially observable _Markov_ decision process (POMDP), in which agent is the compressor, and the environment consists of the decompressor, channel and header source. Our work adopts the well-known deep Q-network (DQN), which takes the history of actions and observations as inputs, and outputs the Q-values of corresponding actions. Compared with the ideal dynamic programming (DP) proposed in the existing works, our method is scalable to the state, action and observation spaces. In contrast, DP often suffers from formidable computational complexity when the number of states becomes large due to long decompressor feedback delay and complex channel models. In addition, our method does not require prior knowledge of the transition dynamics and accurate observation dependency of the model, which are often not available in many practical applications. Bi-directional robust header compression (BD-ROHC), network layer, packet header. ## I Introduction Advancements in recent communication generations have greatly enhanced bandwidth efficiency through technologies at the PHY/MAC layer, reaching performance levels close to their limits [1]. Consequently, little room is left for further improvement at PHY/MAC layer. With the widespread adoption of Internet Protocols (IP) in numerous applications and services, the move towards all-IP packet-switched architectures in wireless network infrastructures has become prominent [2]. Future improvements in wireless networks should not only concentrate on MAC and PHY layer techniques, but also encompass a greater focus on optimizing IP-based protocol stacks across wireless infrastructures. An IP packet consists of a header and a payload, where the header contains essential system information such as version, time to live, and IP addresses. In certain applications, such as Voice-over-Internet-Protocol (VoIP) and Internet-of-Things (IoT), the header can be comparable to, or even larger in size than, the payload, which can compromise the overall data efficiency of packets transmission. To address this issue, a mechanism called robust header compression (ROHC) was developed [3, 4, 5, 6]. ROHC takes advantage of the fact that many fields in the header tend to change slowly throughout the lifetime of a data flow. It selects reference values for these fields and only encodes the small deviations from these reference values in the header. By compressing the header in this manner, ROHC reduces the overhead associated with transmitting IP packets, thereby improving packet network efficiency. The adoption of ROHC has been widespread in wireless packet switch networks such as 4G-LTE [7] and 5G-NR [8], and it has a strong potential in IoT scenarios where packets with short payload are prevalent. By minimizing the header length without sacrificing important system information, ROHC enhances the efficiency of IP-based communication in various wireless networks. Despite its widespread deployment, ROHC has not received significant attention. However, a few studies have focused on improving and analyzing the performance of ROHC. 
As an early work, the authors of [9] proposed configurations of ROHC for scenarios with scarce-resource links to improve efficiency and robustness. Window-based least significant bits (W-LSB) encoding [3] is one of the common compression methods in ROHC. The authors of [10] studied the impact of different channel conditions on W-LSB encoding in ROHC and pointed out that a smaller window size is preferable when the channel condition is good. However, the existing works only considered memoryless channels. A more recent work [11] filled this gap by adopting the Gilbert-Elliott dynamic channel model in ROHC and described the system behavior with mathematical models. The proposal of [12] leveraged hybrid ARQ (HARQ) information from the PHY/MAC layer to facilitate ROHC design. More recently, the authors of [13] first formalized U-mode ROHC as a partially observable _Markov_ decision process (POMDP), in which trans-layer information, including HARQ, is used as a partial observation to support the decision-making of the compressor. A subsequent work [14] proposed bi-directional ROHC (BD-ROHC), in which the compressor may request feedback from the decompressor; it formulated BD-ROHC as a POMDP and proposed the optimal solution using dynamic programming (DP). Although DP [14] provides the optimal solution for BD-ROHC, it has two practical issues. First, DP becomes computationally prohibitive when the number of POMDP states becomes large, as a result of long feedback delays and complicated channel models. Second, DP, together with other existing approaches, relies on the transition dynamics and observation probabilities of the POMDP model, which are often unavailable or inaccurate in many practical applications. To address these issues, we propose a BD-ROHC design using reinforcement learning (RL). Specifically, we adopt a deep Q-network (DQN) that takes the history of actions and observations as inputs and generates as outputs the Q-value corresponding to each action. This RL framework enables us to handle POMDPs with a vast number of states, including cases where the state space is infinite. Moreover, the DQN training process only relies on collected episodes, eliminating the need for explicit knowledge of the transition dynamics. The double DQN (DDQN) technique is further deployed to improve the convergence and stability of the learning process. Our simulation results demonstrate that the proposed RL method achieves better transmission efficiency than benchmarks under different channel models and parameter settings, without prior knowledge of the model dynamics. The rest of this paper is organized as follows: Section II reviews the system model, including the basic functionality of the compressor and the decompressor. Section III delivers the POMDP formulation. Section IV provides details on the deployment of deep Q-learning for BD-ROHC. Section V demonstrates the proposed design through simulations. Section VI finally summarizes the work.

## II System Model

Fig. 1 presents the system diagram of BD-ROHC. The compressor selects compressed headers with different lengths for the packets to be transmitted. The decompressor tries to decode the headers of packets received from the channel. Decoding failures can happen due to the imperfection of the channels or over-aggressive compression of the headers. The compressor uses trans-layer information from the lower layers (MAC/PHY) as observations to support its decision making, such as channel quality information (CQI), hybrid ARQ (HARQ) feedback and the frequency of header context initialization.
In the bi-directional setting, the ROHC compressor may also request feedback from the decompressor, which can be used for decision making as well. We now discuss the components and functionalities of BD-ROHC in details. ### _Headers_ Let \(\alpha_{C}[t]\) denote the compressor's decision on the headers at the \(t\)-th slot. There are three choices of headers, namely initialization and refresh (IR) header, compressed header with \(7\)-bits CRC (CO7) and compressed header with \(3\)-bits CRC (CO3), denoted as \(\alpha_{C}=0,1,2\), respectively. * **IR header** (\(\alpha_{C}[t]=0\), longest with length \(L_{0}\)) is the full length header, which is used to establish header context at the decompressor. The decompressor can decode compressed header, i.e., CO7 and CO3, only if the header context has been established. * **CO3 header** (\(\alpha_{C}[t]=2\), shortest with length \(L_{2}\)) is the fully compressed header. It can be decoded only if the decompressor maintains a header context. Due to the imperfection of the channels, decoding failures can happen even if header context is maintained. After a few successive failures, the context will be damaged and CO3 will not be useful. * **CO7 header** (\(\alpha_{C}[t]=1\), of medium length \(L_{1}\)) is used for repairing the damaged header context. If the decompressor successfully decode a CO7 header, the damaged header context can be repaired, after which CO3 header can be decoded again with the header context. In general, longer headers are more likely to be successfully decoded by the decompressor, but it has low packet transmission efficiency defined as \(L/(L+L_{i})\), where \(L\) is the length of the payload. Note that headers are not always compressible. Whether a header is compressible depends on the header sources at the transmitter. We denote the compressibility of the header at the \(t\)-th slot as \(\sigma_{S}[t]\), and use \(\sigma_{S}[t]=1\) and \(\sigma_{S}[t]=0\) to represent compressible and uncompressible header, respectively. If a header is uncompressible, an IR or CO7 header will be required, i.e., \(\alpha_{C}[t]=0,1\), and CO3 header will not be taken by the decompressor. We assume \(\sigma_{S}[t]\) evolves as a \(d_{S}\)-th order _Markov_ model with dynamic \(\mathcal{T}_{S}(\sigma_{S}[t]|\sigma_{S}[t\!-\!1\!:\!t\!-\!d_{S}])\), which represents the probability distribution of the future header compressibility \(\sigma_{S}[t]\), conditioned on the history of header compressibilities \(\sigma_{S}[t\!-\!1\!:\!t\!-\!d_{S}]\). ### _Channel_ The channel between the compressor and the decompressor is not perfect. We denote the channel quality as \(\sigma_{H}[t]\) at the \(t\)-th slot, where larger \(\sigma_{H}[t]\) indicating better channel quality. We use \(\sigma_{T}[t]\) to denote the packet transmission status at the \(t\)-th slot, and use \(\sigma_{T}[t]=1\) and \(\sigma_{T}[t]=0\) to denote transmission success and failure, respectively. When the transmission fails, the header can not be decoded by the decompressor. In this work, we do not make specific assumptions on channel models. Instead, we only assume the channel quality \(\sigma_{H}[t]\) evolves as a \(d_{H}\)-th order _Markov_ process through the dynamic \(\mathcal{T}_{H}(\sigma_{H}[t]|\sigma_{H}[t\!-\!1\!:\!t\!-\!d_{H}])\), which is the probability distribution of future channel quality \(\sigma_{H}[t]\), conditioned on the history of channel qualities \(\sigma_{H}[t\!-\!1\!:\!t\!-\!d_{H}]\). 
The transmission status \(\sigma_{T}[t]\) depends on the channel quality \(\sigma_{H}[t]\) and the header type \(\alpha_{C}[t\!-\!1]\) through the dynamic \(\mathcal{T}_{T}(\sigma_{T}[t]|\alpha_{C}[t\!-\!1],\sigma_{H}[t])\), which can be explained as a conditional distribution similarly.

Fig. 1: The system diagram of BD-ROHC.

### _Trans-layer Information_

The compressor makes decisions on which headers to use and on whether to send feedback requests by leveraging trans-layer information, such as channel quality information (CQI), hybrid ARQ (HARQ) feedback and the frequency of header context initialization. Usually this trans-layer information is managed by lower-layer protocols and not reported to higher layers. In our design, we assume the compressor extracts trans-layer information to facilitate its decision making. For simplification, we assume the compressor estimates the \(d_{D}\)-delayed channel quality \(\sigma_{H}[t\!-\!d_{D}]\) and transmission status \(\sigma_{T}[t\!-\!d_{D}]\) from the trans-layer information, which are denoted as \(z_{H}\) and \(z_{T}\), through the probabilistic models \(\mathcal{O}_{H}(z_{H}[t]|\sigma_{H}[t\!-\!d_{D}])\) and \(\mathcal{O}_{T}(z_{T}[t]|\sigma_{T}[t\!-\!d_{D}])\), which can be explained as the distributions of \(z_{H}[t]\) and \(z_{T}[t]\) conditioned on the delayed channel condition \(\sigma_{H}[t\!-\!d_{D}]\) and the delayed transmission status \(\sigma_{T}[t\!-\!d_{D}]\), respectively. Again, we do not assume specific models for these two probabilistic dependencies in this work.

### _Decompressor_

The goal of the decompressor is to decode the headers received from the channel. Whether a header can be successfully decoded by the decompressor depends on the transmitted header type \(\alpha_{C}\), the compressibility of the header \(\sigma_{S}\) (decided by the header source at the transmitter), the transmission status \(\sigma_{T}\), and the state of the decompressor \(\sigma_{D}\). The decompressor works as a finite state machine (FSM) with \(W+2\) states, of which the state transition diagram can be found in Fig. 2 of [14]. We use \(\sigma_{D}=W+1\) to represent the "No Context" (NC) state, \(\sigma_{D}=W\) to represent the "Repair Context" (RC) state, and \(\sigma_{D}=0,1,...,W-1\) to represent the "Full Context" (FC) states with confidence from high to low. At the beginning, the decompressor is in the NC state. It is not able to decode CO3 or CO7 headers since there is no context established. When successfully decoding an IR header, the decompressor establishes a context and transits to the FC state \(\sigma_{D}=0\). At an FC state \(\sigma_{D}=l\) with \(l=0,1,...,W-2\), if a transmission failure happens (\(\sigma_{T}=0\)) and the header is fully compressible (\(\sigma_{S}=1\)), the decompressor will transit to the lower-level FC state \(\sigma_{D}=l+1\) and claim a decoding failure. After \(W\) such failures, it will transit to the RC state. However, when the header is not fully compressible (\(\sigma_{S}=0\)), it will directly transit to the RC state and claim a decoding failure if a transmission failure happens (\(\sigma_{T}=0\)) or the header is fully compressed (\(\alpha_{C}=2\)). At the RC state, unless it successfully decodes an IR or CO7 packet, i.e., \(\sigma_{T}=1\) and \(\alpha_{C}<2\), it will stay in RC and claim a decoding failure. It is worth noting that the decompressor successfully decodes the header if and only if its state transits to \(\sigma_{D}=0\). The detailed state transitions are shown in Table I, where the \((i,j)\)-th entry is the condition on which the \(i\)-th state transits to the \(j\)-th state.
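Table I enumerates these transitions exhaustively; as a complementary illustration, the sketch below encodes our own reading of the transition logic described above (the state encoding and tie-breaking follow the text, and Table I remains the authoritative reference):

```python
# Illustrative sketch of the decompressor FSM described above.
# States: 0..W-1 = Full Context (FC) levels, W = Repair Context (RC), W+1 = No Context (NC).
IR, CO7, CO3 = 0, 1, 2

def decompressor_step(state, alpha_c, sigma_t, sigma_s, W):
    """Return (next_state, decoded) for one received packet."""
    NC, RC = W + 1, W
    if state == NC:                       # no context yet: only a successful IR header helps
        return (0, True) if (alpha_c == IR and sigma_t == 1) else (NC, False)
    if state == RC:                       # damaged context: a successful IR or CO7 repairs it
        return (0, True) if (alpha_c < CO3 and sigma_t == 1) else (RC, False)
    # FC states
    if sigma_s == 1:                      # fully compressible header
        if sigma_t == 1:
            return (0, True)              # successful decode refreshes confidence
        return (state + 1, False) if state < W - 1 else (RC, False)
    # header not fully compressible: CO3 or a transmission failure damages the context
    if sigma_t == 0 or alpha_c == CO3:
        return (RC, False)
    return (0, True)
```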
A blank entry means the transition can never happen. ### _Compressor_ The compressor makes decisions on \(\alpha_{C}[t]\in\{0,1,2\}\) for the \(t\)-th packet, i.e., deciding which one of the IR, CO7 and CO3 headers should be used. In addition, it also decides whether to request a feedback at the \(t\)-th slot from the decompressor to facilitate future decision making. This request allows the compressor to fully observe the decompressor state, albeit with a time delay. We use \(\alpha_{F}[t]\) to denote the decision on whether request feedback, and denote request and not request as \(\alpha_{F}[t]=1,0,\) respectively. If the compressor send a request at the \(t\)-th slot, it will receive a feedback \(z_{D}[t+d_{D}]=\sigma_{D}[t]\) at the \((t\!+\!d_{D})\)-th slot, indicating the state of the decompressor at the \(t\)-th slot. In general, the compressor does not know the state of the decompressor. It relies on the trans-layer information \(z_{T}[t]\) and \(z_{S}[t]\) as partial observations to make decisions. It can also use the feedback \(z_{D}[t]=\sigma_{D}[t-d_{D}]\) to support the decision making if it requested a feedback at the \((t\!-\!d_{D})\)-th slot. Note that longer headers are more likely to be successfully decoded considering the imperfection of channels and state transition of the decompressor but at the sacrifice of packet-efficiency as a result. The feedback can provide more information about the decompressor's states, but at the cost of additional communication resources. The decision making at the compressor is nontrivial considering these factors. ### _Summary of Notations_ We now using the following Table II to summarize the major notations and symbols used in our problem formulation. ## III POMDP Problem Formulation In this section, we formulate the BD-ROHC as a partially observable _Markov_ decision process (POMDP). In the context of RL [15], agent is the compressor, and the environment consists of the decompressor and the channels. Our goal is to find a policy for the compressor in the framework of POMDP. The BD-ROHC model is summarized in Fig. 2. A POMDP is a \(7\)-tuple \((\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R},\mathcal{Z},\mathcal{O},\gamma)\), in which: \(\mathcal{S}\) is the set of states; \(\mathcal{A}\) is the set of actions; \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta\mathcal{S},(\sigma, \alpha)\mapsto\mathcal{T}(\cdot|\sigma,\alpha)\) is the transition dynamic, where \(\Delta\mathcal{S}\) denotes the set of probability distributions defined on \(\mathcal{S}\); \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{ R}_{+},(\sigma,\alpha,\sigma^{\prime})\mapsto\mathcal{R}(\sigma,\alpha,\sigma^{ \prime})\) is a reward function, where \(\sigma^{\prime}\) is the next state; \(\mathcal{Z}\) is the sample space of the observations; \(\mathcal{O}:\mathcal{S}\rightarrow\Delta\mathcal{Z},\sigma\mapsto\mathcal{O} (\cdot|\sigma)\) is the conditional probability distribution of observation; \(\gamma\) is the discounting factors. Let \(d=\max(d_{S},d_{H},d_{D})\), we now define the POMDP for our BD-ROHC system: * The **state** variable \(\sigma[t]\in\mathcal{S}\) is defined as \[\sigma[t]= (\alpha_{C}[t\!-\!1\!:\!t\!-\!d\!-\!1],\alpha_{F}[t\!-\!1\!:\!t\!- \!d\!-\!1],\] (1) \[\sigma_{S}[t\!:\!t\!-\!d],\sigma_{D}[t\!-\!d],\sigma_{T}[t\!-\!d],\sigma_{H}[t\!-\!d])\] Naturally, the set of states is \(\mathcal{S}=\{0,1,2\}^{d+1}\times\{0,1\}^{d+1}\times\{0,1,1...,W+1\}\times\{ 0,1\}\times\mathcal{S}_{H}\). 
* The **action** variable \(\alpha[t]\in\mathcal{A}\) is defined as \[\alpha[t]=(\alpha_{C}[t],\alpha_{F}[t])\] (2) with \(\mathcal{A}=\{0,1,2\}\times\{0,1\}\). * The **partial observation** variable \(z[t]\in\mathcal{Z}\) is defined as \[z[t]=(z_{T}[t],z_{H}[t],z_{D}[t],\sigma_{S}[t\!:\!t\!-\!d])\] (3) with \(\mathcal{Z}=\{0,1\}\times\mathcal{Z}_{H}\times\{-1,0,1,...,W+1\}\times\{0,1 \}^{d+1}\). Note that if the compressor receives feedback at \(t\), then \(z_{D}[t]=\sigma_{D}[t-d]\), otherwise we use \(z_{D}[t]=-1\) to denote not receiving feedback. * The **observation probabilistic**\(\mathcal{O}(z[t]|\sigma[t])\) is defined as \[\mathcal{O}(z|\sigma[t])=\mathcal{O}_{T}(\bar{z}_{T}|\sigma_{T}| \sigma_{T}[t\!-\!d])\mathcal{O}_{H}(\bar{z}_{H}|\sigma_{H}[t\!-\!d])\] \[\times\mathbf{1}_{(\alpha_{F}[t\!-\!d]=0\wedge\bar{z}_{D}=-1)\lor( \alpha_{F}[t\!-\!d]=1\wedge\bar{z}_{D}=\sigma_{D}[t\!-\!d])}\] (4) \[\times\mathbf{1}_{\bar{\sigma}_{S}=\sigma_{S}[t\!:\!t\!-\!d]}\] where \(z=(\bar{z}_{T},\bar{z}_{H},\bar{z}_{D},\bar{v}_{S})\), and \(\mathbf{1}_{(\cdot)}\) is the indicator function returning \(1\) if the condition in "\((\cdot)\)" is satisfied, returning \(0\) otherwise. * The **transition dynamic**\(\mathcal{T}(\sigma[t\!+\!1]|\sigma[t],\alpha[t])\) is defined as the follows \[\mathcal{T}(\sigma[t],\alpha[t])=\mathcal{T}_{T}(\bar{\sigma}_{T} |\alpha_{C}[t\!-\!d],\sigma_{H}[t\!-\!d\!+\!1])\] \[\times\mathcal{T}_{S}(\bar{\sigma}_{S}|\sigma_{S}[t\!:\!t\!-\!d]) \mathcal{T}_{H}(\bar{\sigma}_{H}|\sigma_{H}[t\!-\!d])\] (5) \[\times P_{D}(\bar{\sigma}_{D}|\sigma_{T}[t\!-\!d\!+\!1],\alpha_{C}[t \!-\!d],\sigma_{D}[t\!-\!d])\] \[\times\mathbf{1}_{\bar{\alpha}_{C}=\alpha_{C}[t\!:\!t\!-\!d]} \mathbf{1}_{\bar{\alpha}_{F}=\alpha_{F}[t\!:\!t\!-\!d]}\] where \(s=(\bar{\alpha}_{C},\bar{\alpha}_{F},\bar{\sigma}_{S},\bar{\sigma}_{D},\bar{ \sigma}_{T},\bar{\sigma}_{H})\). * The **reward** function \(\mathcal{R}(\sigma[t],\alpha[t],\sigma[t\!+\!1])\) is defined as \[\mathcal{R}(\sigma[t],\alpha[t],\sigma[t\!+\!1])= \frac{L\mathbf{1}_{\sigma_{D}[t\!-\!d\!+\!1]\!-\!0}}{L+L_{\alpha_{C}[t \!-\!d]}}\!-\!\lambda\alpha_{F}[t\!-\!d\!-\!1]\] (6) where \(\lambda>0\) is a constant. The first term on the right hand side of eq. (6) accounts for the packet's data-efficiency. Recall that \(L\) and \(L_{i}\) are payload size and the header size corresponding to the action (header type) \(\alpha_{C}=i\), respectively. When decoding fails, the the packet's data-efficiency is \(0\). When decoding successes, i.e., \(\sigma_{D}=0\), the packet's data-efficiency is \(\frac{L}{L+L_{\alpha_{C}[t\!-\!d]}}\). The second term penalizes the feedback since it introduces additional communication costs. The reward at the \(t\)-th slot is simply denoted as \(r[t]=\mathcal{R}(\sigma[t],\alpha[t],\sigma[t\!+\!1])\). Note that in the POMDP, the agent (compressor) only has access to the partial observation \(z[t]\) (of the state \(\sigma[t]\)) instead of \(\sigma[t]\) itself. The agent has to rely on the history of observations \(z[t\!:\!0]\) and actions \(\alpha[t\!-\!1\!:\!0]\) to make decision \(\alpha[t]\). We denote the deterministic policy as \(\pi:\mathcal{F}\rightarrow\mathcal{A}\) Fig. 2: The block diagram of BD-ROHC in the context of RL. Agent is the compressor, and the environment consists of the decompressor, channel and header source. where \(\mathcal{F}\) is the history of the observations and actions, i.e., \(\forall t,(z[t\!:\!0],\alpha[t\!-\!1\!:\!0])\in\mathcal{F}\). 
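As a small illustration of the per-slot reward in Eq. (6), the sketch below evaluates the data-efficiency term and the feedback penalty for one (delayed) packet; the payload length, header lengths and \(\lambda\) are placeholder values and not the paper's settings:

```python
# Illustrative evaluation of the reward in Eq. (6): packet data efficiency
# L / (L + L_i) when the delayed header is decoded, minus the feedback penalty.
def reward(decoded, header_type, feedback_requested, L=100,
           header_lengths=(40, 8, 3), lam=0.05):
    efficiency = L / (L + header_lengths[header_type]) if decoded else 0.0
    return efficiency - lam * (1 if feedback_requested else 0)

# e.g., a decoded CO3 header (header_type = 2) with no feedback request:
r = reward(decoded=True, header_type=2, feedback_requested=False)  # ~0.971
```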
Let \(\alpha^{\pi}[t]=(\alpha^{\pi}_{\mathcal{C}}[t],\alpha^{\pi}_{\mathcal{F}}[t])\) be the action taken under policy \(\pi\); our goal is then to find the policy \(\pi\) maximizing the accumulated discounted reward: \[\max_{\pi}\mathsf{E}\left[\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}(\sigma[t],\alpha^{\pi}[t],\sigma[t\!+\!1])\right]. \tag{7}\] In our formulation, the sizes of the action space, state space and observation space are \(|\mathcal{A}|=6\), \(|\mathcal{S}|=(W+3)|\mathcal{S}_{H}|12^{d}\) and \(|\mathcal{Z}|=2(W+3)|\mathcal{Z}_{H}|2^{d}\), respectively. The complexity of solving a POMDP with dynamic programming (DP) [14] is exponential in \(|\mathcal{A}|\times|\mathcal{Z}|\) and linear in \(|\mathcal{S}|\), which becomes prohibitive when the scale of the problem becomes large, especially when \(d\) is large. ## IV Compressor Policy with Deep Q-learning A prior work [14] uses dynamic programming (DP) to solve the problem formulated in (7), an approach that exhibits two shortcomings: 1) DP becomes computationally prohibitive when the scale of the problem becomes large due to a large delay \(d\) and complex channel models. 2) DP requires knowledge of the transition dynamic \(\mathcal{T}\), which is often not available in practical applications. As we shall see in this section, these two obstacles can be addressed by our proposed deep Q-learning method. ### _Deep Q-learning for MDP_ Before moving on to deep Q-learning for the POMDP, it is helpful to review Q-learning for an MDP, where states are fully observable. The key quantity in Q-learning is the Q-function. In an MDP, the Q-function \(Q^{\kappa}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) under a deterministic policy \(\kappa:\mathcal{S}\rightarrow\mathcal{A}\) is defined as \[Q^{\kappa}(\sigma,\alpha)=\mathsf{E}\left[\sum_{k=0}^{\infty}\gamma^{k}r[t+k]\;\Big{|}\;\sigma[t]=\sigma,\alpha[t]=\alpha,\kappa\right]. \tag{8}\] The Q-value (the value of the Q-function) can be interpreted as the expected discounted accumulated reward obtained by starting from state \(\sigma\), taking action \(\alpha\), and thereafter following policy \(\kappa\). The goal of Q-learning is to find the optimal Q-function (corresponding to the optimal policy), in the sense that \(Q^{\kappa}(\sigma,\alpha)\) is maximized for every \(\sigma\) and \(\alpha\). In most cases, there is no closed-form expression for the Q-function in terms of \(\kappa\). According to [15], the optimal policy satisfies \[\kappa^{*}(\sigma)=\arg\max_{\alpha\in\mathcal{A}}Q^{\kappa^{*}}(\sigma,\alpha). \tag{9}\] Applying the recursion of the Q-function and substituting eq. (9) into eq. (8), we have \[Q^{\kappa^{*}}(\sigma,\alpha)=\sum_{\sigma^{\prime}\in\mathcal{S}}\mathcal{T}(\sigma^{\prime}|\sigma,\alpha)\left(\mathcal{R}(\sigma,\alpha,\sigma^{\prime})+\gamma Q^{\kappa^{*}}(\sigma^{\prime},\kappa^{*}(\sigma^{\prime}))\right)\] \[=\sum_{\sigma^{\prime}\in\mathcal{S}}\mathcal{T}(\sigma^{\prime}|\sigma,\alpha)\left(\mathcal{R}(\sigma,\alpha,\sigma^{\prime})+\gamma\max_{\alpha^{\prime}\in\mathcal{A}}Q^{\kappa^{*}}(\sigma^{\prime},\alpha^{\prime})\right), \tag{10}\] which is the _Bellman_ equation, in which \(Q(\sigma,\alpha)\) can be viewed as an unknown table of finite dimension \(|\mathcal{S}|\times|\mathcal{A}|\) to be solved for. It has been proved that the Q-function (or the policy) is optimal if and only if the _Bellman_ equation is satisfied [15]. 
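When the transition dynamic \(\mathcal{T}\) and reward \(\mathcal{R}\) are known and the state space is small, the _Bellman_ equation (10) can be solved by a simple fixed-point iteration on the Q-table. The sketch below illustrates this generic model-based case (it is not the DP algorithm of [14]); the dense-array representation of \(\mathcal{T}\) and \(\mathcal{R}\), the discount factor, and the tolerance are assumptions. In the model-free setting considered next, \(\mathcal{T}\) is unknown and the expectation is instead approximated from samples.

```python
import numpy as np

def q_value_iteration(T, R, gamma=0.95, tol=1e-8, max_iters=10_000):
    """Fixed-point iteration on the Bellman equation (10).

    T: array of shape (S, A, S), T[s, a, s'] = transition probability.
    R: array of shape (S, A, S), R[s, a, s'] = reward.
    Returns the optimal Q-table of shape (S, A) and the greedy policy of eq. (9).
    gamma and tol are illustrative choices."""
    S, A, _ = T.shape
    Q = np.zeros((S, A))
    for _ in range(max_iters):
        # Backup: Q(s,a) <- sum_s' T(s'|s,a) [R(s,a,s') + gamma * max_a' Q(s',a')]
        target = R + gamma * Q.max(axis=1)[None, None, :]   # shape (S, A, S)
        Q_new = (T * target).sum(axis=2)
        if np.max(np.abs(Q_new - Q)) < tol:
            Q = Q_new
            break
        Q = Q_new
    policy = Q.argmax(axis=1)   # greedy policy, eq. (9)
    return Q, policy
```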
To solve the equation, the Q-function can be updated as follows \[Q(\sigma,\alpha)\gets r+\gamma\max_{\alpha^{\prime}\in \mathcal{A}}Q(\sigma^{\prime},\alpha^{\prime}) \tag{11}\] where \((\sigma,\alpha,r,\sigma^{\prime})\) denote the state, action, reward and the next state, respectively, sampled from trajectories (instantiations of the MDP). Eq. (11) is guaranteed to converge to the optimal \(Q^{\kappa^{*}}(\sigma,\alpha)\) according to [16]. When the state space and the action space both become large (infinite if \(\sigma\) and \(\alpha\) are continuous), we can use a neural network (NN) to represent the Q-function \(Q_{\theta}(\sigma,\alpha)\), where \(\theta\) are the weights of the NN. In this case, for each \((\sigma,\alpha,r,\sigma^{\prime})\), the sample loss can be written as \[g(\sigma,\alpha,r,\sigma^{\prime};\theta)=(Q_{\theta}(\sigma,\alpha)\!-\!r\;- \gamma\max_{\alpha^{\prime}\in\mathcal{A}}Q_{\theta}(\sigma^{\prime},\alpha^{ \prime}))^{2}. \tag{12}\] The update of \(\theta\) follows the mini-batch stochastic gradient descent (SGD): \(\theta\leftarrow\theta-\eta\nabla g(\sigma,\alpha,r,\sigma^{\prime})\), where \(\eta\) is the learning rate. Note that, deep Q-learning is not guaranteed to converge with non-linear NN model. Its convergence is still an open problem under exploration. ### _Double Deep Q-learning Implementation for the Compressor Policy in POMDP_ In the POMDP, state \(\sigma\) is not available, consequently we replace \(Q_{\theta}(\sigma,\alpha)\) with \(\tilde{Q}_{\theta}(h,\alpha)\), where \(h\in\mathcal{F}\) is the history of partial observations and actions. Despite lacking theoretical guarantee, it works well in many POMDP applications. #### Iv-B1 Deep Q-network Let \(f_{\theta}(\cdot)\) be the function represented by the deep Q-network (DQN). It takes truncated history \[\tilde{h}[t]\triangleq(z[t\!:\!t\!-\!d\!-\!d\!-\!d_{0}],\alpha[t\!-\!1\!:\!t\!- \!d\!-\!d_{0}]) \tag{13}\] as the inputs and has a \(|\mathcal{A}|\)-dimension output, in which the \(\alpha\)-th entry \(f_{\theta}^{\alpha}(\tilde{h}[t])=\tilde{Q}_{\theta}(\tilde{h}[t],\alpha)\). The constant integer \(d_{0}>0\) is used to adjust the history window size. Technically, all the histories of observations are informative to decision making. In DQN, we only keep the latest observations and actions, since the early ones have smaller impact on the current state. #### Iv-B2 Double deep Q-learning As mentioned previously, deep Q-learning often suffers from instability of convergence resulting in unsatisfactory performances. In this work, double DQN (DDQN) is adopted to mitigate this issue, whose block diagram is depicted in Fig. 3. In DDQN, there are two networks, namely current DQN and target DQN, parameterised with \(\theta\) and \(\theta^{\prime}\), respectively. The current DQN interacts with the environment (decompressor, channel and header source) to collect experience (finally stored in the episode memory \(\mathcal{M}\)). During the update of \(\theta\), the target network \(\theta^{\prime}\) remains unchanged, and participate in calculating the target \(r+\gamma\max_{\alpha^{\prime}\in\mathcal{A}}\tilde{Q}_{\theta^{\prime}}(h^{ \prime},\alpha^{\prime})\) and the following sample loss. \[\tilde{g}(h,\alpha,r,h^{\prime};\theta,\theta^{\prime})=(\tilde{Q}_{\theta}(h, \alpha)-r-\gamma\max_{\alpha^{\prime}\in\mathcal{A}}\tilde{Q}_{\theta^{\prime}}(h ^{\prime},\alpha^{\prime}))^{2}. 
\tag{14}\] After several updates of the current network \(\theta\), DDQN updates the target network \(\theta^{\prime}\) by copying \(\theta\). The detailed algorithm for DDQN is shown in Algorithm 1. We adopt nn \(\epsilon\)-greedy strategy such that the agent can explore the action spaces in early training stages. At the beginning, the agent has large probability \(\epsilon\) close to \(1\) to take random actions. The value of \(\epsilon\) decays with rate \(\gamma_{\epsilon}\). After a duration, the agent will follow the current policy with high probability. Note that there is a \(d\)-slot delay between the compressor and the decompressor. For this reason, while the compressor transmits \(\alpha_{C}[t]\), the decompressor just receives \(\alpha_{C}[t-d]\). We denote the episode memory as \(\mathcal{M}\), which is essentially a first-in-first-out (FIFO) with constant size. It is worth mentioning that the proposed DDNQ method finds the policy with trajectory samples obtained from interacting with the environments, and does not require any prior knowledge of the transition dynamic or observation probabilistic. In addition, the proposed method can handle state, action and observations from large finite or even continuous spaces. ## V Experimental Results We start with the general settings of the experiments. During the training, the number of episode is set as \(M=3,000\), and \(T=10,000\) packets are transmitted within each episode. Both the current and target DQN are \(4\)-layer dense NN, in which the hidden layer has width \(2,048\). * **Headers**: The length of IR header (\(\alpha_{C}=0\)), CO7 header (\(\alpha_{C}=1\)) and CO3 header (\(\alpha_{C}=2\)) are set as \(L_{0}=60\), \(L_{1}=15\) and \(L_{2}=1\), respectively. The header source \(\sigma_{S}[t]\) is modelled as a _Markov_ process with order \(d_{S}=1\) and transition dynamic \(\mathcal{T}_{S}(\sigma_{S}[t]|\sigma_{S}[t-\text{i}])\). In the simulation, \(\mathcal{T}_{S}(\sigma_{S}[t]=1|\sigma_{S}[t-\text{i}]=0)=1\) and \(\mathcal{T}_{S}(\sigma_{S}[t]=0|\sigma_{S}[t-\text{i}]=1)=0.1\). * **Decompressor**: The decompressor follows the model in Sec. II-D with the maximum number of allowed consecutive decoding failures as \(W=5\). * **Evaluation metric**: We evaluate "transmission efficiency", which is defined as total length of corrected received payloads over total length of the packet: \[\text{transmission efficiency}=\frac{\sum_{t=1}^{T}\mathbf{1}_{\sigma_{D}[t+1]=1} L}{\sum_{t=1}^{T}(L+L_{\alpha_{C}[t]})}.\] (15) We also evaluate "feedback rate", which is defined as the number of feedback requests over the total number of transmitted packets: \[\text{feedback rate}=\frac{\sum_{t=1}^{T}\alpha_{F}[t]}{T}.\] (16) * **Benchmarks**: Given existing works can not handle situations with long feedback delays and unavailable transition dynamics, we propose the following "keep transmitting" (KT) algorithm as the benchmark. KT requests feedback randomly with certain probability (such that feedback rate is controlled), and chooses header based on the latest feedback from the decompressor. KT keeps using the same header based on the last feedback until new feedback is received. ### _Results under Gilbert-Elliot Channel Model_ First, we consider the well known Gilbert-Elliot channel model [11]. It is parameterized with the average duration of "bad" states \(l_{B}\) and the probability of "bad" state \(\epsilon_{B}\). The "bad" state is represented as \(\sigma_{H}[t]=0\) and "good" state is represented as \(\sigma_{H}[t]=1\). 
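As a quick illustration, the sketch below samples this two-state channel and the resulting transmission status. The two transition probabilities are left as inputs and can be set from \(l_{B}\) and \(\epsilon_{B}\) using the expressions given in the next paragraph; the defaults for \(\beta_{1},\beta_{0}\) follow the values used in our experiments, while the initial state and the seeding are assumptions.

```python
import numpy as np

def simulate_channel(T, p_bad_to_good, p_good_to_bad, beta1=0.9, beta0=0.1, seed=0):
    """Sample channel states sigma_H[t] (1 = good, 0 = bad) and the resulting
    transmission status sigma_T[t] for T slots.  The transition probabilities
    are specified in the text in terms of l_B and eps_B; beta1 / beta0 are the
    transmission-success probabilities in the good / bad state."""
    rng = np.random.default_rng(seed)
    sigma_H = np.empty(T, dtype=int)
    sigma_T = np.empty(T, dtype=int)
    state = 1                          # initial state: "good" (an assumption)
    for t in range(T):
        flip = rng.random() < (p_good_to_bad if state == 1 else p_bad_to_good)
        state = 1 - state if flip else state
        sigma_H[t] = state
        sigma_T[t] = int(rng.random() < (beta1 if state == 1 else beta0))
    return sigma_H, sigma_T
```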
The transition dynamic \(\mathcal{T}_{H}(\sigma_{H}[t]|\sigma_{H}[t-\text{i}])\) has values \(\mathcal{T}_{H}(\sigma_{H}[t]=1|\sigma_{H}[t-\text{i}]=0)=\frac{1/l_{B}}{1/ \epsilon_{B}-1}\) and \(\mathcal{T}_{H}(\sigma_{H}[t]=0|\sigma_{H}[t-\text{i}]=1)=1/l_{B}\). The transmission status has dynamic \(\mathcal{T}_{T}(\sigma_{T}[t]=1|\sigma_{H}[t]=1)=\beta_{1}\) and \(\mathcal{T}_{T}(\sigma_{T}[t]=1|\sigma_{H}[t]=0)=\beta_{0}\). Through this subsection, we set \(l_{B}=5\). The trans-layer information is summarized as the estimates \(z_{H}[t]\) and \(z_{T}[t]\) through model \(\mathcal{O}_{H}(z_{H}[t]|\sigma_{H}[t-d_{D}])\) and \(\mathcal{O}_{T}(z_{T}[t]|\sigma_{T}[t-d_{D}])\), respectively. In our simulation, \(\mathcal{O}_{H}(z_{H}[t]=\sigma_{H}[t-d_{D}]|\sigma_{H}[t-d_{D}])=1-\epsilon_ {H}\), and \(\mathcal{O}_{T}(z_{T}[t]=\sigma_{T}[t-d_{D}]|\sigma_{T}[t-d_{D}])=1-\epsilon_ {T}\), where \(\epsilon_{T}\) and \(\epsilon_{H}\) denote the estimation error probability of transmission status and channel Fig. 3: The training paradigm of DDQN. The current DQN interacts with the environment (channel and decompressor) to collect experience. During the update of \(\theta\), the target network \(\theta^{\prime}\) remains unchanged, and participate in calculating the target conditions, respectively. It is worth to clarify that channel condition \(\sigma_{H}[t]\) indicates how good the channel is at the \(t\)-th slot, while \(\epsilon_{B}\) parameterize the probabilistic model of \(\sigma_{H}[t]\). Fig. 4 shows the results of the proposed RL and KT under different channel quality \(\epsilon_{B}\). It can be observed from the figure that the performance gap between the two methods becomes more obvious as feedback rate decreases. Since the problem under low feedback rate is more challenging, the carefully designed RL method can show more advantages without surprise. The proposed RL has better performance than KT with all different channel qualities. In the experiment, \(\epsilon_{T}=0.1\), \(\epsilon_{H}=0.1\), \(d=4\), \(\beta_{1}=0.9\), \(\beta_{0}=0.1\) and \(L=20\). Fig. 5 shows the performance of both methods under different feedback delay \(d\). It can be observed that the proposed RL method outperforms KT, and both methods performs worse as the feedback delay \(d\) becomes larger, since the feedback becomes less informative. In the experiment, \(\epsilon_{B}=0.2\), \(\epsilon_{T}=0.1\), \(\beta_{1}=0.9\), \(\beta_{0}=0.1\) and \(L=20\). Fig. 6 shows the performances of both methods with different transmission status estimation error probability \(\epsilon_{T}\). It can be observed that the proposed RL method has better performance when \(\epsilon_{T}\) is smaller. The performance of KT does not change along with \(\epsilon_{T}\), because it can not use trans-layer information without knowledge of the model. In the experiment, \(\epsilon_{B}=0.2\), \(\epsilon_{H}=0.1\), \(d=4\) and \(L=20\). In Fig. 7, we made observations similar to Fig. 6. In the experiment, \(\epsilon_{B}=0.2\), \(\epsilon_{T}=0.1\), \(d=4\), \(\beta_{1}=0.9\), \(\beta_{0}=0.1\) and \(L=20\). Fig. 8 shows the impact of payload size \(L\) on the transmission efficiency. It can be observed from the figure that larger payload size results in a high trans Fig. 4: Performance of the proposed RL and KT under different channel quality parameter \(\epsilon_{B}\). The proposed RL outperforms KT, and the performance gap becomes more obvious as feedback rate decrease. (Gilbert-Elliot model) Fig. 
5: Performance of the proposed RL and KT under different feedback delay \(d\). The proposed RL method outperforms KT, and both methods performs worse as the feedback delay \(d\) becomes larger since the feedback becomes less informative. (Gilbert-Elliot model) that the payload size is not related to any transition dynamic and observation probabilistic. For any instantiation of the process, increase \(L\) always results in high transmission efficiency. The limit of the transmission efficiency is \(\sum_{t=1}^{T}\mathbf{1}_{\sigma_{D}[t+1]=1}/T\) according to eq. (15). In the experiment, \(\epsilon_{B}=0.2\), \(\epsilon_{T}=0.1\), \(\epsilon_{H}=0.1\), \(\beta_{1}=0.9\), \(\beta_{0}=0.1\) and \(d=4\). Fig. 9 shows performance of the proposed RL method with DQN with different depths. From the figure, we can observe that the DQN with \(4\) layers perform better than the one with \(2\) layers, as the later one is too simple thus lack of representation capability. However, when the DQN has 6 layers, the performance becomes worse surprisingly. It is likely that the training of DQN becomes more unstable due to the increasing sensitivity resulting from deeper models. Fig. 10 demonstrates how fast the proposed RL can adapt to new environment. The orange and green curves show the rewards from the \(2,800\)-th to the \(3,000\)-th episode during the Fig. 8: Performance of the proposed RL under different payload size \(L\). Transmission efficiency is higher with larger payload. (Gilbert-Elliot model) Fig. 6: Performance of the proposed RL and KT under different transmission status estimation error probability \(\epsilon_{T}\). (Gilbert-Elliot model) Fig. 7: Performance of the proposed RL and KT under different channel condition estimation error probability \(\epsilon_{H}\). (Gilbert-Elliot model) Fig. 10: Rewards during training with varying environments. In the experiment, \(\epsilon_{B}=0.1\) before the \(3,000\)-th episode, and changes to \(0.2\) and \(0.5\) at the \(3,001\)-th and \(3,101\)-th episode, respectively. (Gilbert-Elliot model) Fig. 9: Convergence of the training process with different number of layers. The curves are smoothed with a window of size \(50\). (Gilbert-Elliot model) training with \(\epsilon_{B}=0.2\) and \(\epsilon_{B}=0.5\), respectively. The blue curve shows the reward with varying \(\epsilon_{B}\). Specifically, \(\epsilon_{B}=0.1\) before the \(3,000\)-th episode, \(\epsilon_{B}=0.2\) between the \(3,001\)-th episode and \(3,100\)-th episode, \(\epsilon_{B}=0.5\) after the \(3,101\)-th episode. From the figure we observe that the proposed DQN can adopt to new environment quickly. The experiment setting is same to Fig. 4. Fig. 11 shows a comparison between [14] and the proposed RL. We can observe from the figure that [14] outperforms the proposed RL with accurate knowledge of model dynamics. However, when the knowledge is inaccurate, it performs worse than the proposed RL. In the experiment, \(\epsilon_{B}=0.5\), \(\epsilon_{T}=0.4\), \(\epsilon_{H}=0.4\), \(d=4\), \(\beta_{1}=0.7\), \(\beta_{0}=0.3\) and \(L=20\). ### _Results under Hidden Markov Channel Model_ We now apply a hidden _Markov_ channel model [17] to test the performance of the proposed RL and KT methods. The model starts from the physical layer wireless channel with _Rayleigh_ model, \[\sigma_{H}[t]=\sqrt{A_{I}[t]^{2}+A_{Q}[t]^{2}} \tag{17}\] where \(A_{I}[t]\) and \(A_{Q}[t]\) are in-phase and quadrature components of the channel, respectively. 
\(A_{I}[t]\) and \(A_{Q}[t]\) are independent \(d_{H}\)-order _Markov Gaussian_ processes. The transition dynamic is described by a \(d_{H}\times d_{H}\) covariance matrix whose \((i,j)\)-th entry is \(\rho^{|i-j|}\). Here \(\rho\) is a parameter that adjusts the correlation of consecutive samples. The transmission status \(\sigma_{T}[t]\) can be expressed as \[\sigma_{T}[t]=\mathbf{1}_{P_{T}\sigma_{H}[t]>U[t]} \tag{18}\] where \(P_{T}\) is the transmitting power, and \(U[t]\) serves as a threshold that follows a standard _Gaussian_ distribution independently at every time slot. The observation \(z_{H}[t]\) is \[z_{H}[t]=\sigma_{H}[t-d]+n_{H}[t-d] \tag{19}\] where \(n_{H}[t-d]\) is additive white _Gaussian_ noise (AWGN) with zero mean and variance \(\omega_{H}^{2}\). We assume \(z_{H}\) results from channel estimation and channel reciprocity. The observation \(z_{T}\) is defined in the same way as in the Gilbert-Elliot case in Sec. V-A. Throughout this subsection, we set \(d_{H}=4\), \(d=8\) and \(L=20\). Fig. 12 shows the performances of both the proposed RL and KT methods with different transmitting power \(P_{T}\). From the figure we observe that the proposed RL method outperforms KT in all cases, and their performance gap is relatively larger when the feedback rate is lower. In the experiment, we set \(\rho=0.5\) and \(\omega_{H}^{2}=1\). Fig. 13 shows the performances of both the proposed RL and KT methods with different channel correlation \(\rho\). From the figure we observe that the performance of RL is better than that of KT in all cases, but degrades when \(\rho\) becomes larger. In the experiment, we set \(P_{T}=2\) and \(\omega_{H}^{2}=1\). Fig. 11: Compared with [14] with and without prior knowledge of model dynamics. (Gilbert-Elliot model) Fig. 12: Performance of the proposed RL and KT under different transmitting power \(P_{T}\). (Hidden _Markov_ model) Fig. 13: Performance of the proposed RL and KT under different channel correlation \(\rho\). (Hidden _Markov_ model) From the corresponding figure, we observe that the proposed RL again performs better than KT in all cases. In that experiment, we set \(P_{T}=2\) and \(\rho=0.5\). ## VI Conclusion Existing works on bi-directional robust header compression (BD-ROHC) with dynamic programming (DP) are difficult to implement for large-scale systems due to prohibitive computational complexity. Moreover, dynamic-programming-based ROHC control relies on prior knowledge of the underlying model parameters, which is often unavailable in practice. In this paper, we propose a novel RL framework which addresses both issues at the same time. We adopt a double deep Q-network (DDQN) framework, whose input dimension is scalable with the system model. The training of the DDQN relies only on information obtained from interacting with the channel and the decompressor, so it can adaptively learn and acquire useful knowledge of the model dynamics implicitly. Experimental results demonstrate strong and robust performance of our proposed paradigm for different system models. Future work may consider more complex environments with multi-agent reinforcement learning.
2302.14854
Phase Field Modeling of Dictyostelium Discoideum Chemotaxis
A phase field approach is proposed to model the chemotaxis of Dictyostelium discoideum. In this framework, motion is controlled by active forces as determined by the Meinhardt model of chemical dynamics which is used to simulate directional sensing during chemotaxis. Then, the movement of the cell is achieved by the phase field dynamics, while the reaction-diffusion equations of the Meinhardt model are solved on an evolving cell boundary. This task requires the extension of the usual phase-field formulation to allow for components that are restricted to the membrane. The coupled system is numerically solved by an efficient spectral method under periodic boundary conditions. Numerical experiments show that our model system can successfully mimic the typically observed pseudopodia patterns during chemotaxis.
Yunsong Zhang, Herbert Levine, Yanxiang Zhao
2023-02-28T18:52:27Z
http://arxiv.org/abs/2302.14854v1
# Phase Field Modeling of Dictyostelium Discoideum Chemotaxis ###### Abstract A phase field approach is proposed to model the chemotaxis of Dictyostelium discoideum. In this framework, motion is controlled by active forces as determined by the Meinhardt model of chemical dynamics which is used to simulate directional sensing during chemotaxis. Then, the movement of the cell is achieved by the phase field dynamics, while the reaction-diffusion equations of the Meinhardt model are solved on an evolving cell boundary. This task requires the extension of the usual phase-field formulation to allow for components that are restricted to the membrane. The coupled system is numerically solved by an efficient spectral method under periodic boundary conditions. Numerical experiments show that our model system can successfully mimic the typically observed pseudopodia patterns during chemotaxis. keywords: Phase field model; Chemotaxis; Dictyostelium discoideum; + Footnote †: journal: Journal of XXX ## 1 Introduction Many cells have an internal "compass", which enables them to navigate through various environments. This "compass" detects the gradients in chemical concentrations, the rigidity of extracellular matrix, cellular adhesion sites, fluidic shear stress, etc. Interestingly, extensive studies on how such a "compass" is realized have led to "taxis-mania, focusing on chemotaxis, durotaxis, mechanotaxis, haptotaxis, plithotaxis and so on. In this work, we specifically concentrate on chemotaxis, which plays an extensive role in many physiological processes [1]. For example, primordial cells are capable of figuring out their way to proper locations by sensing chemical clues, thus correctly forming the organs. Chemotaxis is also essential for immune responses and wound healing. In addition to these normal physiological processes, the pathology of numerous diseases, such as cancer metastasis and inflammatory disorders, is believed to be related to chemotaxis [3; 2]. Dictyostelium discoideum, a type of amoeboid cell, is a popular model system for the study of chemotaxis. These cells rely on chemotaxis to find nutrition. When suffering from starvation, they are capable of chemotaxing in response to cAMP gradients in order to aggregate and enhance their chances for survival. Dicty chemotactic behavior is very similar to that of human leukocytes. Conceptually, chemotaxis can be divided into motility, directional sensing, and polarity [4; 5; 6]. According to experiments [6; 7; 8], motility involves periodic extensions and retractions of pseudopods - temporary actin-filled protrusions of the cell membrane. Directional sensing refers to the process by which cells sense the chemical gradients and adjust their direction. Polarity refers tp the reorganization of the cell interior to favor moving in a fixed direction. Once polarized, protrusions mainly extend from the cell anterior, regardless of whether a chemical gradient exists or not. ### Directional sensing: LEGI-BEN and Meinhardt models Detailed experimental investigations have uncovered many important features of chemotaxis [9]. First of all, the actin cytoskeleton in motile cells exhibits many characteristics of an excitable medium, such as the presence of propagating waves in the cell membrane [10; 11; 12; 13].This fact has suggested that an excitable network, which composes a simple activator-inhibitor system, may help explain the spontaneous migration of these cells. 
Then, to include the cell's response to external gradients, one can modify the activator-inhibitor system by adding a steering bias: higher concentrations of chemoattractants will lower the threshold of excitability, thus causing more excitation. Over time, cells with this bias will tend to move along the directions towards higher chemoattractant density. Chemotactic cells can display surprising sensitivity, responding to chemical gradients as small as 1%. Such sensitivity can be captured by a biased excitable network [14]. Although successful in qualitatively explaining spontaneous cell motion in the absence of chemoattractants as well as directed motion in the presence of chemoattractants, the biased excitable network (BEN) approach still misses some features of realistic chemotactic behavior. One such feature is the adaptability of cells to external chemical cues. Since higher concentrations of chemoattractants lower the threshold for excitable behavior according to a biased excitable network, the cells would be predicted to become "hyper-excited" if they are exposed to a spatially uniform increase in chemoattractant concentration. In fact, this does not occur: after transient responses to uniform increments in chemoattractants, the excitability returns to baseline levels. To account for this behavior, a local-excitation, global-inhibition (LEGI) mechanism was proposed [15, 16]. According to this mechanism, chemoattractants give rise to the release of a slowly diffusing activator which is accompanied by a rapidly diffusing inhibitor. Thanks to the regulation by the global inhibitor, the level of excitation returns to a threshold level. Thus, one natural strategy is to directly combine the LEGI mechanism and the biased excitable network into a hybrid LEGI-BEN model [17]. One can however use a different model which already contains the two essential components of excitation and adaptation. More than a decade ago, Meinhardt proposed a three-component model [18], which shares a similar conceptualization with the LEGI-BEN approach. Here, a biased excitable network, which includes a slowly diffusing activator and a fast inhibitor, is further regulated by an extra, even faster, global inhibitor. The total quantity of the activator is held approximately constant over time by this global inhibitor, thus preventing a large increase in excitable behavior. For the sake of mathematical simplicity, we choose to use the Meinhardt model in our modeling efforts, whereas the LEGI-BEN model could also be embedded in our phase field model in the future, if needed. The governing equations of the Meinhardt model are: \[\frac{\partial a}{\partial t}=D_{a}\nabla^{2}a+r_{a}\frac{s(\mathbf{r},t)(a^{2}b^{-1}+b_{a})}{(s_{c}+c)(1+s_{a}a^{2})}-r_{a}a, \tag{1.1}\] \[\frac{\partial b}{\partial t}=\frac{r_{b}}{|\Gamma(t)|}\oint a\,\mathrm{d}x-r_{b}b, \tag{1.2}\] \[\frac{\partial c}{\partial t}=D_{c}\nabla^{2}c+b_{c}a-r_{c}c, \tag{1.3}\] where \(|\Gamma(t)|\) denotes the length of the cell boundary. Here, \(a,b\) and \(c\) are respectively the local activator, the global inhibitor, and the local inhibitor. By referring to \(b\) as a global inhibitor, we mean \(D_{b}\gg D_{c}\), so that the inhibiting effects of \(b\) can spread over the whole interface in almost no time, thus regulating the total quantity of the activator. Therefore we can assume the concentration of \(b\) to be uniform everywhere on the membrane, which leads to the replacement of the partial differential equation by an ordinary differential equation with a nonlocal source. 
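For later reference, a minimal sketch of the right-hand sides of (1.1)-(1.3) on a fixed, closed one-dimensional membrane (discretized by equally spaced points with periodic indexing) is given below. The parameter defaults follow the values listed later in Table 2.1, the bias factor \(s\) is passed in as a given array, and the boundary integral in (1.2) is approximated by the mean of \(a\) over the membrane points; the function and variable names are our own.

```python
import numpy as np

def laplacian_1d(u, dx):
    """Periodic second difference along the membrane."""
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def meinhardt_rhs(a, b, c, s, dx, D_a=8e-3, D_c=1.44e-2,   # D_c = 1.8 * D_a, Table 2.1
                  r_a=0.2, r_b=0.3, r_c=0.13, b_a=0.1, b_c=0.05,
                  s_a=5e-4, s_c=0.2):
    """Right-hand sides of (1.1)-(1.3) on a closed 1-D membrane.

    a, c, s are arrays over the membrane points; b is a scalar, since the
    global inhibitor is spatially uniform and obeys an ODE with a nonlocal
    source."""
    excitation = s * (a**2 / b + b_a) / ((s_c + c) * (1.0 + s_a * a**2))
    da = D_a * laplacian_1d(a, dx) + r_a * excitation - r_a * a
    db = r_b * a.mean() - r_b * b           # boundary average of a, cf. (1.2)
    dc = D_c * laplacian_1d(c, dx) + b_c * a - r_c * c
    return da, db, dc
```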
This system exhibits nice bifurcating patterns [18], which have been successfully compared in experimental findings [19]. A local excitation bifurcates into a pair of competing daughter bursts of excitations, which travel in opposite directions. One of these daughter bursts will win out over the other, which vanishes. A new bifurcation will occur once the "loser" dies. The decisive factor for the competition between the pair of bifurcations is the spatial factor \(s(\mathbf{r},t)\) in the excitation of the activator: \[s(\mathbf{r},t)=(1+d_{r}\xi)(1+C_{\mathrm{chem}}f(\mathbf{r},t)), \tag{1.4}\] where \(f(\mathbf{r},t)\) is a function of the spatial clue related to the concentrations of chemoattractant. The factor \((1+d_{r}\xi)\) represents the effect of stochastic fluctuations, with \(\xi\) taken as white noise and \(d_{r}\) being the fluctuation strength. In this work, we assume the existence of a chemoattractant source \(\mathbf{r}_{0}\), and simply set \(f(\mathbf{r},t)\) as: \[f(\mathbf{r},t)=1-\frac{\mathrm{dist}(\mathbf{r},t)-\mathrm{dist}_{\mathrm{ min}}(t)}{\mathrm{dist}_{\mathrm{max}}(t)-\mathrm{dist}_{\mathrm{min}}(t)} \tag{1.5}\] where \(\mathrm{dist}(\mathbf{r},t)\) represents the distance between any point \(\mathbf{r}\) and the chemoattractant source \(\mathbf{r}_{0}\), while \(\mathrm{dis}_{\mathrm{max}}(t)\) and \(\mathrm{dis}_{\mathrm{min}}(t)\) respectively represent the maximal and minimal distance at time \(t\), between the chemoattractant source \(\mathbf{r}_{0}\) and all points on the cell membrane. It is evident that \(f(\mathbf{r},t)\) varies between 0 and 1, monotonically decreasing with the distance from the chemoattractant source \(\mathbf{r}_{0}\), along the cell membrane. The parameter \(C_{\mathrm{chem}}\) regulates the strength of the bias on the threshold of the activator's excitability. It turns out that the values of \(C_{\mathrm{chem}}\) as small as 0.01, can still significantly affect the bifurcation patterns in space and time. This fact is consistent with the chemotactic cells' sensitivity to chemical gradients as mentioned above. Statistically, excitation bursts propagating toward favorable positions in the chemical gradients are more likely to survive and continue to bifurcate, thus fostering the directed navigation of chemotactic cells. ### Phase field model framework In the past few decades, phase field models have emerged as one of the most successful methods for studying interfacial problems; see the two review articles [22; 23] and the references therein. In the phase field model framework, a phase field function \(\phi\) is introduced, assigning a value (say, 0) for one phase, and another value (say, 1) for the other. In the interfacial region, the phase field \(\phi\) rapidly but smoothly transitions from 0 to 1. The interface is tracked by the 1/2-level set during the morphological evolution. The main advantage of the phase field approach is that it can allow for the computation of the temporal evolution of arbitrary morphologies and complex microstructures without explicitly tracking interfaces. In the realm of biology, cell shape dynamics and cell migration processes have been simulated by using phase field models. In [25], a quantitative model for cell shape and motility dynamics was constructed based on the original phase field concept [26]. An auxiliary field is introduced to distinguish the cell's interior (\(\phi=0\)) from the exterior (\(\phi=1\)). 
The dynamics of the cell are governed by equations that couple this field to the actual physical degrees of freedom, and the diffuse layer separating the interior from the exterior marks the membrane location. This cell motility model is part of a larger set of recent theoretical studies that have attempted to model cell migration. For example, some studies have attempted to calculate the "flow" of the actin cytoskeleton in a one-dimensional [27; 28; 29] or fixed two-dimensional cell geometry [30]. In some works, the cell boundary was allowed to change according to a phenomenological function of protrusion rate [31; 32] while other approaches implemented physical forces along the cell membrane, obtained cell shape and speed, but ignored actin flow and detailed adhesion mechanisms [25; 33]. Yet others examined adhesion dynamics and cell-substrate coupling while ignoring cell deformations [34] or focused on the dynamics of the leading edge [35]. Ziebert, Aranson and their coworkers studied the cell shape dynamics by coupling a vector field model of the actin filament network with the cell shape [36]. Finally, a more comprehensive model for cell migration was presented in [37] which couples actin flow with discrete adhesion sites and deformable cell boundaries. Other interesting patterns such as periodic migration [38] or circular motion [39; 40] have also been studied using phase field framework. ## 2 Phase field model of Chemotaxis ### Pseudopodia: Chemical dynamics on a phase field membrane Our goal in this paper is to couple the aforementioned directional sensing system to a computational model of the resultant motion. With the intricate spatio-temporal patterns of the Meinhardt model in hand, our immediate challenge is how to make this happen on the membrane of a dynamically evolving cell described by a phase field. To our best knowledge, there are very few published attempts to study similar problems. For example, Nelson et al. applied a hybrid computational framework to couple the Meinhardt model with cell movement [19]. There, the movement of the cell is achieved using a level set method [24], while the reaction-diffusion equations of the Meinhardt model are approximated on an evolving cell boundary using an arbitrary Lagrangian-Eulerian surface finite element method (ALE-SFEM). In our phase field model, we can achieve the same effect in a much simpler manner. To accomplish this, we modified the approach used to couple bulk reaction-diffusion systems to phase field cells [25; 39]. Specifically, instead of a factor of \(\phi\) to limit reaction to the interior, we use \(g(\phi(\mathbf{r}))=\frac{\epsilon}{2}|\nabla\phi|^{2}\) to restrict concentrations to the membrane (see below). In addition, we found it necessary to insert diffusion in the normal direction of the phase field interface, so that reaction-diffusion processes in different layers synchronize with each other. 
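A small sketch of this membrane restriction on a uniform periodic grid is shown below: \(g(\phi)=\frac{\epsilon}{2}|\nabla\phi|^{2}\) concentrates the dynamics on the interface, and the interface normal \(\mathbf{n}=-\nabla\phi/|\nabla\phi|\) (used below) is computed with a small regularization \(\epsilon_{0}\), as is also done in the Appendix. The FFT-based gradient and the array layout (axes ordered as y, x) are implementation assumptions.

```python
import numpy as np

def spectral_gradient(phi, Lx, Ly):
    """Gradient of a periodic field via FFT on the box [-Lx, Lx) x [-Ly, Ly)."""
    ny, nx = phi.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=2.0 * Lx / nx)   # derivative symbol i*k_x
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=2.0 * Ly / ny)   # derivative symbol i*k_y
    phi_hat = np.fft.fft2(phi)
    phi_x = np.real(np.fft.ifft2(phi_hat * kx[None, :]))
    phi_y = np.real(np.fft.ifft2(phi_hat * ky[:, None]))
    return phi_x, phi_y

def membrane_weight_and_normal(phi, Lx, Ly, eps, eps0=1e-8):
    """g(phi) = (eps/2)|grad phi|^2 restricts dynamics to the interface;
    n = -grad phi / |grad phi| is the outward normal (phi = 1 inside the cell).
    eps is the interface width (set to 10*h_x in Sec. 3); eps0 avoids division
    by zero, as in the Appendix."""
    phi_x, phi_y = spectral_gradient(phi, Lx, Ly)
    grad_sq = phi_x**2 + phi_y**2
    g = 0.5 * eps * grad_sq
    norm = np.sqrt(grad_sq + eps0)
    n_x, n_y = -phi_x / norm, -phi_y / norm
    return g, n_x, n_y
```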
Our revised equations for the membrane Meinhardt system are: \[\tau_{0}\frac{\partial(g(\phi)a)}{\partial t}+\nabla\cdot(g(\phi) a\mathbf{v})=D_{a}\nabla_{\parallel}\cdot\left(g(\phi)\nabla_{\parallel}a \right)+D_{\perp}\nabla_{\perp}\cdot\left(g(\phi)\nabla_{\perp}a\right)\] \[+g(\phi)\left(\frac{s(\mathbf{r},t)(a^{2}b^{-1}+b_{a})}{(s_{c}+c )(1+s_{a}a^{2})}-r_{a}a\right), \tag{2.1}\] \[\tau_{0}\frac{\partial b}{\partial t}=r_{b}\frac{\int g(\phi)a \mathrm{d}\mathbf{r}}{\int g(\phi)\mathrm{d}\mathbf{r}}-r_{b}b,\] (2.2) \[\tau_{0}\frac{\partial(g(\phi)c)}{\partial t}+\nabla\cdot(g(\phi )c\mathbf{v})=D_{c}\nabla_{\parallel}\cdot\left(g(\phi)\nabla_{\parallel}c \right)+D_{\perp}\nabla_{\perp}\cdot\left(g(\phi)\nabla_{\perp}c\right)+g(\phi )\left(b_{c}a-r_{c}c\right). \tag{2.3}\] where \(\mathbf{v}\) is the interface velocity equal to \(-\partial_{t}\phi\frac{\nabla\phi}{|\nabla\phi|^{2}}\). As already mentioned, the global inhibitor \(b\) immediately spreads over the whole membrane, and it satisfies an ordinary differential equation instead of a partial differential equation. The term \(g(\phi(\mathbf{r}))=\frac{\epsilon}{2}|\nabla\phi|^{2}\) is only nonzero in the interfacial region so that the reaction-diffusion dynamics only occur near the interface, and \(D_{\perp}\) refers to the diffusion we add in the normal direction of the membrane. More specifically, \(\nabla_{\parallel}\) and \(\nabla_{\perp}\) read: \[\nabla_{\parallel}=\begin{bmatrix}n_{y}^{2}&-n_{x}n_{y}\\ -n_{x}n_{y}&n_{x}^{2}\end{bmatrix}\begin{bmatrix}\partial_{x}\\ \partial_{y}\end{bmatrix},\quad\nabla_{\perp}=\begin{bmatrix}n_{x}^{2}&n_{x}n _{y}\\ n_{x}n_{y}&n_{y}^{2}\end{bmatrix}\begin{bmatrix}\partial_{x}\\ \partial_{y}\end{bmatrix}, \tag{2.4}\] in which the normal vector \(\mathbf{n}=[n_{x},n_{y}]^{T}\) can be calculated by \(\mathbf{n}=-\frac{\nabla\phi}{|\nabla\phi|}\). In practice, \(\nabla_{\perp}\) has to be much larger than the other diffusion coefficients in the chemical systems. The detailed parameters used in our model simulations are listed in Table 2.1. ### Chemotaxis dynamics of Dictyostelium discoideum We model the Dictyostelium discoideum cell as a 2d region with a fixed area \(A_{0}\). The evolving shape of the cell membrane is determined by the competition of several forces, including surface tension, bending force, the pressure that constrains the cell area, the chemical protrusive force which is proportional to the density of local activator \(a\), and the effective friction due to the interaction between cell membrane and the substrate. All of them are formulated under the phase field framework, as follows. Given the surface energy in phase field formulation [41]: \[E_{\rm ten}=\gamma\int_{\Omega}\left(\frac{\epsilon}{2}|\nabla\phi|^{2}+\frac{ 1}{\epsilon}G(\phi)\right)\mathrm{d}\mathbf{r}, \tag{2.5}\] in which \(\gamma\) is the surface tension, \(\epsilon\) is the phase field parameter controlling the width of the cell membrane (the width of phase field interface), and \(G(\phi)=18\phi^{2}(1-\phi)^{2}\) is a double well potential with minima at \(\phi=0\) and \(\phi=1\), the surface tension is derived by taking the variational derivative of the surface energy [25], \[\mathbf{F}_{\rm ten}=\frac{\delta E_{\rm ten}}{\delta\phi}\frac{\nabla\phi}{ \epsilon|\nabla\phi|^{2}}=\frac{\gamma}{\epsilon}\left(-\epsilon\nabla^{2} \phi+\frac{1}{\epsilon}G^{\prime}(\phi)\right)\frac{\nabla\phi}{|\nabla\phi|^ {2}}. 
\tag{2.6}\] Similarly, given the bending energy in phase field formulation [41]: \[E_{\rm bend}=\frac{\kappa}{2}\int_{\Omega}\frac{1}{\epsilon}\left(\epsilon \nabla^{2}\phi-\frac{1}{\epsilon}G^{\prime}(\phi)\right)^{2}\mathrm{d} \mathbf{r},\] with \(\kappa\) being the bending rigidity, we obtain the bending force [25]: \[\mathbf{F}_{\rm bend}=\frac{\delta E_{\rm bend}}{\delta\phi}\frac{\nabla\phi}{ \epsilon|\nabla\phi|^{2}}=\frac{\kappa}{\epsilon^{2}}\left(\epsilon\nabla^{2} -\frac{1}{\epsilon}G^{\prime\prime}\right)\left(\epsilon\nabla^{2}\phi-\frac{ 1}{\epsilon}G^{\prime}\right)\frac{\nabla\phi}{|\nabla\phi|^{2}}. \tag{2.7}\] The area force is given as a soft penalty on cell area: \[\mathbf{F}_{\rm area}=M_{\rm area}\left(\int_{\Omega}\phi\mathrm{d}\mathbf{r} -A_{0}\right)\frac{\nabla\phi}{|\nabla\phi|}, \tag{2.8}\] \begin{table} \begin{tabular}{|c|c|c|c|} \hline Parameter & Value & Parameter & Value \\ \(D_{a}\) & 8e-3 & \(r_{a}\) & 0.2 \\ \(D_{c}\) & \(1.8D_{a}\) & \(r_{b}\) & 0.3 \\ \(\nabla_{\perp}\) & 1.0 & \(r_{c}\) & 0.13 \\ \(s_{a}\) & 5e-4 & \(b_{a}\) & 0.1 \\ \(\tau_{0}\) & 0.01 & \(b_{c}\) & 0.05 \\ \(\tau_{0}\) & 0.01 & \(s_{c}\) & 0.2 \\ \(dr\) & 0.02 & \(C_{\rm chem}\) & 0.02 \\ \hline \end{tabular} \end{table} Table 2.1: Parameters in the Meinhardt model coupled with a phase field membrane. in which \(M_{\rm area}\) is the penalty constant. We assume the protrusion force is simply proportional to the density of the local activator \(a\) on the membrane. Given the fact that the activator's concentration may change over several magnitudes, we added a saturating restriction to the force \(|{\bf F}_{\rm chem}|\propto\tilde{\alpha}=\max(10,a)\), \[{\bf F}_{\rm chem}=-\alpha\tilde{a}\frac{\nabla\phi}{|\nabla\phi|}, \tag{2.9}\] in which \(\alpha\) is the strength of the chemical protrusion force. One could alternatively use a sigmoidal function to achieve the same effect. A friction force due to the interaction between the cell and the substrate (such as adhesion, attachment and detachment of the cell from the substrate) is introduced which is proportional to the local speed: \({\bf F}_{\rm fr}=-\tau{\bf v}\). The force balance at quasi-steady state \[{\bf F}_{\rm tot}={\bf F}_{\rm ten}+{\bf F}_{\rm bend}+{\bf F}_{\rm area}+{ \bf F}_{\rm chem}+{\bf F}_{\rm fr}=0\] yields \[{\bf v}=-\frac{1}{\tau}{\bf F}_{\rm fr}=\frac{1}{\tau}({\bf F}_{\rm ten}+{\bf F }_{\rm bend}+{\bf F}_{\rm area}+{\bf F}_{\rm chem}).\] Finally using the transport equation of the phase field \(\phi\) along the velocity field \({\bf v}\): \(\frac{\partial\phi}{\partial t}+{\bf v}\cdot\nabla\phi=0\), we obtain the following equation for \(\phi\): \[\tau\frac{\partial\phi}{\partial t}= -\kappa\left(\nabla^{2}-\frac{G^{\prime\prime}(\phi)}{\epsilon^{ 2}}\right)\left(\nabla^{2}\phi-\frac{G^{\prime}(\phi)}{\epsilon^{2}}\right)+ \gamma\left(\nabla^{2}\phi-\frac{G^{\prime}(\phi)}{\epsilon^{2}}\right)\] \[-M_{\rm area}\left(\int\frac{\epsilon}{2}|\nabla\phi|^{2}+\frac{ 1}{\epsilon}G(\phi){\rm d}{\bf r}-P_{0}\right)|\nabla\phi|+\alpha\tilde{a}| \nabla\phi|. \tag{2.10}\] Physically, this chemotaxis dynamics of \(\phi\) implies that the friction force on the cell is balanced by the chemical protrusion force, which is transmitted from the substrate onto the cell via adhesion complexes. ## 3 Numerical Simulations In this section, we present several numerical experiments of cell chemotaxis dynamics. 
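Before describing the experiments, it is useful to sketch how the terms on the right-hand side of (2.10) can be assembled numerically. The version below is a naive, fully explicit evaluation (the semi-implicit spectral treatment actually used is described in the Appendix); the Laplacian operator and the gradient magnitude are passed in, and the saturated activator \(\tilde{a}\) is assumed to be computed beforehand as described in the text.

```python
import numpy as np

def G(phi):    # double-well potential with minima at phi = 0 and phi = 1
    return 18.0 * phi**2 * (1.0 - phi)**2

def dG(phi):   # G'(phi)
    return 36.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)

def d2G(phi):  # G''(phi)
    return 36.0 * (1.0 - 6.0 * phi + 6.0 * phi**2)

def phi_rhs(phi, a_tilde, lap, grad_mag, dA, eps, kappa, gamma, M_area, P0, alpha):
    """Explicit evaluation of the right-hand side of (2.10), before dividing by tau.

    lap(u) returns the (periodic) Laplacian of u, grad_mag is |grad phi| on the
    grid, dA = hx*hy is the area of a grid cell, and a_tilde is the saturated
    activator entering the protrusion force."""
    mu = lap(phi) - dG(phi) / eps**2                       # nabla^2 phi - G'/eps^2
    bending = -kappa * (lap(mu) - d2G(phi) * mu / eps**2)  # -kappa (nabla^2 - G''/eps^2) mu
    tension = gamma * mu
    constraint = np.sum(0.5 * eps * grad_mag**2 + G(phi) / eps) * dA - P0
    area_term = -M_area * constraint * grad_mag
    protrusion = alpha * a_tilde * grad_mag
    return bending + tension + area_term + protrusion
```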
The numerical method described in the Appendix is adopted to solve the chemotaxis dynamics (2.10) coupled with the Meinhardt dynamics (2.1)-(2.3). In all of the simulations, we take \(L_{x}=L_{y}=10,N_{x}=N_{y}=2^{8},N_{t}=60\) and \(\Delta t=5e-4\). The interfacial width of \(\phi\) is fixed as \(\epsilon=10h_{x}\), where \(h_{x}=2L_{x}/N_{x}\) is the grid spacing in the \(x\) direction. For initial data, we take \(\phi^{0}\) as a disk with center at origin and radius \(r=4\): \[\phi^{0}(\mathbf{x})=0.5+0.5\tanh\left(\frac{r-\mathrm{dist}(\mathbf{x},\mathbf{0 })}{\epsilon/3}\right), \tag{3.1}\] in which \(\mathrm{dist}(\mathbf{x},\mathbf{0})\) stands for the Euclidean distance between \(\mathbf{x}\) and the origin. We further take \[a^{0}(\mathbf{x})\equiv 0,\ b^{0}(\mathbf{x})\equiv 0.01,\ c^{0}(\mathbf{x}) \equiv 0.\] Unless otherwise specified, the chemoattractant source is located at \(\mathbf{r}_{0}=(0,-40)^{T}\) and the strength of the bias is \(C_{\mathrm{chem}}=0.02\). ### Meinhardt dynamics on the membrane of a fixed cell In this example, we test the Meinhardt dynamics on the membrane of a fixed phase field cell as given in (3.1). The numerical simulation is presented in Figure 3.1 from which the bifurcating patterns are clearly observed on the cell membrane. In this simulation, several time snapshots are Figure 3.1: The Meinhardt system coupled with the membrane of a fixed cell: The six subfigures are snapshots at different times. A pattern bifurcation occurs from top left to top middle, followed by competition between the two branches. One branch defeats the other in the top right, and continues to bifurcate into two branches in the bottom left, etc. The white dashed circle indicates the fixed round cell, on which the red indicates the high concentration of the local activator \(a\). taken at \(t=10.0,11.0,13.0,14.0,16.8,18.0\). A bifurcation occurs at \(t=10.0\). After a short time period, two branches are formed at \(t=11.0\). One branch defeats the other at \(t=13.0\). Then the bifurcation repeatedly recurs. In each subfigure, the black dashed circle represents the cell membrane. The yellow dashed lines indicate the small box in which the Meinhardt equations are solved (see Appendix for the details about the small box). ### Meinhardt dynamics on the membrane of a free cell We now allow the cell to deform and move by set the cell solving the phase field equation (2.10) together with the Meinhardt system (2.1)-(2.3). A set of numerical results is presented in Figure 3.2. In this figure, the top row is the four snapshots at \(t=20,30,40,50\), on which the yellow color indicates a high concentration of activator \(a\). The bottom row shows the cell trajectories for times up to \(t=20,30,40,50\). Our simulations turn out to be generally consistent with observed chemotactic behavior. Pseudopods are randomly generated in the cell membrane, while the biasing effects towards the direction of the chemoattractant source \(\mathbf{r}_{0}\) accumulate over Figure 3.2: Top row: The phase field model of cell movement driven by the Meinhardt reaction-diffusion process on the membrane. The four subplots are snapshots at \(t=20,30,40,50\) respectively. In each subplot, the cell membrane is visualized by the \(1/2\)-level set of \(\phi\) (the white dashed curve), on which the red color indicates high concentration of activator \(a\) in the Meinhardt model. 
Bottom row: trajectory of the simulated cell towards chemoattractant source \(\mathbf{r}_{0}\) (\(\mathbf{r}_{0}=[0,-40]^{T}\), which is not shown in the subplots) at time \(t=20,30,40,50\). The cell curve is colored by a colormap to indicate the concentration of activator \(a\). The red curve represents the trajectory of the simulated cell, while the purple and black arrows respectively represent the direction of source and cell’s center-of-mass velocity. time, which eventually leads to the cell's translation along favorable directions. The trajectory of our simulated cell has demonstrated high efficiency in the cell's navigation. Moreover, even when the chemoattractant source \(\mathbf{r}_{0}\) is suddenly moved, a strong adaptability of our simulated cell is also observed in Figure 3.3. In this simulation, the chemoattractant source \(\mathbf{r}_{0}\) is located at \(\mathbf{r}_{0}=(0,-40)^{T}\) over time \([0,40]\). During this time period, the cell moves towards \(\mathbf{r}_{0}\) similarly to that shown in Figure 3.2. At \(t=40\), \(\mathbf{r}_{0}\) is suddenly changed to \(\mathbf{r}_{0}=(40,0)^{T}\). The cell can quickly adjust the direction and move towards the new location over time \([40,80]\), still with a slight biasing effect. At \(t=80\), a new location \(\mathbf{r}_{0}=(0,40)^{T}\) is assigned, and the cell changes direction and moves towards the north. For the sake of clear display, the cells are plotted in a window of \([-15,25]\times[-25,15]\), without showing the locations of chemoattractant source. ### Chemotaxis index To quantify the efficiency of cells' navigation, we measured the chemotaxis index (CI) of our simulated cells over time. The chemotaxis index is defined by the ratio between the distance the simulated cell has traveled in the direction toward the chemoattractant source \(\mathbf{r}_{0}\) and the total Figure 3.3: Trajectory of the simulated cell when the source \(\mathbf{r}_{0}\) suddenly relocates. All conditions are the same as those in Figure 3.2, except that the chemoattractant source is changed to a new location every 40s. distance it has traveled: \[\text{CI}=\frac{\sum_{n}\left\langle\mathbf{x}_{\text{center}}^{n+1}-\mathbf{x}_{ \text{center}}^{n},\frac{\mathbf{r}_{0}-\mathbf{x}_{\text{center}}^{n}}{\| \mathbf{r}_{0}-\mathbf{x}_{\text{center}}^{n}\|}\right\rangle}{\sum_{n}\left \langle\mathbf{x}_{\text{center}}^{n+1}-\mathbf{x}_{\text{center}}^{n}, \mathbf{x}_{\text{center}}^{n+1}-\mathbf{x}_{\text{center}}^{n}\right\rangle}.\] In Figure 3.4, we plot the CI for three simulated cells with various \(C_{\text{chem}}=0.01,0.03,0.05\) of the strength of the bias. The top row, from left to right, are the trajectories of three cells up to time \(T=40\), with different values of \(C_{\text{chem}}=0.01,0.03,0.05\), respectively. The cells are plotted in the box \([-20,20]\times[-30,10]\), with the chemoattractant source located at \(\text{r}_{0}=[0,-40]^{T}\). The bottom row, from left to right, are the CI of the three cells up to time \(T=40\), with different values of \(C_{\text{chem}}=0.01,0.03,0.05\), respectively. In each plot of the CI, CI curve is in blue, with the values indicated by the left \(y\)-axis. The solid and dashed orange curves (values indicated by the right \(y\)-axis) are the cumulative distance travelled by the cell in the direction toward \(\mathbf{r}_{0}\), and the cumulative total distance, respectively. 
Our result shows that when the strength of the bias is weaker (\(C_{\text{chem}}=0.01\)), the cell wanders more randomly along the trajectory toward the chemoattractant source; while when the strength of the bias becomes stronger (\(C_{\text{chem}}=0.05\)), the cell's moving direction is more straightforward. On the other hand, even for the case with a weaker strength of the bias \(C_{\text{chem}}=0.01\), our simulation still shows a high value for CI (CI becomes close to \(0.6\) at \(t=40\)), indicating efficient cell navigation. For the case with a stronger strength of the bias \(C_{\text{chem}}=0.05\), the CI can reach an even higher value, close to \(0.9\). ## 4 Discussion and outlook Here we have presented a new computational approach to the problem of chemotactically driven motion in Dictyostelium. Our results are consistent with experimental findings and recapitulate results obtained by the level set method [19]. While both the level set method and phase field model can couple the Meinhardt dynamics with membrane evolution, thereby successfully reproducing the pseudopod morphology, our method enjoys the fact that the coupling of the phase field with the Meinhardt dynamics is much more straightforward, by simply incorporating \(g(\phi)=\frac{\epsilon}{2}|\nabla\phi|^{2}\) in the Meinhardt equations (2.1-2.3). In addition, although it has been claimed that an evolving cell boundary solved by (ALE-SFEM) plus the level set approach has the advantage of efficiency [19; 20], such a computational strategy suffers from a very complicated implementation. Indeed, the level-set-modeled cell profile has to "communicate" with the evolving cell boundary at every single time step. More specifically, two set of meshes need to be introduced, the finite element mesh for the cell membrane update, and level set mesh for the update of the level set function. In each time step, one needs to project the finite element mesh points onto level set mesh points by using nearest-neighbor point, in order to update level set function; then use the level set mesh points to form a new finite element mesh, on which the Meinhardt is updated. Since the new finite element mesh may fail to be equidistributed, a step of re-gridding the finite element mesh is often. In contrast, in our phase field framework, when the cell movement and membrane reaction-diffusion system are combined using the same implicit tracking language, no further "communication" between mechanical and chemical systems is needed, as they are solved on the same uniform mesh (see numerical details in Appendix). Finally, in addition to computational efficiency and formulaic complexity, our phase field framework has an obvious advantage when it becomes necessary to couple intra-cellular flow and focal adhesions into the model [37]. In fact, it is definitely worth exploring chemotaxis with a more bio Figure 3.4: Chemotaxis indexes for three cell trajectories. Top row: Three cell trajectories for different strengths of chemical protrusion \(\alpha=0.3,0.4,0.6\). Bottom row: The corresponding chemotaxis indexes (CI) over time \([0,60]\). As indicated by CI, we note that the greater \(\alpha\) becomes, the more directly (and the faster) the cell moves towards the chemoattractant source \(\mathbf{r}_{0}\). physically complete model, i.e. with all the effects of bulk reaction-diffusion dynamics, cytoplasmic flow, Meinhardt patterns, focal adhesions, and membrane forces. 
Quite a few other interesting features, apart from bifurcating pseudopods, were observed in experiments more than a decade ago [21]. For example, when Dictyostelium discoideum cells adhere to the substrate, they exert opposing pole forces that are orders of magnitude higher than required to overcome the resistance from their environment. Also, the strain energy exerted by migrating Dicty on the substrate is (almost) quasi-periodic and can be used to identify different stages of the cell motility cycle. Moreover, the period displays an inversely proportional relation with cell velocity. In recent work by Copos et al. [44], a simple mechanochemical model of 2D (in the vertical plane) cell motility was used to study the periodic changes in cell length and the related spatiotemporal dynamics of traction forces. Our phase field model can provide a platform to carefully study these features in future work. ## 5 Acknowledgements H. Levine's work is supported by the NSF, grants Nos. PHY-1935762 and PHY-2019745. Y. Zhao's work is supported by a grant from the Simons Foundation through Grant No. 357963 and NSF grant DMS2142500. ## Appendix In this appendix, we present in detail the numerical algorithm to solve the coupled system (2.1)-(2.3) and (2.10). We take the computational domain \(\Omega=[-L_{x},L_{x})\times[-L_{y},L_{y})\). Periodic boundary conditions are used for the coupled system. A uniform grid \(\Omega_{h}\) is generated over \(\Omega\) by taking \(h_{x}=\frac{2L_{x}}{N_{x}}\) and \(h_{y}=\frac{2L_{y}}{N_{y}}\). The grid points are given as \((x_{i},y_{j})=(-L_{x}+(i-1)h_{x},-L_{y}+(j-1)h_{y})\). Given initial data \((\phi^{0},a^{0},b^{0},c^{0})\), we aim to find \((\phi^{n},a^{n},b^{n},c^{n})\) for \(n=1,2,\cdots,N_{t}\) with \(N_{t}=\frac{T}{\Delta t}\). For solving the equation of \(\phi\) in (2.10), we adopt a semi-implicit Fourier spectral method. More specifically, we discretize the equation as \[\tau\frac{\phi^{n+1}-\phi^{n}}{\Delta t}= -\kappa\Delta^{2}\phi^{n+1}-\kappa\nabla^{2}\frac{G^{\prime}( \phi^{n})}{\epsilon^{2}}+\kappa\frac{G^{\prime\prime}(\phi^{n})}{\epsilon^{2 }}\left(\nabla^{2}\phi^{n}-\frac{G^{\prime}(\phi^{n})}{\epsilon^{2}}\right)+ \gamma\left(\nabla^{2}\phi^{n+1}-\frac{G^{\prime}(\phi^{n})}{\epsilon^{2}}\right)\] \[-M_{\text{area}}\left(\int\frac{\epsilon}{2}|\nabla\phi^{n}|^{2 }+\frac{1}{\epsilon}G(\phi^{n})\mathrm{d}\mathbf{r}-P_{0}\right)|\nabla\phi^ {n}|+\alpha\tilde{a}|\nabla\phi^{n}|.\] This discretization can be rewritten as \[\left(\frac{\tau}{\Delta t}+\kappa\nabla^{4}-\gamma\nabla^{2}\right)\phi^{n+1}= \text{RHS}(\phi^{n}),\] which can be efficiently solved by the Fourier spectral method. Next, we consider the numerical method for the Meinhardt system (2.1)-(2.3). 
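Before doing so, the \(\phi\)-update just described can be written compactly in Fourier space as in the following sketch. Here `explicit_n` collects all the terms of (2.10) that are treated explicitly at time \(t_{n}\), and the wavenumber construction assumes the periodic box \([-L_{x},L_{x})\times[-L_{y},L_{y})\) with the uniform grid defined above; the function name and argument layout are our own.

```python
import numpy as np

def semi_implicit_phi_step(phi_n, explicit_n, dt, tau, kappa, gamma, Lx, Ly):
    """One step of the scheme above:
    (tau/dt + kappa*nabla^4 - gamma*nabla^2) phi^{n+1} = tau*phi^n/dt + explicit_n,
    solved with the Fourier spectral method under periodic boundary conditions."""
    ny, nx = phi_n.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=2.0 * Lx / nx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=2.0 * Ly / ny)
    k2 = kx[None, :]**2 + ky[:, None]**2              # -nabla^2  ->  |k|^2
    denom = tau / dt + kappa * k2**2 + gamma * k2     # Fourier symbol of the implicit operator
    rhs_hat = np.fft.fft2(tau * phi_n / dt + explicit_n)
    return np.real(np.fft.ifft2(rhs_hat / denom))
```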
For the sake of numerical stability, the Meinhardt equations (2.1)-(2.3) are replaced by: \[\tau_{0}\frac{\partial(\tilde{g}(\phi)a)}{\partial t}+\nabla\cdot (\tilde{g}(\phi)a\mathbf{v})=D_{a}\nabla_{\parallel}\cdot\left(g(\phi)\nabla_{ \parallel}a\right)+D_{\perp}\nabla_{\perp}\cdot\left(g(\phi)\nabla_{\perp}a\right)\] \[+\tilde{g}(\phi)\left(\frac{s(\mathbf{r},t)(a^{2}b^{-1}+b_{a})}{ (s_{c}+c)(1+s_{a}a^{2})}-r_{a}a\right),\] (A.1) \[\tau_{0}\frac{\partial b}{\partial t}=r_{b}\frac{\int\tilde{g}( \phi)a\mathrm{d}\mathbf{r}}{\int\tilde{g}(\phi)\mathrm{d}\mathbf{r}}-r_{b}b,\] (A.2) \[\tau_{0}\frac{\partial(\tilde{g}(\phi)c)}{\partial t}+\nabla \cdot(\tilde{g}(\phi)c\mathbf{v})=D_{c}\nabla_{\parallel}\cdot\left(g(\phi) \nabla_{\parallel}c\right)+D_{\perp}\nabla_{\perp}\cdot\left(g(\phi)\nabla_{ \perp}c\right)+\tilde{g}(\phi)\left(b_{c}a-r_{c}c\right),\] (A.3) in which \(\tilde{g}(\phi)=\frac{1}{\epsilon}G(\phi)\). The replacement of \(g\) by \(\tilde{g}\) is reasonable due to the fact that in the Ginzburg-Landau functional (2.5), the term \(g(\phi)=\frac{\epsilon}{2}|\nabla\phi|^{2}\) plays identical role as \(\tilde{g}(\phi)=\frac{1}{\epsilon}G(\phi)\) at the system equilibrium [42, 43]. Note that we replace all the terms of \(g(\phi)\) by \(\tilde{g}(\phi)\) except for those in the parallel and perpendicular diffusion terms. We do this is because by taking \(g(\phi)=\frac{\epsilon}{2}|\nabla\phi|^{2}\) together with the parallel and perpendicular gradient operators (2.4), the diffusion terms can be significantly simplified. Explicitly, for the parallel and perpendicular diffusion terms in the equation of \(a\), \[\nabla_{\parallel}\cdot(g(\phi)\nabla_{\parallel}a) =\begin{bmatrix}n_{y}^{2}&-n_{x}n_{y}\\ -n_{x}n_{y}&n_{x}^{2}\end{bmatrix}\begin{bmatrix}\partial_{x}\\ \partial_{y}\end{bmatrix}\cdot\begin{pmatrix}g(\phi)\begin{bmatrix}n_{y}^{2}& -n_{x}n_{y}\\ -n_{x}n_{y}&n_{x}^{2}\end{bmatrix}\begin{bmatrix}\partial_{x}\\ \partial_{y}a\end{bmatrix}\end{bmatrix}\] \[=\frac{\epsilon}{2}\Bigg{[}\begin{pmatrix}n_{y}^{2}\partial_{x}- n_{x}n_{y}\partial_{y}\end{pmatrix}\Big{(}(\partial_{y}\phi)^{2}\partial_{x}a-( \partial_{x}\phi\partial_{y}\phi)\partial_{y}a\Big{)}\] \[\qquad\qquad\qquad+\Big{(}-n_{x}n_{y}\partial_{x}+n_{x}^{2} \partial_{y}\Big{)}\Big{(}-(\partial_{x}\phi\partial_{y}\phi)\partial_{x}a+( \partial_{x}\phi)^{2}\partial_{y}a\Big{)}\Bigg{]},\] \[\nabla_{\perp}\cdot(g(\phi)\nabla_{\perp}a) =\begin{bmatrix}n_{x}^{2}&n_{x}n_{y}\\ n_{x}n_{y}&n_{y}^{2}\end{bmatrix}\begin{bmatrix}\partial_{x}\\ \partial_{y}\end{bmatrix}\cdot\begin{bmatrix}g(\phi)\begin{bmatrix}n_{x}^{2}&n_ {x}n_{y}\\ n_{x}n_{y}&n_{y}^{2}\end{bmatrix}\begin{bmatrix}\partial_{x}a\\ \partial_{y}a\end{bmatrix}\end{bmatrix}\] \[=\frac{\epsilon}{2}\begin{bmatrix}n_{x}^{2}&n_{x}n_{y}\\ n_{x}n_{y}&n_{y}^{2}\end{bmatrix}\begin{bmatrix}\partial_{x}\\ \partial_{y}\end{bmatrix}\cdot\begin{bmatrix}(\partial_{x}\phi)^{2}\partial_{x} a+(\partial_{x}\phi\partial_{y}\phi)\partial_{y}a\\ (\partial_{x}\phi\partial_{y}\phi)\partial_{x}a+(\partial_{y}\phi)^{2}\partial_ {y}a\end{bmatrix}\] \[=\frac{\epsilon}{2}\Bigg{[}\Big{(}n_{x}^{2}\partial_{x}+n_{x}n_{ y}\partial_{y}\Big{)}\Big{(}(\partial_{x}\phi)^{2}\partial_{x}a+(\partial_{x} \phi\partial_{y}\phi)\partial_{y}a\Big{)}\] \[\qquad\qquad\qquad+\Big{(}n_{x}n_{y}\partial_{x}+n_{y}^{2}\partial _{y}\Big{)}\Big{(}(\partial_{x}\phi\partial_{y}\phi)\partial_{x}a+(\partial_ {y}\phi)^{2}\partial_{y}a\Big{)}\Bigg{]}.\] To numerically discretize the above two terms, firstly we evaluate 
\((\partial_{x}\phi,\partial_{y}\phi)\) using the Fourier spectral method, and calculate \((n_{x},n_{y})\) as \[n_{x}=\frac{\partial_{x}\phi}{\sqrt{(\partial_{x}\phi)^{2}+(\partial_{y}\phi)^{2}+\epsilon_{0}}},\ n_{y}=\frac{\partial_{y}\phi}{\sqrt{(\partial_{x}\phi)^{2}+(\partial_{y}\phi)^{2}+\epsilon_{0}}},\] in which \(\epsilon_{0}\) is a sufficiently small constant (say, \(\epsilon_{0}=10^{-8}\)) to avoid division by zero. Secondly, we evaluate \[\Big{(}\partial_{x}((\partial_{x}\phi)^{2}),\partial_{y}((\partial_{x}\phi)^{2})\Big{)},\Big{(}\partial_{x}(\partial_{x}\phi\partial_{y}\phi),\partial_{y}(\partial_{x}\phi\partial_{y}\phi)\Big{)},\Big{(}\partial_{x}((\partial_{y}\phi)^{2}),\partial_{y}((\partial_{y}\phi)^{2})\Big{)}\] using the Fourier spectral method. Thirdly, the first and second derivatives of \(a\) are evaluated by central differences: \[\partial_{x}a\approx\frac{a_{i+1,j}-a_{i-1,j}}{2h_{x}},\ \partial_{y}a\approx\frac{a_{i,j+1}-a_{i,j-1}}{2h_{y}},\ \partial_{xx}a\approx\frac{a_{i+1,j}-2a_{ij}+a_{i-1,j}}{h_{x}^{2}},\] \[\partial_{xy}a\approx\frac{a_{i+1,j+1}-a_{i-1,j+1}-a_{i+1,j-1}+a_{i-1,j-1}}{4h_{x}h_{y}},\ \partial_{yy}a\approx\frac{a_{i,j+1}-2a_{ij}+a_{i,j-1}}{h_{y}^{2}}.\] Inserting all of the above evaluations back, we obtain the numerical approximations of \(\nabla_{\parallel}\cdot(g(\phi)\nabla_{\parallel}a)\) and \(\nabla_{\perp}\cdot(g(\phi)\nabla_{\perp}a)\). The advection term \(\nabla\cdot(\tilde{g}(\phi)a\mathbf{v})\) in the equation of \(a\) is approximated by a central difference scheme: \[\nabla\cdot(\tilde{g}(\phi)a\mathbf{v})\approx\frac{\tilde{g}(\phi_{i+\frac{1}{2},j})a_{i+\frac{1}{2},j}v_{i+\frac{1}{2},j}^{x}-\tilde{g}(\phi_{i-\frac{1}{2},j})a_{i-\frac{1}{2},j}v_{i-\frac{1}{2},j}^{x}}{h_{x}}+\frac{\tilde{g}(\phi_{i,j+\frac{1}{2}})a_{i,j+\frac{1}{2}}v_{i,j+\frac{1}{2}}^{y}-\tilde{g}(\phi_{i,j-\frac{1}{2}})a_{i,j-\frac{1}{2}}v_{i,j-\frac{1}{2}}^{y}}{h_{y}},\] in which \(\mathbf{v}=[v^{x},v^{y}]^{T}=-\partial_{t}\phi\frac{\nabla\phi}{|\nabla\phi|^{2}}\) is calculated by taking \(\partial_{t}\phi\approx\frac{\phi^{n+1}-\phi^{n}}{\Delta t}\), with \(\nabla\phi\) evaluated by the Fourier spectral approximation. The time derivative \(\frac{\partial(\tilde{g}(\phi)a)}{\partial t}\) is approximated by the forward Euler scheme, \[\frac{\partial(\tilde{g}(\phi)a)}{\partial t}\approx\frac{\tilde{g}(\phi^{n+1})a^{n+1}-\tilde{g}(\phi^{n})a^{n}}{\Delta t}.\] Finally, with all terms in the equation of \(a\) discretized, we obtain an update of \(a\): \(a^{n}\to a^{n+1}\). Equation (A.3) can be solved numerically in a similar manner. Equation (A.2) is an ODE, so we adopt an efficient fourth-order Runge-Kutta method (RK4) to solve it. Since the phase field cell \(\phi\) moves around in the computational domain \(\Omega\) and may approach its edge, we do not solve the phase field equation and the Meinhardt equations in the entire domain. We only solve these equations in a smaller box of size \(1.75L_{x}\times 1.75L_{y}\) near the cell. This box is re-centered if the cell is close to one of its four boundaries: if \(\phi\geq 0.5\) within \(\frac{N_{x}}{16}\) (or \(\frac{N_{y}}{16}\)) pixels of the boundary, the box is shifted \(\frac{N_{x}}{4}\) (or \(\frac{N_{y}}{4}\)) pixels away from the boundary. We treat the small box as having periodic boundary conditions, which is appropriate since we keep the cell from approaching the edge of \(\Omega\) too closely.
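For reference, a small NumPy sketch of the regularized normal field and the central-difference stencils described above is given below; the roll-based periodic indexing and the default value of \(\epsilon_{0}\) are illustrative assumptions rather than the authors' actual implementation.

```python
import numpy as np

def normal_field(phi_x, phi_y, eps0=1e-8):
    """Regularized unit normal (n_x, n_y) from the spectral gradient of phi;
    eps0 avoids division by zero where |grad phi| vanishes."""
    norm = np.sqrt(phi_x**2 + phi_y**2 + eps0)
    return phi_x / norm, phi_y / norm

def central_differences(a, hx, hy):
    """First and second central differences of a on a periodic grid
    (axis 0 = x/i, axis 1 = y/j), matching the stencils above."""
    ax  = (np.roll(a, -1, 0) - np.roll(a, 1, 0)) / (2.0 * hx)
    ay  = (np.roll(a, -1, 1) - np.roll(a, 1, 1)) / (2.0 * hy)
    axx = (np.roll(a, -1, 0) - 2.0 * a + np.roll(a, 1, 0)) / hx**2
    ayy = (np.roll(a, -1, 1) - 2.0 * a + np.roll(a, 1, 1)) / hy**2
    axy = (np.roll(np.roll(a, -1, 0), -1, 1) - np.roll(np.roll(a, 1, 0), -1, 1)
           - np.roll(np.roll(a, -1, 0), 1, 1) + np.roll(np.roll(a, 1, 0), 1, 1)
          ) / (4.0 * hx * hy)
    return ax, ay, axx, ayy, axy
```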
2309.09746
Archives on astronomy from the 1950s
Information on archives from the 1950s of 15 astronomical observatories is provided beginning with a list of correspondence and other information related to astronomy of the Copenhagen University Observatory in the 1950s. The Appendix contains information from the 14 other observatories about their archives from those years, most of them having no archive at all. Public links are given to most of the files. - Print of the present list and the Danish astronomy archive itself will be placed at the Rigsarkivet, the Danish National Archives.
Erik Høg
2023-09-18T13:21:06Z
http://arxiv.org/abs/2309.09746v1
2015.12.03 2018.2.25: [http://www.astro.ku.dk/~erik/xx](http://www.astro.ku.dk/~erik/xx) inserted here, but the list stored in the Rigsarkivet in November 2016 has references to the invalid dropbox!!!

###### Abstract

Information on archives from the 1950s of 15 astronomical observatories is provided, beginning with a list of correspondence and other information related to astronomy of the Copenhagen University Observatory in the 1950s. The Appendix contains information from the 14 other observatories about their archives from those years, most of them having no archive at all. Public links are given to most of the files. - Print of the present list and the Danish astronomy archive itself will be placed at the _Rigsarkivet,_ the Danish National Archives.

## 1 Introduction

Information on archives from the 1950s of 15 astronomical observatories is provided, beginning with a list of correspondence and other information related to astronomy of the Copenhagen University Observatory in the 1950s. An Appendix contains information from the 14 other observatories about their archives from those years, most of them having no archive at all. Public links are given to some of the files. It was a surprise to find that nine out of the first ten observatories inquired had no archive at all from those years; only Copenhagen had a substantial archive, which is presented below. Five other observatories were then asked, and it turned out that they all had an archive.

**Content:** Section 2: The correspondence between Julie Vinter Hansen and Bengt Stromgren on 42 pages was used in writing the article, my memoirs, Hog (2015), as described in the section. This work led to the location of the other correspondence registered here. Section 3: The correspondence of Copenhagen Observatory 1947-59 is presently placed at the Kroppedal Museum. It measures in total thickness about 74 cm; with 10 pages per mm, this means about 7000 letters and other correspondence of administrative and scientific character. Sections 4 and 5: The correspondence A.1-12 with Peter Naur is related to the first _Baltic Meeting_ Hog (2015a), which took place in Lund in September 1957. The letters A.13-15 are about Erik Hog. Section 6: More on the 1950s, interviews with Peter Naur and Erik Hog. Appendix: "Archives of 18 observatories".

ABSTRACT: In early 1996, the Copenhagen Observatory moved from Brorfelde and Østervold to the Rockefeller Complex at Juliane Maries Vej 30, in Copenhagen, together with geophysicists and the Danish Space Research Institute. Before the move, the archives at Østervold were registered at the initiative of the director, professor Henning E. Jorgensen (1938-2010), who began as a young student of astronomy in September 1956. The registrant and papers were then ready to be sent to the Science Archive in Aarhus, according to recent information from Claus Thykier (*1939), founder and leader of the Ole Romer Museum, now named Kroppedal Museum, who supported this work of registration. In connection with the move, copies of this correspondence, _incomplete and all in Danish_, came into my hands about 1996, I do not remember how. The originals have presently, April 2015, not been found. I have received negative answers from the archive in Aarhus and from the Rigsarkivet, the Danish National Archives. They are being searched at the Kroppedal Museum by Lene Skodborg and collaborators. 
For my memoirs, I have extracted in English translation how carefully my education was discussed and arranged by Bengt Stromgren, Julie Vinter Hansen and my mentor Peter Naur. The study of the stability of the new meridian circle in Brorfelde became my task, and that led me deep into astrometry. The article Hog (2015) contains my memoirs about 1946-58. Memoirs about the years 1958-80 are available in Hog (2014). My fellow student and also astronomer Svend Laustsen has written for his children in Laustsen (2015) using the same correspondence. The correspondence on 42 pages, B.1-42, numbered chronologically by me, has been scanned into four pdf files. Page: _B.1-9: Feb. 1951 - jan. 1953 [http://www.astro.ku.dk/~erik/xx/Corrulie1953.pdf_](http://www.astro.ku.dk/~erik/xx/Corrulie1953.pdf_) 1. Julie Vinter Hansen til Bengt Stromgren 5 feb 1951 2, 3, 4-5, 6, 7, 8. JVH til BS 9. JVH til BS 31 jan. 1953 _B.10-19: Maj 1953 - dec. 1953 [http://www.astro.ku.dk/~erik/xx/Corrulie1953.pdf_](http://www.astro.ku.dk/~erik/xx/Corrulie1953.pdf_) 10-11. JVH til BS 22 maj 1953 12, 13-14, 15. JVH til BS 16-17. BS til JVH 18-19. JVH til BS 18 Dec 1953 _B.20-29: Jan. 1954 - marts 1956 [http://www.astro.ku.dk/~erik/xx/Corrulie1954.pdf_](http://www.astro.ku.dk/~erik/xx/Corrulie1954.pdf_) 20-21. BS til JVH, p.1 is missing 2 Jan 1954 22, 23. JVH til BS 24. BS til JVH 25, 26, 27. JVH til BS 28-29. BS til JVH 11 marts 1956 _B.30-42: Marts 1956 - maj 1958 [http://www.astro.ku.dk/~erik/xx/Corrulie1956.pdf_](http://www.astro.ku.dk/~erik/xx/Corrulie1956.pdf_) 30-31. JVH til BS 27 Marts 1956 32, 33, 34, 35, 36, 37. JVH til BS 38-39. BS til JVH 40. JVH til BS

## 3 Correspondence Copenhagen Observatory 1947-59

Listed by Erik Hog and Lars Occhionero on 29 April 2015. The total thickness of about 74 cm, assuming 10 pages per mm, indicates about 7000 letters and other correspondence of administrative and scientific character. The correspondence is placed at the Kroppedal Museum, near Copenhagen, in a box 40x40x70 cm. The box contains 15 letter box files labeled alphabetically A, B,..., V-O containing letters from 1947-59. The label and thickness in cm of the letters in each box are: A 3, B 5, C 5, D-E 5, F-G 5 = 23 cm H-J 6, K-L 5, M 3, N 5, O-P 5 = 24 cm Q-R 7, Sa-Sl 5, Sm-S 6 T-U 3 V-O 6 = 27 cm Total thickness = 74 cm

**The file H-J with Otto Heckmann:** The correspondence with Otto Heckmann is contained in 13 letters between 24.04.1947 and 17.02.1959. Letter nr. 12 is from Stromgren, dated 21.08.1953, a few days before his departure to the USA as director of Yerkes Observatory, and it is his last letter to OH. Letter nr. 13 is from Anders Reiz to OH. Thus, no letter about the Baltic meetings has been found among the 13. The correspondence with Erik Hog is found on 32 pages dated from 29.07.1953 to 17.04.1959.

**Note:** The original letters of the 42 pages listed in section 2 could not be found in spite of careful search.

**Phone numbers:** Kroppedal Museum: Kroppedals Alle 3, 2630 Taastrup. Phone: 4330 3000 Lars Occhionero, intern fastnet 113, mobil 2624 2868 Erik Hog: 4449 2008

## 4 Correspondence with Peter Naur during 1957

_A.1-9: About the first Baltic Meeting:_ _[http://www.astro.ku.dk/~erik/xx/CorrNaur1957.pdf_](http://www.astro.ku.dk/~erik/xx/CorrNaur1957.pdf_) 1. Julie Vinter Hansen to Otto Heckmann 2. 
Naur to Heckmann
3. Heckmann to Naur
4. Heckmann to Syldenkerne and Naur
5. Heckmann to Naur
6. Naur to Heckmann
7. Heckmann to Naur
8. Heckmann to Naur
9. Naur to Heckmann

## 5 Correspondence with Peter Naur during 1958

_A.10-12: About the visit by Peter Naur to Hamburg in January 1958 A.13-15: About Erik Hog [http://www.astro.ku.dk/~erik/xx/CorrNaur1958.pdf_](http://www.astro.ku.dk/~erik/xx/CorrNaur1958.pdf_) 10. Astronomisches Colloquium auf der Hamburger Sternwarte, Jan. und Feb. 1958 11. Naur to Heckmann after Naur's colloquium on 11 Jan. 1958 12. Naur to Dieckvoss 13. Recommendation to Hog, in Danish 14. Letter to J. Oort sending Hog's report on automatic measurement 15. Letter Hog to Naur, 3 pages in Danish

## 6 More on the 1950s

Interviews with Peter Naur and Erik Hog were recorded, in Danish, in 2009 for an historical research project. They are available with audiofile and transcribed at the Kroppedal Museum for research purposes. (File.zip of 52 MB for Naur at link in my mail to Aaserud on 28.04.2015.)

**Acknowledgements:** The author is grateful to the following persons for information and support: Anthony Brown, Lars Buus, Bengt Edvardsson, Christine Etienne, Michael Geffert, an anonymous librarian of the Paris Observatory, Wolfram Kollatschny, Jan Lub, Palle Lykke, Francois Mignard, Lars Occhionero, Javier Montojo Salazar, Gregory Shelton, Frederic Thevenin, Axel Wittmann, Norbert Zacharias and all the persons acknowledged in Hog (2015a).
2309.11109
Self-supervised Domain-agnostic Domain Adaptation for Satellite Images
Domain shift caused by, e.g., different geographical regions or acquisition conditions is a common issue in machine learning for global scale satellite image processing. A promising method to address this problem is domain adaptation, where the training and the testing datasets are split into two or multiple domains according to their distributions, and an adaptation method is applied to improve the generalizability of the model on the testing dataset. However, defining the domain to which each satellite image belongs is not trivial, especially under large-scale multi-temporal and multi-sensor scenarios, where a single image mosaic could be generated from multiple data sources. In this paper, we propose a self-supervised domain-agnostic domain adaptation (SS(DA)2) method to perform domain adaptation without such a domain definition. To achieve this, we first design a contrastive generative adversarial loss to train a generative network to perform image-to-image translation between any two satellite image patches. Then, we improve the generalizability of the downstream models by augmenting the training data with the spectral characteristics of the testing data. The experimental results on public benchmarks verify the effectiveness of SS(DA)2.
Fahong Zhang, Yilei Shi, Xiao Xiang Zhu
2023-09-20T07:37:23Z
http://arxiv.org/abs/2309.11109v2
# Self-supervised Domain-agnostic Domain Adaptation for Satellite Images

###### Abstract

Domain shift caused by, e.g., different geographical regions or acquisition conditions is a common issue in machine learning for global scale satellite image processing. A promising method to address this problem is domain adaptation, where the training and the testing datasets are split into two or multiple domains according to their distributions, and an adaptation method is applied to improve the generalizability of the model on the testing dataset. However, defining the domain to which each satellite image belongs is not trivial, especially under large-scale multi-temporal and multi-sensor scenarios, where a single image mosaic could be generated from multiple data sources. In this paper, we propose a self-supervised domain-agnostic domain adaptation (SS(DA)\({}^{2}\)) method to perform domain adaptation without such a domain definition. To achieve this, we first design a contrastive generative adversarial loss to train a generative network to perform image-to-image translation between any two satellite image patches. Then, we improve the generalizability of the downstream models by augmenting the training data with the spectral characteristics of the testing data. The experimental results on public benchmarks verify the effectiveness of SS(DA)\({}^{2}\).

Domain Adaptation, Contrastive Learning, Semantic Segmentation, Self-supervised Learning.

## I Introduction

It is well known that satellite images taken from different locations, with different sensors, or at different times generally exhibit large spectral variations due to differences in atmospheric conditions, viewing angles, illumination conditions, and so on. As a result, a supervised image processing model will suffer from a performance decay when it is applied in scenarios where the images have unseen spectral characteristics. A promising way to tackle this problem and improve the generalizability of the model is to adapt the model trained on existing annotated data to the domains of different data sources. Such an idea is usually termed domain adaptation (DA). A basic prerequisite for applying a conventional DA approach is that well-defined domain knowledge is available, i.e., knowledge about how to separate the data into multiple domains such that a certain level of intra-domain homogeneity and inter-domain heterogeneity is fulfilled. In specific cases, such a domain definition is naturally given by the sensor, spatial or temporal information, especially when the study area is limited. However, the domain definition becomes non-trivial when we are aiming at global-scale applications. Fig. 1 exemplifies single image mosaics generated from multi-temporal multi-sensor sources to overcome the limited swath width, cloud coverage, or other limitations. As a result, different parts of the mosaic could have different spectral characteristics, which makes it difficult to define domains well. Although data harmonization techniques have been proposed to mitigate this problem, their performance on large-scale complicated scenes still remains limited [1]. With such considerations, we aim at developing a general DA approach for satellite imagery independent of downstream applications, without relying on any predefined domain separation rule. We term this domain-agnostic DA. 
Inspired by [2], we develop a deep learning-based image-to-image translation (I2I) [3] method and apply it as data augmentation for the downstream task-specific models. In this paper, we propose a self-supervised domain-agnostic domain adaptation (SS(DA)\({}^{2}\)) method to achieve such an I2I without a prior domain definition.
* We elaborate and investigate the domain-agnostic domain adaptation problem and propose a general I2I approach to tackle it.
* We integrate contrastive learning techniques [4] into the adversarial learning pipeline and enable I2I between two arbitrary image patches without explicitly modeling their domain characteristics.
* The experimental results show that the proposed SS(DA)\({}^{2}\) approach can outperform the state-of-the-art I2I-based DA approach without a domain definition.

## II Related Works

### _Image-to-image Translation_

I2I methods aim at learning a mapping function that maps images sampled from the source domain to the target domain, ensuring that the mapped images have distributions similar to the target data. Earlier methods are mostly hand-crafted, with the goal of reducing the visual differences between the source and the target images. Such methods include histogram matching [6], graph matching [7] and other standardization-based approaches such as histogram equalization [6] and gray world [8]. To better exploit the distinctions between different data sources, data-driven methods and deep learning techniques have been widely explored to improve the adaptiveness and robustness of I2I approaches. These approaches are mostly based on the CycleGAN architecture [3], where two generators and two discriminators are trained to map the source data to the target domain and vice versa. With the development of style transfer techniques, Adaptive Instance Normalization (AdaIN) [5, 9] was developed to model the style or spectral characteristics of the input images, removing the need to train separate generative and discriminative models for both the source and target domains. To tackle more general cases, especially when the training and testing data are sampled from multiple domains, StandardGAN [10] designs a network model that can perform I2I between two arbitrary domains by training multiple style encoders for each domain. In DAug [2], a more simplified model is proposed, where the style characteristics of different domains are modeled as trainable feature vectors, which further reduces the computational burden. Other multi-domain I2I models include starGAN [11] and starGANv2 [12].

Fig. 1: Examples of satellite image mosaics. (From PlanetScope data.)

### _Self-supervised Learning_

Self-supervised learning (SSL) is a branch of unsupervised learning, where the training data is automatically labeled by exploiting its internal relationships [13]. Among the different categories of SSL approaches, contrastive learning has demonstrated great potential by measuring the similarity between sample pairs. During this similarity-measurement learning process, a Noise Contrastive Estimation (NCE) [14] or InfoNCE [15] objective is optimized so that different augmented views of a single sample are more similar to each other than views from different samples.

## III Methods

The overall architecture of the proposed SS(DA)\({}^{2}\) is illustrated in Fig. 2.

Fig. 2: Illustration of the overall architecture and the proposed contrastive adversarial loss. 
The inputs to the network are two randomly sampled images, img A and img B, where img A is further augmented twice to produce img A1 and img A2. Then, img A1 and img B are encoded and merged to generate the translated images img A2B and img B2A, according to AdaIN [5]. Self-reconstruction and cycle consistency losses are applied to ensure that the extracted features and the translated images maintain the structural and content information. An adversarial loss is utilized to enhance the genuineness of the translated images. The contrastive adversarial loss is the key loss function that enables the style transfer. More details about it are discussed in Sec. III-F.

### _Problem Formulation_

Here we first formulate the domain-agnostic I2I problem. Given a training dataset \(\mathcal{D}_{train}\) and a testing dataset \(\mathcal{D}_{test}\), we assume that both \(\mathcal{D}_{train}\) and \(\mathcal{D}_{test}\) consist of multiple domains, yet the domain assignment knowledge is missing. Given two randomly sampled images \(I_{A}\) and \(I_{B}\) from \(\mathcal{D}_{train}\cup\mathcal{D}_{test}\), the goal of I2I is to generate an image \(I_{A\to B}\), ensuring that its content or spatial geometry is identical to that of \(I_{A}\), while its style or spectral characteristics are similar to those of \(I_{B}\). SS(DA)\({}^{2}\) contains a generator and a discriminator \(F\), where the generator consists of an encoder \(E\) and a decoder \(D\). \(E\) maps a sampled image to a feature vector: \(x=E(I)\), while \(D\) decodes a feature vector to generate an image: \(I=D(x)\). To conduct the self-supervised learning, \(I_{A}\) is augmented twice into two different views, \(I_{A_{1}}\) and \(I_{A_{2}}\), by random resizing, cropping, and Gaussian blurring. The input to SS(DA)\({}^{2}\) during each training step is a batch of randomly sampled \(I_{A_{1}}\), \(I_{A_{2}}\) and \(I_{B}\). The overall loss functions of SS(DA)\({}^{2}\) are given as: \[\mathcal{L}_{gen}=\lambda_{1}\mathcal{L}_{rec}+\lambda_{2}\mathcal{L}_{adv}^{gen}+\lambda_{3}\mathcal{L}_{cyc}+\lambda_{4}\mathcal{L}_{per}+\lambda_{5}\mathcal{L}_{con}^{gen}, \tag{1}\] \[\mathcal{L}_{dis}=\lambda_{2}\mathcal{L}_{adv}^{dis}+\lambda_{5}\mathcal{L}_{con}^{dis}. \tag{2}\] Here \(\mathcal{L}_{gen}\) and \(\mathcal{L}_{dis}\) correspond to the losses of the generator and the discriminator, respectively. They are optimized alternately during training. In the remainder of this section, we introduce each loss term in detail.

### _Self-reconstruction Loss_

The self-reconstruction loss is applied to ensure that the extracted feature of an image can be used to reconstruct the image itself. It can be formulated as: \[\mathcal{L}_{rec}=L_{1}(D(x_{A_{1}}),I_{A_{1}})+L_{1}(D(x_{B}),I_{B}), \tag{3}\] where \(L_{1}(\cdot)\) is the smooth \(l_{1}\) loss function.

### _Adversarial Loss_

The adversarial loss is applied to make sure that the translated images are perceptually similar to the genuine ones. First, the translated images are obtained by decoding the features from the two input sources according to AdaIN [5]: \[\begin{split} x_{A_{1}\to B}&=AdaIN(x_{A_{1}},x_{B}),\\ x_{B\to A_{1}}&=AdaIN(x_{B},x_{A_{1}}),\\ I_{A_{1}\to B}&=D(x_{A_{1}\to B}),\\ I_{B\to A_{1}}&=D(x_{B\to A_{1}}).\end{split} \tag{4}\] The adversarial loss is then applied to \(I_{A_{1}\to B}\) and \(I_{B\to A_{1}}\): \[\begin{split}\mathcal{L}_{adv}^{dis}&=(F(I_{A_{1}})-0)^{2}+(F(I_{B})-0)^{2}\\ &\quad+(F(I_{A_{1}\to B})-1)^{2}+(F(I_{B\to A_{1}})-1)^{2}.\end{split} \tag{5}\] \[\mathcal{L}_{adv}^{gen}=(F(I_{A_{1}\to B})-0)^{2}+(F(I_{B\to A_{1}})-0)^{2}. \tag{6}\] By minimizing \(\mathcal{L}_{adv}^{dis}\), the discriminator learns to distinguish the generated images from the real ones. By minimizing \(\mathcal{L}_{adv}^{gen}\), the generator learns to fool the discriminator into believing that the generated images are real. To this end, optimizing them alternately improves the quality of the generated images.

### _Cycle Consistency Loss_

The idea of the cycle consistency loss was originally proposed in [3]. It aims to maintain the original structural and content information in the translated images. First, the generated \(I_{A_{1}\to B}\) and \(I_{B\to A_{1}}\) are used to reconstruct \(I_{A_{1}}\) and \(I_{B}\) according to AdaIN: \[\begin{split} x_{A_{1}\to B\to A_{1}}&=AdaIN(E(I_{A_{1}\to B}),E(I_{B\to A_{1}})),\\ x_{B\to A_{1}\to B}&=AdaIN(E(I_{B\to A_{1}}),E(I_{A_{1}\to B})).\end{split} \tag{7}\] Then, the differences between the reconstructed and the original images are minimized: \[\begin{split}\mathcal{L}_{cyc}&=L_{1}(D(x_{A_{1}\to B\to A_{1}}),I_{A_{1}})\\ &\quad+L_{1}(D(x_{B\to A_{1}\to B}),I_{B}).\end{split} \tag{8}\]

### _Perceptual Loss_

The perceptual loss [16] \(\mathcal{L}_{per}\) is used to reduce the high-level perceptual differences between the reconstructed and original images. It can be formulated as: \[\begin{split}\mathcal{L}_{per}=&\,||E_{per}(D(x_{A_{1}}))-E_{per}(I_{A_{1}})||_{2}^{2}\\ &+||E_{per}(D(x_{B}))-E_{per}(I_{B})||_{2}^{2}\\ &+||E_{per}(D(x_{A_{1}\to B\to A_{1}}))-E_{per}(I_{A_{1}})||_{2}^{2}\\ &+||E_{per}(D(x_{B\to A_{1}\to B}))-E_{per}(I_{B})||_{2}^{2},\end{split} \tag{9}\] where \(E_{per}\) is a VGG-based loss network [17] pretrained on the ImageNet dataset.

### _Contrastive Adversarial Loss_

The contrastive adversarial loss is the core supervision signal that makes the translated images \(I_{A_{1}\to B}\) and \(I_{B\to A_{1}}\) have styles similar to those of \(I_{B}\) and \(I_{A_{1}}\), respectively. To implement this, we train the discriminator \(F\) as a similarity measure based on contrastive learning, and meanwhile optimize the generator in a contrastive and adversarial manner, so as to fool \(F\) into believing that the generated \(I_{A_{1}\to B}\) is similar to \(I_{B}\). Specifically, the contrastive adversarial losses for the generator, \(\mathcal{L}_{con}^{gen}\), and the discriminator, \(\mathcal{L}_{con}^{dis}\), can be formulated as: \[\mathcal{L}_{con}^{dis}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{S(I_{A_{1}}^{i},I_{A_{2}}^{i})}}{\sum_{j=1}^{N}e^{S(I_{A_{1}}^{j},I_{A_{2}}^{j})}+e^{S(I_{A_{1}}^{j},I_{A_{1}\to B}^{j})}}, \tag{10}\] and \[\mathcal{L}_{con}^{gen}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{e^{S(I_{A_{1}\to B}^{i},I_{B}^{i})}}{\sum_{j=1}^{N}e^{S(I_{A_{1}\to B}^{j},I_{B}^{j})}+e^{S(I_{A_{1}\to B}^{j},I_{A_{2}}^{j})}}. \tag{11}\] Here \(N\) is the batch size, and \(S(\cdot,\cdot)\) is a similarity metric derived from the discriminator \(F\), calculated as the cosine similarity of \(F\)'s last-layer features. As illustrated in the bottom part of Fig. 2, since \(I^{i}_{A_{1}}\) and its corresponding \(I^{i}_{A_{2}}\) (or img A1 and img A2 in Fig. 2) are derived from the same image with only spatial data augmentation (i.e., no color distortion applied), they should have similar styles or spectral characteristics. By training the discriminator to measure them as a highly similar pair, the discriminator implicitly learns to measure the domain-level similarity. 
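As a reading aid, the following PyTorch sketch spells out how the two losses in Eqs. (10)-(11) can be computed from a batch; the helper `feat`, which stands in for the discriminator's last-layer feature extractor, the use of `detach()` on the translated images in the discriminator loss, and the variable names are assumptions made for illustration and are not taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def cos_sim(f1, f2):
    """Row-wise cosine similarity S(.,.) between two feature batches of shape (N, D)."""
    return F.cosine_similarity(f1, f2, dim=1)            # shape (N,)

def contrastive_dis_loss(feat, I_a1, I_a2, I_a1_to_b):
    """Eq. (10): the two augmented views of the same image form the positive pair,
    while the translated image acts as the negative."""
    f_a1, f_a2 = feat(I_a1), feat(I_a2)
    f_tr = feat(I_a1_to_b.detach())                       # assumed: block generator grads here
    pos = cos_sim(f_a1, f_a2)                             # S(I_A1^i, I_A2^i)
    neg = cos_sim(f_a1, f_tr)                             # S(I_A1^j, I_A1->B^j)
    denom = torch.exp(pos).sum() + torch.exp(neg).sum()   # shared denominator over the batch
    return -(pos - torch.log(denom)).mean()

def contrastive_gen_loss(feat, I_a1_to_b, I_b, I_a2):
    """Eq. (11): the generator pushes the translated image towards the style of I_B
    and away from the source view I_A2."""
    f_tr, f_b, f_a2 = feat(I_a1_to_b), feat(I_b), feat(I_a2)
    pos = cos_sim(f_tr, f_b)                              # S(I_A1->B^i, I_B^i)
    neg = cos_sim(f_tr, f_a2)                             # S(I_A1->B^j, I_A2^j)
    denom = torch.exp(pos).sum() + torch.exp(neg).sum()
    return -(pos - torch.log(denom)).mean()
```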
Accordingly, by training the generator to fool the discriminator into believing that \(I^{i}_{A_{1}\to B}\) is similar to its corresponding \(I^{i}_{B}\), the translated \(I^{i}_{A_{1}\to B}\) becomes increasingly similar to \(I^{i}_{B}\) in terms of its spectral characteristics. In this way, we realize the I2I between multiple domains without explicitly defining the domain assignments of input patches.

## IV Experiments

### _Experimental Settings_

To evaluate the performance of SS(DA)\({}^{2}\), we set building segmentation as the downstream task. Two public benchmarks, the Inria dataset [18] and the DeepGlobe dataset [19], are used for training and testing, respectively. The Inria dataset provides aerial images for \(10\) cities with a coverage of \(810\)\(km^{2}\) and a resolution of \(0.3\)\(m\). In our experiments, the data annotated with binary building masks from the first \(5\) cities are used. During training, the images are downsampled by a factor of \(2\) and cropped to \(256\times 256\) patches with a stride of \(128\). In total there are \(72,000\) training patches. The DeepGlobe dataset provides the satellite data of \(4\) cities, including \(24,586\) patches in total with size \(650\times 650\). The pan-sharpened RGB images with a resolution of \(0.31\)\(m\) are used in our experiments. The provided 16-bit images for each city are converted to 8-bit by cutting out the top \(2\%\) brightest pixels in each channel. Among them, \(200\) randomly selected patches of each city are used for testing, while the others are used for training the I2I networks. As shown in Tab. I, we compare the proposed method with \(6\) comparative approaches. Among them, Baseline is a vanilla segmentation model where no I2I is applied. Hist. Equ. standardizes the training and testing data based on histogram equalization [6]. HSV [20], Gamma [20] and RHM [21] are data-augmentation-based methods that improve the models' generalizability by shifting the spectral characteristics of the training data. DAug [2] is a multi-source domain adaptation approach that can perform I2I between images sampled from \(2\) arbitrary domains, given their domain assignments.

### _Implementation Details_

For training the I2I network, we adopt a four-block architecture for both the discriminator and the encoder of the generator. Each block contains a stack of a 2D convolution, an instance normalization, and a max pooling layer, followed by a ReLU activation function. The numbers of channels of these four blocks are 256, 128, 64 and 32. We adopt a Unet [22] architecture for the decoder of the generator. The batch size is set to \(8\), and the learning rate is set to \(0.01\) initially, following a polynomial learning rate decay with a power of \(0.95\). The training process lasts for \(100,000\) iterations. For a fair comparison, we use the same network architecture and the same training setting when re-implementing the DAug method [2]. The loss weights in Eq. 1 are set to \(\lambda_{1}=50,\lambda_{2}=5,\lambda_{3}=50,\lambda_{4}=1\) and \(\lambda_{5}=1\). For the downstream semantic segmentation model, we use a Unet architecture with a ResNet50 [23] backbone. The batch size is set to \(8\). The learning rate is set to \(0.001\) initially, followed by a polynomial learning rate decay with a power of \(0.9\). 
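The polynomial decay schedule mentioned above is a common convention; a minimal sketch of the formula we assume it follows is given below (the exact variant used by the authors is not specified in the text).

```python
def poly_lr(base_lr, step, total_steps, power):
    """Polynomial learning-rate decay: lr = base_lr * (1 - step/total_steps)**power."""
    return base_lr * (1.0 - float(step) / float(total_steps)) ** power

# e.g. for the I2I network:          poly_lr(0.01,  step, 100_000,     power=0.95)
# e.g. for the segmentation network: poly_lr(0.001, step, total_steps, power=0.9)
```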
For the comparative approaches RHM and DAug, as well as SS(DA)\({}^{2}\), which all require a reference patch or domain id to perform the I2I, we randomly sample a reference patch from the testing cities for each of them and perform the I2I with a probability of \(0.5\) during the training phase.

### _Quantitative Results_

The quantitative comparative results are listed in Tab. I. As can be observed, all the model-based data augmentation approaches improve over the baseline model in terms of the averaged IoU results. It also turns out that a given augmentation-based approach may only tackle certain types of spectral shift, e.g., Gamma performs quite well on Khartoum, and HSV shows a large improvement on Vegas over the baseline. In contrast, the histogram equalization-based approach produces poor results, which indicates that the large shifts between training and testing data are difficult to handle with simple standardization approaches. On the other hand, deep learning-based I2I methods, including DAug and SS(DA)\({}^{2}\), generally show more stable improvements over the baseline across the different cities, which demonstrates the advantage of learning-based models in dealing with the complex multi-source domain adaptation setting. The proposed SS(DA)\({}^{2}\) outperforms DAug on Khartoum and in the averaged results over the four cities, despite the lack of domain assignments. This indicates the robustness of the proposed method, and further demonstrates that the intra-domain discrepancy can be well tackled by patch-level self-supervised I2I.

### _Qualitative Results_

The qualitative results of SS(DA)\({}^{2}\) are shown in Fig. 3. As can be seen, the images in the first five columns have relatively consistent styles, since they are aerial images and do not suffer from atmospheric distortions. In contrast, the last four columns, from the DeepGlobe dataset, exhibit quite different spectral characteristics from the others, which demonstrates the existence of domain shifts and highlights the difficulty of performing adaptation among them. According to the visualization results, the translated images in the same column generally have similar styles, while the images in the same row share similar spatial contents. This perceptually verifies that the proposed SS(DA)\({}^{2}\) can successfully capture the spectra-related differences between two input patches and transfer the style between them through self-supervised and adversarial learning.

## V Conclusion

In this paper, we elaborate on the importance of domain-agnostic DA in machine learning for large-scale real-world applications, and propose a self-supervised approach based on contrastive learning and adversarial learning techniques, which realizes the I2I between any pair of randomly sampled patches without pre-defined domain assignments. Experimental results show that the proposed method outperforms the other model-based and learning-based competitors and is able to generate perceptually high-quality images.